A method includes forming (202, 204, 206) layers of an oxide and a metal on a substrate. For example, the layers may include a metal layer sandwiched between silicon oxide layers. A non-conductive structure such as glass is then bonded (208) to one of the oxide layers. An antenna can then be patterned (210) on the non-conductive structure, and a cavity can be created (212) in the substrate. Another metal layer is deposited (214) on the surface of the cavity, and an iris is patterned (216) in the metal layer to expose the one of the oxide layers. Another metal layer is formed (220) on a second substrate and the two substrates are bonded together to thereby seal the cavity.
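As an informal illustration (not part of the original disclosure), the process sequence summarized above — reference numerals 202-220 refer to the flow chart of FIG. 2 — can be modeled as an ordered list of steps. The function and variable names in this Python sketch are hypothetical:

```python
# Illustrative sketch only: the fabrication steps of FIG. 2 as an ordered
# sequence, so the order and dependencies of the method are explicit.
FABRICATION_FLOW = [
    (202, "form first oxide layer on first substrate"),
    (204, "form first metal layer on first oxide layer"),
    (206, "form second oxide layer on first metal layer"),
    (208, "bond non-conductive structure (e.g., glass) to second oxide layer"),
    (210, "pattern antenna on non-conductive structure"),
    (212, "create cavity in first substrate (e.g., KOH/TMAH wet etch)"),
    (214, "deposit second metal layer on cavity surface"),
    (216, "pattern iris in second metal layer"),
    (218, "form metal layer on second substrate"),
    (220, "bond substrates together to seal the cavity"),
]

def describe_flow(flow):
    """Print the process sequence in order."""
    for step, action in flow:
        print(f"step {step}: {action}")

if __name__ == "__main__":
    describe_flow(FABRICATION_FLOW)
```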
CLAIMS
What is claimed is:

1. A method of forming a sealed cavity, the method comprising:
forming a first oxide layer on a first substrate;
forming a first metal layer on a surface of the first oxide layer opposite the first substrate;
forming a second oxide layer on a surface of the first metal layer opposite the first oxide layer;
bonding a non-conductive structure to a surface of the second oxide layer opposite the first metal layer;
patterning an antenna on the non-conductive structure;
creating a cavity in the first substrate to the first oxide layer;
depositing a second metal layer on a surface of the cavity;
patterning an iris in the second metal layer to expose the second oxide layer;
forming a metal layer on a second substrate; and
bonding the second substrate to the first substrate to thereby seal the cavity.

2. The method of claim 1, further comprising, before creating the cavity, forming a third oxide layer on the surface of the second oxide layer and over the antenna.

3. The method of claim 2, further comprising removing the third oxide layer after creating the cavity.

4. The method of claim 1, further comprising depositing and patterning an electronic bandgap structure on the non-conductive structure.

5. The method of claim 1, wherein creating the cavity comprises wet etching the cavity.

6. The method of claim 5, wherein the wet etching uses at least one of potassium hydroxide (KOH) and tetramethylammonium hydroxide (TMAH) as a wet etchant.

7. The method of claim 1, wherein bonding the second substrate to the first substrate comprises depositing a eutectic alloy on a surface of at least one of the first and second substrates.

8. The method of claim 1, wherein bonding the non-conductive structure to the surface of the second oxide layer comprises bonding a glass sheet to the second oxide layer.

9. The method of claim 1, wherein the first and second substrates comprise semiconductor wafers.

10. The method of claim 1, wherein the non-conductive structure comprises glass and the first and second substrates comprise semiconductor wafers.

11. A device, comprising:
a first substrate that includes a cavity;
a first oxide layer on a surface of the first substrate;
a first metal layer on a surface of the first oxide layer opposite the first substrate;
a second oxide layer on a surface of the first metal layer opposite the first oxide layer;
a non-conductive structure bonded to a surface of the second oxide layer opposite the first metal layer;
a first antenna patterned on a surface of the non-conductive structure opposite the second oxide layer; and
a second substrate bonded to the first substrate to thereby seal the cavity;
wherein the cavity extends from an interface between the first and second substrates to the second oxide layer.

12. The device of claim 11, further comprising an electronic bandgap structure on a surface of the non-conductive structure.

13. The device of claim 11, wherein the cavity includes dipolar molecules.

14. The device of claim 13, wherein the dipolar molecules are water molecules and the cavity has a pressure of less than 0.15 mbar.

15. The device of claim 11, wherein the non-conductive structure comprises at least one of glass, ceramic, and silicon.

16. The device of claim 11, wherein the first substrate comprises at least one of a semiconductor wafer, a ceramic, and a metal, and wherein the second substrate comprises at least one of a semiconductor substrate, a ceramic, and a metal.

17. The device of claim 11, wherein the non-conductive structure comprises glass and each of the first and second substrates comprises a semiconductor substrate.

18. The device of claim 11, further comprising an amplifier, a filter, a signal generator, and a second antenna patterned on the surface of the non-conductive structure opposite the second oxide layer, wherein:
the signal generator is coupled to the first antenna and is configured to generate a transmit signal to the first antenna;
the amplifier is coupled to the second antenna and is configured to generate an error signal based on a receive signal from the second antenna and the transmit signal; and
the filter is coupled to the amplifier and the signal generator, and is configured to generate a control output signal, based on the error signal, to adjust a frequency of the transmit signal generated by the signal generator.

19. A device, comprising:
a first semiconductor substrate that includes a cavity;
a first oxide layer on a surface of the first semiconductor substrate;
a first metal layer on a surface of the first oxide layer opposite the first semiconductor substrate;
a second oxide layer on a surface of the first metal layer opposite the first oxide layer;
a glass sheet bonded to a surface of the second oxide layer opposite the first metal layer;
first and second antennas patterned on a surface of the glass sheet opposite the second oxide layer;
a second semiconductor substrate bonded to the first semiconductor substrate to thereby seal the cavity; and
a transceiver electrically coupled to the first and second antennas and configured to inject a transmit signal into the cavity through the first antenna, generate an error signal based on the transmit signal and a receive signal from the second antenna, and dynamically adjust a frequency of the transmit signal based on the error signal;
wherein the cavity contains dipolar molecules and has an internal pressure of less than 0.15 mbar.

20. The device of claim 19, wherein the transceiver includes:
a signal generator coupled to the first antenna and configured to generate the transmit signal;
an amplifier coupled to the second antenna and configured to generate the error signal; and
a loop filter coupled to the amplifier and the signal generator, wherein the loop filter is configured to, based on the error signal, generate a control output signal to the signal generator.
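As a rough visual aid (not part of the claims), the layer stack recited in claims 11 and 19 can be modeled as a list of records from the sealing substrate upward. All thickness values below are illustrative assumptions, except the 130-micrometer glass sheet mentioned in the detailed description:

```python
# Minimal sketch of the claimed layer stack; thicknesses are placeholders.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    material: str
    thickness_um: float  # nominal thickness in micrometers (assumed)

STACK = [
    Layer("second substrate (seals cavity)", "silicon wafer", 500.0),
    Layer("first substrate (contains cavity)", "silicon wafer", 500.0),
    Layer("first oxide layer", "SiO2", 1.0),
    Layer("first metal layer (ground plane)", "Cu", 1.0),
    Layer("second oxide layer", "SiO2", 1.0),
    Layer("non-conductive structure", "glass", 130.0),  # per the description
    Layer("antennas / EBG structure", "Cu or Au", 0.5),
]

total = sum(layer.thickness_um for layer in STACK)
print(f"stack height ~ {total:.1f} um")
```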
HERMETICALLY SEALED MOLECULAR SPECTROSCOPY CELL WITH DUAL WAFER BONDING

BACKGROUND

[0001] Various applications may include a sealed chamber formed in a semiconductor structure. In one particular application, a chip-scale atomic clock may include a selected vapor at a low pressure in a sealed chamber. Forming such structures can be a challenge.

SUMMARY

[0002] In one embodiment, a method includes forming layers of an oxide and a metal on a substrate. For example, the layers may include a metal layer sandwiched between silicon oxide layers. A non-conductive structure, such as glass, is then bonded to one of the oxide layers. An antenna can then be patterned on the non-conductive structure, and a cavity can be created in the substrate. Another metal layer is deposited on the surface of the cavity, and an iris is patterned in the metal layer to expose the one of the oxide layers. Another metal layer is formed on a second substrate and the two substrates are bonded together to thereby seal the cavity. The method also may include the deposition or bonding of further dielectric and metal layers, and their subsequent patterning on the topmost surface, to improve the radio frequency (RF) performance of the antenna, transmission line structures, and electromagnetic bandgap structures.

[0003] In another embodiment, a device includes a first substrate that includes a cavity. The device also includes a first oxide layer on a surface of the first substrate, a first metal layer on a surface of the first oxide layer opposite the first substrate, and a second oxide layer on a surface of the first metal layer opposite the first oxide layer. The device further includes a non-conductive structure bonded to a surface of the second oxide layer opposite the first metal layer, a first antenna patterned on a surface of the non-conductive structure opposite the second oxide layer, and a second substrate bonded to the first substrate to thereby seal the cavity. The cavity in this embodiment extends from an interface between the first and second substrates to the second oxide layer.

[0004] In yet another embodiment, a device includes a first semiconductor substrate in which a cavity has been formed. The device also includes a first oxide layer on a surface of the first semiconductor substrate, a first metal layer on a surface of the first oxide layer opposite the first semiconductor substrate, and a second oxide layer on a surface of the first metal layer opposite the first oxide layer. The device further includes a glass sheet bonded to a surface of the second oxide layer opposite the first metal layer, first and second antennas patterned on a surface of the glass sheet opposite the second oxide layer, a second semiconductor substrate bonded to the first semiconductor substrate to thereby seal the cavity, and a transceiver electrically coupled to the first and second antennas and configured to inject a transmit signal into the cavity through the first antenna. The cavity contains dipolar molecules and has an internal pressure of less than 0.15 mbar, for example. The transceiver is configured also to generate an error signal based on the transmit signal and a receive signal from the second antenna, and to dynamically adjust a frequency of the transmit signal based on the error signal.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIGS. 1A-1I illustrate a sequence of processing operations in one embodiment to form a hermetically sealed cavity.

[0006] FIG. 2 illustrates a flow chart of a method to form a hermetically sealed cavity in accordance with various embodiments.

[0007] FIG. 3 shows a cross-sectional view of the hermetically sealed cavity of various embodiments.

[0008] FIG. 4 shows a block diagram for a clock generator in accordance with various embodiments.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0009] In this description, the term "couple" or "couples" means either an indirect or direct wired or wireless connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections. Also, in this description, the recitation "based on" means "based at least in part on." Therefore, if X is based on Y, then X may be a function of Y and any number of other factors.

[0010] The described embodiments include techniques to fabricate a hermetically sealed cavity in a substrate. A structure containing a substrate with the cavity may be used in numerous applications. One illustrative use is as a millimeter wave chip scale atomic clock. The cavity may contain dipolar molecules (e.g., water molecules) at a relatively low pressure. For some embodiments, the pressure may be approximately 0.1 mbar for water molecules. If argon molecules were used, the pressure may be several atmospheres. The hermetically sealed cavity may contain selected dipolar molecules at a pressure chosen to optimize the amplitude of a signal absorption peak of the molecules detected at an output of the cavity. An electromagnetic signal may be injected through an aperture into the cavity. Through closed-loop control, the frequency of the signal is dynamically adjusted to match the frequency corresponding to the absorption peak of the molecules in the cavity. The frequency produced by quantum rotation of the selected dipolar molecules may be unaffected by circuit aging and may not vary with temperature or other environmental factors.

[0011] A variety of materials and manufacturing operations can be employed. One illustrative method may include forming layers of an oxide and a metal on a substrate. For example, the layers may include a metal layer sandwiched between silicon oxide layers. A non-conductive structure, such as glass, is then bonded to one of the oxide layers. An antenna can then be patterned on the non-conductive structure, and a cavity can be created in the substrate. Another metal layer is deposited on the surface of the cavity, and an iris is patterned in the metal layer to expose the one of the oxide layers. Another metal layer is formed on a second substrate and the two substrates are bonded together to thereby seal the cavity.

[0012] FIGS. 1A-1I illustrate a sequence of process steps to fabricate a hermetically sealed cavity in accordance with an embodiment. At FIG. 1A, a first oxide layer 102 is formed on a first substrate 120. A first metal layer 104 is formed on a surface of the first oxide layer 102 opposite the first substrate 120. The first metal layer 104 may comprise copper or another suitable metal. A second oxide layer 106 is formed on a surface of the first metal layer 104 opposite the first oxide layer 102. The oxide layers may comprise silicon oxide, and layers 102-106 may be formed in accordance with any suitable semiconductor process operations.
The substrate 120 is a semiconductor substrate (e.g., silicon) in some embodiments, but can be other than a semiconductor substrate in other embodiments, such as a ceramic material or a metallic cavity.

[0013] At FIG. 1B, a non-conductive structure 108 is bonded to a surface of the second oxide layer 106 opposite the first metal layer 104. In one example, the non-conductive structure comprises glass (e.g., 130 micrometers thick), but can include other types of materials such as ceramic or silicon in other embodiments. The process to bond the non-conductive structure 108 to the second oxide layer 106 may comprise an anodic, fusion, eutectic solder, transition liquid phase (TLP), cofiring, or other suitable bonding process.

[0014] FIG. 1C illustrates that an antenna 110 has been patterned on a surface of the non-conductive structure 108. The antenna 110 comprises a conductive material such as copper or gold, and an electrical signal can be provided to the antenna or received from the antenna. In some embodiments, one antenna is used to both transmit and receive signals. In other embodiments, a pair of antennas is patterned on the non-conductive structure 108, and one antenna is used to inject a signal into the cavity and another antenna is used to receive a signal from the cavity. In such examples, the antennas may be located at or near opposite ends of the cavity. FIG. 1D illustrates that an oxide layer 115 is formed on a surface of the non-conductive structure 108. Oxide layer 115 also covers the antenna 110 and functions to protect the antenna during subsequent process operations. FIG. 1D also illustrates that a cavity 125 has been created in the substrate 120. The cavity 125 may be wet etched into the substrate 120 using a suitable wet etchant such as potassium hydroxide (KOH) or tetramethylammonium hydroxide (TMAH). Alternatively, the cavity 125 can be formed by way of reactive-ion etching (RIE), deep reactive-ion etching (DRIE), or isotropic etching. The cavity 125 is etched from the surface 126 of the substrate 120 opposite the first oxide layer 102 to the first oxide layer 102, thereby exposing a portion of the first oxide layer 102. FIG. 1E illustrates that another metal layer 130 has been deposited on a surface of the substrate 120 opposite the first oxide layer 102. The metal layer 130 also is deposited in the cavity 125 as shown and may be sputter deposited (e.g., 40 nm TaN per micrometer of copper).

[0015] FIG. 1F illustrates that an iris 140 is created in the metal layer 130 within the cavity 125. The iris 140 is patterned (e.g., by wet etching, dry etching, liftoff, etc.) in the metal layer 130 and exposes at least a portion of the first oxide layer 102. The iris 140 permits RF energy from the incident radio frequency (RF) signal provided by the antenna 110 to penetrate through the iris 140 and into the cavity 125, and back out again through another iris formed in the cavity and associated with another antenna (described above).

[0016] FIG. 1G shows a second substrate 150 and a metal layer 152 formed thereon. The substrate 150 may comprise the same or different material as substrate 120. In one example, the substrate 150 comprises a semiconductor substrate such as a silicon wafer, but can be other than a semiconductor material in other examples. FIGS. 1H and 1I illustrate that bonding structures 155 are deposited and patterned on either or both of the substrates 120 and 150. In one example, the bonding structures comprise gold, aluminum, silicon, or other materials that form an alloy when heated to a suitable temperature. FIG. 1I illustrates the resulting device, which includes a hermetically sealed cavity. Dipolar molecules (e.g., water molecules) may be trapped inside the cavity 125 at an internal pressure of less than approximately 0.15 mbar (e.g., 0.1 mbar).

[0017] The flow chart of FIG. 2 illustrates a method in accordance with an example. The operations may be performed in the order shown, or in a different order. Further, the operations may be performed sequentially, or two or more of the operations may be performed concurrently.

[0018] At 202, the method includes forming a first oxide layer on a first substrate (e.g., a semiconductor substrate such as a wafer). The illustrative method then includes (204) forming a first metal layer (e.g., copper) on a surface of the first oxide layer opposite the first substrate. At 206, the method includes forming a second oxide layer on a surface of the first metal layer opposite the first oxide layer. Accordingly, a metal layer is created sandwiched between two oxide layers.

[0019] At 208, the method includes bonding a non-conductive structure (e.g., glass) to a surface of the second oxide layer opposite the first metal layer, and at 210 patterning an antenna (e.g., antenna 110) on the non-conductive structure. A cavity is then created at 212 (e.g., by a wet etching process) in the substrate. The cavity extends from one surface of the substrate to the opposing surface (and thus to the first oxide layer).

[0020] At 214, a second metal layer is deposited on the interior surface of the cavity and on a surface of the first substrate outside the cavity. At 216, the method includes patterning an iris in the second metal layer to expose the second oxide layer. At 218, a metal layer is formed on a second substrate (e.g., another semiconductor wafer), and the first and second substrates are then bonded together at 220 to thereby seal the cavity. In one embodiment, the substrates are bonded via eutectic bonds or other suitable bonding techniques.

[0021] FIG. 3 shows a cross-sectional view of a structure in accordance with the described embodiments. The structure may comprise a millimeter wave chip scale atomic clock. Substrate 120 is shown bonded to substrate 150, with a hermetically sealed cavity 125 formed in the substrate 120 and sealed at least in part by substrate 150. The non-conductive structure (e.g., glass) 108 is shown bonded to the substrate 120. A launch structure 295 may comprise the antenna 110 described above and also a transmission line, and electromagnetic energy is permitted to pass through the non-conductive structure 108 from the launch structure 295 into the cavity 125. An electronic bandgap (EBG) structure 290 also is shown deposited and patterned on a surface of the non-conductive structure 108. In operation, the EBG structure 290 attenuates electromagnetic wave coupling along the outer surface of the non-conductive structure 108 between the antennas. The EBG structure 290 helps to force the energy from the input signal received through an antenna (e.g., antenna 110) into the cavity 125. Layer 104 provides a common ground plane for all RF structures external to the cavity 125. In addition, it limits propagation of waves travelling in layer 120.
The dimensions of the waveguide, antenna, and EBG structure, and the size and positioning of the iris 140, are all design considerations based on the chosen molecular species inside the cavity and the wavelength of the interrogation waveform within the cavity. The required bandwidth of the structure depends upon the fabrication tolerances achievable in manufacturing.

[0022] FIG. 4 shows a block diagram for a clock generator 500 in accordance with various embodiments. The clock generator 500 is a millimeter wave atomic clock that generates a reference frequency based on the frequency of quantum rotation of selected dipolar molecules contained in a hermetically sealed cavity 508 formed in semiconductor material. The reference frequency produced by quantum rotation of the selected dipolar molecules is unaffected by circuit aging and does not vary with temperature or other environmental factors.

[0023] The clock generator 500 of FIG. 4 includes a vapor cell 505 formed in this example from substrates as described above. The cell 505 includes a cavity 508 with a sealed interior enclosing a dipolar molecule material gas, such as water (H2O) or any other dipolar molecule gas, at a relatively low gas pressure inside the cavity 508. Suitable electrical dipolar material gases include water, acetonitrile (CH3CN) and hydrogen cyanide (HCN). As shown in FIG. 4, the clock generator 500 further includes a transceiver 600 with a transmit output 633 for providing an electrical transmit signal (TX) to the vapor cell 505, as well as a receiver input 638 for receiving an electrical input signal (RX) from the vapor cell 505. The rotational transition vapor cell 505 does not require optical interrogation, and instead operates through electromagnetic interrogation via the transmit and receive signals (TX, RX) provided by the transceiver 600.

[0024] The sealed cavity 508 includes a conductive interior cavity surface, as well as first and second non-conductive apertures 515 and 517 formed in the interior cavity surface for providing an electromagnetic field entrance and an electromagnetic field exit, respectively. In one example, the apertures 515, 517 magnetically couple into the TE10 mode of the cavity 508. In other examples, the apertures 515, 517 excite higher order modes. First and second conductive coupling structures 520 and 525 are formed on an outer surface of the vapor cell 505 proximate the first and second non-conductive apertures 515 and 517, respectively. The coupling structures 520, 525 may be the antenna(s) described above and may comprise a conductive strip formed on a surface of one of the substrates forming the cell 505. Each coupling structure 520, 525 may overlie and cross over the corresponding non-conductive aperture 515, 517 for providing an electromagnetic interface to couple a magnetic field into the cavity 508 (based on the transmit signal TX from the transceiver output 633) or from the cavity to the transceiver RX input 638. The proximate location of the conductive coupling structures 520, 525 and the corresponding non-conductive apertures 515, 517 advantageously provides electromagnetically transmissive paths through the second or upper substrate 106, which can be any electromagnetically transmissive material.

[0025] The transceiver circuit 600 in certain implementations is implemented on or in an integrated circuit (not shown), to which the vapor cell 505 is electrically coupled for transmission of the TX signal via the output 633 and for receipt of the RX signal via the input 638. The transceiver 600 is operable when powered for providing an alternating electrical output signal TX to the first conductive coupling structure 520 for coupling an electromagnetic field to the interior of the cavity 508, as well as for receiving the alternating electrical input signal RX from the second conductive coupling structure 525 representing the electromagnetic field received from the cavity 508. The transceiver circuit 600 is operable for selectively adjusting the frequency of the electrical output signal TX, in order to reduce the electrical input signal RX by interrogation, to operate the clock generator 500 at a frequency that substantially maximizes the molecular absorption through quantum rotational state transitions, and for providing a reference clock signal REF CLK at the frequency of the TX output signal.

[0026] In certain examples, the transceiver 600 includes a signal generator 602 with an output 633 electrically coupled with the first conductive coupling structure 520 for providing the alternating electrical output signal TX, and for providing the reference clock signal REF CLK at the corresponding transmit output frequency. The transceiver 600 also includes a lock-in amplifier circuit 606 with an input 638 coupled to the second conductive coupling structure 525 for receiving the RX signal. The lock-in amplifier operates to provide an error signal ERR representing a difference between the RX signal and the electrical output signal TX. In one example, the lock-in amplifier 606 provides the error signal ERR as an in-phase output, and the error signal ERR is used as an input by a loop filter 604 to provide a control output signal (CO) to the signal generator 602 for selectively adjusting the TX output signal frequency to maintain this frequency at a peak absorption frequency of the dipolar molecular gas inside the sealed interior of the cavity 508. In some examples, the RF power of the TX and RX loop is controlled so as to avoid or mitigate Stark shift effects.

[0027] The electromagnetic coupling via the non-conductive apertures 515, 517 and corresponding conductive coupling structures 520, 525 facilitates electromagnetic interrogation of the dipolar gas within the cell cavity 508. In one example of operation, the clock generator 500 operates with the signal generator 602 transmitting alternating current (AC) TX signals at full transmission power at various frequencies within a defined band around a suspected quantum absorption frequency at which the transmission efficiency of the vapor cell 505 is minimal (absorption is maximal). For example, the quantum absorption frequency associated with the dipolar water molecule is 183.31 GHz. When the system operates at the quantum frequency, a null or minimum is detected at the receiver via the lock-in amplifier 606, which provides the error signal ERR to the loop filter 604 for regulation of the TX output signal frequency via the control output CO signal provided to the signal generator 602. The rotational quantum frequency of the dipolar molecule gas in the vapor cell cavity 508 is generally stable with respect to time (it does not degrade or drift over time), and is largely independent of temperature and a number of other variables.

[0028] In one embodiment, the signal generator 602 initially sweeps the transmission output frequency through a band known to include the quantum frequency of the cell 505 (e.g., transitioning upward from an initial frequency below the suspected quantum frequency, or initially transitioning downward from an initial frequency above the suspected quantum frequency, or another suitable sweeping technique or approach). The transceiver 600 monitors the received energy via the input 638 coupled with (e.g., electrically connected to) the second conductive coupling structure 525 in order to identify the transmission frequency associated with peak absorption by the gas in the cell cavity 508 (e.g., minimal reception at the receiver). Once the quantum absorption frequency is identified, the loop filter 604 moves the source signal generator transmission frequency close to that absorption frequency (e.g., 183.31 GHz), and modulates the signal at a very low frequency to regulate operation around the null or minimum in the transmission efficiency representing the ratio of the received energy to the transmitted energy. The loop filter 604 provides negative feedback in a closed loop operation to maintain the signal generator 602 operating at a TX frequency corresponding to the quantum frequency of the cavity dipolar molecule gas.

[0029] In steady state operation, the lock-in amplifier 606 and the loop filter 604 maintain the transmitter frequency at the peak absorption frequency of the cell gas. In one version, the loop filter 604 provides proportional-integral-derivative (PID) control using a derivative of the frequency error as a control factor for lock-in detection and closed loop regulation. At the bottom of the null in a transmission coefficient curve, the derivative is zero and the loop filter 604 provides the derivative back as a direct current (DC) control output signal CO to the signal generator 602. This closed loop operates to keep the signal generator transmission output frequency at the peak absorption frequency of the cell gas using lock-in differentiation based on the RX signal received from the cell 505. The REF CLK signal from the signal generator 602 is the TX signal clock and can be provided to other circuitry such as frequency dividers and other control circuits requiring use of a clock.

[0030] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
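The closed-loop interrogation of paragraphs [0026]-[0029] can be summarized in a hedged numerical sketch: sweep the TX frequency through a band containing the 183.31 GHz water line, find the transmission minimum, then regulate around the null by driving the local slope (derivative) of the transmission curve toward zero. The Lorentzian line shape, linewidth, dither depth, and loop gain below are illustrative assumptions, not values from the disclosure:

```python
# Toy model of the frequency-lock loop: coarse sweep, then slope-nulling.
import numpy as np

F0_GHZ = 183.31        # water absorption line (from the description)
LINEWIDTH_GHZ = 0.002  # assumed full linewidth of the dip

def transmission(f_ghz):
    """Assumed transmission coefficient with a Lorentzian absorption dip."""
    x = (f_ghz - F0_GHZ) / (LINEWIDTH_GHZ / 2)
    return 1.0 - 0.8 / (1.0 + x * x)

# 1) Coarse sweep through a band known to contain the quantum frequency.
band = np.linspace(F0_GHZ - 0.05, F0_GHZ + 0.05, 2001)
f_tx = band[np.argmin(transmission(band))]

# 2) Closed-loop regulation: estimate the slope with a small low-frequency
#    modulation (dither) and integrate it back into the TX frequency; at the
#    bottom of the null the derivative is zero, so the loop settles there.
dither = 1e-4  # GHz, assumed modulation depth
gain = 5e-7    # integral gain, chosen small enough for stability here
for _ in range(200):
    slope = (transmission(f_tx + dither)
             - transmission(f_tx - dither)) / (2 * dither)
    f_tx -= gain * slope  # drive the slope toward zero

print(f"locked TX frequency: {f_tx:.5f} GHz (target {F0_GHZ} GHz)")
```

Running the sketch settles within a fraction of the assumed linewidth of the dip, mirroring the negative-feedback behavior the loop filter 604 provides in the described embodiments.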
Embodiments of the present invention provide an apparatus for synchronizing a data handover between a first clock domain and a second clock domain. The apparatus includes a calculator, a first-in-first-out storage, a synchronization pulse generator, a fill level information provider and a feedback path. The calculator is clocked with the clock of the first clock domain and configured to provide synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain. The first-in-first-out storage is configured to receive an input data value in synchronization with the first clock domain and to provide an output data value in synchronization with the second clock domain and in response to a current synchronization pulse. The synchronization pulse generator is clocked with the clock of the second clock domain and configured to generate the subsequent synchronization pulse such that the subsequent synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information. The fill level information provider is configured to provide fill level information describing a fill level of the first-in-first-out storage. The feedback path is configured to feed back the fill level information to the calculator, which is further configured to adjust the synchronization pulse cycle duration information based on the fill level information.
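As an informal behavioral illustration (not the claimed implementation), the interplay of the elements named in the abstract can be sketched in Python. The class and method names, and the simple proportional adjustment rule, are assumptions:

```python
# Behavioral toy: calculator + FIFO + synchronization pulse feedback loop.
from collections import deque

class HandoverModel:
    def __init__(self, freq_ratio, target_fill=2):
        self.fifo = deque()               # first-in-first-out storage
        self.freq_ratio = freq_ratio      # f2/f1: clk2 clocks per clk1 clock
        self.target_fill = target_fill    # desired steady-state fill level
        self.cycle_duration = freq_ratio  # sync pulse cycle duration info

    def clk1_write(self, data):
        """First clock domain: the FIFO receives an input data value."""
        self.fifo.append(data)
        # Calculator: adjust the cycle duration from the fed-back fill
        # level (feedback path); proportional rule is an assumption.
        fill_level = len(self.fifo)
        self.cycle_duration = (self.freq_ratio
                               + 0.5 * (fill_level - self.target_fill))

    def sync_pulse_read(self):
        """Second clock domain: a synchronization pulse releases one value."""
        return self.fifo.popleft() if self.fifo else None
```

A fill level above the target lengthens the pulse cycle duration (reads catch up); a fill level below it shortens the duration, which is the stabilizing feedback the abstract describes.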
CLAIMS
In the claims:

1. An apparatus for synchronizing a data handover between a first clock domain and a second clock domain, the apparatus comprising:
a calculator clocked with the clock of the first clock domain and configured to provide a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain;
a first-in-first-out storage configured to receive an input data value in synchronization with the first clock domain and provide an output data value in synchronization with the second clock domain and in response to a current synchronization pulse;
a synchronization pulse generator clocked with the clock of the second clock domain and configured to generate the subsequent synchronization pulse such that the subsequent synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information;
a fill level information provider configured to provide a fill level information describing a fill level of the first-in-first-out storage; and
a feedback path configured to feed back the fill level information to the calculator;
wherein the calculator is configured to adjust the synchronization pulse cycle duration information based on the fill level information.

2. The apparatus according to claim 1, wherein the first-in-first-out storage is configured to receive the synchronization pulse cycle duration information in synchronization with the first clock domain and provide the synchronization pulse cycle duration information in synchronization with the second clock domain and in response to the current synchronization pulse.

3. The apparatus according to claim 2, wherein the synchronization pulse generator is configured to receive the synchronization pulse cycle duration information from the first-in-first-out storage in synchronization with the second clock domain and in response to the current synchronization pulse.

4. The apparatus according to claim 1, wherein the calculator is configured to provide the synchronization pulse cycle duration information such that the synchronization pulse cycle duration information represents a number of clocks in the second clock domain between the current synchronization pulse and the subsequent synchronization pulse.

5. The apparatus according to claim 4, wherein the first-in-first-out storage is configured to receive the synchronization pulse cycle duration information and the input data in one clock cycle of the first clock domain, and provide the synchronization pulse cycle duration information and the output data in one clock cycle of the second clock domain; and wherein the synchronization pulse cycle duration information describes the number of clocks in the second clock domain for which the output data is valid.

6. The apparatus according to claim 4, further comprising a counter clocked with the clock of the second clock domain and configured to count the clocks in the second clock domain and provide a counter reading; wherein the synchronization pulse generator is configured to generate the subsequent synchronization pulse based on the counter reading such that the subsequent synchronization pulse is located at the temporal position described by the number of clocks in the second clock domain that is represented by the synchronization pulse cycle duration information, and wherein the counter reading is set to an initial value in response to the generation of the subsequent synchronization pulse.

7. The apparatus according to claim 6, wherein the apparatus is configured to set the counter reading to the number of clocks in the second clock domain represented by the synchronization pulse cycle duration information in response to the current synchronization pulse, and wherein the counter is configured to count down the counter reading from the set counter reading in synchronization with the clock of the second clock domain; and wherein the synchronization pulse generator is configured to compare the counter reading with a predefined number and generate the subsequent synchronization pulse when the predefined number is equal to the counter reading.

8. The apparatus according to claim 6, wherein the counter is configured to count up the clocks in the second clock domain from the set counter reading in synchronization with the clock of the second clock domain; and wherein the synchronization pulse generator is configured to compare the counter reading with the number of clocks in the second clock domain represented by the synchronization pulse cycle duration information and generate the subsequent synchronization pulse when the counter reading is equal to the number of clocks in the second clock domain represented by the synchronization pulse cycle duration information.

9. The apparatus according to claim 1, wherein the first-in-first-out storage comprises a plurality of storage cells, wherein the first-in-first-out storage is configured to receive the input data value into a storage cell of the plurality of storage cells indicated by a write pointer value, and wherein the first-in-first-out storage is configured to provide the output data value from another storage cell of the plurality of storage cells indicated by a read pointer value; and wherein the fill level information provider comprises a first register for sampling the write pointer value and a second register for sampling the read pointer value, wherein the fill level information provider is configured to combine the sampled write pointer value and the sampled read pointer value in order to obtain a fill level value describing the fill level of the first-in-first-out storage and provide the fill level information such that the fill level information represents the fill level value.

10. The apparatus according to claim 9, wherein the fill level information provider is configured to sum or average a plurality of fill level values in order to obtain a summed or averaged fill level value describing an average fill level of the first-in-first-out storage, and to provide the fill level information such that the fill level information represents the summed or averaged fill level value.

11. The apparatus according to claim 1, wherein the clock of the second clock domain is modulated, and wherein the calculator is configured to adjust the synchronization pulse cycle duration information based on modulation data describing the modulation of the clock of the second clock domain.

12. The apparatus according to claim 1, wherein the calculator comprises a controller configured to regulate the synchronization pulse cycle duration information to bring the fill level information towards a predetermined target fill level information.

13. The apparatus according to claim 12, wherein the calculator is configured to combine an output value of the controller with a frequency ratio value describing a frequency ratio between a clock frequency of the second clock domain and a clock frequency of the first clock domain to obtain the synchronization pulse cycle duration information.

14. The apparatus according to claim 1, further comprising:
a first data processor clocked with the clock of the first clock domain and configured to process an input information such that the input data value is provided in synchronization with the first clock domain for the first-in-first-out storage; and
a second data processor clocked with the clock of the second clock domain and configured to receive the output data value in synchronization with the second clock domain and in response to the synchronization pulse from the first-in-first-out storage, and further configured to process the output data value such that an output information is provided in synchronization with the second clock domain.

15. An apparatus for synchronizing a data handover between a first clock domain and a second clock domain, the apparatus comprising:
a calculator clocked with the clock of the first clock domain and configured to provide a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain;
a first-in-first-out storage configured to receive the synchronization pulse cycle duration information and an input data value in synchronization with the first clock domain and provide the synchronization pulse cycle duration information and an output data value in synchronization with the second clock domain and in response to a current synchronization pulse;
a synchronization pulse generator clocked with the clock of the second clock domain and configured to receive the synchronization pulse cycle duration information from the first-in-first-out storage and generate the subsequent synchronization pulse such that the subsequent synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information;
a fill level information provider configured to provide a fill level information describing a fill level of the first-in-first-out storage; and
a feedback path configured to feed back the fill level information to the calculator;
wherein the calculator is configured to adjust the synchronization pulse cycle duration information based on the fill level information.

16. The apparatus according to claim 15, wherein the calculator is configured to provide the synchronization pulse cycle duration information such that the synchronization pulse cycle duration information represents a number of clocks in the second clock domain between the current synchronization pulse and the subsequent synchronization pulse.

17. The apparatus according to claim 15, further comprising a counter clocked with the clock of the second clock domain and configured to count the clocks in the second clock domain and to provide a counter reading; wherein the synchronization pulse generator is configured to generate the subsequent synchronization pulse based on the counter reading such that the subsequent synchronization pulse is located at the temporal position described by the number of clocks in the second clock domain that is represented by the synchronization pulse cycle duration information, and wherein the counter reading is set to an initial value in response to the generation of the subsequent synchronization pulse.

18. The apparatus according to claim 15, wherein the calculator comprises a controller configured to regulate the synchronization pulse cycle duration information to bring the fill level information towards a predetermined target fill level information.

19. The apparatus according to claim 18, wherein the calculator is configured to combine an output value of the controller with a frequency ratio value describing a frequency ratio between a clock frequency of the second clock domain and a clock frequency of the first clock domain to obtain the synchronization pulse cycle duration information.

20. The apparatus according to claim 15, further comprising:
a first data processor clocked with the clock of the first clock domain and configured to process an input information such that the input data value is provided in synchronization with the first clock domain for the first-in-first-out storage; and
a second data processor clocked with the clock of the second clock domain and configured to receive the output data value in synchronization with the second clock domain and in response to the synchronization pulse from the first-in-first-out storage, and further configured to process the output data value such that an output information is provided in synchronization with the second clock domain.

21. An apparatus for synchronizing a data handover between a first clock domain and a second clock domain, the apparatus comprising:
means for calculating clocked with the clock of the first clock domain and configured to provide a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain;
means for first-in-first-out storing configured to receive an input data value in synchronization with the first clock domain and provide an output data value in synchronization with the second clock domain and in response to a current synchronization pulse;
means for generating synchronization pulses clocked with the clock of the second clock domain and configured to generate the subsequent synchronization pulse such that the subsequent synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information;
means for providing a fill level information configured to provide a fill level information describing a fill level of the first-in-first-out storage; and
means for feeding back the fill level information to the means for calculating;
wherein the means for calculating is configured to adjust the synchronization pulse cycle duration information based on the fill level information.

22. A method for synchronizing a data handover between a first clock domain and a second clock domain, the method comprising:
providing in the first clock domain a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain;
receiving an input data value in synchronization with the first clock domain and providing an output data value in synchronization with the second clock domain and in response to a current synchronization pulse with a first-in-first-out storage;
generating in the second clock domain the synchronization pulse such that the synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information;
providing in the second clock domain a fill level information describing a fill level of the first-in-first-out storage; and
feeding back the fill level information to the first clock domain to adjust the synchronization pulse cycle duration information based on the fill level information.

23. The method according to claim 22, wherein the synchronization pulse cycle duration information is provided such that the synchronization pulse cycle duration information represents, in the form of a coded numeric value, a number of clocks in the second clock domain.

24. The method according to claim 23, comprising counting the clocks of the second clock domain and providing a counter reading, wherein the synchronization pulse is generated based on the counter reading such that a temporal position of the synchronization pulse is adjusted based on the synchronization pulse cycle duration information, and wherein the counter reading is set to an initial value in response to a generation of the synchronization pulse.

25. The method according to claim 22, further comprising:
processing an input information such that the input data value is provided in synchronization with the first clock domain for the first-in-first-out storage;
receiving the output data value in synchronization with the second clock domain and in response to the synchronization pulse from the first-in-first-out storage; and
processing the output data value such that an output information is provided in synchronization with the second clock domain.

26. A computer program having a program code stored on a non-transitory storage medium, for performing a method for synchronizing a data handover between a first clock domain and a second clock domain when the computer program is running on a computer or microprocessor, wherein the method comprises:
providing in the first clock domain a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain;
receiving an input data value in synchronization with the first clock domain and providing an output data value in synchronization with the second clock domain and in response to a current synchronization pulse with a first-in-first-out storage;
generating in the second clock domain the synchronization pulse such that the synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information;
providing in the second clock domain a fill level information describing a fill level of the first-in-first-out storage; and
feeding back the fill level information to the first clock domain to adjust the synchronization pulse cycle duration information based on the fill level information.

27. Apparatus for synchronizing a data handover between a first clock domain and a second clock domain, the apparatus comprising:
a memory used from the first clock domain and used from the second clock domain, wherein each address of the memory is associated with at least one data word and a validity word which describes a validity time of the at least one data word;
wherein the apparatus is configured to determine the validity word based on a fill level of the memory.

28. Apparatus according to claim 27, wherein the apparatus is configured to increment or decrement a write access pointer when at least one new data word corresponding to an actual address has been written.

29. Apparatus according to claim 27, wherein the apparatus is configured to increment or decrement a read access pointer describing where at least one data word corresponding to a new address is read, when the validity time of a previously read at least one data word has expired.

30. Apparatus according to claim 27, wherein the apparatus is configured to compute the fill level on the basis of a write access pointer and a read access pointer.

31. Apparatus according to claim 27, wherein the apparatus is configured to compute the validity word such that the fill level stays within a predefined boundary.

32. Apparatus according to claim 31, wherein the boundary is chosen such that a simultaneous read and write of a same address is avoided.

33. Apparatus according to claim 27, wherein the apparatus comprises a timer circuit operated by the first clock or the second clock, wherein the timer circuit is configured to receive the validity word from the memory to initiate a read operation together with an incrementation or decrementation of a read access pointer after a time period derived from the validity word.

34. Apparatus according to claim 27, wherein the apparatus comprises:
a calculator clocked with the clock of the first clock domain and configured to provide a synchronization pulse timing information describing a temporal position of a synchronization pulse at a clock of the second clock domain;
a synchronization pulse generator clocked with the clock of the second clock domain and configured to generate the synchronization pulse in dependence on the synchronization pulse timing information;
a phase information provider clocked with the clock of the second clock domain and configured to provide a phase information describing a phase relation between the synchronization pulse and the clock of the first clock domain; and
a feedback path for feeding back the phase information to the calculator;
wherein the calculator is configured to adjust the synchronization pulse timing information based on the phase information provided on the feedback path.

35. Apparatus according to claim 34, wherein the apparatus is configured to read the memory in response to the synchronization pulse.
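Claims 6 and 7 describe a count-down realization of the synchronization pulse generator. A minimal sketch of that counting scheme follows; the compare value of 0 and the class interface are assumptions for illustration (claim 7 only recites "a predefined number"):

```python
# Count-down variant of the synchronization pulse generator (claims 6-7).
class SyncPulseGenerator:
    PREDEFINED = 0  # assumed compare value

    def __init__(self, initial_duration):
        self.counter = initial_duration      # counter reading, set per claim 7
        self.next_duration = initial_duration

    def load(self, cycle_duration):
        """A new duration arrives through the FIFO with the current pulse."""
        self.next_duration = cycle_duration

    def clk2_tick(self):
        """One clock of the second domain; returns True when a pulse fires."""
        self.counter -= 1
        if self.counter == self.PREDEFINED:
            self.counter = self.next_duration  # reset in response to the pulse
            return True
        return False

if __name__ == "__main__":
    gen = SyncPulseGenerator(4)
    print([gen.clk2_tick() for _ in range(12)])  # pulse on every 4th clk2 clock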
APPARATUS FOR SYNCHRONIZING A DATA HANDOVER BETWEEN A FIRST CLOCK DOMAIN AND A SECOND CLOCK DOMAIN FIELD [0001] Embodiments of the present invention relate to an apparatus for synchronizing a data handover between a first clock domain and a second clock domain. Some embodiment of the present invention relate to a FIFO (FIFO = First- In-First-Out) based synchronization mechanism for fractional sample rate converters (FSRC). BACKGROUND [0002] A synchronization of two clock domains for data handover is used in a variety of applications such as in sample rate converters (SRC) and fractional sample rate converters (FSRC). SUMMARY [0003] Embodiments of the present invention provide an apparatus for synchronizing a data handover between a first clock domain and a second clock domain. The apparatus comprises a calculator, a first-in-first-out storage, a synchronization pulse generator, a fill level information provider and a feedback path. The calculator is clocked with the clock of the first clock domain and configured to provide a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain. The first-in- first-out storage is configured to take over an input data value in synchronization with the first clock domain and to provide an output data value in synchronization with the second clock domain and in response to a current synchronization pulse. The synchronization pulse generator is clocked with the clock of the second clock domain and configured to generate the subsequent synchronization pulse such that the subsequent synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information. The fill level information provider is configured to provide a fill level information describing a fill level of the first-in-first-out storage. The feedback path is configured for feeding back the fill level information to the calculator that is further configured to adjust the synchronization pulse cycle duration information based on the fill level information.[0004] Some embodiments of the present invention provide an apparatus for synchronizing a data handover between a first clock domain and a second clock domain. The apparatus comprises a calculator, a first-in-first-out storage, a synchronization pulse generator, a fill level information provider and a feedback path. The calculator is clocked with the clock of the first clock domain and configured to provide a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain. The first-in- first-out storage is configured to take over the synchronization pulse cycle duration information, an input reload value and an input increment value in synchronization with the first clock domain, and provide the synchronization pulse cycle duration information, an output reload value and an output increment value in synchronization with the second clock domain and in response to a current synchronization pulse. The synchronization pulse generator is clocked with the clock of the second clock domain and configured to receive the synchronization pulse cycle duration information from the first-in-first-out storage and generate the subsequent synchronization pulse such that the subsequent synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information. 
The fill level information provider is configured to provide a fill level information describing a fill level of the first-in-first-out storage. The feedback path is configured for feeding back the fill level information to the calculator that is further configured to adjust the synchronization pulse cycle duration information based on the fill level information. [0005] Further embodiments of the present invention provide a method for synchronizing a data handover between a first clock domain and a second clock domain. In a first step, a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain is provided in the first clock domain. In a second step, an input data value is taken over in synchronization with the first clock domain and an output data value is provided in synchronization with the second clock domain and in response to a current synchronization pulse with a first-in-first-out storage. In a third step, the synchronization pulse is generated in the second clock domain such that the synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information. In a fourth step, a fill level information describing a fill level of the first-in-first-out storage is provided. In a fifth step, the fill level information is fed back to the first clock domain to adjust the synchronization pulse cycle duration information based on the fill level information.[0006] An aspect of the present disclosure provides an apparatus for synchronizing a data handover between a first clock domain and a second clock domain. The apparatus comprises a memory used from the first clock domain and used from the second clock domain, wherein each address of the memory is associated with at least one data word and a validity word which describes a validity time of the at least one data word. The apparatus is configured to determine the validity word based on a fill level of the memory. BRIEF DESCRIPTION OF THE DRAWINGS [0007] Embodiments of the present invention are described herein making reference to the appended drawings. [0008] Fig. 1 shows a block diagram of an apparatus for synchronizing a data handover between a first clock domain and a second clock domain according to an embodiment of the present invention. [0009] Fig. 2 shows a block diagram of the apparatus for synchronizing the data handover between the first clock domain and the second clock domain shown in Fig. 1 further comprising a first data processor and a second data processor. [0010] Fig. 3 shows a block diagram of an apparatus for synchronizing a data handover between a low frequency clock domain and a high frequency clock domain according to an embodiment of the present invention. [0011] Fig. 4 shows a block diagram of an apparatus for synchronizing a data handover between a first clock domain and a second clock domain according to an embodiment of the present invention. [0012] Fig. 5 shows in a diagram exemplary timings of the first clock domain and the second clock domain of the apparatus shown in Figs. 3 and 4. [0013] Fig. 6 shows a block diagram of a memory layout of the first-in-first-out storage according to an embodiment of the present invention. [0014] Fig. 7 shows a block diagram of the apparatus for synchronizing the data handover between the first clock domain and the second clock domain shown in Fig. 4, wherein the calculator further comprises a controller.[0015] Fig. 
[0016] Fig. 9 shows a block diagram of the fill level information provider according to an embodiment of the present invention.

[0017] Fig. 10 shows a flow chart of a method for synchronizing a data handover between a first clock domain and a second clock domain according to an embodiment of the present invention; and

[0018] Fig. 11 shows a block schematic diagram of an apparatus for synchronizing a data handover between a first clock domain and a second clock domain, according to an aspect of the present disclosure.

[0019] Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals.

DETAILED DESCRIPTION

[0020] In the following description, a plurality of details are set forth to provide a more thorough explanation of embodiments of the present invention. However, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring embodiments of the present invention. In addition, features of the different embodiments described hereinafter may be combined with each other, unless specifically noted otherwise.

[0021] Fig. 1 shows a block diagram of an apparatus 100 for synchronizing a data handover between a first clock domain 102 and a second clock domain 104 according to an embodiment of the present invention. The apparatus 100 comprises a calculator 106, a first-in-first-out storage 108, a synchronization pulse generator 110, a fill level information provider 112 and a feedback path 114. The calculator 106 is clocked with the clock clk1 of the first clock domain 102 and configured to provide a synchronization pulse cycle duration information 116 describing a temporal position of synchronization pulses 118_n at a clock clk2 of the second clock domain 104. The first-in-first-out storage 108 is configured to take over an input data value 120 in synchronization with the clock clk1 of the first clock domain 102 and provide an output data value 122 in synchronization with the clock clk2 of the second clock domain 104 and in response to a current synchronization pulse 118_n (n = 1). The synchronization pulse generator 110 is clocked with the clock clk2 of the second clock domain 104 and configured to generate the subsequent synchronization pulse 118_n (n = 2) such that the subsequent synchronization pulse 118_n (n = 2) is located at the temporal position described by the synchronization pulse cycle duration information 116. The fill level information provider 112 is configured to provide a fill level information 124 describing a fill level of the first-in-first-out storage 108. The feedback path 114 is configured for feeding back the fill level information 124 to the calculator 106 that is further configured to adjust the synchronization pulse cycle duration information 116 based on the fill level information 124.

[0022] In embodiments, the data handover between the first clock domain 102 and the second clock domain 104 is realized by the first-in-first-out storage 108, e.g.
by an asynchronous first-in-first-out storage that is clocked with the clock clk1 of the first clock domain 102 and the clock clk2 of the second clock domain 104, where the clock clk2 of the second clock domain is equal to or greater than the clock clk1 of the first clock domain 102 or vice versa. Moreover, the first-in-first-out storage 108 is configured to provide the output data value 122 (only) in response to synchronization pulses 118_n in order to realize a synchronized data handover between the first clock domain 102 and the second clock domain 104.

[0023] The synchronization pulses 118_n are generated by the synchronization pulse generator 110 at the temporal position described by the synchronization pulse cycle duration information 116. Since the synchronization pulse generator 110 is clocked with the clock clk2 of the second clock domain 104, the synchronization pulses 118_n can only be generated at clocks (e.g. rising or falling clock edges) of the second clock domain 104. Hence, the synchronization pulses 118_n are located at specific clocks (e.g. specific rising or falling clock edges) of the second clock domain 104, the specific clocks being defined by the synchronization pulse cycle duration information 116.

[0024] By feeding back the fill level information 124 to the calculator 106, the synchronization pulse cycle duration information 116 can be adjusted such that the fill level of the first-in-first-out storage 108 is maintained within a predetermined area, thereby providing a synchronized data handover having an almost constant latency.

[0025] For example, the first-in-first-out storage 108 can comprise a plurality of storage cells, where the fill level of the first-in-first-out storage 108 is maintained within a predetermined area being defined by a range of plus/minus one or two storage cell(s), i.e. the fill level of the first-in-first-out storage 108 may vary (only) in the range of plus/minus one or two storage cell(s), thereby avoiding an over- or under-run of the first-in-first-out storage 108 and hence providing a data handover having an almost constant latency (see Fig. 6).

[0026] In other words, the apparatus 100 is able to provide a constant (or almost constant) fill level of the first-in-first-out storage 108 and hence a constant (or almost constant) latency of the data synchronization mechanism. Moreover, synchronization can be kept up also in the case of variations of the clock clk1 (or clock frequency f1) of the first clock domain 102 or the clock clk2 (or clock frequency f2) of the second clock domain 104. Furthermore, the apparatus 100 allows the implementation of fractional sample rate converters (FSRC) having an interpolation ratio greater than or equal to one (f2/f1 ≥ 1).
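The relationship between the fractional frequency ratio and the integer synchronization pulse cycle durations can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical model (not taken from the embodiments; the function name and the first-order accumulator are illustrative assumptions) of how a fractional ratio f2/f1 can be quantized into a sequence of integer clock counts whose long-run average equals the ratio, so that the average read rate matches the write rate:

```python
# Minimal, hypothetical sketch of how a calculator/NCO could quantize a
# fractional frequency ratio f2/f1 into integer synchronization pulse
# cycle durations (clock counts); names are illustrative.

def cycle_durations(ratio, n):
    """Yield n integer clock counts whose running average tends to ratio."""
    acc = 0.0
    for _ in range(n):
        acc += ratio        # accumulate the fractional ratio f2/f1
        count = int(acc)    # integer clocks until the next sync pulse
        acc -= count        # keep the quantization remainder
        yield count

# Example with f2/f1 = 700 MHz / 312 MHz:
durations = list(cycle_durations(700 / 312, 10000))
print(sum(durations) / len(durations))  # ~2.2436: average read rate
                                        # matches the write rate
```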
[0027] Fig. 2 shows a block diagram of the apparatus 100 for synchronizing the data handover between the first clock domain 102 and the second clock domain 104 shown in Fig. 1 further comprising a first data processor 126 and a second data processor 128. The first data processor 126 is clocked with the clock clk1 of the first clock domain 102 and configured to process an input information 130 such that the input data value 120 is provided in synchronization with the first clock domain 102 for the first-in-first-out storage 108. The second data processor 128 is clocked with the clock clk2 of the second clock domain 104 and configured to receive the output data value 122 in synchronization with the second clock domain 104 and in response to the current synchronization pulse 118_n (n = 1) from the first-in-first-out storage 108, and process the output data value 122 such that an output information 132 is provided in synchronization with the clock clk2 of the second clock domain 104.

[0028] In embodiments, the first data processor 126 can be referred to as a data source, where the second data processor 128 can be referred to as a data sink due to the first data processor 126 providing the input data value 120 for the first-in-first-out storage 108 and the second data processor 128 receiving the output data value 122 from the first-in-first-out storage 108.

[0029] In the following, features of the apparatus 100 for synchronizing a data handover between the first clock domain 102 and the second clock domain 104 are described making reference to an exemplary embodiment. In other words, in the following, a synchronization mechanism for signal processing blocks which incorporate a data handover between two different clock domains is described where the clock frequency f1 of the first clock domain 102 (data source) is lower than the clock frequency f2 of the second clock domain 104 (data sink). Hence, subsequently, in one embodiment the first clock domain 102 can be referred to as low frequency clock domain where the second clock domain 104 can be referred to as high frequency clock domain. Thereby, the ratio of clock frequencies (f2/f1) can be arbitrary and/or greater than one. Moreover, the reload rate of data in the high frequency clock domain 104 (data sink) may on average have the same rate as the clock frequency of the low frequency clock domain 102 (data source). Naturally, features of the following description are also applicable to the apparatus 100 for synchronizing the data handover between the first clock domain 102 and the second clock domain 104 shown in Figs. 1 and 2.

[0030] Furthermore, the synchronization pulse 118_n can be referred to as a reload signal or reload pulse since the second data processor 128 can be configured to receive, or in other words, to reload the output data value 122 in response to a synchronization pulse 118_n.

[0031] Fig. 3 shows a block diagram of an apparatus 100 for synchronizing a data handover between a first (or low frequency) clock domain 102 and a second (or high frequency) clock domain 104 according to an embodiment of the present invention. In other words, Fig. 3 shows the structure of a signal processing system having a synchronized data handover.

[0032] The apparatus 100 comprises a data source 126, a data sink 128 and a synchronization stage 140, where the first-in-first-out storage 108, the synchronization pulse generator 110 and the fill level information provider 112 shown in Figs. 1 and 2 can be implemented in the synchronization stage 140.

[0033] Alternatively, the first-in-first-out storage 108 and the fill level information provider 112 can be implemented in the synchronization stage 140, where the calculator 106 is implemented in the data source 126 and the synchronization pulse generator 110 is implemented in the data sink 128.
In that case, the data source 126 can be configured to provide the input data value 120 in synchronization with the first (or low frequency) clock domain 102 for the synchronization stage 140, where the data sink 128 can be configured to receive the output data value 122 from the synchronization stage 140 in synchronization with the second (or high frequency) clock domain 104 and in response to a current synchronization pulse 118_n (n = 1). Moreover, the data source 126 can be configured to provide a write enable signal 142 for the synchronization stage 140, where the data sink 128 comprising the synchronization pulse generator 110 can be configured to provide the synchronization pulses 118_n (or read enable signals) for the synchronization stage 140.

[0034] The apparatus 100 according to the concept of the present invention even works when the ratio between the clock frequency of the second (or high frequency) clock domain 104 and the clock frequency of the first (or low frequency) clock domain 102 (fhigh/flow) becomes small, e.g. greater than or equal to one, two or three. Moreover, even in the case of fractional frequency ratios (fhigh/flow) and/or a modulated (time varying) clock clkhigh of the second (or high frequency) clock domain 104, the synchronization pulse 118_n (or data reload signal of the data sink) can be synchronized properly with the clock clklow of the first (or low frequency) clock domain 102. Thereby, it can be guaranteed that the output data value 122 is not provided to the second data processor 128 before a new input data value 120 is provided by the first data processor 126, or in other words, that the reload of data in the data sink 128 does not occur before new reload values are delivered by the data source 126. In addition, the apparatus 100 shown in Fig. 3 is able to provide a data handover from the data source 126 to the data sink 128 having a constant (or almost constant) latency.

[0035] In contrast to known solutions that simply use an asynchronous first-in-first-out storage 108 (or memory) for synchronization, the apparatus 100 according to the concept of the present invention comprises a synchronization stage 140 with a calculator 106, a first-in-first-out storage 108, a synchronization pulse generator 110 and a fill level information provider 112. The apparatus 100 is able to provide a constant (or almost constant) synchronization latency, or in other words, a data handover between the first (or low frequency) clock domain 102 and the second (or high frequency) clock domain 104 having a constant (or almost constant) latency. Hence, the latency may not depend on the fill level of the first-in-first-out storage 108 and hence not on the startup of the synchronization. Moreover, in closed loop systems, such as PLLs (PLL = phase locked loop), a constant latency is desired in order to get a defined loop response. Even when the read rate of data on the data sink port differs from the write rate at the data source port of the first-in-first-out storage 108, the fill level of the first-in-first-out storage 108 will not drift away, i.e. the latency of the signal processing block will not change. Thereby, an under- or overrun of the first-in-first-out storage 108 is avoided even in the case of a long term rate mismatch.
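To make the fill level argument concrete, the following toy Python model (an illustrative sketch only; class and member names are assumptions, and a hardware implementation would use gray coded pointers and true dual-clock operation) shows a four-cell first-in-first-out storage whose fill level is simply the difference between free-running write and read pointers:

```python
# Toy software model of a four-cell first-in-first-out storage with
# free-running write and read pointers; the fill level is their
# difference. Names are illustrative; a hardware FIFO would use gray
# coded pointers sampled across the clock domain boundary.

class Fifo4:
    DEPTH = 4  # one write cell, one read cell, two guard cells

    def __init__(self):
        self.cells = [None] * self.DEPTH
        self.wr = 0  # free-running write pointer
        self.rd = 0  # free-running read pointer

    def write(self, value):
        self.cells[self.wr % self.DEPTH] = value
        self.wr += 1

    def read(self):
        value = self.cells[self.rd % self.DEPTH]
        self.rd += 1
        return value

    def fill(self):
        return self.wr - self.rd

fifo = Fifo4()
fifo.write("a")
fifo.write("b")
print(fifo.fill())  # 2: an average fill of two corresponds to a latency
                    # of about two low frequency clock periods
```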
[0036] Fig. 4 shows a block diagram of an apparatus 100 for synchronizing a data handover between a first (or low frequency) clock domain 102 and a second (or high frequency) clock domain 104 according to an embodiment of the present invention. The apparatus 100 comprises a calculator 106, a first-in-first-out storage 108, a reload counter 111, a data source 126 and a data sink 128.

[0037] In some embodiments, the calculator 106 comprises a numerically controlled oscillator 107 (NCO). Furthermore, the reload counter 111 comprises the synchronization pulse generator 110 shown in Figs. 1 and 2. Moreover, in one embodiment the first-in-first-out storage 108 comprises an asynchronous first-in-first-out storage. In addition, the first-in-first-out storage comprises the fill level information provider 112 shown in Figs. 1 and 2.

[0038] As shown in Fig. 4, in one embodiment the first-in-first-out storage 108 is configured to take over the synchronization pulse cycle duration information 116 in synchronization with the first (or low frequency) clock domain 102 and to provide the synchronization pulse cycle duration information 116 in synchronization with the second (or high frequency) clock domain 104 and in response to a current synchronization pulse 118_n (n = 1). The synchronization pulse generator 110 is configured to receive the synchronization pulse cycle duration information 116 from the first-in-first-out storage 108 in synchronization with the second (or high frequency) clock domain 104 and in response to a current synchronization pulse 118_n (n = 1).

[0039] Moreover, the calculator 106 is configured to provide the synchronization pulse cycle duration information 116 such that the synchronization pulse cycle duration information 116 represents a number of clocks in the second (or high frequency) clock domain between the current synchronization pulse 118_n (n = 1) and the subsequent synchronization pulse 118_n (n = 2). In other words, the synchronization pulse cycle duration information 116 may define the number of clocks in the second (or high frequency) clock domain between subsequent synchronization pulses (e.g. 118_n (n = 1) and 118_n (n = 2)) and hence the period of the synchronization pulses 118_n. Moreover, the calculator 106 can be configured to adjust the temporal position of the subsequent synchronization pulse 118_n (n = 2) by increasing or decreasing the number of clocks in the second (or high frequency) clock domain 116 in order to keep a predetermined fill level of the first-in-first-out storage 108.

[0040] In some embodiments, the first-in-first-out storage 108 is configured to take over the synchronization pulse cycle duration information 116 and the input data 120 in one clock cycle of the first (or low frequency) clock domain 102, and provide the synchronization pulse cycle duration information 116 and the output data value 122 in one clock cycle of the second (or high frequency) clock domain 104. In that case, the synchronization pulse cycle duration information 116 may describe the number of clocks in the second (or high frequency) clock domain 104 for which the output data value 122 is valid.

[0041] According to the concept of the present invention, in one embodiment the fill level of the first-in-first-out storage 108 is fed back into the numerically controlled oscillator 107.
The numerically controlled oscillator 107 calculates the validity of each input data value 120 (or FIFO entry) in terms of numbers of clocks in the second (or high frequency) clock domain 116, or in other words, in terms of clock cycles of the second (or high frequency) clock signal (fhigh). The number of clocks in the second (or high frequency) clock domain 116 (validity value) is stored together with the input data value 120 (actual data) in the first-in-first-out storage 108. When a first-in-first-out storage cell is read, the number of clocks in the high frequency clock domain 116 (validity value) is loaded into the reload counter 111 which is decremented in each clock cycle of the second (or high frequency) clock domain by a predefined number, e.g. one. On a counter under-run, the validity of the output data value 122 (current data) expires and the next output data value 122 is read from the first-in-first-out storage 108. Naturally, alternative implementations of the reload counter 111 are possible, e.g. in which the counter value is incremented.

[0042] The apparatus 100 according to the concept of the present invention enables the system to maintain a constant (or almost constant) fill level of the first-in-first-out storage 108 and hence a constant (or almost constant) latency of the data synchronization mechanism. In addition, by using the apparatus 100 synchronization is maintained also in case of variations of the read and/or write rate of the first-in-first-out storage 108.

[0043] Moreover, the apparatus 100 allows the implementation of (fractional) sample rate converters (FSRC) with an interpolation ratio greater than or equal to one (fhigh/flow ≥ 1). This is possible due to the separation of data write access of the data source 126 and data read access of the data sink 128 in the address space of the first-in-first-out storage 108 and not by separation in time.

[0044] In some embodiments, the fill level of the first-in-first-out storage 108 used for data handover between the first (or low frequency) clock domain 102 and the second (or high frequency) clock domain 104 is controlled by a control loop comprising the numerically controlled oscillator 107 (see Fig. 7). Consequently, the numerically controlled oscillator 107 is configured to use the fill level information 124 describing the actual fill level of the first-in-first-out storage 108 as a feedback signal and compute a correction value of the numerically controlled oscillator 107 input for each input data value 120 put into the first-in-first-out storage 108. With this correction value, the fill level of the first-in-first-out storage 108 is controlled indirectly by changing the number of clocks in the second (or high frequency) clock domain 116 and hence an average value of the reload counter 111 and accordingly changing the read rate of the first-in-first-out storage 108.

[0045] This can be necessary during the start-up phase of the apparatus 100 (or signal processing blocks) in order to maintain a certain fill level of the first-in-first-out storage 108. When the clock rate of the first (or low frequency) clock domain 102 or the clock rate of the second (or high frequency) clock domain 104 has a (transient or permanent) frequency deviation, this mechanism (or control loop) can be used to correct the frequency ratio (fhigh/flow) in the numerically controlled oscillator 107.
Hence, the apparatus 100 can be used in applications where the reload of low frequency data on the high rate is done continuously with the average low data rate (e.g. fractional sample rate converter with integrating output).

[0046] Moreover, the apparatus 100 according to the concept of the present invention can be implemented even when the ratio between the clock frequency of the first (or low frequency) clock domain 102 (data source frequency) and the clock frequency of the second (or high frequency) clock domain 104 (data sink frequency) becomes low, e.g. greater than or equal to one, two or three, as it is required for fractional sample rate converters used in wide band polar modulators, such as LTE 20 (LTE = Long Term Evolution). Even when the frequency ratio (fhigh/flow) becomes low, e.g. close to one, there is enough clearance for the synchronization pulse 118_n, which does not have to be positioned in time between two clock edges, e.g. rising or falling clock edges, of the first (or low frequency) clock domain 102, as it will become clear from the discussion below.

[0047] Fig. 5 shows in a diagram exemplary timings of the first (or low frequency) clock domain 102 and the second (or high frequency) clock domain 104 of the apparatus 100 shown in Figs. 3 and 4. Thereby, in Fig. 5, from top to bottom are shown the timings 134 of the clock clklow of the first (or low frequency) clock domain 102; the timings 136 of the clock clkhigh of the second (or high frequency) clock domain 104; and the timings 138 of the synchronization pulses 118_n (n = 1) to 118_n (n = 11). In Fig. 5, the first (or low frequency) clock domain 102 is exemplarily clocked with a clock frequency of 312 MHz, where the second (or high frequency) clock domain 104 is exemplarily clocked with a clock frequency of 700 MHz. Naturally, the following description is also applicable for other clock frequencies of the first clock domain 102 and/or the second clock domain 104.

[0048] In contrast to known solutions where the synchronization pulse 118_n (or reload pulse) has to be placed with enough clearance between two clock edges, e.g. rising or falling clock edges, of the first (or low frequency) clock domain 102 in order to avoid setup- and/or hold-violations in the data transfer from the first (or low frequency) clock domain 102 to the second (or high frequency) clock domain 104, the apparatus 100 enables a setup- and hold-violation free data transfer even for frequency ratios (fhigh/flow) lower than three. Moreover, no uncertainty in sampling the position of the synchronization pulse 118_n (or reload pulse) is introduced. In addition, a jitter of the synchronization pulse 118_n (or reload pulse) is avoided which otherwise could be introduced by integer delta-sigma modulated count cycles of the reload counter. Furthermore, even when the clock of the second (or high frequency) clock domain 104 is modulated, as it is the case in PLLs in polar modulators (PLL = Phase Locked Loop), no uncertainty is introduced.

[0049] The apparatus 100 according to the concept of the present invention is advantageous for the implementation of, for example, fractional sample rate converters in wide band polar modulators. These modulators need fractional sample rate converters for interpolation of AM (AM = amplitude modulation) and PM (PM = phase modulation) signals from a signal rate of several 100 MHz (e.g. 312 MHz as depicted in Fig. 5) to the modulated RF frequency (RF = radio frequency) in the GHz range, e.g.
1 GHz, 10 GHz or 100 GHz.

[0050] The required depth of the synchronization first-in-first-out storage 108 may depend on the maximum timing jitter of the synchronization pulses 118_n (or reload signals). The timing jitter of the synchronization pulses 118_n (or reload signals) may depend on the modulation data and the sequence of numbers of clocks in the second (or high frequency) clock domain 116 (or reload count sequence of the numerically controlled oscillator 107). Thereby, it has to be made sure that there is no access to the same storage cell (memory position) of the first-in-first-out storage 108 at the same time. Hence, the first-in-first-out storage 108 may have a depth of at least four storage cells (or registers): one storage cell (or register) for the write access, one storage cell (or register) for the read access, and one storage cell (or register) each in front of and after the read address as a guard against accidental read and/or write access to the same storage cell (or register).

[0051] Fig. 6 shows a block diagram of a memory layout of the first-in-first-out storage 108 according to an embodiment of the present invention. The first-in-first-out storage 108 comprises a plurality of storage cells 140_0 to 140_3, wherein the first-in-first-out storage 108 is configured to take over the input data value 120 into a storage cell (e.g. 140_0) of the plurality of storage cells 140_0 to 140_3 indicated by a write pointer value 142, and wherein the first-in-first-out storage 108 is configured to provide the output data value 122 from an other storage cell (e.g. 140_2) of the plurality of storage cells 140_0 to 140_3 indicated by a read pointer value 144.

[0052] As shown in Fig. 6, the fill level of the first-in-first-out storage 108 may vary in the range 146 of plus/minus one storage cell (e.g. 140_1 to 140_3). In other words, the variations of the read address indicated by the read pointer value 144 (relative to the write address indicated by the write pointer value 142) may vary in the range of plus/minus one storage cell (e.g. 140_1 to 140_3) due to the timing jitter of the synchronization pulse 118_n (or reload signal). Thereby, an over- or under-run of the first-in-first-out storage 108 can be avoided and hence a data handover between the first (or low frequency) clock domain 102 and the second (or high frequency) clock domain 104 having an almost constant latency can be provided.

[0053] In some embodiments, the first-in-first-out storage 108 may have a depth of at least four (storage cells) due to the necessity of two guard storage cells (memory addresses). Thereby, an average fill level of the first-in-first-out storage 108 will be two. Therefore, the delay (or latency) introduced by the synchronization first-in-first-out storage 108 will on average be two clock periods of the first (or low frequency) clock domain 102.

[0054] Fig. 7 shows a block diagram of the apparatus 100 for synchronizing the data handover between the first (or low frequency) clock domain 102 and the second (or high frequency) clock domain 104 shown in Fig. 4, wherein the calculator 106 further comprises a controller 150. In other words, Fig. 7 depicts the structure of the complete first-in-first-out storage 108 based fractional sample rate converter with feedback of the fill level to the numerically controlled oscillator 107.
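A rough software sketch of the calculator structure of Fig. 7 may help to illustrate the interplay of the controller, the frequency ratio and the modulation data. The gain value, the proportional control law and all names below are illustrative assumptions, not details prescribed by the embodiments:

```python
# Rough sketch of the calculator 106 of Fig. 7: a proportional
# controller corrects the frequency ratio based on the fill level
# feedback, optional modulation data is added, and an accumulator
# quantizes the result into the integer number of high frequency
# clocks. Gain, control law and names are illustrative assumptions.

class Calculator:
    def __init__(self, freq_ratio, target_fill=2.0, gain=0.05):
        self.freq_ratio = freq_ratio    # integer and/or fractional part (154)
        self.target_fill = target_fill  # desired fill level (input 174)
        self.gain = gain                # illustrative proportional gain
        self.acc = 0.0                  # NCO phase accumulator

    def step(self, fill_level, modulation=0.0, modulate=False):
        correction = self.gain * (self.target_fill - fill_level)  # controller 150
        nco_input = self.freq_ratio + correction        # first adder (156)
        if modulate:                                    # multiplexer (164)
            nco_input += modulation                     # second adder (162)
        self.acc += nco_input
        count = int(self.acc)   # clocks until the next synchronization pulse
        self.acc -= count
        return count

calc = Calculator(700 / 312)
print([calc.step(fill_level=2.0) for _ in range(8)])  # [2, 2, 2, 2, 3, 2, 2, 2]
```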
[0055] In one embodiment the controller 150 is configured to regulate the synchronization pulse cycle duration information 116 to bring the fill level information 124 towards a predetermined target fill level information. For example, the synchronization pulse cycle duration information 116 represents a number of clocks in the second (or high frequency) clock domain 116, where the controller 150 is configured to regulate the number of clocks in the second (or high frequency) clock domain 116 such that the fill level of the first-in-first-out storage 108 is maintained within a predetermined area, thereby providing a data handover having an almost constant latency.

[0056] Moreover, in one embodiment the calculator 106 is configured to combine an output value 152 of the controller 150 with a frequency ratio value 154 describing a frequency ratio between the clock frequency of the second (or high frequency) clock domain 104 and the clock frequency of the first (or low frequency) clock domain 102 in order to obtain the synchronization pulse cycle duration information 116. In other words, the controller 150 is configured to correct the frequency ratio value 154 describing the ratio between the frequency of the second (or high frequency) clock domain 104 and the frequency of the first (or low frequency) clock domain 102 that is fed into the numerically controlled oscillator 107. Thereby, the frequency ratio value 154 can comprise an integer and/or a fractional part.

[0057] For example, the frequency ratio value 154 fed into the numerically controlled oscillator 107 can be increased or decreased by adding an output value 152 of the controller 150 to the frequency ratio value 154 by means of a first adder 156. In addition, a modulation data value 160 describing the modulation data can be added to the frequency ratio value 154 fed into the numerically controlled oscillator 107 by means of a second adder 162. Moreover, the second adder 162 can be coupled to an output of a multiplexer 164 that is configured to provide at its output, based on a binary control signal, either the modulation data value 160 present at its first input or a reference value (e.g. zero) present at its second input.

[0058] In addition, the calculator 106 can comprise a feedback control loop 170. The feedback control loop 170 can comprise the controller 150, a first adder 172, an input 174 for a desired fill level of the first-in-first-out storage 108 and an input 176 for the fill level information 124. The fill level information 124 present at the input 176 is subtracted from the desired fill level information present at the input 174 and fed into the controller 150 by means of the first adder 172.

[0059] The fill level of the first-in-first-out storage 108 is controlled by the number of clocks in the high frequency clock domain 116 (count value) for the generation of the synchronization pulses 118_n (reload signals). The number of clocks in the high frequency clock domain 116 (count value) is generated in the numerically controlled oscillator 107 which is clocked with the clock of the first (or low frequency) clock domain 102, or in other words, which is operated on the low frequency clock. Hence, a feedback of the fill level information 124 to the first (or low frequency) clock domain 102 is necessary.
The actual fill level of the first-in-first-out storage 108 is processed in the feedback controller 150, which can be implemented in the numerically controlled oscillator 107 and corrects the frequency ratio value 154 temporarily in order to establish the desired fill level of the first-in-first-out storage 108. In regular operation, the feedback control loop 170 of the fill level is only active when the frequency ratio (fhigh/flow) is disturbed. It is possible to disable the control loop 170 or to define a dead zone for the fill level information 124 (or feedback value) where no control action takes place. This will minimize the interaction of the feedback controller 150.

[0060] Fig. 8 shows a block diagram of the first-in-first-out storage 108, the reload counter 111 and the fill level information provider 112 according to an embodiment of the present invention. In other words, Fig. 8 shows a possible implementation of the first-in-first-out storage 108 and the reload signal 118_n generation.

[0061] The first-in-first-out storage 108 comprises a plurality of storage cells 140_0 to 140_3, wherein the first-in-first-out storage 108 is configured to take over or receive the input data value 120 into a storage cell of the plurality of storage cells 140_0 to 140_3 indicated by a write pointer value 142, and wherein the first-in-first-out storage 108 is configured to provide the output data value 122 from another storage cell of the plurality of storage cells 140_0 to 140_3 indicated by a read pointer value 144. In the example of Fig. 8, the first-in-first-out storage 108 comprises four storage cells 140_0 to 140_3. Naturally, the first-in-first-out storage 108 can comprise more than four storage cells.

[0062] As shown in Fig. 8, in some embodiments, the input data value 120 can comprise an input reload value 120_1 and an input increment value 120_2. In that case, the first data processor 126 (data source) is configured to process the input information 130 such that the input reload value 120_1 and the input increment value 120_2 are provided in synchronization with the first (or low frequency) clock domain for the first-in-first-out storage 108. The first-in-first-out storage 108 can be configured to take over, e.g. in one clock cycle of the first (or low frequency) clock domain 102, the input reload value 120_1, the input increment value 120_2 and the synchronization pulse cycle duration information 116, and provide an output reload value 122_1, an output increment value 122_2 and the synchronization pulse cycle duration information 116 in synchronization with the second (or high frequency) clock domain 104 and in response to a current synchronization pulse 118_n (n = 1). Moreover, the second data processor 128 (data sink) is configured to receive the output reload value 122_1 and the output increment value 122_2 in synchronization with the second (or high frequency) clock domain 104 and in response to a current synchronization pulse 118_n (n = 1) from the first-in-first-out storage 108, and process the output reload value 122_1 and the output increment value 122_2 such that an output information 132 is provided in synchronization with the second (or high frequency) clock domain.
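The reload counter behaviour described above (load a validity value on a synchronization pulse, decrement once per high frequency clock, reload on under-run) can be sketched as follows; the generator below is an illustrative software analogy, not a register-accurate model of the reload counter 111:

```python
# Illustrative software analogy of the reload counter: each FIFO entry
# carries a validity value (number of high frequency clocks); the
# counter is loaded with it, decremented every high frequency clock,
# and an under-run expires the current data and triggers the next read.

def sync_pulses(validity_values):
    """Yield one flag per high frequency clock; True marks a sync pulse."""
    it = iter(validity_values)
    counter = next(it)          # load validity of the first entry
    while True:
        counter -= 1
        if counter <= 0:        # under-run: current output data expires
            yield True          # synchronization (reload) pulse
            try:
                counter = next(it)  # load validity of the next FIFO entry
            except StopIteration:
                return
        else:
            yield False

pulses = sync_pulses([2, 3, 2, 2, 3])
print([int(p) for p in pulses])  # pulses spaced 2, 3, 2, 2, 3 clocks apart
```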
[0063] For example, the second data processor 128 (data sink) can comprise an integrator configured to provide the output reload value 122_1 as output information 132 in response to the current synchronization pulse 118_n (n = 1) (or reload signal) and increment the previous output information by the output increment value 122_2 at each subsequent clock of the second (or high frequency) clock domain 104.

[0064] As shown in Fig. 8, in some embodiments, the write pointer value 142 can be generated by a first gray counter 180 that is clocked with the clock clklow of the first (or low frequency) clock domain 102. The first gray counter 180 can be configured to count in synchronization with the first (or low frequency) clock domain 102 and provide a gray coded counter reading as write pointer value 142. For example, when the first-in-first-out storage 108 comprises four storage cells, the first gray counter 180 is configured to count from zero to three using the gray code in synchronization with the first (or low frequency) clock domain 102 and restart counting after having counted from zero to three.

[0065] Moreover, the first gray counter 180 in one embodiment comprises an input for a write enable signal 180, where the first gray counter 180 is configured to count in synchronization with the first (or low frequency) clock domain 102 based on the write enable signal 180. Furthermore, the first gray counter 180 can have an input for a reset signal 182, where the first gray counter 180 is configured to reset its counter reading to an initial value in dependence on the reset signal 182.

[0066] In one embodiment the output of the first gray counter 180 is coupled to a demultiplexer 184 that is configured to activate one of a plurality of signal lines 186_0 to 186_3 at its output based on the write pointer value 142 present at its input. Thereby, each signal line of the plurality of signal lines 186_0 to 186_3 is coupled to one storage cell of the plurality of storage cells 140_0 to 140_3 of the first-in-first-out storage 108. Moreover, each signal line 186_0 to 186_3 is coupled to the corresponding storage cell of the plurality of storage cells 140_0 to 140_3 by means of an or-block 188 such that the corresponding signal line 186_0 to 186_3 is activated based on the write enable signal 180 that is coupled to the or-block 188.

[0067] The first-in-first-out storage 108 can comprise a second gray counter 190. The output of the second gray counter 190 is coupled to a multiplexer 200 that is configured to provide at its output, based on the read pointer value 144 present at its control terminal, the output reload value 122_1, the output increment value 122_2 and the synchronization pulse cycle duration information 116 stored in one storage cell of the plurality of storage cells 140_0 to 140_3 of the first-in-first-out storage 108.

[0068] The apparatus 100 can comprise a counter 202 clocked with the clock clkhigh of the second (or high frequency) clock domain and configured to count the clocks in the second (or high frequency) clock domain 104 and provide a counter reading 204.
Thereby, the synchronization pulse generator 110 is configured to generate the subsequent synchronization pulse 118_n (n = 2) based on the counter reading 204 such that the subsequent synchronization pulse 118_n (n = 2) is located at the temporal position described by the number of clocks in the second clock domain that is represented by the synchronization pulse cycle duration information 116, wherein the counter reading 204 is set to an initial value in response to the generation of the subsequent synchronization pulse 118_n (n = 2).

[0069] Moreover, in one embodiment the apparatus 100 is configured to set the counter reading 204 to the number of clocks in the second (or high frequency) clock domain 104 represented by the synchronization pulse cycle duration information 116 in response to the current synchronization pulse 118_n (n = 1), and count down the counter reading 204 from the set counter reading in synchronization with the clock of the second (or high frequency) clock domain 104. Thereby, the synchronization pulse generator 110 is configured to compare the counter reading 204 with a predefined number and generate the subsequent synchronization pulse 118_n (n = 2) when the predefined number is equal to the counter reading 204.

[0070] For example, the counter 202 can comprise a multiplexer 204, a register 206 and an adder 208. Note that the above listed blocks of the counter 202 are clocked with the clock clkhigh of the second (or high frequency) clock domain 104.

[0071] An output of the register 206 for sampling the counter reading 204 can be coupled to the adder 208. The adder 208 can be configured to add a predefined value, e.g. one, to the sampled counter reading. An output of the adder 208 and the input 210 for the number of clocks in the second (or high frequency) clock domain 116 are coupled to inputs of the multiplexer 204. The multiplexer 204 is configured to provide at its output the number of clocks in the second (or high frequency) clock domain 116 in response to a synchronization pulse 118_n and the incremented counter reading otherwise. The output of the multiplexer 204 is coupled to an input of the register 206 for sampling the counter reading 204 in synchronization with the second (or high frequency) clock domain 104.

[0072] In one embodiment the synchronization pulse generator 110 comprises a comparator 210 configured to compare the counter reading 204 with the predefined number and generate the subsequent synchronization pulse 118_n (n = 2) when the predefined number is equal to the counter reading 204. Alternatively, the synchronization pulse generator 110 can comprise a comparator 210 and a register 212. In that case, the comparator 210 is configured to compare the counter reading 204 with the predefined number (e.g. zero) and generate the subsequent synchronization pulse 118_n (n = 2) when the predefined number is equal to the counter reading 204, where the register 212 is configured to delay the subsequent synchronization pulse 118_n (n = 2) by one high frequency clock cycle.

[0073] As shown in the embodiment of Fig. 8, the synchronization pulse generator 110 and the counter 202 are implemented in the reload counter 111. In addition, the reload counter 111 comprises a register 214 for sampling a reset signal 216 and an or-block 218. An output of the register 214 is coupled to a reset input 194 of the second gray counter 190 and to an input of the or-block 218.
A second input of the or-block 218 is coupled to the output of the register 212 of the synchronization pulse generator 110. The output of the or-block 218 is coupled to a control terminal of the multiplexer 204 of the counter 202 such that the multiplexer 204 of the counter 202 is configured to provide at its output the number of clocks in the second (or high frequency) clock domain 116 in response to a synchronization pulse 118_n or in response to the reset signal 216 sampled by the register 214 of the reload counter 111.

[0074] Fig. 9 shows a block diagram of the fill level information provider 112 according to an embodiment of the present invention. In other words, Fig. 9 depicts an implementation of the fill level detector of the first-in-first-out storage 108.

[0075] The fill level information provider 112 can comprise a first register 230 for sampling the write pointer value 142 and a second register 232 for sampling the read pointer value 144. Thereby, the fill level information provider 112 in such an embodiment is configured to combine the sampled write pointer value 234 and the sampled read pointer value 236 in order to obtain a fill level value 238 describing the fill level of the first-in-first-out storage 108 and provide the fill level information 124 such that the fill level information represents the fill level value 238.

[0076] Alternatively, the fill level information provider 112 comprises a first synchronization cell 238 having the first register 230 and a third register 240, and a second synchronization cell 242 having the second register 232 and a fourth register 244. The first and third registers 230 and 240 of the first synchronization cell 238, and the second and fourth registers 232 and 244 of the second synchronization cell 242 are clocked with the clock of the first (or low frequency) clock domain 102. In that case, the second synchronization cell 242 is configured to synchronize the read pointer value 144 from the second (or high frequency) clock domain 104 into the first (or low frequency) clock domain 102, thereby delaying the read pointer value 144 by two clock cycles of the first (or low frequency) clock domain 102. The first synchronization cell 238 can delay the write pointer value 142 also by two clock cycles of the first (or low frequency) clock domain 102.

[0077] Moreover, in one embodiment the fill level information provider 112 comprises a first gray to binary converter 246 and a second gray to binary converter 248. The first gray to binary converter 246 is configured to convert the sampled gray coded write pointer value 234 into a binary coded write pointer value 250, where the second gray to binary converter 248 is configured to convert the sampled gray coded read pointer value 236 into a binary coded read pointer value 252.

[0078] The width of the binary coded write pointer value 250 and the binary coded read pointer value 252 depends on the number of storage cells of the first-in-first-out storage 108. In the case of a first-in-first-out storage 108 with four storage cells 140_0 to 140_3, the binary coded write pointer value 250 and the binary coded read pointer value 252 can have a width of two bits.

[0079] Furthermore, the binary coded read pointer value 252 can be subtracted from the binary coded write pointer value 250 by means of an adder 254, thereby providing at the output of the adder 254 the fill level value 238. The fill level value 238 can also have a width of two bits.
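The gray code handling of Fig. 9 can be sketched in a few lines. Gray coded pointers change by only one bit per increment, which makes them safe to sample across the clock domain boundary; the helper functions below (illustrative names, a common textbook construction rather than the exact circuit of the converters 246 and 248) convert between binary and gray code and derive a fill level from two gray coded pointers:

```python
# Textbook binary/gray conversion and fill level derivation, as an
# illustrative stand-in for the gray counters and the converters 246
# and 248 of Fig. 9; not a register-accurate model of the circuit.

def binary_to_gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def fill_level(write_gray, read_gray, depth=4):
    """Fill level from gray coded pointers (adder 254), modulo the depth."""
    # A real implementation keeps an extra pointer bit to distinguish
    # a completely full storage from an empty one.
    return (gray_to_binary(write_gray) - gray_to_binary(read_gray)) % depth

print([binary_to_gray(n) for n in range(4)])             # [0, 1, 3, 2]
print(fill_level(binary_to_gray(3), binary_to_gray(1)))  # 2
```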
[0080] Moreover, the fill level information provider 112 can further be configured to sum or average a plurality of fill level values 238 in order to obtain a summed or averaged fill level value 276 describing an average fill level of the first-in-first-out storage 108, and provide the fill level information such that the fill level information represents the summed or averaged fill level value 276.

[0081] For example, as shown in Fig. 9, four consecutive fill level values 238 can be averaged in order to simplify the synchronization of the fill level information 124 into the first (or low frequency) clock domain 102. This averaging is sufficient for tracking the fill level of the first-in-first-out storage 108 because (in general) the frequency deviations are small compared to the read/write rate of the first-in-first-out storage 108. Hence, the fill level will not change rapidly.

[0082] In order to sum a plurality of fill level values 238, the fill level information provider 112 can further comprise a fifth register 260, a sixth register 262, a seventh register 264, a second adder 266, a multiplexer 268 and a counter 270. The counter 270, e.g. a 2-bit counter, can be configured to count in synchronization with the clock of the first (or low frequency) clock domain and provide a counter reading value 272 describing the current counter reading. Furthermore, the counter 270 is configured to provide a control signal 274 when the counter reading is equal to a predefined number (e.g. four). The fifth register 260 is configured to sample the fill level value 238 (e.g. having a width of two bits) present at its input in synchronization with the clock of the first (or low frequency) clock domain 102. Furthermore, an output of the fifth register 260 is coupled to a first input of the multiplexer 268 and to a first input of the adder 266, where the second input of the multiplexer 268 is coupled to an output of the adder 266. Thereby, the adder 266 is configured to add to the fill level value 238 sampled by the fifth register 260 a previous sum of fill level values 278 present at its second input in order to obtain a current sum of fill level values. The multiplexer 268 is configured to provide at its output, based on the control signal 274, either the fill level value 238 as current sum of fill level values or the current sum of fill level values provided by the adder 266. The output of the multiplexer 268 is coupled to an input of the sixth register 262 that is configured to sample the current sum of fill level values (e.g. having a width of four bits) in synchronization with the clock of the first (or low frequency) clock domain 102. An output of the sixth register 262 is coupled to an input of the seventh register 264 and to the second input of the adder 266. The seventh register 264 is configured to re-sample the current sum of fill level values 278 present at its input in synchronization with the clock of the first (or low frequency) clock domain 102 and in response to the control signal 274 provided by the counter 270 and provide the re-sampled current sum of fill level values as summed fill level value 276. The summed fill level value 276 and the counter reading value 272 provided by the counter 270 are fed back to the calculator 106 by the feedback path 114 as fill level information 124. Note that the blocks listed above and shown in Fig. 9 can be clocked with the clock of the first (or low frequency) clock domain 102.
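The summing stage described above can be approximated in software as follows; the window of four values mirrors the 2-bit counter 270, while the function name and the generator form are illustrative assumptions:

```python
# Illustrative sketch of the summing stage: four consecutive fill level
# values are accumulated and handed over as one summed value, mirroring
# registers 260/262/264, adder 266 and the 2-bit counter 270.

def summed_fill_levels(fill_values, window=4):
    """Yield the sum of every `window` consecutive fill level values."""
    total = 0
    for i, fill in enumerate(fill_values, start=1):
        total += fill
        if i % window == 0:  # control signal 274 from the counter 270
            yield total
            total = 0

print(list(summed_fill_levels([2, 2, 3, 2, 2, 1, 2, 2])))  # [9, 7]
```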
[0083] Fig. 10 shows a flow chart of a method for synchronizing a data handover between a first clock domain and a second clock domain according to an embodiment of the present invention. While the description below describes the method as a series of steps, various steps may be performed in a different order or concurrently with one another. In addition, not all steps may be necessary to accomplish the present invention. In a first step 300, a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain is provided in the first clock domain. In a second step 302, an input data value is taken over in synchronization with the first clock domain and an output data value is provided in synchronization with the second clock domain and in response to a current synchronization pulse with a first-in-first-out storage. In a third step 304, the synchronization pulse is generated in the second clock domain such that the synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information. In a fourth step 306, a fill level information describing a fill level of the first-in-first-out storage is provided in the second clock domain. In a fifth step 308, the fill level information is fed back to the first clock domain to adjust the synchronization pulse cycle duration information based on the fill level information.

[0084] In some embodiments, the synchronization pulse cycle duration information can be provided such that the synchronization pulse cycle duration information represents, in the form of a coded numeric value, a number of clocks in the second clock domain.

[0085] Moreover, the method for synchronizing a data handover between a first clock domain and a second clock domain can further comprise the step of counting the clocks in the second clock domain and providing a counter reading, wherein the synchronization pulse is generated based on the counter reading such that a temporal position of the synchronization pulse is adjusted based on the synchronization pulse cycle duration information, and wherein the counter reading is set to an initial value in response to a generation of the synchronization pulse.

[0086] In addition, the method for synchronizing a data handover between a first clock domain and a second clock domain can further comprise the steps of processing an input information such that the input data value is provided in synchronization with the first clock domain for the first-in-first-out storage; receiving the output data value in synchronization with the second clock domain and in response to the synchronization pulse from the first-in-first-out storage; and processing the output data value such that an output information is provided in synchronization with the second clock domain.
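The five method steps can be tied together in a rough behavioural simulation. The sketch below runs in units of high frequency clock ticks and uses an illustrative proportional feedback (gain, target fill and all names are assumptions, not prescribed by the method); it merely demonstrates that the fed-back fill level keeps the storage from drifting:

```python
# Rough behavioural simulation of steps 300-308 in units of high
# frequency clock ticks: the low clock domain writes one entry every
# `ratio` ticks, validity counts come from an accumulator with
# proportional fill level feedback, and a reload counter triggers the
# reads. Gain, target fill and all names are illustrative assumptions.

def simulate(ratio=700 / 312, ticks=5000, target_fill=2.0, gain=0.1):
    acc = 0.0          # accumulator providing the duration information (step 300)
    fifo = []          # validity values currently in the storage (step 302)
    counter = 1        # reload counter generating the sync pulses (step 304)
    next_write = 0.0   # next low clock edge, in high clock units
    fill_trace = []
    for t in range(ticks):
        if t >= next_write:                  # low clock edge: take over a value
            fill = len(fifo)                 # fill level information (step 306)
            acc += ratio + gain * (target_fill - fill)  # feedback (step 308)
            count = max(1, int(acc))
            acc -= count
            fifo.append(count)
            next_write += ratio
            fill_trace.append(fill)
        counter -= 1
        if counter <= 0 and fifo:            # under-run: sync pulse, reload
            counter = fifo.pop(0)
    return fill_trace

trace = simulate()
print(min(trace[100:]), max(trace[100:]))  # fill level stays in a narrow band
```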
[0087] Further embodiments of the present invention provide an apparatus for synchronizing a data handover between a first clock domain and a second clock domain. The apparatus comprises a means for calculating, a means for first-in-first-out storing, a means for generating synchronization pulses, a means for providing a fill level information and a means for feeding back. The means for calculating is clocked with the clock of the first clock domain and configured to provide a synchronization pulse cycle duration information describing a temporal position of synchronization pulses at a clock of the second clock domain. The means for first-in-first-out storing is configured to take over an input data value in synchronization with the first clock domain and provide an output data value in synchronization with the second clock domain and in response to a current synchronization pulse. The means for generating synchronization pulses is clocked with the clock of the second clock domain and configured to generate the subsequent synchronization pulse such that the subsequent synchronization pulse is located at the temporal position described by the synchronization pulse cycle duration information. The means for providing a fill level information is configured to provide a fill level information describing a fill level of the first-in-first-out storage. The means for feeding back is configured to feed back the fill level information to the means for calculating. Thereby, the means for calculating is configured to adjust the synchronization pulse cycle duration information based on the fill level information.

[0088] Fig. 11 shows a block schematic diagram of an apparatus for synchronizing a data handover between a first clock domain and a second clock domain, according to an aspect of the present disclosure.

[0089] The apparatus 1100 is configured to synchronize a data handover between a first clock domain 1102 and a second clock domain 1104. The apparatus 1100 comprises a calculator 1106 (which may be equivalent to the calculator 106), a first-in-first-out storage 1108 (which may be equivalent to the first-in-first-out storage 108), a synchronization pulse generator 1110 (which may be equivalent to the synchronization pulse generator 110), a fill level information provider 1112 (which may be equivalent to the fill level information provider 112) and a feedback path 1114 (which may be equivalent to the feedback path 114). Moreover, the apparatus 1100 also comprises a phase information provider 1115.

[0090] The apparatus 1100 is configured to receive an input data value 1120 (or a sequence of input data values 1120) and to provide an output data value 1122 (or a sequence of output data values 1122). Moreover, the apparatus 1100 receives a first clock signal clk1 and a second clock signal clk2.

[0091] The first-in-first-out memory 1108 receives the input data values 1120, wherein the first clock signal clk1 may, for example, determine a timing at which the input data values 1120 are input into the first-in-first-out memory. Moreover, the second clock signal clk2 may determine, for example, in combination with a synchronization pulse, a timing according to which output data values 1122 are read out from the first-in-first-out memory 1108.

[0092] The synchronization pulse generator 1110, which is typically operated on the basis of the second clock signal clk2, is configured to provide a synchronization pulse which determines a time at which an output data value 1122 is read out from the first-in-first-out memory 1108, or a time at which an output data value 1122 provided by the first-in-first-out memory 1108 is taken over into a circuit which is operated based on the second clock signal clk2. The synchronization pulse generator 1110 receives a synchronization pulse cycle duration information 1116, which may be considered as a validity word, from the calculator 1106.
The synchronization pulse cycle duration information 1116 may, for example, describe a time interval between times at which subsequent output data values 1122 are taken over from the first-in-first-out memory 1108 into a circuit operated based on the second clock signal clk2. Accordingly, the synchronization pulse cycle duration information 1116 carries information about the validity time of a data word (or a plurality of data words) stored in the first-in-first-out memory. For example, the synchronization pulse cycle duration information 1116 may be such that a validity word describes a validity time of at least one data word, wherein each address of the first-in-first-out memory 1108 is associated with at least one data word. Accordingly, there may be an association between a validity data word (of the synchronization pulse cycle duration information) and one or more data words stored in the first-in-first-out memory 1108.

[0093] The calculator 1106 may, for example, use multiple input information items to determine the synchronization pulse cycle duration information 1116 (i.e., the validity words). For example, the calculator 1106 may receive a fill level information 1114 from the fill level information provider 1112, wherein the fill level information 1114 describes a fill level of the first-in-first-out memory 1108. Accordingly, the calculator 1106, which is preferably operated on the basis of the first clock signal clk1, provides the synchronization pulse cycle duration information such that the fill level of the first-in-first-out memory 1108 is kept within a predetermined limit or is brought towards a target fill level value. Moreover, the calculator 1106 may be configured to receive a phase information 1117 from the phase information provider 1115, wherein the phase information 1117 may, for example, describe a phase relation between the synchronization pulse 1127 provided by the synchronization pulse generator 1110 and the first clock signal clk1 (i.e., the clock of the first clock domain 1102). In other words, the phase information 1117 may be fed back from the phase information provider 1115 to the calculator 1106, and the calculator 1106 may be configured to adjust the synchronization pulse cycle duration information (which may be considered as a synchronization pulse timing information) based on the phase information 1117 provided by the feedback path.

[0094] Accordingly, the calculator 1106 may consider both the fill level of the first-in-first-out memory 1108 and a phase relation between the synchronization pulse 1127 and the first clock signal clk1 to provide the synchronization pulse cycle duration information 1116. Thus, the time interval between times at which output data values are taken over to the circuit of the second clock domain 1104 from the first-in-first-out memory 1108 (i.e., the validity time of the data values) is adapted or dynamically adjusted (in a feedback manner) both to maintain a desired fill level of the first-in-first-out memory 1108 and to obtain a desired phase relationship between the synchronization pulse signal 1127 (which may, for example, trigger a takeover of an output data value 1122 from the first-in-first-out memory 1108 to a circuit operated based on the second clock signal clk2) and the first clock signal clk1.
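One plausible way to derive such a phase information, following the counter-snapshot idea elaborated in the next paragraph, is sketched below; the normalization to a fractional phase is an illustrative assumption and not part of the disclosure:

```python
# Illustrative sketch (an assumption, not part of the disclosure) of a
# phase information derived from a counter snapshot: the count value of
# the synchronization pulse generator's counter is sampled at a clk1
# edge and normalized to the current reload period.

def phase_info(sampled_count, reload_value):
    """Fractional position of the clk1 edge within the sync pulse cycle."""
    return sampled_count / reload_value

# Example: the counter (reload value 3) is caught at count 1 on a clk1 edge:
print(phase_info(1, 3))  # ~0.33 of a synchronization pulse cycle
```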
For example, the phase information 1117 may be based on (or equal to) a count value reached by a counter of the synchronization pulse generator 1110 at a time of an edge (or at a time determined by an edge) of the first clock signal clk1. Moreover, a periodicity (for example, a reload value applied to the counter when a certain minimum count value or a maximum count value is reached by the counter) may be determined by the synchronization pulse cycle duration information. [0096] Thus, the apparatus 1100 may achieve a synchronization between the first clock domain 1102 and the second clock domain 1104, such that an underflow or overflow of the first-in-first-out memory 1108 is avoided and such that the timing of the synchronization pulse 1127 is well-adapted to the timing of the first clock signal clk1, such that, for example, violations of the setup-and-hold times or the like are avoided. [0097] It should be noted here that the functionality of the circuit 1100 may naturally be modified over a wide range. For example, the data transfer may also take place from the second clock domain to the first clock domain in some embodiments. Alternatively, bidirectional data flow is also possible. [0098] Moreover, the concepts described herein with respect to other embodiments may naturally be implemented in the apparatus 1100. [0099] Also, different concepts of generating the fill level information 1114 and the phase information 1117 are usable, as long as a proper timing without underflows or overflows of the first-in-first-out memory 1108 and without any other timing violations is achieved. [00100] The synchronization pulse cycle duration information, which may be considered as a validity word, may be associated with one or more data words of the first-in-first-out buffer in different ways. For example, the synchronization pulse cycle duration information may be stored in the first-in-first-out memory 1108 in some implementations. Alternatively, however, the synchronization pulse cycle duration information may be exchanged between the first clock domain and the second clock domain separately from the first-in-first-out memory 1108, wherein, nevertheless, it is preferred to have an association between a memory address of the first-in-first-out buffer 1108 and corresponding synchronization pulse cycle duration information 1116. [00101] Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus. [00102] Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software.
The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable. [00103] Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed. [00104] Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine readable carrier. [00105] Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier. [00106] In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer. [00107] A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory. [00108] A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example via the Internet. [00109] A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein. [00110] A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein. [00111] A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver. [00112] In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus. [00113] The above described embodiments are merely illustrative of the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art.
It is the intent, therefore, to be limited only by the scope of the appended patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.
A method for generating a digital signal pattern at M outputs involves retrieving an instruction from memory comprising a first set of bits identifying a first group of N outputs that includes fewer than all of the M outputs, and a second set of N bits each corresponding to a respective output included in the identified first group of N outputs. For each of the M outputs that is included in the identified first group of N outputs, the signal at the output is toggled if the one of the N bits corresponding to that output is in a first state and is kept in the same state if the one of the N bits corresponding to that output is in a second state. For each of the M outputs that is not included in the identified first group of N outputs, the signal at that output is kept in the same state.
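The toggle rule stated in this abstract can be made concrete with a short sketch. The Python below is illustrative only (the function and parameter names are not from the disclosure); it applies one instruction to an M-bit output state held as a list of 0/1 values:

```python
def apply_toggle(outputs, group_start, toggle_bits):
    """Apply one toggle instruction to the output state.

    outputs:     list of 0/1 values, one per output (M outputs total).
    group_start: index of the first output in the identified group of N
                 outputs (the group covers fewer than all M outputs).
    toggle_bits: the N bits, one per output in the group; 1 (the "first
                 state") toggles the output, 0 keeps it unchanged.

    Outputs outside the group are left in the same state.
    """
    for offset, bit in enumerate(toggle_bits):
        if bit:
            outputs[group_start + offset] ^= 1
    return outputs

# Example: M = 8 outputs; toggle outputs 4..7 according to pattern 1010.
state = [0] * 8
apply_toggle(state, group_start=4, toggle_bits=[1, 0, 1, 0])
assert state == [0, 0, 0, 0, 1, 0, 1, 0]
```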
CLAIMS
1. A method for generating a digital signal pattern at M outputs, comprising steps of: (a) retrieving a first instruction from memory comprising a first set of bits identifying a first group of N outputs that includes fewer than all of the M outputs, and a second set of N bits each corresponding to a respective output included in the first group of N outputs identified by the first set of bits included in the first instruction; (b) upon the occurrence of a first event, for each of the M outputs that is included in the first group of N outputs identified by the first set of bits included in the first instruction, toggling the signal at the output if the one of the N bits corresponding to that output is in a first state and keeping the signal at the output in the same state if the one of the N bits corresponding to that output is in a second state; and (c) for each of the M outputs that is not included in the first group of N outputs identified by the first set of bits included in the first instruction, keeping the signal at that output in the same state upon the occurrence of the first event. 2. The method of claim 1, wherein the first instruction consists of a first number of bits, and the method further comprises steps of: (d) retrieving a second instruction from memory that includes more than the first number of bits; (e) based on the second instruction, identifying particular ones of the M outputs for which the signals thereon are to be toggled upon the occurrence of a second event; and (f) upon the occurrence of the second event, toggling the signals on the particular ones of the M outputs that were identified by the second instruction. 3. The method of claim 2, wherein the second instruction includes twice as many bits as the first instruction. 4. The method of claim 2 or 3, further comprising steps of: (g) retrieving a third instruction from memory comprising a first set of bits identifying a group of X outputs that includes fewer than all of the M outputs, and a second set of X bits each corresponding to a respective output included in the group of X outputs identified by the first set of bits included in the third instruction, wherein X is not equal to N; (h) upon the occurrence of a third event, for each of the M outputs that is included in the group of X outputs identified by the first set of bits included in the third instruction, toggling the signal at the output if the one of the X bits corresponding to that output is in the first state and keeping the signal at the output in the same state if the one of the X bits corresponding to that output is in the second state; and (i) for each of the M outputs that is not included in the group of X outputs identified by the first set of bits included in the third instruction, keeping the signal at that output in the same state upon the occurrence of the third event. 5. The method of claim 4, wherein the number of bits included in the second instruction is an integer multiple of the number of bits included in the first instruction, and the number of bits included in the third instruction is an integer multiple of the number of bits included in the first instruction. 6. The method of claim 5, wherein the third instruction includes twice as many bits as the first instruction, and the second instruction includes twice as many bits as the third instruction. 7.
The method of any of claims 1-6, further comprising steps of: (d) retrieving a second instruction from memory comprising a first set of bits identifying a second group of N outputs that includes fewer than all of the M outputs and is different than the first group of N outputs, and a second set of N bits each corresponding to a respective output included in the second group of N outputs identified by the first set of bits included in the second instruction; (e) upon the occurrence of a second event, for each of the M outputs that is included in the second group of N outputs identified by the first set of bits included in the second instruction, toggling the signal at the output if the one of the N bits corresponding to that output is in the first state and keeping the signal at the output in the same state if the one of the N bits corresponding to that output is in the second state; and (f) for each of the M outputs that is not included in the second group of N outputs identified by the first set of bits included in the second instruction, keeping the signal at that output in the same state upon the occurrence of the second event. 8. The method of any of claims 1-6, further comprising steps of: (d) retrieving a second instruction from memory comprising a first set of bits identifying a group of X outputs that includes fewer than all of the M outputs, and a second set of X bits each corresponding to a respective output included in the group of X outputs identified by the first set of bits included in the second instruction, wherein X is not equal to N; (e) upon the occurrence of a second event, for each of the M outputs that is included in the group of X outputs identified by the first set of bits included in the second instruction, toggling the signal at the output if the one of the X bits corresponding to that output is in the first state and keeping the signal at the output in the same state if the one of the X bits corresponding to that output is in the second state; and (f) for each of the M outputs that is not included in the group of X outputs identified by the first set of bits included in the second instruction, keeping the signal at that output in the same state upon the occurrence of the second event. 9. The method of claim 7 or 8, wherein the second instruction includes twice as many bits as the first instruction. 10. The method of any of claims 1-9, further comprising a step of: indicating that the first event has occurred upon determining that a particular number of clock cycles have elapsed following retrieval of the first instruction. 11. The method of claim 10, further comprising a step of: identifying the particular number of clock cycles that are to elapse following receipt of the first instruction before indicating that the first event has occurred based upon a third set of bits included in the first instruction. 12. The method of claim 10, further comprising a step of: identifying the particular number of clock cycles that are to elapse following receipt of the first instruction before indicating that the first event has occurred based upon the content of a register. 13. The method of claim 12, further comprising a step of: identifying the register based upon at least one bit included in the first instruction. 14.
An apparatus for generating a digital signal pattern at M outputs, comprising: a circuit configured and arranged to retrieve at least instructions of a first type from memory and to control the toggling of signals at the M outputs in response thereto, wherein each of the instructions of the first type comprises a first set of bits identifying a first group of N outputs that includes fewer than all of the M outputs, and a second set of N bits each corresponding to a respective output included in the first group of N outputs identified by the first set of bits included in the instruction, the circuit being further configured and arranged to process each retrieved instruction of the first type such that, for each of the M outputs that is included in the first group of N outputs identified by the first set of bits included in the instruction, the signal at the output is toggled if the one of the N bits corresponding to that output is in a first state and is kept in the same state if the one of the N bits corresponding to that output is in a second state, and, for each of the M outputs that is not included in the first group of N outputs identified by the first set of bits included in the instruction, the signal at that output is kept in the same state. 15. The apparatus of claim 14, wherein the circuit is further configured and arranged to retrieve instructions of a second type, which comprise more bits than the instructions of the first type, from memory and to control the toggling of signals at the M outputs in response thereto, the circuit being configured to process each instruction of the second type to identify, based on the instruction, particular ones of the M outputs for which the signals thereon are to be toggled, and to toggle the signals on the particular ones of the M outputs that were identified by the instruction. 16. The apparatus of claim 15, wherein the circuit is configured and arranged to retrieve and process instructions of the second type that include twice as many bits as instructions of the first type. 17. The apparatus of any of claims 14-16, wherein the circuit is further configured to retrieve instructions of a third type from memory and to control the toggling of signals at the M outputs in response thereto, each instruction of the third type comprising a first set of bits identifying a group of X outputs that includes fewer than all of the M outputs, and a second set of X bits each corresponding to a respective output included in the identified group of X outputs, wherein X is not equal to N, the circuit being further configured and arranged to process each retrieved instruction of the third type such that, for each of the M outputs that is included in the group of X outputs identified by the first set of bits included in the instruction, the signal at the output is toggled if the one of the X bits corresponding to that output is in the first state and is kept in the same state if the one of the X bits corresponding to that output is in the second state, and, for each of the M outputs that is not included in the group of X outputs identified by the first set of bits included in the instruction, the signal at that output is kept in the same state. 18.
The apparatus of claim 17, wherein the circuit is configured and arranged to retrieve and process instructions of the second type that include a number of bits that is an integer multiple of the number of bits included in instructions of the first type, and to retrieve and process instructions of the third type that include a number of bits that is an integer multiple of the number of bits included in instructions of the first type. 19. The apparatus of claim 17 or 18, wherein the circuit is configured and arranged to retrieve and process instructions of the third type that include twice as many bits as instructions of the first type, and to retrieve and process instructions of the second type that include twice as many bits as instructions of the third type. 20. The apparatus of any of claims 14-19, wherein the circuit is further configured to retrieve instructions of a second type from memory and to control the toggling of signals at the M outputs in response thereto, each instruction of the second type comprising a first set of bits identifying a group of X outputs that includes fewer than all of the M outputs, and a second set of X bits each corresponding to a respective output included in the identified group of X outputs, wherein X is not equal to N, the circuit being further configured and arranged to process each retrieved instruction of the second type such that, for each of the M outputs that is included in the group of X outputs identified by the first set of bits included in the instruction, the signal at the output is toggled if the one of the X bits corresponding to that output is in the first state and is kept in the same state if the one of the X bits corresponding to that output is in the second state, and, for each of the M outputs that is not included in the group of X outputs identified by the first set of bits included in the instruction, the signal at that output is kept in the same state. 21. The apparatus of any of claims 14-20, wherein the circuit further comprises a timer configured to generate a toggle event signal a particular number of clock cycles following receipt of an instruction of the first type and is configured to control the toggling of the signals at the M outputs upon generation of the toggle event signal. 22. The apparatus of claim 21, wherein the circuit is configured to determine the particular number of clock cycles the timer is to wait before generating a toggle event signal after receiving an instruction of the first type based upon the content of each instruction of the first type. 23. The apparatus of claim 21, wherein the circuit is configured to determine the particular number of clock cycles the timer is to wait before generating a toggle event signal after receiving an instruction of the first type based upon the contents of a register. 24. The apparatus of any of claims 14-23, wherein the circuit comprises a decoder configured to interpret the instructions of the first type and provide control signals to a channel select circuit based thereupon, the channel select circuit being configured to control toggling of signals at each of the M outputs in response to the received control signals. 25. The apparatus of any of claims 14-24, further comprising a memory having at least the instructions of the first type stored therein. 26.
A method for generating a digital signal pattern at M outputs, comprising steps of: (a) retrieving a first instruction from memory that consists of N bits; (b) based on the first instruction, identifying first ones of the M outputs for which the signals thereon are to be toggled upon the occurrence of a first event; (c) upon the occurrence of the first event, toggling the signals on the first ones of the M outputs that were identified by the first instruction; (d) retrieving a second instruction from memory that includes more than N bits; (e) based on the second instruction, identifying second ones of the M outputs for which the signals thereon are to be toggled upon the occurrence of a second event; and (f) upon the occurrence of the second event, toggling the signals on the second ones of the M outputs that were identified by the second instruction. 27. The method of claim 26, further comprising steps of: (g) indicating that the first event has occurred upon determining that a first number of clock cycles have elapsed following retrieval of the first instruction; and (h) indicating that the second event has occurred upon determining that a second number of clock cycles have elapsed following retrieval of the second instruction. 28. The method of claim 27, further comprising a step of: determining the first number of clock cycles that are to elapse following receipt of the first instruction before indicating that the first event has occurred based upon the content of a register. 29. The method of claim 28, further comprising a step of: identifying the register based upon at least one bit included in the first instruction. 30. The method of any of claims 26-29, wherein the second instruction includes twice as many bits as the first instruction.
VARIABLE INSTRUCTION WIDTH SOFTWARE PROGRAMMABLE DATA PATTERN GENERATOR
RELATED APPLICATIONS
This application relates to the subject matter disclosed in each of (1) U.S. Provisional Application Ser. No. 60/906,000, filed March 9, 2007 ("the '000 application"), (2) U.S. Patent Application Ser. No. 11/818,449, filed June 14, 2007 ("the '449 application"), and (3) U.S. Patent Application Ser. No. 11/818,452, filed June 14, 2007 ("the '452 application"). The entire contents of each of the '000, '449, and '452 applications are incorporated herein by reference.
BACKGROUND
Charge coupled devices (CCDs) are used in a large variety of digital imaging applications. There are a number of different manufacturers of such devices and each manufacturer typically has numerous models. The large variety of CCDs and the continuously evolving CCD control requirements have caused challenges in designing the analog front end/CCD controller circuits that will have significant longevity in the market place. This problem is ameliorated to a large extent by the software programmable pattern generator described in the '000, '449, and '452 applications, incorporated by reference above. That software programmable pattern generator utilizes a compact and flexible assembly programmable Reduced Instruction Set Computer (RISC) that is optimized for generating high precision timing pulses and low power control functions. The architecture has a variable bit width instruction set that includes: vector toggling instructions, jump instructions, conditional instructions, arithmetic instructions, and load/store instructions. The pattern generator can fetch and execute one instruction per clock cycle, and is parameter scalable to allow for easy optimization in different applications. To allow every chip output to be set simultaneously at a pixel clock resolution, a large number of bits may be stored in parallel within the program memory, with each bit in a vector word corresponding to an output pin that can be selectively toggled, depending on the state of the bit. In the case of Analog Devices' model number ADDI9000, this meant that every instruction was "64" bits wide. An advantage of this model was the simple control and design logic required. We have since recognized, however, that the use of such large instructions consumes a significant amount of memory, thus imposing limits on the utility of the timing generator for certain applications.
SUMMARY
According to one aspect of the present invention, a method for generating a digital signal pattern at M outputs involves retrieving a first instruction from memory comprising a first set of bits identifying a first group of N outputs that includes fewer than all of the M outputs, and a second set of N bits each corresponding to a respective output included in the first group of N outputs identified by the first set of bits included in the first instruction. For each of the M outputs that is included in the first group of N outputs identified by the first set of bits included in the first instruction, the signal at the output is toggled if the one of the N bits corresponding to that output is in a first state and is kept in the same state if the one of the N bits corresponding to that output is in a second state.
For each of the M outputs that is not included in the first group of N outputs identified by the first set of bits included in the first instruction, the signal at that output is kept in the same state. According to another aspect of the invention, an apparatus for generating a digital signal pattern at M outputs comprises a circuit configured and arranged to retrieve at least instructions of a first type from memory and to control the toggling of signals at the M outputs in response thereto, wherein each of the instructions of the first type comprises a first set of bits identifying a first group of N outputs that includes fewer than all of the M outputs, and a second set of N bits each corresponding to a respective output included in the first group of N outputs identified by the first set of bits included in the instruction. The circuit is further configured and arranged to process each retrieved instruction of the first type such that, for each of the M outputs that is included in the first group of N outputs identified by the first set of bits included in the instruction, the signal at the output is toggled if the one of the N bits corresponding to that output is in a first state and is kept in the same state if the one of the N bits corresponding to that output is in a second state, and, for each of the M outputs that is not included in the first group of N outputs identified by the first set of bits included in the instruction, the signal at that output is kept in the same state. According to another aspect, a method for generating a digital signal pattern at M outputs involves retrieving a first instruction from memory that consists of N bits, and retrieving a second instruction from memory that consists of fewer than N bits. Based on the first instruction, first ones of the M outputs are identified and the signals on those outputs are toggled. Based on the second instruction, second ones of the M outputs are identified and signals on those outputs are toggled.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a functional block diagram illustrating various components of a digital pattern processor (DPP) that may operate together to control the generation of a digital signal pattern at its outputs; Fig. 2 is a flowchart illustrating an example of an execution flow that may be used to generate a pattern of pulses on the outputs of the DPP; Figs. 3 and 4 illustrate the format and content of several examples of toggle instructions that may be employed in some embodiments; and Fig. 5 is a block diagram illustrating an example of hardware that may be employed by the channel control circuit of the DPP to enable the use of toggle instructions of various lengths and types.
DETAILED DESCRIPTION
This disclosure is directed to improvements to certain components and features of the system disclosed in the '449 and '452 applications (incorporated by reference above). Familiarity with the entirety of the disclosure of the '449 and '452 applications will thus be assumed. For ease of understanding, to the extent practicable this disclosure will use the same reference numerals as those used in the '449 and '452 applications to describe similar components and features.
It should be appreciated, moreover, that the components and features in this disclosure that are similarly named or that are designated using the same reference numerals as the components or features described in the '449 and '452 applications may be used in the system described in the '449 and '452 applications in the same or similar manner as such similarly named or labeled components and features are used therein. That only certain key components of the system disclosed in the '449 and '452 applications are re-described herein should not be understood to mean that such components and features are incompatible in any way with the new or modified components or features disclosed herein. Rather, it is simply for conciseness that only those components and features of the system disclosed in the '449 and '452 applications that are directly impacted or modified by this disclosure are re-described herein. Fig. 1 is similar to Fig. 3 of the '449 and '452 applications. The only pertinent difference between the two figures is the addition of toggle control lines 310 to the diagram of Fig. 1. The purpose of these additional control lines will be explained in more detail below. This figure is a functional block diagram illustrating various components of a digital pattern processor (DPP) (like the DPP 102 described in the '449 and '452 applications - incorporated by reference above) that may operate together to control the generation of a digital pattern of signals at a group of outputs. As shown, the DPP may comprise a program sequencer 106, a synchronous timer 114, a program memory 108, channel control circuitry 118, and output pads 120. As illustrated, the program sequencer 106 may comprise an instruction decoder 302 and program sequencer logic 304 that together are responsible for fetching instructions from the memory 108, decoding the fetched instructions, and controlling the synchronous timer 114 and channel control circuitry 118 so as to appropriately generate a pattern of digital signals at the outputs 120. In the example shown, the synchronous timer 114 comprises a toggle counter 306 and a comparator 308. The comparator 308 may, for example, determine when the toggle counter 306 has reached a specified "toggle count" value. The toggle counter 306 may, for example, comprise a sixteen-bit free-running clock cycle counter. An illustrative example of an execution flow that may be employed by these components to generate a pattern of pulses by toggling the signals at the outputs 120 and/or forcing the signals at the outputs 120 to particular values is discussed below in connection with Fig. 2. Fig. 2 is identical to Fig. 8 of the '449 and '452 applications. This figure is a flowchart illustrating an example of an execution flow 800 that may be used to generate a digital signal pattern on the outputs 120. In the example shown, at steps 802 and 804, an instruction is fetched from the program memory 108 and decoded for execution. If, at a step 806, it is determined that the instruction is a toggle instruction, then the flow 800 proceeds to a step 808, where it waits until the comparator 308 has determined that the toggle counter 306 has reached a toggle count value. As discussed in more detail below, the toggle count value may be either included in the toggle instruction itself or may be read from a register of the DPP (e.g., one of the general purpose registers R0-R7 identified in Table 1 in the '449 and '452 applications).
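The flow 800 maps naturally onto a small fetch/decode/execute loop. The following Python sketch is a behavioral approximation only — instructions are modeled as pre-decoded dictionaries and non-toggle instructions are omitted, neither of which reflects the actual hardware:

```python
def run(program, outputs, registers, max_ticks):
    """Behavioral sketch of the execution flow 800 of Fig. 2.

    program:   list of decoded instructions (dicts) - an assumption made
               for readability; the hardware decodes raw memory words.
    registers: general purpose registers (e.g. holding toggle counts).
    max_ticks: number of free-running toggle counter ticks to simulate.
    """
    pc = 0
    counter = 0      # free-running toggle counter (306)
    pending = None   # (toggle count, toggle mask) awaiting a match
    while counter < max_ticks and (pending or pc < len(program)):
        if pending is None:
            insn = program[pc]              # steps 802/804: fetch, decode
            pc += 1
            if insn["op"] == "toggle":      # step 806: toggle instruction?
                # Toggle count from the instruction itself or a register.
                count = insn["count"] if "count" in insn else registers[insn["reg"]]
                pending = (count, insn["mask"])
            # step 812 (program flow instructions) omitted in this sketch
        else:
            count, mask = pending
            if counter >= count:            # step 808: comparator match
                for i, bit in enumerate(mask):
                    outputs[i] ^= bit       # step 810: toggle outputs
                pending = None
        counter += 1
    return outputs
```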
When the toggle count value is to be read from a register, either the same register may be referenced each time a particular type of toggle instruction is received or one or more bits may be included in the toggle instruction that identify the register that is to be referenced. As used herein, a "toggle instruction" refers to any instruction that is responsible for determining the state of one or more of the outputs 120 and is thus intended to encompass not only instructions that cause the signals at particular outputs to "toggle" (i.e., to change from one state to another) but also instructions that force the signals at particular outputs 120 to particular values (sometimes referred to herein as "force vector" instructions) and thus may or may not actually cause the output signals to toggle, depending on the initial state of each such signal. Once the toggle counter 306 has reached the specified toggle count, the flow proceeds to a step 810, where certain outputs 120 of the DPP are simultaneously toggled or forced to particular values in the manner specified by the instruction. The flow then returns to the steps 802 and 804 where the next program instruction is fetched and decoded. If, at the step 806, it is determined that the fetched instruction is not a toggle instruction, then the routine proceeds to a step 812, where the instruction is carried out so as to control the program flow in the manner specified. (Examples of the manner in which particular toggle instructions and program flow instructions may be configured and carried out in various embodiments are described in detail in the '449 and '452 applications and thus will not be repeated here.) Accordingly, by employing the configuration and functionality illustrated in Figs. 1 and 2, the toggle counter 306 and a custom toggle instruction set may be used to keep the DPP in lock step execution to allow the generation of a digital signal pattern in the manner specified by the instruction set. Advantageously, in the example shown, the flow is capable of toggling or forcing the values of the signals on all output pins on any given clock cycle. In some embodiments, a single instruction may be defined for toggling or forcing the values of all of the output bits simultaneously. As noted in the '449 and '452 applications, one application of the DPP may be as a timing generator for an image sensor. Examples of environments in which such a timing generator may operate are described in U.S. Pat. No. 6,512,546, U.S. Pat. No. 6,570,615, and U.S. Patent Application Publication No. 2006/0077275 A1, each of which is incorporated herein by reference in its entirety. Fig. 3 shows several examples of program instruction configurations that may be used in various embodiments of the DPP disclosed herein, as well as in the '449 and '452 applications, including two examples of "short" toggle instruction formats 314 and 316 that were not disclosed in the '449 and '452 applications. In some embodiments, the program memory 108 that is employed may have a fixed width that is segmented into several sections. In the example of Fig. 3, for instance, the program memory is "64" bits wide and is segmented into four sections W0, W1, W2, and W3. The format of the "long" toggle instruction 312 may be just like that of the toggle instructions described in the '449 and '452 applications and may be used in a similar manner.
Advantageously, the short toggle instructions 314 and 316 may be used (in the manner described below) in circumstances in which it is necessary to toggle only a particular subset of the bits of the output vector. In the example of Fig. 3, for instance, the 32-bit short toggle instruction 314 may be used to toggle any or all of the bits within a particular byte (i.e., a set of eight bits) of the output vector, and the 16-bit short toggle instruction 316 may be used to toggle any or all of the bits within a particular nibble (i.e., a set of four bits) of the output vector. For the 32-bit short toggle instructions 314, a group of three byte select bits 314a may be used to identify the group of eight output bits that is to be toggled as indicated by the bits in the byte field 314b. Similarly, for the 16-bit short toggle instructions 316, a group of four nibble select bits 316a may be used to identify the group of four output bits that is to be toggled as indicated by the bits in the nibble field 316b. Although the instructions 312, 314, 316 in the illustrated example are eight bytes, four bytes, and two bytes wide, respectively, it should be appreciated that instructions of different lengths and relative sizes could additionally or alternatively be employed. In some embodiments, for example, the short toggle instructions may be two and four bytes long, respectively, just as in the primary example described herein, but the long toggle instructions may be ten rather than eight bytes wide, with the two extra bytes containing additional bits of the vector field. Such a configuration would allow the generation of a digital pattern on "57" output pins, rather than on only "41" pins as in the primary example described herein. To simplify the implementation of hardware components in the system, it may be useful to align the longer instructions in memory so as to allow each instruction to be fetched in a single memory access. For example, if a memory including one thousand lines of sixty-four bits is employed, each 64-bit instruction may be aligned so that it starts at the beginning of a line, rather than wrapping from one line to another. It may also be advantageous to align the 32-bit instructions in the above example so that they also do not wrap around from one memory line to another. For instructions that are aligned in such a manner, appropriate instructions may be inserted into the program code that cause the program counter to be incremented by a specific amount to account for the adjusted alignment (e.g., by skipping over one or more of the sections W1, W2, W3 of the memory line, which may simply remain unused). In some embodiments, it can be advantageous to use instructions having lengths that are integer multiples of one another. In one of the examples above, for instance, the length of the 32-bit short toggle instruction is twice (a power of two times) the length of the 16-bit short toggle instruction, and the length of the 64-bit long toggle instruction is twice (a power of two times) the length of the 32-bit short toggle instruction. The use of such "power of two" differences between instruction lengths may, for example, simplify the process of fetching and decoding instructions. For instance, in some embodiments, the mechanism used for fetching may only have to choose between incrementing the program counter by "1," "2," or "4," which in binary becomes "001," "010," and "100," respectively.
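The field layouts and the "power of two" fetch rule can be illustrated with a small decode helper. This sketch follows the bit positions given for the Fig. 4 examples discussed below (7-bit opcode in bits 0-6, nibble select in bits 7-10, nibble field in bits 11-14), but is otherwise hypothetical:

```python
def decode_short16(word):
    """Decode a 16-bit short toggle instruction (format 316, simplified).

    Bit positions follow the Fig. 4 examples discussed below; error
    handling and the remaining formats are omitted from this sketch.
    """
    opcode = word & 0x7F                # bits 0-6: operational code
    nibble_select = (word >> 7) & 0xF   # bits 7-10: which output nibble
    nibble_bits = (word >> 11) & 0xF    # bits 11-14: which bits to toggle
    return opcode, nibble_select, nibble_bits

def pc_increment(length_bits):
    # 16-, 32- and 64-bit instructions advance the program counter by
    # one, two or four 16-bit sections (W0-W3) of the memory line,
    # i.e. by "001", "010" or "100" in binary.
    return length_bits // 16

assert pc_increment(16) == 1 and pc_increment(32) == 2 and pc_increment(64) == 4
```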
Fig. 4 is a chart showing several examples of specific toggle instructions of the above-described types that may be employed in certain embodiments. In the chart, the numbers "0" to "31" in the row labeled "INSTRUCTION" correspond to the respective bits in the depicted instruction words. For example, the numbers "0" to "6" in the "INSTRUCTION" row of Fig. 4 correspond to the 7-bit operational codes ("opcodes") of the toggle instructions 312, 314, 316 of Fig. 3. For the long (i.e., 64-bit or longer) toggle instructions in the chart, it should be understood that, although not specifically depicted, the bits "32" to "63" (or higher) would be "vector bits" just like the bits "23" to "31" in those examples. The opcode in each instruction may identify not only whether the instruction is a "toggle instruction," as opposed to one of the other types of instructions described in the '449 and '452 applications, e.g., a program flow instruction, a load/store instruction, an arithmetic instruction, etc., but also the particular length and content of the toggle instruction. For example, the opcode may indicate whether the instruction is a long toggle instruction 312 (which may be either an instruction to toggle certain bits or an instruction to force certain bits to particular values), a 32-bit short toggle instruction 314, or a 16-bit short toggle instruction 316. In the examples of Fig. 4, the assertion of bits "1" and "2" of the opcode indicates that the instruction is a toggle instruction. The assertion of bit "0" in addition to bits "1" and "2" indicates that the toggle instruction is a "force vector" instruction. The assertion of bit "3" in addition to bits "1" and "2" indicates that the toggle instruction is "short" (i.e., either "32" bits or "16" bits) rather than "long" (i.e., "64" bits or more). The assertion of both of bits "4" and "5" in addition to bits "1," "2," and "3" indicates that the short toggle instruction is "16" bits long rather than "32" bits long. (Because the "clear" and "relative" options are never simultaneously asserted for a short toggle instruction 314, the assertion of both such bits may be used for this purpose.) As shown, the long toggle instructions 312 and the 32-bit short toggle instructions 314 may also each include an "immediate count" field. This field may, for example, be used to identify the "toggle count" value that the toggle counter 306 must reach for an output event (e.g., a toggling of specified output bits or forcing of output bits to particular values) to occur. Alternatively, some or all of the same bits may be used to identify a particular register (e.g., one of the general purpose registers R0-R7 identified in Table 1 of the '449 and '452 applications) that contains the toggle count value that is to be used for such a purpose. In the examples shown in Fig. 4, the assertion of bit "6" in an instruction opcode indicates that the toggle count value is to be determined from the bits in the "immediate count" field (i.e., the bits labeled "I"), rather than by looking to bits "7" to "9" (i.e., the bits labeled "RM") to identify the register containing the toggle count value. In the illustrative example shown, the 16-bit toggle instruction does not include either an "immediate count" field or a set of bits identifying a register.
Instead, the DPP knows to look by default to a specific register (e.g., the general purpose register R0 identified in Table 1 of the '449 and '452 applications) for the toggle count value that is to be used when such an instruction is received. Fig. 5 shows an example of channel control circuitry 118 that may be employed in the DPP to facilitate the implementation of short toggle instructions in addition to long toggle instructions. Although the details of only the channel select circuit 1180 associated with the output pads 1200 will now be described, it should be appreciated that the other channel select circuits 1181 to 118N associated with the other output pads 1201 to 120N, respectively, may include the same or similar circuitry. As shown, in the illustrated example, the channel select circuit 1180 includes three multiplexers 324, 326, 328, four AND gates 330, 332, 334, 336, three inverters 338, 340, 342, an XOR gate 344, and a flip-flop 346. In the illustrated example, the channel control circuitry 118 includes a separate circuit 1180, 1181, 118N for each nibble (i.e., group of four bits) that is provided at a respective group of four output pads 1200, 1201, 120N of the DPP. As shown, each of the channel control circuits 1180, 1181, 118N may be provided with toggle control signals 310 from the decoder 302, as well as a "toggle match" signal from the comparator 308 of the synchronous timer 114. Vector data from a particular part of the instruction being executed is also supplied to each channel control circuit 1180, 1181, 118N as indicated by blocks 318, 320, and 322. For example, with reference to Figs. 3 and 4, each block 322 may be provided with bits "11" to "14" of every executed instruction (which, for 16-bit short toggle instructions 316, corresponds to the "nibble" field 316b in Fig. 3), each block 320 may be provided with either bits "24" to "27" or bits "28" to "31" of every executed instruction that includes such bits (which, for 32-bit short toggle instructions 314, corresponds to one half of the "byte" field 314b in Fig. 3), and each block 318 may be provided with a different group of four bits from the "vector field" of every executed long instruction that includes such bits. A sufficient number of channel select circuits 1180, 1181, 118N may be employed to provide four different bits from the long instruction vector field (e.g., bits "23" to "63" in the example of Fig. 4) to the respective blocks 318 of such circuits. For example, the bits provided to the block 318 of the circuit 1180 may correspond to bits "23" to "26" of a received instruction, the bits provided to the same block of the circuit 1181 may correspond to bits "27" to "30" of the received instruction, and so on. Because each channel control circuit 1180, 1181, 118N may be permanently associated with and responsible for driving a respective group of four output pads 1200, 1201, 120N, the same group of bits from each instruction word may be provided to the same blocks 318, 320, 322 of a particular channel control circuit 1180, 1181, 118N every time a new instruction is decoded. In the example shown, when the short toggle line 310b is low (indicating that the decoded instruction is not a short toggle instruction 314, 316), the multiplexer 326 is controlled (via the inverter 340) to provide the contents of the block 318 to one of the inputs of the AND gate 334.
(If the opcodes shown in Fig. 4 are employed, then the decoder 302 may simply provide bit "3" of the opcode as the control signal on the short toggle line 310b.) If a toggle match signal is received from the synchronous timer 114 when the circuit is in such a state (and the toggle/force line 310c is high), then the signals at the outputs 1200 will be caused to toggle in the manner specified by the bits in the block 318. It should be appreciated that all of the other channel control circuits 1181 to 118N may similarly selectively cause the signals on their corresponding output pads 1201 to 120N to toggle at the same time when a toggle match signal is received from the synchronous timer 114, thus causing all of the outputs of the DPP to toggle at the same time as indicated in the vector field of the received instruction. In the illustrated example, the short toggle width select line 310a from the decoder 302 controls the multiplexer 324 to select either the four bits from the block 320 or the four bits from block 322 as an input to the multiplexer 326. As noted above, the four bits from the block 322 may be selected when a 16-bit toggle instruction is being processed, and the four bits from the block 320 may be selected when a 32-bit toggle instruction is being processed. (If the opcodes shown in Fig. 4 are employed, then the decoder 302 may generate an appropriate control signal on the select line 310a, for example, simply by performing a logical AND operation on bits "4" and "5" of the opcode of the received instruction.) The "nibble select line" for each channel select circuit (e.g., the nibble select line 310d for the channel select circuit 1180) may be asserted when the decoder 302 determines (e.g., by examining the bits in the nibble select field 316a or the byte select field 314a) that the particular output nibble for which the channel control circuit is responsible has been selected for toggling. With reference to Figs. 3 and 4, for example, the nibble select line 310d may be asserted if either (1) the nibble select bits 316a (i.e., bits "7" to "10" in Fig. 4) in a 16-bit short toggle instruction 316 identify the particular output nibble for which the channel control circuit 1180 is responsible, or (2) the byte select bits 314a (i.e., bits "20" to "23" in Fig. 4) in a 32-bit short toggle instruction 314 identify an output byte containing the particular output nibble for which the channel control circuit 1180 is responsible. Thus, for 16-bit short toggle instructions 316 (which can select one or more bits within only a single nibble for toggling), the nibble select line of only a single channel control circuit 1180, 1181, 118N will be asserted. For 32-bit short toggle instructions 314 (which can select one or more bits within only a single byte for toggling), the nibble select lines of only the two channel control circuits 1180, 1181, 118N responsible for driving the bits of the selected output byte will be asserted. As shown in Fig. 5, the short toggle select line 310b and the nibble select line 310d from the decoder 302 may together control the multiplexer 326 (via AND gates 330, 332 and inverters 338, 340) to select one of: (1) the four bits from the block 318, (2) the selected four bits from the multiplexer 324, and (3) a set of four zeros.
If the decoded instruction is a toggle instruction 312, 314, 316, then the selected one of these three inputs will determine how the four output bits for which the channel select circuit 1180 is responsible are to be toggled (unless the signal on the toggle/force line 310c indicates that the toggle instruction is a force vector instruction) upon receipt of a toggle match signal from the synchronous timer 114. When the toggle/force line 310c is low, the inverter 342 supplies a high signal to one of the inputs of the AND gate 336. Thus, when a toggle match signal is received from the synchronous timer 114, the AND gate 336 causes the multiplexer 328 to select the long vector nibble block 318, rather than the output of the XOR gate 344, as the input to the flip-flop 346, and thus causes the values of the long vector nibble block 318 to be forced upon the output pads 1200 rather than allowing the four bits from the multiplexer 326 to determine how the outputs should be toggled. (If the opcodes of Fig. 4 are employed, then the decoder 302 may simply provide bits "3" and "0" of the opcode as the control signals on the short toggle line 310b and the toggle/force line 310c, respectively.) In the example circuit shown, receipt of a toggle match signal will cause the AND gate 334 to provide the four bits from the multiplexer 326 to one of the inputs of the XOR gate 344. The XOR gate 344, in turn, causes the four bits held by the "Q" output of the flip-flop 346 to be toggled as specified by those four bits (provided the toggle/force line 310c is high). If the nibble select line 310d is low when the short toggle select line 310b is high (indicating that the instruction is either a 16-bit short toggle instruction 316 or a 32-bit short toggle instruction 314) and the toggle/force select line 310c is also high, then the multiplexer 326 provides four zeros to the input of the AND gate 334, thus causing the outputs of that particular nibble to maintain their current state, and not be toggled, when the toggle match signal is received. If, however, the nibble select line 310d is high (indicating that the decoder has determined that the particular output nibble for which the channel control circuit 1180 is responsible has been selected for toggling) when the short toggle select line 310b and the toggle/force select line 310c are both high, then the multiplexer 326 provides the four output bits of the multiplexer 324 to the input of the AND gate 334, thus resulting in the output bits of that particular output nibble being toggled as indicated by those bits when the toggle match signal is received. In some embodiments, a pattern generation program may be written using only "long" toggle instructions (several examples of such programs were disclosed in the '449 and '452 applications, incorporated by reference above) and the determination of which long toggle instructions can be converted into either 16-bit or 32-bit short toggle instructions can be left to the timing generator assembler (TGASM). For example, any toggle instructions that require the toggling of one or more bits from only a single byte may be compressed into a 32-bit toggle instruction. Similarly, any toggle instructions that require the toggling of one or more bits from only a single nibble may be compressed into 16-bit toggle instructions.
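This compression rule can be stated directly: a toggle whose set bits fall within one nibble fits the 16-bit form, within one byte the 32-bit form, and otherwise stays long. A hypothetical helper (not the actual TGASM algorithm):

```python
def smallest_toggle_width(vector_bits):
    """Pick the narrowest toggle instruction able to express a toggle of
    the given output bit positions (a sketch of the TGASM rule above).

    vector_bits: iterable of output bit indices that must toggle.
    Returns the instruction width in bits: 16, 32 or 64.
    """
    positions = set(vector_bits)
    if len({p // 4 for p in positions}) <= 1:
        return 16   # all toggled bits lie within a single nibble
    if len({p // 8 for p in positions}) == 1:
        return 32   # all toggled bits lie within a single byte
    return 64       # spans more than one byte: keep the long form

assert smallest_toggle_width([5, 6]) == 16    # one nibble
assert smallest_toggle_width([0, 7]) == 32    # one byte, two nibbles
assert smallest_toggle_width([3, 12]) == 64   # two different bytes
```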
The TGASM may also automatically align the remaining longer instructions in memory and insert appropriate "align" instructions in the code so as to ensure that each such instruction can be fetched in a single memory access.Having described several embodiments of the invention in detail, various modifications and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and is not intended as limiting. The invention is limited only as defined by the following claims and the equivalents thereto.
A digital system and method of operation is provided in which several processors (440, 450) are connected to a shared memory resource (460). Translation lookaside buffers (TLBs) (400, 402) are connected to receive a request address (404a-n) from each respective processor. Each TLB has a set of entries that correspond to pages of address space. Each entry provides a set of task memory attributes (TMA) (412a-n) for the associated page of address space. Task memory attributes are defined by a task control block associated with a currently executing task. For each memory transfer request, the TLB accesses an entry corresponding to the request address and provides a translated physical memory address and a task memory attribute value associated with that requested address space page. Functional circuitry (470) performs pre-/post-processing on data that is being transferred between a processor and the memory in accordance with the task memory attribute value provided by the TLB with each memory transfer request. Thus, data accessed at the same address by different tasks on a same processor or on different processors can be pre-processed or post-processed in a manner defined by a task control block. Such pre-/post-processing may include compression/decompression, encryption/decryption, or formatting, for example.
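The flow summarized in this abstract — a TLB lookup that yields both a physical address and a task memory attribute, which then selects a pre-/post-processing step — can be sketched as follows. All names are hypothetical, and the toy transformations merely stand in for the compression, encryption, and formatting hardware named above:

```python
# Hypothetical TMA codes; the disclosure gives compression/decompression,
# encryption/decryption and formatting as examples of pre/post-processing.
TMA_NONE, TMA_COMPRESS, TMA_ENCRYPT = 0, 1, 2

class Tlb:
    """Sketch of a TLB whose entries carry a task memory attribute."""

    def __init__(self, page_size=4096):
        self.page_size = page_size
        self.entries = {}  # virtual page -> (physical page, TMA value)

    def load_entry(self, page, phys_page, tma):
        # On a miss, the translation is filled from the page table and
        # the TMA from the task control block of the executing task.
        self.entries[page] = (phys_page, tma)

    def translate(self, vaddr):
        page, offset = divmod(vaddr, self.page_size)
        phys_page, tma = self.entries[page]  # raises KeyError on a miss
        return phys_page * self.page_size + offset, tma

def post_process(data, tma):
    # Dispatch on the TMA value provided with the transfer request.
    if tma == TMA_COMPRESS:
        import zlib                # software stand-in for the hardware
        return zlib.compress(data)
    if tma == TMA_ENCRYPT:
        return bytes(b ^ 0xA5 for b in data)  # toy cipher, not real crypto
    return data
```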
1. A method for transferring data between a storage resource and an initiator resource, comprising: associating a task memory attribute (TMA) value with a program task; executing the program task; providing the TMA value with a data transfer request from the initiator resource; and transferring a data item between the initiator resource and the storage resource in a manner indicated by the TMA value. 2. The method of Claim 1, further comprising the steps of: associating a task identification value with the program task; providing the task identification value with the data transfer request; and wherein the step of transferring a data item is responsive to both the TMA value and the task identification value. 3. The method of any preceding Claim, further comprising the steps of: storing a translated page address value in an entry location in a memory management unit (MMU) by selecting the translated address value from a page table; storing the TMA value with the translated page address value in the MMU entry location by obtaining the TMA value from a task control block associated with the program task; and using the MMU entry location to provide a translated address and the TMA value with the data transfer request. 4. The method according to Claim 3, further comprising the steps of: storing a first task identification value associated with a first program task in a first MMU entry location, wherein the first MMU entry location also holds a first translated address value and a first TMA value associated with the first program task; executing a second program task that uses the first translated page value; and creating a different MMU entry for the second program task by storing a second task identification value and a second TMA value associated with the second task along with the first translated page value in a second MMU entry location. 5. The method according to any preceding Claim, wherein the step of associating a TMA value with a program task comprises identifying at least a first address range and a second address range used by the program task, and associating a first TMA value with the first address range and a second TMA value with the second address range. 6. The method according to any preceding Claim, wherein the step of transferring comprises the steps of: retrieving the data item from the storage resource in response to the data transfer request; pre-processing the data item in a manner indicated by the TMA value; and providing the pre-processed data item to the initiator resource. 7. The method according to any preceding Claim, wherein the step of transferring comprises the steps of: providing the data item from the initiator resource; post-processing the data item in a manner indicated by the TMA value; and storing the post-processed data item in the storage resource in response to the data transfer request. 8. The method according to any preceding Claim, wherein the step of transferring a data item encrypts or decrypts the data item using a key value included within the TMA value. 9. The method according to any preceding Claim, wherein the step of transferring a data item performs a data format transformation. 10. The method according to any preceding Claim, wherein the step of transferring a data item performs data compression or data decompression. 11. 
The method according to any preceding Claim, wherein the step of transferring a data item at a selected address in the storage resource is performed in a first manner in response to a first TMA value for a first task, and wherein the step of transferring a data item at the selected address is performed in a second manner in response to a second TMA value for a second task. 12. A digital system comprising: an initiator resource connected to a storage resource, the initiator resource operable to provide a data transfer request to the storage resource; and attribute circuitry connected to the initiator resource, the attribute circuitry operable to provide a task memory attribute (TMA) value with each data transfer request, wherein for each data transfer request the attribute circuitry provides a TMA value that is in accordance with a program task being executed at the time each data transfer request is initiated. 13. The digital system of Claim 12, further comprising a transformation circuit connected between the storage resource and the initiator resource in a manner that data transferred between the initiator resource and the storage resource in response to a data transfer request can be transformed by the transformation circuit, wherein the transformation circuit is responsive to the TMA value provided with each data transfer request. 14. The digital system according to any of Claims 12-13, further comprising a memory management unit (MMU) having a plurality of entry locations for holding a plurality of translated page address values and comprising the attribute circuitry, wherein each MMU entry location is operable to be loaded with a translated page address value and a TMA value. 15. The digital system according to any of Claims 12-14 being a personal digital assistant, further comprising: a display, connected to the initiator resource via a display adapter; radio frequency (RF) circuitry connected to the initiator resource; and an aerial connected to the RF circuitry.
FIELD OF THE INVENTION This invention generally relates to microprocessors, and more specifically to improvements in access and data transfer to storage resources, and to related systems and methods of making the same. DESCRIPTION OF THE BACKGROUND ART Microprocessors are general-purpose processors that provide high instruction throughputs in order to execute software running thereon, and can have a wide range of processing requirements depending on the particular software applications involved. Many different types of processors are known, of which microprocessors are but one example. For example, Digital Signal Processors (DSPs) are widely used, in particular for specific applications, such as mobile processing applications. DSPs are typically configured to optimize the performance of the applications concerned and, to achieve this, they employ more specialized execution units and instruction sets. Particularly in applications such as mobile telecommunications, but not exclusively, it is desirable to provide ever-increasing DSP performance while keeping power consumption as low as possible. To further improve performance of a digital system, two or more processors can be interconnected. For example, a DSP may be interconnected with a general-purpose processor in a digital system. The DSP performs numeric intensive signal processing algorithms while the general-purpose processor manages overall control flow. The two processors communicate and transfer data for signal processing via shared memory. A direct memory access (DMA) controller is often associated with a processor in order to take over the burden of transferring blocks of data from one memory or peripheral resource to another and to thereby improve the performance of the processor. Modular programming builds a computer program by combining independently executable units of computer code (known as modules), and by tying modules together with additional computer code. Features and functionality that may not be provided by a single module may be added to a computer program by using additional modules. The design of a computer programming unit known as a task (or function) is often accomplished through modular programming, where a specific task is comprised of one module and the additional computer code needed to complete the task (if any additional code is needed). However, a task may be defined as broadly as a grouping of modules and additional computer code, or as narrowly as a single assembly-type stepwise command. A computer program may be processed (also called "run" or "executed") in a variety of manners. One manner is to process the computer code sequentially, as the computer code appears on a written page or on a computer screen, one command at a time. An alternative manner of processing computer code is called task processing. In task processing, a computer may process computer code one task at a time, or may process multiple tasks simultaneously. Various tasks may operate on a set of data stored in memory. The various tasks may be executed on various processors that have shared access to the memory. Accordingly, a system and method are needed for managing task processing that take into account resource capabilities and capacity, and other task processing needs. SUMMARY OF THE INVENTION Particular and preferred aspects of the invention are set out in the accompanying independent and dependent claims. 
In accordance with a first embodiment of the invention, a method is provided for transferring data between a storage resource and an initiator resource. A task memory attribute (TMA) value is associated with a program task and the task is executed. During execution of the task, a data transfer request is initiated from the initiator resource by providing an address value and the TMA value. A data item is then transferred between the initiator resource and the storage resource in a manner indicated by the TMA value. Data accessed at the same address by different tasks on the same processor or on different processors can be pre-processed or post-processed in a manner specified by the TMA value. Such pre/post-processing may include compression/decompression, encryption/decryption, or formatting, for example. In another embodiment, a task identification value is also associated with the program task and provided with each data transfer request. In this case, pre/post-processing of a data item being transferred is responsive to both the TMA value and the task identification value. In another embodiment, a digital system is provided that has an initiator resource connected to a storage resource; the initiator resource is operable to provide a data transfer request to the storage resource. Attribute circuitry is connected to the initiator resource and is operable to provide a task memory attribute (TMA) value with each data transfer request. For each data transfer request, the attribute circuitry provides a TMA value that is in accordance with a program task being executed at the time each data transfer request is initiated. A transformation circuit is connected between the storage resource and the initiator resource in a manner that data transferred between the initiator resource and the storage resource in response to a data transfer request can be transformed by the transformation circuit. The transformation circuit performs pre/post-processing on the data being transferred in response to the TMA value provided with each data transfer request. 
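To make the data flow concrete, the following C sketch models a transfer request that carries a TMA value alongside the translated address, and a transformation step keyed off that value. All identifiers here (tma_t, xfer_req_t, post_process) and the TMA encoding are illustrative assumptions rather than names from the specification, and the byte-wise XOR merely stands in for a real encryption function:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical TMA encoding; the real encoding is not specified here. */
typedef uint32_t tma_t;
#define TMA_NONE    0u
#define TMA_ENCRYPT 1u

/* A data transfer request carries the translated address together with
 * the TMA value of the task that initiated it. */
typedef struct {
    uint32_t addr;
    tma_t    tma;
} xfer_req_t;

/* The transformation circuit post-processes data on its way to the
 * storage resource as directed by the TMA value carried with the request. */
void post_process(const xfer_req_t *req, uint8_t *data, size_t len)
{
    if (req->tma == TMA_ENCRYPT) {
        for (size_t i = 0; i < len; i++)
            data[i] ^= 0x5A;  /* placeholder cipher; a real unit would use
                                 a key selected by the TMA value */
    }
    /* TMA_NONE: data passes through untransformed */
}
```

Two tasks writing the same address with different TMA values would thus have their data transformed differently, which is exactly the behavior the embodiments above describe.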
BRIEF DESCRIPTION OF THE DRAWINGS Particular embodiments in accordance with the invention will now be described, by way of example only, and with reference to the accompanying drawings in which like reference signs are used to denote like parts and in which the Figures relate to the digital system of Figure 1 and in which: Figure 1 is a block diagram of a digital system that includes an embodiment of the present invention in a megacell core having multiple processor cores; Figures 2A and 2B together are a more detailed block diagram of the megacell core of Figure 1; Figure 3 is a block diagram illustrating a shared translation lookaside buffer (TLB) and several associated micro-TLBs ( mu TLB) included in the megacell of Figure 2; Figure 4 is a block diagram of a digital system similar to Figure 1 illustrating a functional unit that is responsive to task memory attributes; Figure 5 is a block diagram of a digital system similar to Figure 1 illustrating a functional unit that is responsive to task memory attributes and task-ID values; Figure 6 is a combined timing diagram and flow diagram illustrating how task memory attributes are loaded into a memory management unit in the above systems; Figure 7 is a block diagram of a digital system similar to that of Figure 1 illustrating a cloud of tasks that are scheduled for execution on the various processors of the digital system; and Figure 8 is a representation of a telecommunications device incorporating an embodiment of the present invention. Corresponding numerals and symbols in the different figures and tables refer to corresponding parts unless otherwise indicated. DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION Although the invention finds particular application to Digital Signal Processors (DSPs), implemented, for example, in an Application Specific Integrated Circuit (ASIC), it also finds application to other forms of processors. An ASIC may contain one or more megacells which each include custom designed functional circuits combined with pre-designed functional circuits provided by a design library. Figure 1 is a block diagram of a digital system that includes an embodiment of the present invention in a megacell core 100 having multiple processor cores. In the interest of clarity, Figure 1 only shows those portions of megacell 100 that are relevant to an understanding of an embodiment of the present invention. Details of general construction for DSPs are well known, and may be found readily elsewhere. For example, U.S. Patent 5,072,418 issued to Frederick Boutaud, et al., describes a DSP in detail. U.S. Patent 5,329,471 issued to Gary Swoboda, et al., describes in detail how to test and emulate a DSP. Details of portions of megacell 100 relevant to an embodiment of the present invention are explained in sufficient detail herein below, so as to enable one of ordinary skill in the microprocessor art to make and use the invention. Referring again to Figure 1, megacell 100 includes a control processor (MPU) 102 with a 32-bit core 103 and a digital signal processor (DSP) 104 with a DSP core 105 that share a block of memory 113 and a cache 114, which are referred to as a level two (L2) memory subsystem 112. A traffic control block 110 receives transfer requests from a host processor connected to host interface 120b, requests from control processor 102, and transfer requests from a memory access node in DSP 104. The traffic control block interleaves these requests and presents them to the shared memory and cache. 
Shared peripherals 116 are also accessed via the traffic control block. A direct memory access controller 106 can transfer data between an external source such as off-chip memory 132 or on-chip memory 134 and the shared memory. Various application specific processors or hardware accelerators 108 can also be included within the megacell as required for various applications and interact with the DSP and MPU via the traffic control block. External to the megacell, a level three (L3) control block 130 is connected to receive memory requests from internal traffic control block 110 in response to explicit requests from the DSP or MPU, or from misses in shared cache 114. Off-chip external memory 132 and/or on-chip memory 134 is connected to system traffic controller 130; these are referred to as L3 memory subsystems. A frame buffer 136 and a display device 138 are connected to the system traffic controller to receive data for displaying graphical images. A host processor 120a interacts with the external resources through system traffic controller 130. A host interface connected to traffic controller 130 allows access by host 120a to external memories and other devices connected to traffic controller 130. Thus, a host processor can be connected at level three or at level two in various embodiments. A set of private peripherals 140 are connected to the DSP, while another set of private peripherals 142 are connected to the MPU. Figure 2, comprised of Figures 2A and 2B together, is a more detailed block diagram of the megacell core of Figure 1. DSP 104 includes a configurable cache 203 that is configured as a local memory 200 and data cache 202, and a configurable cache 204 that is configured as instruction cache 206 and a RAM-set 208, which are referred to as level one (L1) memory subsystems. The DSP is connected to the traffic controller via an L2 interface 210 that also includes a translation look-aside buffer (TLB) 212. A DMA circuit 214 is also included within the DSP. Individual micro TLBs ( mu TLB) 216-218 are associated with the DMA circuit, data cache and instruction cache, respectively. Similarly, MPU 102 includes a configurable cache 223 that is configured as a local memory 220 and data cache 222, and a configurable cache 224 that is configured as instruction cache 226 and a RAM-set 228, again referred to as L1 memory subsystems. The MPU is connected to traffic controller 110 via an L2 interface 230 that also includes a TLB 232. A DMA circuit 234 is also included within the MPU. Individual micro TLBs ( mu TLB) 236-238 are associated with the DMA circuit, data cache and instruction cache, respectively. L2 traffic controller 110 includes a TLB 240 and one or more micro-TLBs ( mu TLB) 242 that are associated with system DMA block 106, host processor interface 120b for a host connected at level two, and other application specific hardware accelerator blocks. Similarly, L3 traffic controller 130 includes a mu TLB controllably connected to TLB 240 that is associated with system host 120a at level three. This mu TLB is likewise controlled by one of the megacell 100 processors. Memory Management Unit At the megacell traffic controller level, all addresses are physical. They have been translated from virtual to physical at the processor sub-system level by a memory management unit (MMU) associated with each core, such as DSP core 105 and MPU core 103. 
At the processor level, access permission, supplied through MMU page descriptors, is also checked, while at the megacell level protection between processors is enforced by other means, which will be described in more detail later. The translation look-aside buffer (TLB) caches contain entries for virtual-to-physical address translation and access permission checking. If the TLB contains a translated entry for the virtual address, the access control logic determines whether the access is permitted. If access is permitted, the MMU generates the appropriate physical address corresponding to the virtual address. If access is not permitted, the MMU sends an abort signal via signal group 244 to the master CPU 102. The master CPU is identified by the value of a resource identification (R-ID) field. On a slave processor such as a hardware accelerator, the R-ID is equal to the R-ID of the master CPU. Upon a TLB miss, i.e., when the TLB does not contain an entry corresponding to the virtual address requested, translation table walk software retrieves the translation and access permission information from a translation table in physical memory. Once retrieved, the page or section descriptor is stored into the TLB at a selected victim location. Victim location selection is done by software or with hardware support using methods known by persons skilled in the art. Translation Table To provide maximum flexibility, the MMU is implemented as a software table walk, backed up by TLB caches both at the processor sub-system and megacell level. This allows easy addition of new page size support or new page descriptor information if required. A TLB miss initiates a TLB handler routine to load the missing reference into the TLB. At the Megacell 100 level, a TLB miss asserts a miss signal in signal group 244 and is routed via system interrupt router 250 to the processor having generated the missing reference or to the processor in charge of the global memory management, via interrupt signals 251, 252. Translation tables and TLB cache contents must be kept consistent. A flush operation is provided for this reason. An address reference is generally located within the mu TLB or main TLB of each processor sub-system; however, certain references, such as those used by system DMA 106 or host processor 120, for example, to access megacell memories, can be distributed within L2 traffic controller 110 and cached into L2 system shared TLB 240. Because system performance is very sensitive to the TLB architecture and size, it is important to implement efficient TLB control commands to lock entries for critical tasks, or to unlock and flush those entries when a task is deleted, without degrading the execution of other tasks. Therefore, each TLB and L2 cache entry holds a task-ID. Commands are supplied to flush locked or unlocked entries of a TLB/ mu TLB corresponding to a selected task. As part of the page descriptor information, the MMU provides cacheability and bufferability attributes for all levels of memory. The MMU also provides a "Shared" bit for each entry to indicate that a page is shared among multiple processors (or tasks). This bit, standalone or combined with the task-ID, allows specific cache and TLB operation on data shared between processors or/and tasks. The MMU may also provide additional information, such as memory access permission and access priority as described later. All megacell memory accesses are protected by a TLB. 
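The hit, permission-check, and miss flow described above can be condensed into a short C sketch. The entry layout, the result codes, and the single writable flag standing in for the full permission descriptors are simplifying assumptions made only for illustration:

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { TLB_HIT, TLB_MISS, TLB_ABORT } tlb_result_t;

/* A much-reduced TLB entry: one valid bit and one permission flag
 * stand in for the full descriptor set. */
typedef struct {
    uint32_t vpn;       /* virtual page number */
    uint32_t ppn;       /* translated physical page number */
    bool     valid;
    bool     writable;
} tlb_entry_t;

#define TLB_ENTRIES 256  /* shared-TLB size in the embodiment above */

/* On a hit, access control decides whether the access is permitted;
 * a disallowed access raises an abort, and a miss hands control to
 * the software table-walk handler. */
tlb_result_t tlb_lookup(const tlb_entry_t tlb[TLB_ENTRIES], uint32_t vpn,
                        bool is_write, uint32_t *ppn_out)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            if (is_write && !tlb[i].writable)
                return TLB_ABORT;   /* abort signalled to the master CPU */
            *ppn_out = tlb[i].ppn;
            return TLB_HIT;
        }
    }
    return TLB_MISS;  /* table-walk software will load the missing entry */
}
```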
As these accesses all have different requirements in terms of access frequencies and memory size, a shared TLB with an individual mu TLB backup approach has been chosen to reduce the system cost at the megacell level. This shared TLB is programmable by each processor. The architecture provides enough flexibility to let the platform work with either independent operating systems (OS) on each processor or a distributed OS with a unified memory management, for example. The present embodiment has a distributed operating system (OS) with several domains corresponding to each processor but only a single table manager for all processors. Slave processors do not manage the tables. In a first embodiment, slave processor R-IDs are equal to the R-ID of the master CPU. In another embodiment, they could, however, have a different R-ID, allowing them to lock/unlock TLB entries corresponding to some of their own tasks, or to flush all their entries when putting themselves in sleep mode to free entries for the other processors. Having different R-IDs provides a means to increase security in a concurrent multi-processor environment: processor X cannot access memory allocated to processor Y. In another embodiment with several independent OS(s), for example, there will be independent tables. These tables can be located in a memory space only viewed by the OS that they are associated with in order to provide protection from inadvertent modification by another OS. As they manage the virtual memory and tasks independently, the R-ID provides the necessary interprocessor security. R-IDs are managed by a single master CPU. This CPU can make TLB operations on all TLB entries. TLB operations or memory accesses from slave processors are restricted by their own R-ID. The master CPU will have rights to flush out entries belonging to another processor in a different OS domain. The organization of the data structures supporting the memory management descriptor is flexible since a software TLB-miss handler resolves each TLB miss. These data structures include the virtual-to-physical address translation and additional descriptors to manage the memory hierarchy. An example list of these descriptors and their function is described in Table 1. Various memory access permission attributes can be specified. In other embodiments, a processor may have other modes that enable access to memory without permission checks. Similarly, other embodiments may provide more or fewer permission attributes and/or more or fewer memory management descriptors.
Table 1 - Memory Management Descriptors
Memory Access Permission attributes: Supervisor - no access, read only, read/write; User - no access, read only, read/write.
Execute Never: provides access permission to protect a data memory area from being executed. This information can be combined with the access permission described above or kept separate.
Shared: indicates that this page may be shared by multiple tasks across multiple processors.
Cacheability: various memory entities, such as an individual processor's cache and write buffer and the shared cache and write buffer, are managed through the MMU descriptor. The options included in the present embodiment are as follows: Inner cacheable, Outer cacheable, Inner write through/write back, Outer write through/write back, and Outer write allocate. The terms inner and outer refer to levels of caches that are built in the system. 
The boundary between inner and outer is defined in a specific embodiment, but inner will always include the L1 cache. In a system with three levels of caches, inner corresponds to the L1 and L2 caches and outer corresponds to L3, in keeping with existing processor systems. In the present embodiment, inner is the L1 cache and outer is the L2 cache.
MMU/TLB Control Operation Figure 3 is a block diagram illustrating a shared translation look-aside buffer (TLB) 300 and several associated micro-TLBs ( mu TLB) 310(0)-310(n) included in megacell 100 of Figure 2. On a mu TLB miss, the shared TLB is first searched. TLB controller 320 is alerted by assertion of a mu TLB miss signal 324. In case of a hit on the shared TLB, the mu TLB that missed is loaded with the entry content of the shared TLB 300. In case of a miss in shared TLB 300, the shared TLB alerts TLB controller 320 by asserting a TLB miss signal 326. Controller 320 then asserts an interrupt request signal 328 to system interrupt controller 250. Interrupt controller 250 asserts an interrupt to the processor whose OS supervises the resource that caused the miss. A TLB entry register 330 associated with TLB controller 320 is loaded by a software TLB handler in response to the interrupt. Once loaded, the contents of TLB entry register 330 are transferred to both shared TLB 300 and the requesting mu TLB at a selected victim location as indicated by arcs 332 and 334. A separate TLB entry register 330 is only one possible implementation and is not necessarily required. The separate TLB entry register is a memory mapped register that allows buffering of a complete TLB entry (more than 32 bits). A TLB value is not written directly into the TLB cache but is written to the TLB entry register first. Because of the size of an entry, several writes are required to load the TLB entry register. Loading of a TLB cache entry is then done in a single operation, "Write TLB entry". Advantageously, other mu TLBs associated with other modules can continue to access the shared TLB while the TLB entry register is being loaded, until a second miss occurs. Advantageously, by controlling access to the TLB via the TLB entry register, CPUs have no direct access to the TLB cache internal structure and thus the risk of partial modifications inconsistent with the MMU tables is avoided. The sequence of operations to update a TLB cache entry after a miss is: 1 - the software TLB handler writes to the TLB entry register; 2 - the software TLB handler sends a command to write the TLB entry, which transfers a value from the TLB entry register to a preselected victim TLB cache entry; and 3 - control circuitry checks and preselects a next victim TLB entry, in preparation for the next miss. In this embodiment, this last step is generally performed in the background prior to the occurrence of a miss. Advantageously, TLB cache entries can be preemptively updated under OS software control to prevent a TLB miss by preloading a new entry, using the following sequence of operations: 1 - control circuitry checks and selects a TLB entry, referred to as a victim TLB cache entry; 2 - the software TLB handler writes to the TLB entry register; and 3 - the software TLB handler sends a command to write the TLB entry, which transfers a value from the TLB entry register to the selected victim TLB cache entry. The priority on the shared TLB is managed in the same way as priority on a memory access. One or more resources can be using the shared TLB. One or more resources can program the shared TLB. 
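A compact C model of the miss-time update sequence may also be helpful. The four-word entry size, the register names, and the round-robin victim advance (one of the replacement schemes discussed next) are assumptions made only for this sketch, which models the memory-mapped interface as an ordinary structure so it can be exercised in software:

```c
#include <stdint.h>

#define TLB_SIZE    256
#define ENTRY_WORDS 4   /* assumed: one entry needs several 32-bit writes */

/* A software model of the memory-mapped TLB entry-register interface. */
typedef struct {
    uint32_t entry_reg[ENTRY_WORDS];        /* buffers one complete entry */
    uint32_t victim;                        /* preselected victim location */
    uint32_t cache[TLB_SIZE][ENTRY_WORDS];  /* the TLB cache proper */
} tlb_iface_t;

/* Steps 1-3 of the update sequence: buffer the entry with several
 * writes, commit it with a single "Write TLB entry" operation, then
 * preselect the next victim (round robin assumed here). */
void tlb_write_entry(tlb_iface_t *t, const uint32_t entry[ENTRY_WORDS])
{
    for (int i = 0; i < ENTRY_WORDS; i++)
        t->entry_reg[i] = entry[i];                 /* step 1: buffer */
    for (int i = 0; i < ENTRY_WORDS; i++)
        t->cache[t->victim][i] = t->entry_reg[i];   /* step 2: commit */
    t->victim = (t->victim + 1) % TLB_SIZE;         /* step 3: next victim */
}
```

Because the commit is a single operation, other mu TLBs can keep hitting the shared TLB while the buffer is being filled, which is the property the passage above highlights.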
The replacement algorithm for selecting the next victim location in the shared TLB is under hardware control. A victim pointer register 322 is maintained for each TLB and mu TLB to provide a separate victim pointer for each. A typical embodiment will use a round robin scheme. Another embodiment may use a least recently used scheme or a random scheme, for example. Different TLBs within a single megacell can use different replacement schemes. However, in an embodiment in which the system has a master CPU with a distributed OS, this master CPU could also bypass the hardware replacement algorithm by selecting a victim entry, reading and then writing directly to the shared TLB, for example. In this embodiment, each shared TLB has 256 entries. Each mu TLB is generally much smaller, i.e., has fewer entries than the shared TLB. In various embodiments, each shared TLB has 64-256 or more entries while mu TLBs generally have 4-16 entries. The penalty for a miss in a mu TLB is small since a correct entry is generally available from the shared TLB. Therefore, the present embodiment does not provide direct control of the victim pointers of the various mu TLBs; however, direct control of the victim pointer of shared TLBs, such as 212, 232, and 240, is provided. Each entry in a TLB has a resource identifier 301 along with task-ID 302. Resource-IDs and task-IDs are not extension fields of the virtual address (VA) but simply address qualifiers. Resource-IDs are provided by a resource-ID register associated with each resource, such as R-ID register 442a associated with resource 440 and R-ID register 442n associated with resource 450 of Figure 4. Resource 440 is representative of various DMA engines, coprocessors, etc., within megacell 100 and/or an external host connected to megacell 100. Resource 450 is representative of various processors within megacell 100. Each resource 440, 450 typically has its own associated R-ID register; however, various embodiments may choose to provide resource ID registers for only a selected portion of the resources. A task-ID is provided by a task-ID register, such as task-ID register 444a associated with resource 440 and task-ID register 444n associated with resource 450. A task register associated with a non-processor resource, such as DMA, a coprocessor, etc., is loaded with a task value to indicate the task that it is supporting. In another embodiment, only processor resources 440, 450 that execute program modules have an associated programmable task-ID register. In this case, a system-wide default value may be provided for access requests initiated by non-processor resources such as DMA. The default value may be provided by a programmable register or hardwired bus keepers, for example. Advantageously, with the task-ID, all entries in a TLB belonging to a specific task can be identified. They can, for instance, be invalidated altogether through a single operation without affecting the other tasks. Advantageously, the resource ID permits discrimination of different tasks being executed on different resources when they have the same task number. Task-ID numbers on the different processors might not be related; therefore, task related operations must be, in some cases, qualified by a resource-ID. In another embodiment, the R-ID and Task-ID registers are not necessarily part of the resource core and can be located elsewhere in the system, such as a memory mapped register, for example, and associated with a resource bus. 
The only constraint is that a task-ID register related to a CPU must be under the control of the associated OS and updated during each context switch. The R-ID must be set during system initialization. In some embodiments, at system initialization, all R-ID and Task-ID registers distributed across the system are set to zero, which is a default value that causes the field to be ignored. In other embodiments, a different default value may be used. In other embodiments, R-ID "registers" provide hardwired values. Referring again to Figure 3, each TLB entry includes a virtual address field 305 and a corresponding physical address field 308 and address attributes 309. Various address attributes are described in Table 1. Address attributes define conditions or states that apply to an entire section or page of the address space that is represented by a given TLB entry. An S/P field 306 specifies a page size. In the present embodiment, an encoding allows page sizes of 64KB, 4KB and 1KB to be specified. Naturally, the page size determines how many most significant (ms) address bits are included in a check for an entry. Each TLB entry also includes a "shared" bit 303 and a lock bit 304. All entries marked as shared can be flushed in one cycle globally or within a task. In this embodiment of the invention, each TLB also includes a task-related memory attribute field 312, referred to as the "task memory attribute" (TMA), the operation of which will be described in more detail below. Advantageously, a TMA value is provided along with a translated physical address for each transaction request. A V field 307 indicates if an associated TLB cache entry is valid. V field 307 includes several V-bits that are respectively associated with R-ID field 301 to indicate if a valid R-ID entry is present, task-ID field 302 to indicate if a valid task-ID entry is present, and virtual address field 305 to indicate if a valid address entry is present. These valid bits enable compare logic for each associated field. As mentioned earlier, the resource ID field and task ID field in each entry of the TLB/ mu TLB can be used to improve security. During program task execution, each transaction request is checked by the miss control circuitry of the TLB/ mu TLB to determine if the entry is allowed for a specific resource or for all resources and for a specific task or for all tasks. For example, if a request is received and a valid entry is present for the proffered virtual address but the task-ID or R-ID accompanying the request does not match the corresponding valid task-ID or R-ID field of the entry, then a miss is declared. If the task-ID and/or R-ID fields of the entry are marked as invalid, then they are ignored. Figure 7 is a block diagram of a digital system similar to that of Figure 1 illustrating a cloud of tasks that are scheduled for execution on the various processors of the digital system. Typically, each software task includes a task priority value that is commonly used by an operating system to schedule an order of execution for a set of pending tasks 1440. In this illustration, a circle such as 1442 represents a task, with a task name "c" and a task priority of 12, for example. Likewise, task 1443 has a task name "r" and a priority of 15, where a lower number indicates a higher priority. 
If the set of tasks 1440 is assigned to three processors, then an operating system on each processor forms a ready-to-execute queue, such as ready queue 1446 in which task "c" is scheduled for first execution, then task "a" and finally task "b" according to priority values of 12, 15, and 50 respectively. The Task-ID register in each processor is loaded when a task is invoked. Table 2 illustrates several portions of example instruction code sequences in which a task is spawned. From line 1 to line 5, task "c" is active and spawns a new task, "audio", on line 5. The kernel is then invoked to instantiate the new task and create an associated task control block (TCB). A TCB is a control structure that is stored in memory; a separate TCB is used to identify each instantiation of a task, as is generally known. An eight-bit (the number of bits can be more or less in other embodiments) task-ID field is stored in the TCB at line 11. At line 12, a task memory attribute value is stored in the TCB. During the context switch (reschedule in line 14) before launching the "audio" task, the kernel loads task-ID register 1412 with the task-ID value held in the TCB (Table 3) or in another table. At line 15, the new task is now active. As the new task begins to execute, data transfer requests to memory are initiated by either a processor that is executing the task, or by other initiator resources such as a DMA resource in support of the task. Since this is a new task, misses may occur in the TLB due to new pages of memory being accessed by the new task. Of course, if the task had been previously executed, correct page entries may already be present in the TLB. Also, as described below, if the new task accesses a page of memory that has previously been accessed by another task and the page entry is still present in the TLB, a miss will still occur if the task-valid bit is set, because the task-ID field does not match the new task-ID value provided by the initiator resource with each data transfer request. The MMU handler will be invoked to handle each of the TLB misses and will access, in addition to the standard MMU table, the TCB of the currently executing task in order to obtain TMA values for TMA field 312 of each new TLB entry that is handled. Advantageously, by accessing TCBs to obtain TMA values to be included as memory attributes in each TLB entry, the contents of the operating system memory address translation tables are not impacted. Table 3 is an example task control block that is used to define a task memory attribute value. Typically, the OS uses a 32-bit task-ID that is in fact an address that enables the OS to locate the task control block information. At line 4, an execution priority value is defined that is used by the operating system to schedule execution of the task. At line 5, a task-ID value is defined that is used to set the task-ID register when the task is instantiated. At line 6, the task memory attribute is defined. In other embodiments, other means than a TCB may be provided for storing the task-ID for use by the OS or MMU handler, such as a table of task-IDs, for example. Referring again to Figure 3, task memory attribute field 312 can be set in response to information provided at line 6 of the TCB illustrated in Table 3. This information can be used directly by the MMU manager when loading a new entry in the TLBs. 
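A brief C sketch may help tie together the TCB fields just described (Table 3), the context-switch load of the task-ID register, and the way the MMU miss handler pulls the TMA value from the current task's TCB rather than from the page tables. All structure layouts and names here are illustrative assumptions, not the actual Table 2/3 definitions:

```c
#include <stdint.h>
#include <stdbool.h>

/* A task control block along the lines of Table 3. */
typedef struct {
    uint32_t priority;   /* execution priority used by the scheduler */
    uint8_t  task_id;    /* eight-bit task-ID in this embodiment */
    uint32_t tma;        /* task memory attribute value */
} tcb_t;

/* Software models of the per-processor task-ID register (1412) and a
 * reduced view of one TLB entry. */
static uint8_t task_id_reg;
static const tcb_t *current_tcb;

typedef struct {
    uint32_t paddr, attrs, tma;
    uint8_t  task_id;
    bool     task_id_valid;
} tlb_entry_view_t;

/* Context switch: the kernel loads the task-ID register from the TCB
 * of the task about to run; requests the task initiates then carry
 * this value. */
void context_switch(const tcb_t *next)
{
    current_tcb = next;
    task_id_reg = next->task_id;
}

/* Miss handling: translation and descriptors come from the MMU page
 * table, while the TMA value comes from the current task's TCB, so
 * the OS translation tables themselves are never modified. */
void fill_tlb_entry(tlb_entry_view_t *victim,
                    uint32_t pte_paddr, uint32_t pte_attrs)
{
    victim->paddr   = pte_paddr;
    victim->attrs   = pte_attrs;
    victim->tma     = current_tcb->tma;  /* from the TCB, not the page table */
    victim->task_id = task_id_reg;       /* requester's task-ID */
    victim->task_id_valid = true;        /* entry made task-sensitive here */
}
```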
In the present embodiment, TMA information is not maintained in page tables but is inserted by the TLB miss handler at the time of a TLB miss, by using the task-ID value of the transaction request that caused the TLB miss to access the corresponding task control block. Other embodiments may use other means for setting the TMA field in the TLB entry, such as by storing this information in a separate table or in the MMU page tables, for example, but this might require multiple table entries for the same page if different tasks use the same page. In the present embodiment, the valid bit associated with the task-ID field is loaded through the MMU table walk and is part of the MMU tables. Thus, when the TLB miss handler accesses a page table in response to a TLB miss, it queries the task-ID valid bit field of the MMU page table; if this bit field is asserted, then the TLB miss handler asserts the task-ID valid bit in the TLB entry and loads the task-ID value from the task-ID register of the requester that caused the TLB miss into task-ID field 302. If the task-ID valid bit field of the MMU page table is not asserted, then the TLB miss handler de-asserts the task-ID valid bit in the TLB entry and the task-ID value from the task-ID register of the requester that caused the TLB miss is ignored. Thus, a page entry in the TLB can be made sensitive to the task-ID of a transaction request, or the task-ID can be ignored such that several tasks can use the same TLB entry. Figure 4 is a block diagram of a digital system similar to Figure 1 illustrating a functional unit 470 that is responsive to task memory attribute values. As described above, each TLB 400, 402 or mu TLB 410a-n provides a translated physical address 414a, 414n in response to a virtual address value 404a, 404n provided by an initiator resource in a transaction request. Additionally, a TMA value 412a, 412n is provided by the TLB/ mu TLB along with the translated physical address. Traffic control circuitry 420 provides arbitration and passes the highest priority transaction request to storage resource 460. The transaction request includes a physical address value on address bus 414 and a TMA value on TMA bus 412. Data bus 416-466 is arranged so that data being transferred between an initiator resource 440, 450 and storage resource 460 can be either pre-processed or post-processed by functional unit 470 in a manner that is defined by the TMA value provided with each transfer request. Advantageously, this allows data that is being transferred to a selected address in memory 460 by one task to be pre/post-processed in one manner, and data that is being transferred to the same address by another task on the same or a different processor to be pre/post-processed in a different manner. For example, in one embodiment, functional unit 470 performs compression/decompression using the TMA value as a guide. Data being written to memory is compressed if the TMA has a first value, or not compressed for another TMA value, for example. Compression could be specified to span just a 32-bit word of memory in response to a TMA value, or to span a longer quantity such as 256 bits in response to another TMA value. For spanning larger regions, data bus 466 may be 256 bits, for example. 
In another embodiment, functional unit 470 performs endianness byte swapping of data. In this case, one task can access a region of memory and transfer data that is arranged as big endian data. Another task can access the same region or a different region and transfer data that is arranged as little endian data. In this case, the TMA value specifies the desired endianness and functional unit 470 monitors several least significant address bits from address bus 414. Functional unit 470 then performs byte swapping in accordance with the TMA-specified endianness and the proffered address bits. In another embodiment, functional unit 470 performs encryption/decryption of data using a TMA key value directly as a key, or indirectly as a means to select a key or as a pointer to a key, for example. As with compression, encryption may be embodied to cover just a data width corresponding to the width of data bus 416, or a larger region by sizing data bus 466 accordingly. In other embodiments, more than one functional unit can be provided and TMA field 312 can be defined as two or more sub-fields. In this case, each functional unit would be arranged to be responsive to selected bits of TMA bus 412, for example. Figure 5 is a block diagram of a digital system similar to Figure 1 illustrating a functional unit 570 that is responsive to task memory attributes and task-ID values. Traffic control circuitry 520 is similar to traffic control circuitry 110 of Figure 1. In this embodiment, functional unit 570 is proximate to processor core 550 such that pre/post-processing is performed on data only for the benefit of processor core 550. Advantageously, in various embodiments of the invention, a functional unit can either be shared, such as functional unit 470, or private, such as functional unit 570. As described above, a portion of each entry in TLB 500 is loaded from MMU tables 580, such as translated address and descriptor field 508. Task memory attribute values are retrieved from a task control block 582 that is associated with a currently executing task and stored in TMA field 512 of each TLB entry. In this example, functional unit 570 is an encryption unit and the TMA value provides, on TMA bus 513, an encryption key or information to select the desired key. This example also includes address range register 574 and associated comparison logic that is used to specify a range of addresses within which encryption/decryption is performed. For addresses proffered on address bus 514 that are outside of a specified range, functional unit 570 passes data between processor core data bus 516 and the traffic controller on data bus 566 without modification. Address range register 574 is memory mapped and available to processor 550. For a given task-ID, there can be different TMA values depending on the address range. In this embodiment, the ranges of addresses correspond to pages. Therefore, several TLB entries may be used for the same task for the different pages, each of them having a different TMA value. The information resides in the TCB in a composite "C" data type TMA that may hold several TMA values for several address ranges, as sketched below. Of course, other embodiments may equate an address range to something other than a page in an MMU, for example. Likewise, the various TMA values may be stored as separate entries in the TCB, for example. 
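One plausible shape for that composite C data type, combined with the endianness example above, is shown in the following sketch. The names, the fixed range count, and the TMA encodings are assumptions made only for illustration:

```c
#include <stdint.h>

/* Hypothetical TMA encodings for the endianness example. */
#define TMA_LITTLE_ENDIAN 0u
#define TMA_BIG_ENDIAN    1u

/* A composite TMA type holding one TMA value per address range; in
 * the embodiment above each range corresponds to a page. */
typedef struct {
    uint32_t base, limit;   /* the address range */
    uint32_t tma;           /* TMA value applying within it */
} tma_range_t;

typedef struct {
    tma_range_t range[4];   /* fixed count assumed for the sketch */
    int count;
} composite_tma_t;

/* The functional unit swaps the bytes of a 32-bit word when the TMA
 * value for the matching range requests big-endian data; addresses
 * outside every range pass through unmodified. */
uint32_t apply_endianness(const composite_tma_t *c,
                          uint32_t addr, uint32_t word)
{
    for (int i = 0; i < c->count; i++) {
        const tma_range_t *r = &c->range[i];
        if (addr >= r->base && addr < r->limit && r->tma == TMA_BIG_ENDIAN)
            return (word << 24) | ((word & 0xFF00u) << 8) |
                   ((word >> 8) & 0xFF00u) | (word >> 24);
    }
    return word;
}
```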
In this example, functional unit 570 also includes task-ID register 572 and associated comparison logic that is used to specify a particular task for which encryption/decryption is performed. For task-ID values proffered on task-ID bus 545 from task register 544 that are different from a selected task-ID value, functional unit 570 passes data between processor core data bus 516 and the traffic controller on data bus 566 without modification. Task-ID register 572 is memory mapped and available to processor 550. In other embodiments, task-ID register 572 may be arranged to allow more than one task to be selected by providing multiple storage locations, for example. Figure 6 is a combined timing diagram and flow diagram illustrating how task memory attributes are loaded into a memory management unit in the above systems. Digital system 600 is a subsystem representative of any of the previously described processors, such as DSP 104 or MPU 102 of Figure 1. Main bus interconnect 620 connects this processor subsystem to other subsystems. TLB 610 with associated mu TLBs 610a-c operates as described previously. Task-ID register 644 provides the task-ID of a task being executed on processor core 605, as described previously. MMU page tables 680 are representative of the earlier described MMU page tables. Task control block 682a is associated with task A, task control block 682b is associated with task B, and task control block 682c is associated with task C. Timeline 690 illustrates sequential execution of three tasks. Task A executes during time period 691, task B executes during time periods 692a-b, and task C executes during time period 693. At each task transition, there is a context switch CTSW, such as during time period 699. As described previously, during each context switch, task-ID register 644 is loaded with the task-ID value of the new currently executing task. When initiator resource 605 initiates a memory transfer request, a page miss will occur if a corresponding page entry is not available in TLB 610, as represented during time period 694. An MMU handler task will then be invoked to handle the TLB miss. Page tables 680 will be accessed to provide a translated address and associated address attributes, as indicated at 695a, and these will be loaded into TLB 610 as indicated at 695b. The TCB of the currently executing task, in this case task B, will be accessed in order to obtain a TMA value for the TMA field of the new TLB entry as indicated at 696a, and this will be loaded into the TLB as indicated at 696b. Advantageously, TMA values can be provided by the task control blocks without modifying MMU tables 680. Digital System Embodiment Figure 8 illustrates an exemplary implementation of an example of such an integrated circuit in a mobile telecommunications device, such as a mobile personal digital assistant (PDA) 10 with display 14 and integrated input sensors 12a, 12b located in the periphery of display 14. As shown in Figure 8, digital system 10 includes a megacell 100 according to Figure 1 that is connected to the input sensors 12a,b via an adapter (not shown), as an MPU private peripheral 142. A stylus or finger can be used to input information to the PDA via input sensors 12a,b. Display 14 is connected to megacell 100 via a local frame buffer similar to frame buffer 136. Display 14 provides graphical and video output in overlapping windows, such as MPEG video window 14a, shared text document window 14b and three-dimensional game window 14c, for example. Radio frequency (RF) circuitry (not shown) is connected to an aerial 18 and is driven by megacell 100 as a DSP private peripheral 140 and provides a wireless network link. 
Connector 20 is connected to a cable adaptor-modem (not shown) and thence to megacell 100 as a DSP private peripheral 140, and provides a wired network link for use during stationary usage in an office environment, for example. A short distance wireless link 23 is also "connected" to earpiece 22 and is driven by a low power transmitter (not shown) connected to megacell 100 as a DSP private peripheral 140. Microphone 24 is similarly connected to megacell 100 such that two-way audio information can be exchanged with other users on the wireless or wired network using microphone 24 and wireless earpiece 22. Megacell 100 provides all encoding and decoding for audio and video/graphical information being sent and received via the wireless network link and/or the wire-based network link. It is contemplated, of course, that many other types of communications systems and computer systems may also benefit from the present invention, particularly those relying on battery power. Examples of such other computer systems include portable computers, smart phones, web phones, and the like. As power dissipation and processing performance are also of concern in desktop and line-powered computer systems and microcontroller applications, particularly from a reliability standpoint, it is also contemplated that the present invention may also provide benefits to such line-powered systems. Fabrication of the digital systems disclosed herein involves multiple steps of implanting various amounts of impurities into a semiconductor substrate and diffusing the impurities to selected depths within the substrate to form transistor devices. Masks are formed to control the placement of the impurities. Multiple layers of conductive material and insulative material are deposited and etched to interconnect the various devices. These steps are performed in a clean room environment. A significant portion of the cost of producing the data processing device involves testing. While in wafer form, individual devices are biased to an operational state and probe tested for basic operational functionality. The wafer is then separated into individual dice which may be sold as bare die or packaged. After packaging, finished parts are biased into an operational state and tested for operational functionality. The digital systems disclosed herein contain hardware extensions for advanced debugging features. These assist in the development of an application system. Since these capabilities are part of the megacell itself, they are available utilizing only a JTAG interface with extended operating mode extensions. They provide simple, inexpensive, and speed independent access to the core for sophisticated debugging and economical system development, without requiring the costly cabling and access to processor pins required by traditional emulator systems or intruding on system resources. As used herein, the terms "applied," "connected," and "connection" mean electrically connected, including where additional elements may be in the electrical connection path. "Associated" means a controlling relationship, such as a memory resource that is controlled by an associated port. The terms assert, assertion, de-assert, de-assertion, negate and negation are used to avoid confusion when dealing with a mixture of active high and active low signals. Assert and assertion are used to indicate that a signal is rendered active, or logically true. 
De-assert, de-assertion, negate, and negation are used to indicate that a signal is rendered inactive, or logically false. A storage resource is typically a memory or a cache; however, other resources may make use of pre/post-processing capabilities as described herein, such as memory mapped input/output (I/O) devices and ports, graphical or video frame buffers, etc. An initiator resource is generally a processor or a DMA controller; however, other resources may initiate transfer requests, such as smart I/O devices or ports, or bridges to other systems or subsystems. While the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various other embodiments of the invention will be apparent to persons skilled in the art upon reference to this description. For example, various types of functional processors may be embodied to perform pre-processing and/or post-processing of data that is being transferred between an initiator resource and a storage resource in accordance with a task related memory attribute value.
A semiconductor device is disclosed that includes a plurality of fins on a substrate. A long channel gate is disposed over a first portion of the plurality of fins. A gate contact is provided having an extended portion that extends into an active area from a gate contact base outside the active area.
CLAIMSWHAT IS CLAIMED IS:1. A semiconductor device, comprising:a plurality of fins on a substrate;a long channel gate disposed over a first portion of the plurality of fins; and a gate contact having a gate contact base and an extended portion that extends into an active area from the gate contact base outside the active area.2. The semiconductor device of claim 1, wherein the extended portion has a smaller length than the gate contact base.3. The semiconductor device of claim 1, wherein the extended portion has generally a same length as the gate contact base.4. The semiconductor device of claim 1, wherein the extended portion extends over at least one of the plurality of fins.5. The semiconductor device of claim 1, wherein the extended portion extends over the plurality of fins.6. The semiconductor device of claim 1, wherein the substrate is a first portion of the substrate and further comprising:a short channel gate disposed over a second portion of the plurality of fins on a second portion of the substrate; anda second gate contact outside an active area of the second portion of the plurality of fins.7. The semiconductor device of claim 6, further comprising:a second long channel gate disposed over a third portion of the plurality of fins on a third portion of the substrate; anda third gate contact having an extended portion that extends into an active area of the third portion of the plurality of fins from a gate contact base outside the active area.8. The semiconductor device of claim 7, wherein the first portion of the substrate, the second portion of the substrate and the third portion of the substrate are isolated from each other by one or more shallow trench isolation (STI) areas.9. The semiconductor device of claim 8, wherein the first portion and third portion of the substrate are configured for analog circuits and/or radio frequency circuits and the second portion of the substrate is configured for logic circuits.10. The semiconductor device of claim 1, wherein the substrate is a bulk semiconductor substrate.11. The semiconductor device of claim 1, wherein the substrate is a silicon on insulator substrate.12. The semiconductor device of claim 1, wherein the substrate is at least one of silicon, germanium, silicon-germanium alloy, carbon-doped silicon, carbon-doped silicon-germanium alloy, gallium arsenide or indium phosphide.13. The semiconductor device of claim 1, wherein the gate contact including the extended portion and gate contact base are made of the same material.14. The semiconductor device of claim 13, wherein the gate contact is at least one of titanium nitride, titanium aluminum nitride, titanium aluminum, aluminum, copper or tungsten.15. The semiconductor device of claim 1, wherein the gate contact base has a length in a range of 30nm to 300nm.16. The semiconductor device of claim 15, wherein the gate contact base has a height in a range of 10nm to 50nm.17. The semiconductor device of claim 15, wherein the extended portion has a length that is approximately 10nm narrower than the length of the gate contact base. 18. The semiconductor device of claim 17, wherein the extended portion has a common centerline with the gate contact base.19. The semiconductor device of claim 1, wherein the extended portion has a length in a range of 20nm to 290nm.20. 
The semiconductor device of claim 1, wherein the semiconductor device is incorporated into a device selected from a group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a component in an automotive vehicle.21. A method of fabricating a semiconductor device, the method comprising:forming a plurality of fins on a substrate;forming a long channel gate disposed over a first portion of the plurality of fins; andforming a gate contact having a gate contact base and an extended portion that extends into an active area from the gate contact base outside the active area.22. The method of claim 21, wherein the extended portion has a smaller length than the gate contact base.23. The method of claim 21, wherein the extended portion has generally a same length as the gate contact base.24. The method of claim 21, wherein the extended portion extends over at least one of the plurality of fins.25. The method of claim 21, wherein the extended portion extends over the plurality of fins.26. The method of claim 21, wherein the substrate is a first portion of the substrate and further comprising:forming a short channel gate disposed over a second portion of the plurality of fins on a second portion of the substrate; andforming a second gate contact outside an active area of the second portion of the plurality of fins.27. The method of claim 26, further comprising:forming a second long channel gate disposed over a third portion of the plurality of fins on a third portion of the substrate; andforming a third gate contact having an extended portion that extends into an active area of the third portion of the plurality of fins from a gate contact base outside the active area.28. The method of claim 27, further comprising:performing one or more shallow trench isolation (STI) processes to isolate the first portion of the substrate, the second portion of the substrate and the third portion of the substrate from each other.29. The method of claim 21, wherein the substrate is a bulk semiconductor substrate and is at least one of silicon, germanium, silicon-germanium alloy, carbon-doped silicon, carbon-doped silicon-germanium alloy, gallium arsenide or indium phosphide.30. The method of claim 21, wherein the gate contact including the extended portion and gate contact base are made of the same material and formed in the same process.
FINFET SEMICONDUCTOR DEVICE CLAIM OF PRIORITY[0001] The present Application for Patent claims priority to Application No. 16/526,756 entitled “FINFET SEMICONDUCTOR DEVICE” filed July 30, 2019, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.TECHNICAL FIELD[0002] The disclosed subject matter relates to semiconductor devices. In some aspects, the disclosed subject matter relates to metal oxide semiconductor field effect transistor (MOSFET) semiconductor devices including fin-type field-effect transistor (FinFET) devices having long channel transistors and short channel transistors.BACKGROUND[0003] Conventional semiconductor devices, such as MOSFET devices, are being reduced in size to increase processing speed, reduce power consumption, reduce device size and/or decrease manufacturing cost. The demand for increased performance and reduced size from semiconductor devices has led to the use of multi-gate devices. These multi-gate devices include multi-gate fin-type field-effect transistors (FinFETs). In a FinFET the channel is formed on a “fin” that extends from the substrate. FinFET devices allow for reducing the gate width of the device by providing a gate on the sides and top of the fin including the channel region.[0004] However, reduction in size can lead to negative effects on the semiconductor devices, including the MOSFET devices and FinFETs discussed herein. Using conventional techniques, semiconductor devices with FinFET long channel transistors and short channel transistors may not be able to achieve their desired threshold voltage (Vt) for each transistor type (long channel or short channel).[0005] Accordingly, it would be advantageous to have a FinFET design that would allow for both short channel transistors and long channel transistors to achieve their desired Vt.SUMMARY[0006] This summary identifies features of some example aspects and is not an exclusive or exhaustive description of the disclosed subject matter. Whether features or aspects are included in or omitted from this summary is not intended as indicative of relative importance of such features. Additional features and aspects are described and will become apparent to persons skilled in the art upon reading the following detailed description and viewing the drawings that form a part thereof.[0007] An aspect of the disclosure includes a semiconductor device including a plurality of fins on a substrate. A long channel gate is disposed over a first portion of the plurality of fins. A gate contact has a gate contact base and an extended portion that extends into an active area from the gate contact base outside the active area.[0008] Another aspect of the disclosure includes a method of fabricating a semiconductor device.The method includes forming a plurality of fins on a substrate. A long channel gate is disposed over a first portion of the plurality of fins. A gate contact is formed having a gate contact base and an extended portion that extends into an active area from a gate contact base outside the active area.[0009] Other objects and advantages associated with the aspects disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.BRIEF DESCRIPTION OF THE DRAWINGS[0010] The accompanying drawings are presented to aid in the description of examples of one or more aspects of the disclosed subject matter and are provided solely for illustration of the examples and not limitation thereof.[0011] FIG. 
[0012] FIG. 2 is a chart illustrating short channel vs. long channel Vt differences relative to a target value according to one or more aspects of the disclosure.

[0013] FIG. 3 is an illustration of conventional FinFETs.

[0014] FIG. 4 is an illustration of FinFETs according to one or more aspects of the disclosure.

[0015] FIG. 5 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0016] FIG. 6 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0017] FIG. 7 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0018] FIG. 8 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0019] FIG. 9 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0020] FIG. 10 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0021] FIG. 11 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0022] FIG. 12 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0023] FIG. 13 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0024] FIG. 14 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0025] FIG. 15 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.

[0026] FIG. 16 illustrates a flow chart of an example method of fabricating a semiconductor device according to one or more aspects of the disclosure.

[0027] FIG. 17 illustrates example devices with aspects of the disclosure integrated therein.

[0028] FIG. 18 illustrates additional examples of devices with aspects of the disclosure integrated therein.

DETAILED DESCRIPTION

[0029] Aspects of the subject matter are provided in the following description and related drawings directed to specific examples of the disclosed subject matter. Alternates may be devised without departing from the scope of the disclosed subject matter. Additionally, well-known elements will not be described in detail or will be omitted so as not to obscure the relevant details.

[0030] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term "aspects" does not require that all aspects include the discussed feature, advantage, or mode of operation.

[0031] The terminology used herein describes particular aspects only and should not be construed to limit any aspects disclosed herein. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Those skilled in the art will further understand that the terms "comprises," "comprising," "includes," and/or "including," as used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0032] Further, various aspects may be described in terms of sequences of actions to be performed by, for example, elements of a computing device. Those skilled in the art will recognize that various actions described herein can be performed by specific circuits (e.g., an application specific integrated circuit (ASIC)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of non-transitory computer-readable medium having stored thereon a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects described herein may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, "logic configured to" and/or other structural components configured to perform the described action.

[0033] Further, it should be noted that the terms "connected," "coupled," or any variant thereof, mean any connection or coupling, either direct or indirect, between elements, and can encompass a presence of an intermediate element between two elements that are "connected" or "coupled" together via the intermediate element. It should also be understood that "coupled" or "connected" as used herein mean electrically coupled or electrically connected unless stated otherwise.

[0034] As indicated above, disadvantages of the conventional circuit designs include the inability to adjust for a desired Vt for both the short channel transistors and the long channel transistors. FIG. 1 illustrates conventional short channel vs. long channel Vt differences relative to a target value. It will be appreciated that conventional FinFET manufacturing technologies cannot meet the target Vts for both short channel transistors (which may be used for logic circuits) and long channel transistors (which may be used for analog / radio frequency (RF) circuits) using conventional Vt tuning techniques (e.g., adjusting well, halo and/or work function (WF)). FIG. 1 illustrates this result for the 7 nanometer node technology with a short channel and a long channel. The X-axis shows devices with different Vt types and gate lengths. For example, RVT L8 indicates the Vt type is regular voltage threshold (RVT) with a gate length of 8nm; LVT L200 indicates the Vt type is low voltage threshold (LVT) with a gate length of 200nm; and SLVT L8 indicates the Vt type is super low voltage threshold (SLVT) with a gate length of 8nm. As illustrated in the example, in 7 nanometer technology (7nm), Vt at the opposite sides of a pMOS design yielded a short channel (SC) having a Vt approximately 30-50 mV below target and a long channel (LC) having a Vt approximately 20-50 mV above target. Conventional Vt tuning techniques move Vt in the same direction, so addressing the SC devices by increasing the Vt would cause the LC devices to move further away from the target Vt. Accordingly, the conventional Vt tuning techniques provide no solution to meet the target Vt for analog circuits using long channel transistors, as the sketch below illustrates.
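Below is a minimal numerical sketch of why a single same-direction tuning knob cannot satisfy both device types. The offset values and the search helper are illustrative assumptions, not values or code from this disclosure; only the sign behavior (SC below target, LC above target) comes from the example above.

```python
# Hypothetical illustration: conventional Vt tuning (well, halo, work function)
# shifts short channel (SC) and long channel (LC) Vt in the same direction,
# so no single shift can bring both within a +/-20 mV window when their
# deviations from target have opposite signs.

SC_OFFSET_MV = -40.0  # assumed SC deviation (30-50 mV below target)
LC_OFFSET_MV = +35.0  # assumed LC deviation (20-50 mV above target)

def worst_residual(shift_mv: float) -> float:
    """Largest remaining deviation after applying a uniform Vt shift."""
    return max(abs(SC_OFFSET_MV + shift_mv), abs(LC_OFFSET_MV + shift_mv))

# Sweep candidate uniform shifts: the best case still leaves one device type
# roughly 38 mV from target, far outside the +/-20 mV goal.
best_shift = min(range(-60, 61), key=worst_residual)
print(best_shift, worst_residual(best_shift))
```

The extended gate contact described below adds a tuning parameter that acts on the long channel devices independently, which is what allows both targets to be met in FIG. 2.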
[0035] FIG. 2 illustrates short channel vs. long channel Vt differences relative to a target value according to one or more aspects of the disclosure. Once again, the difference between a target Vt for the short channel and the long channel in 7nm technology is shown. In contrast to the deviations illustrated in FIG. 1, in FIG. 2, Vt for the pMOS design yielded a short channel (SC) design having a Vt approximately 10-20 mV below target and a long channel (LC) design having a Vt approximately 10-20 mV above the target, which is a significant improvement over the conventional designs and within a desired range of Vt variance (e.g., -20 mV to +20 mV). Details of the FinFET design according to various aspects of the disclosure will be described in the following paragraphs after a brief introduction to conventional designs.

[0036] FIG. 3 is an illustration of conventional FinFET semiconductor devices 300. Conventional FET devices are generally planar devices wherein the entire channel region of the device is formed parallel to and slightly below the planar upper surface of the semiconducting substrate. In contrast, FinFETs are considered to be 3D devices that are formed above a semiconductor substrate. As illustrated in FIG. 3, from a planar view, in the short channel (SC) devices 310, the short channel gate 314 (SC gate) overlaps the fins 312, which may be made of material similar to the substrate (e.g., silicon, not illustrated). The fins 312 may protrude from the substrate and are separated by a plurality of fin-formation trenches between the fins 312. The gate width of the SC device 310 is orthogonal to the gate length direction as illustrated by short channel gate 314. The portions of the fins 312 covered by the short channel gate 314 are the channel regions of the SC device 310. The portions of the fins 312 that are positioned outside of the short channel gate 314 are the source / drain regions and are generally considered the active area 318 of the SC device 310. Gate contact 316 is provided at one end of the short channel gate 314 that is outside the active area 318.

[0037] Likewise, as illustrated in FIG. 3, long channel (LC) devices 320 include a long channel gate 324 (LC gate) that overlaps the fins 322, which may be made of material similar to the substrate (e.g., silicon, not illustrated). The fins 322 may protrude from the substrate and are separated by a plurality of fin-formation trenches between the fins 322. The gate width of the LC device 320 is orthogonal to the gate length direction as illustrated by long channel gate 324. The portions of the fins 322 covered by the long channel gate 324 are the channel regions of the LC device 320. The portions of the fins 322 that are positioned outside of the long channel gate 324 are the source / drain regions of the LC device 320 and are generally considered the active area 328 of the LC device 320. Gate contact 326 is provided at one end of the long channel gate 324 that is outside the active area 328.

[0038] As noted above, FinFET devices can be formed with different channel lengths (critical dimension) and with different threshold voltages (Vt) such that the FinFET devices exhibit different characteristics that allow integrated circuits to have transistors that perform with different characteristics.
For example, in some applications, integrated circuits are designed with a plurality of short channel devices and a plurality of long channel devices, as illustrated in FIGS. 3 and 4. The critical dimension or channel length of the long channel devices is typically greater than the channel length or critical dimension of the short channel devices, and a short channel device typically has a threshold voltage (Vt) that is less than the threshold voltage of a long channel device. Conversely, the off-state leakage current of a short channel device is typically greater than the off-state leakage current of a long channel device. In general, relative to the long channel devices, the short channel devices exhibit faster switching speeds and higher off-state leakage currents. Short channel devices are frequently employed in logic portions of an integrated circuit where fast switching speeds of the transistors are desired. In contrast, long channel devices can be used in circuits where the switching speed of the transistors is less important than their ability to exhibit low off-state leakage currents. For example, long channel devices may be used in analog portions, RF portions and/or for input/output circuits.

[0039] FIG. 4 is an illustration of FinFETs according to one or more aspects of the disclosure. As illustrated, from a planar view, a semiconductor device 400 may include one or more short channel (SC) FinFET devices 410 and/or one or more long channel (LC) FinFET devices 420. In the short channel (SC) FinFET device 410, the short channel gate 414 (SC gate) overlaps the fins 412, which may be made of material similar to the substrate (not illustrated). In some aspects, the substrate may be a bulk semiconductor substrate or a silicon-on-insulator (SOI) substrate and may be formed of at least one of silicon, germanium, silicon-germanium alloy, carbon-doped silicon, carbon-doped silicon-germanium alloy, gallium arsenide, indium phosphide, or any other conventional semiconductor substrate material. The fins 412 may protrude from the substrate and are separated by a plurality of fin-formation trenches between the fins 412. The three fins 412 are provided only for illustration and more or fewer fins may be provided. Similarly, the two SC gates 414 are provided solely for illustration and the number of gates may be more or fewer. The gate width of the SC FinFET device 410 is orthogonal to the gate length direction "L" as illustrated by short channel gate 414. The portions of the fins 412 covered by the short channel gate 414 are the channel regions of the SC FinFET device 410. The portions of the fins 412 that are positioned outside of the short channel gate 414 are the source / drain regions and are generally considered the active area 418 of the SC FinFET device 410. Gate contact 416 is provided at one end of the short channel gate 414 that is outside the active area 418. It will be appreciated that SC FinFET device 410 is similar to the SC device 310.

[0040] Likewise, as illustrated in FIG. 4, the semiconductor device 400 may include a long channel (LC) FinFET device 420, which includes a long channel gate 424 (LC gate) that overlaps the fins 422, which may be made of material similar to the substrate (e.g., silicon, not illustrated). The fins 422 may protrude from the substrate and are separated by a plurality of fin-formation trenches between the fins 422. The three fins 422 are provided only for illustration and more or fewer fins may be provided.
Similarly, the two LC gates 424 are provided solely for illustration and the number of gates may be more or fewer. The gate width of the LC FinFET device 420 is orthogonal to the gate length direction as illustrated by long channel gate 424. The portions of the fins 422 covered by the long channel gate 424 are the channel regions of the LC device 420. The portions of the fins 422 that are positioned outside of the long channel gate 424 are the source / drain regions of the LC FinFET device 420 and are generally considered the active area 428 of the LC FinFET device 420. Gate contact 425 includes a gate contact base 426 provided at one end of the long channel gate 424 that is outside the active area 428, and also includes an extended portion 427 that extends into the active area 428 (e.g., the area with fins and source / drain) of the FinFET device 420.

[0041] In some aspects, the extended portion 427 extends beyond the fins 422. It will be appreciated that the extended portion 427 may have different dimensions from the gate contact base 426, as the extended portion 427 has to maintain separation from the active area 428 to avoid potential shorts or excessive leakage to the active area 428. In some aspects, the gate contact base 426 may have a length (L) in the range of 30nm to 300nm and a thickness / height in the range of 10nm to 50nm. In some aspects, the extended portion 427 may have a smaller length (in the "L" direction) than the gate contact base 426. For example, the extended portion 427 may be about 10nm narrower than the gate contact base 426 in length (L). Accordingly, in some aspects, the extended portion may have a length in the range of 20nm to 290nm. In some aspects, the extended portion is formed on a common centerline with the gate contact base (centered in the L direction), e.g., 5nm narrower on each side, to provide additional margin to prevent shorting, leakage, etc. However, in some aspects, the extended portion 427 may have the same length or a greater length (in the "L" direction) relative to the gate contact base 426. The gate contact base 426 and extended portion 427 may be formed of the same material, e.g., titanium nitride (TiN), titanium aluminum (TiAl), titanium aluminum nitride (TiAlN), aluminum, tungsten, copper or any suitable conductive material. The extended portion 427 allows for an additional tuning parameter for designers to adjust the Vt of the long channel devices, which also allows greater adjustment of the conventional tuning parameters (discussed above) so both the long channel and short channel target threshold voltages can be met, as illustrated in FIG. 2. It will be appreciated that the foregoing dimensions and materials, as well as the illustrated configurations, are provided merely as examples and should not be construed to limit the various aspects disclosed herein.

[0042] Accordingly, at least one aspect of the disclosure includes a semiconductor device (e.g., 400) having a plurality of fins (e.g., 422) on a substrate. A long channel gate (e.g., 424) is disposed over a first portion of the plurality of fins (e.g., 422). The semiconductor device (e.g., 400) also has a gate contact (e.g., 425) having a gate contact base (e.g., 426) and an extended portion (e.g., 427) that extends into an active area (e.g., 428) from the gate contact base (e.g., 426) outside the active area. A sketch of the example contact sizing arithmetic follows.
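The following minimal sketch restates the centered-contact sizing arithmetic from the preceding paragraphs. The helper name and the check values are illustrative assumptions, not part of the disclosure; the 30-300nm base range and the 5nm-per-side margin are taken from the example dimensions above.

```python
# Hypothetical sketch of the extended-portion sizing rule described above:
# the extended portion is centered on the gate contact base (common
# centerline) and drawn narrower by a fixed margin on each side to keep
# separation from the active area.

def extended_portion_length_nm(base_length_nm: float,
                               side_margin_nm: float = 5.0) -> float:
    """Length of a centered extended portion given a per-side margin."""
    length = base_length_nm - 2 * side_margin_nm
    if length <= 0:
        raise ValueError("contact base too short for the requested margin")
    return length

# A 30-300nm base with a 5nm margin per side yields the 20-290nm extended
# portion range cited above.
assert extended_portion_length_nm(30.0) == 20.0
assert extended_portion_length_nm(300.0) == 290.0
```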
[0043] Further, it will be appreciated that the aspects disclosed herein can be fabricated without additional cost, with no impact on yield, and with no additional mask or process steps, as discussed in greater detail in the following paragraphs. In the following process description, like parts will be numbered the same. However, not all parts will be numbered and/or discussed in successive figures, so as to avoid excessive redundancy and to focus on the portions of the device to which each portion of the illustrated fabrication process relates.

[0044] FIG. 5 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. As illustrated, a silicon (Si) substrate is effectively separated into three separate regions using conventional shallow trench isolation (STI) processes. For example, by using lithography techniques, optionally in combination with one or more lithography masks, designers can define the lateral size, position and shape of the substrate portions (e.g., Si substrates 510, 520 and 530). Optionally, the substrate portions may be formed in a separate patterning sequence. The separate substrate portions may be used for different applications. For example, Si substrate 510 may be used for SC logic devices and is separated from the other portions of the silicon substrate by STI portion 512. Si substrate 520 may be used for LC logic devices and is separated from the other portions of the silicon substrate by STI portion 512 and STI portion 522. Si substrate 530 may be used for LC input / output (I/O) devices and is separated from the other portions of the silicon substrate by STI portion 522. However, it will be appreciated that the illustrated substrates and design uses are merely for illustration; more or fewer substrate portions can be formed and the design use may be varied from the illustrated examples.

[0045] FIG. 6 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. As illustrated, dummy gates are patterned and formed over the various Si substrate portions using conventional processes. For example, an SC dummy gate 610 is formed over Si substrate 510, an LC dummy gate 620 is formed over Si substrate 520 and an LC dummy gate 630 is formed over Si substrate 530.

[0046] FIG. 7 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. As illustrated, insulating spacers may be applied to the gates using conventional processes. For example, insulating spacer 710 is formed on SC dummy gate 610, which is formed over Si substrate 510; insulating spacer 720 is formed on LC dummy gate 620, which is formed over Si substrate 520; and insulating spacer 730 is formed on LC dummy gate 630, which is formed over Si substrate 530. Portions 712, 714 are the source / drain regions in Si substrate 510, portions 722, 724 are the source / drain regions in Si substrate 520, and portions 732, 734 are the source / drain regions in Si substrate 530, and they are formed using conventional processes.

[0047] FIG. 8 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure.
Conventional epitaxially grown materials may be used to form the source / drain regions of FinFET devices. As illustrated, source / drain epitaxial layers are grown, forming source / drain regions 812 and 814 adjacent to SC dummy gate 610 and over Si substrate 510, source / drain regions 822 and 824 adjacent to LC dummy gate 620 and over Si substrate 520, and source / drain regions 832 and 834 adjacent to LC dummy gate 630 and over Si substrate 530.

[0048] FIG. 9 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. A first dielectric layer 900 may be formed over source / drain regions 812, 814, 822, 824, 832 and 834, SC dummy gate 610, insulating spacer 710, Si substrate 510, LC dummy gate 620, insulating spacer 720, Si substrate 520, LC dummy gate 630, insulating spacer 730 and Si substrate 530. The first dielectric layer 900 may be planarized until top portions of the SC dummy gate 610, insulating spacer 710, LC dummy gate 620, insulating spacer 720, LC dummy gate 630 and insulating spacer 730 are exposed by the first dielectric layer 900. A chemical mechanical polishing (CMP) process may be conducted until the first dielectric layer 900 is substantially coplanar or substantially flush with the top portions as illustrated. The first dielectric layer 900 may remain above the substrate portions and source / drain regions to protect these areas during subsequent processing.

[0049] FIG. 10 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. As illustrated, the SC dummy gate 610, LC dummy gate 620 and LC dummy gate 630 are removed using conventional techniques, such as a vertical etch process like reactive-ion etching (RIE). Voids are thereby left within insulating spacer 710, insulating spacer 720 and insulating spacer 730, which are surrounded by first dielectric layer 900.

[0050] FIG. 11 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. As illustrated, the voids left within insulating spacers 710, 720 and 730 are filled with a high-K (HK or HiK) (e.g., 1112) / metal-gate (MG) (e.g., 1114) deposition to form SC gate 1110, LC gate 1120 and LC gate 1130. The term high-K refers to a material with a high dielectric constant K, relative to silicon dioxide. After the deposition, the gate material is planarized using CMP. After the CMP process, the first dielectric layer 900 is substantially coplanar or substantially flush with the top portions of the SC gate 1110, insulating spacer 710, LC gate 1120, insulating spacer 720, LC gate 1130 and insulating spacer 730 as illustrated.

[0051] FIG. 12 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. As illustrated, a second dielectric layer 1200 may be formed over first dielectric layer 900 and the top portions of the SC gate 1110, insulating spacer 710, LC gate 1120, insulating spacer 720, LC gate 1130 and insulating spacer 730.

[0052] FIG. 13 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. As illustrated, source / drain contacts are formed through first dielectric layer 900 and second dielectric layer 1200. Specifically, source / drain contacts 1312, 1314, 1322, 1324, 1332 and 1334 are coupled to source / drain regions 812, 814, 822, 824, 832 and 834, respectively.
[0053] FIG. 14 is an illustration of a portion of a process for forming FinFETs according to one or more aspects of the disclosure. As illustrated, gate contacts 1410, 1420 and 1430 are formed through the second dielectric layer 1200. Specifically, gate contact 1410 is coupled to SC gate 1110 using conventional processes. However, LC gate contact 1420, which is coupled to LC gate 1120, has an extended portion that extends on top of LC gate 1120. Likewise, LC gate contact 1430, which is coupled to LC gate 1130, has an extended portion that extends on top of LC gate 1130.

[0054] FIG. 15 is an illustration of a two-dimensional (2D) planar view of FinFETs according to one or more aspects of the disclosure. As illustrated, the extended portion 1527 extends from the gate contact base 1526 of gate contact 1420. It will be appreciated that both the extended portion 1527 and the gate contact base 1526 of LC gate contact 1420 are formed as one structure, as illustrated in FIG. 14. Extended portion 1527 extends over the active region and at least one of the fins 1522. In some aspects, extended portion 1527 extends over all of the fins 1522. Accordingly, no additional process steps are needed to form the extended gate configuration. Likewise, the extended portion 1537 extends from the gate contact base 1536 of gate contact 1430. Extended portion 1537 extends over the active region and at least one of the fins 1532. In some aspects, extended portion 1537 extends over all of the fins 1532. It will be appreciated that both the extended portion 1537 and the gate contact base 1536 of LC gate contact 1430 are also formed as one structure, as illustrated in FIG. 14. In contrast to the extended portions of the gate contacts discussed previously, SC gate contact 1410 is formed in a conventional manner and does not have a portion that extends over fins 1512. Accordingly, aspects of the disclosure allow for improved long channel FinFET devices without impacting the short channel devices. Additionally, the various aspects disclosed require no additional process steps and therefore do not negatively impact production complexity or costs.

[0055] It will be appreciated that additional metal processing, such as separating the source / drain contacts 1312, 1314, 1322, 1324, 1332 and 1334 and coupling the gate, source and drain contacts to additional transistors and/or other elements, may be performed using conventional techniques as known in the art. Further, it will be appreciated that the foregoing description did not provide an exhaustive detailing of conventional process methods for forming FinFETs. Further, the various aspects described herein are not limited to the details provided in the foregoing fabrication process description, and skilled designers may use various known processes to form semiconductor devices according to the disclosed aspects.

[0056] Accordingly, it will be appreciated that the various aspects disclosed herein include methods for fabricating a semiconductor device. FIG. 16 illustrates a flowchart for an exemplary method for fabricating a semiconductor device in accordance with some examples of the disclosure. As shown in FIG. 16, the method 1600 begins in block 1602 with forming a plurality of fins on a substrate. The method 1600 continues in block 1604 with forming a long channel gate disposed over a first portion of the plurality of fins. The method 1600 continues in block 1606 with forming a gate contact having a gate contact base and an extended portion that extends into an active area from the gate contact base outside the active area. The sketch below summarizes the overall sequence of FIGS. 5-15 for reference.
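As an at-a-glance reference, the following sketch lists the fabrication sequence of FIGS. 5-15 as ordered steps. It is purely illustrative bookkeeping, not code from the disclosure, and it abbreviates operations that the figures describe in more detail.

```python
# Illustrative summary of the process flow of FIGS. 5-15; each entry condenses
# one figure's operation so the replacement-gate flow can be followed at a
# glance.

PROCESS_FLOW = (
    "STI processes define substrate portions 510 / 520 / 530 (FIG. 5)",
    "pattern dummy gates 610 / 620 / 630 over the portions (FIG. 6)",
    "form insulating spacers 710 / 720 / 730 and source / drain regions (FIG. 7)",
    "grow source / drain epitaxial regions 812-834 (FIG. 8)",
    "deposit first dielectric layer 900 and planarize by CMP (FIG. 9)",
    "remove the dummy gates, leaving voids within the spacers (FIG. 10)",
    "fill voids with high-K / metal-gate stacks to form gates 1110-1130 (FIG. 11)",
    "deposit second dielectric layer 1200 (FIG. 12)",
    "form source / drain contacts 1312-1334 (FIG. 13)",
    "form gate contacts; LC contacts 1420 / 1430 get extended portions (FIGS. 14-15)",
)

for number, step in enumerate(PROCESS_FLOW, start=1):
    print(f"{number}. {step}")
```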
[0057] FIG. 17 illustrates an exemplary mobile device in accordance with some examples of the disclosure. Referring now to FIG. 17, a block diagram of a mobile device that is configured according to exemplary aspects is depicted and generally designated 1700. In some aspects, mobile device 1700 may be configured as a wireless communication device, which may include one or more FinFET semiconductor devices (e.g., logic, memory, RF, I/O, etc.) as disclosed herein that may be integrated into the various active devices discussed below. As shown, mobile device 1700 includes processor 1701. Processor 1701 is shown to comprise instruction pipeline 1712, buffer processing unit (BPU) 1708, branch instruction queue (BIQ) 1711, and throttler 1710 as is well known in the art. Other well-known details (e.g., counters, entries, confidence fields, weighted sum, comparator, etc.) of these blocks have been omitted from this view of processor 1701 for the sake of clarity.

[0058] Processor 1701 may be communicatively coupled to memory 1732 over a link. Mobile device 1700 also includes display 1728 and display controller 1726, with display controller 1726 coupled to processor 1701 and to display 1728.

[0059] In some aspects, FIG. 17 may include coder/decoder (CODEC) 1734 (e.g., an audio and/or voice CODEC) coupled to processor 1701; speaker 1736 and microphone 1738 coupled to CODEC 1734; and wireless controller 1740 (which may include a modem) coupled to wireless antenna 1742 and to processor 1701.

[0060] In a particular aspect, where one or more of the above-mentioned blocks are present, processor 1701, display controller 1726, memory 1732, CODEC 1734, and wireless controller 1740 can be included in a system-in-package or system-on-chip device 1722. Input device 1730 (e.g., a physical or virtual keyboard), power supply 1744 (e.g., a battery), display 1728, speaker 1736, microphone 1738, and wireless antenna 1742 may be external to system-on-chip device 1722 and may be coupled to a component of system-on-chip device 1722, such as an interface or a controller.

[0061] It should be noted that although FIG. 17 depicts a mobile device, processor 1701 and memory 1732 and other components, which may include one or more semiconductor devices as disclosed herein, may also be integrated into a set top box, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communications device, a mobile phone, or other similar devices.

[0062] FIG. 18 illustrates various electronic devices that may be integrated with any of the aforementioned integrated device, semiconductor device, integrated circuit, die, interposer, package or package-on-package (PoP) in accordance with some examples of the disclosure. For example, a mobile phone device 1802, a laptop computer device 1804, and a fixed location terminal device 1806 may include one or more semiconductor devices 1800 as disclosed herein. The integrated device 1800 may be, for example, any of the integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, or package-on-package devices described herein.
The devices 1802, 1804, 1806 illustrated in FIG. 18 are merely exemplary. Other electronic devices may also feature the integrated device 1800 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in automotive vehicles, drones, autonomous vehicles, or any other device that stores or retrieves data or computer instructions, or any combination thereof.

[0063] It will be appreciated from the foregoing disclosure that additional processes for fabricating the various aspects disclosed herein will be apparent to those skilled in the art, and a literal rendition of all variations of the processes discussed above and illustrated in the included drawings is not necessary.

[0064] The foregoing disclosed devices and functionalities, e.g., as described in reference to any one or more of FIGS. 4-18, may be designed and configured into computer files (e.g., RTL, GDSII, GERBER, etc.) stored on computer-readable media. Some or all such files may be provided to fabrication handlers who fabricate devices based on such files. Resulting products may include semiconductor wafers that are then cut into semiconductor dies and packaged into semiconductor chips. The chips may then be employed in devices as described above.

[0065] The methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

[0066] Accordingly, embodiments disclosed herein can include a non-transitory computer-readable medium embodying a method for fabricating the one or more semiconductor devices as disclosed herein. Accordingly, the disclosure is not limited to the illustrated examples, as any means for performing the fabrication processes described herein are contemplated by the present disclosure.

[0067] While the foregoing disclosure shows various illustrative embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the teachings of the present disclosure as defined by the appended claims. The various materials identified for example and illustration may be substituted by known equivalent or alternative materials. The example fabrication processes discussed above may have various process operations combined or split into additional process operations.
Additionally, the processes, functions, steps and/or actions described in the method claims in accordance with the embodiments of the disclosure described herein need not be performed in any particular order, unless specifically described as requiring a particular order or unless a step is necessarily dependent on a previous step. Furthermore, although elements of the present disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Package substrates including conductive interconnects having noncircular cross-sections, and integrated circuit packages incorporating such package substrates, are described. In an example, a conductive pillar having a noncircular pillar cross-section is electrically connected to an escape line routing layer. The escape line routing layer may include several series of conductive pads having noncircular pad cross-sections. Accordingly, conductive traces, e.g., strip line escapes and microstrip escapes, may be routed between the series of conductive pads in a single escape line routing layer.
CLAIMS

1. A package substrate, comprising:
a dielectric layer;
a plurality of conductive pillars extending through the dielectric layer, wherein the conductive pillars include a noncircular pillar cross-section; and
an escape line routing layer over the dielectric layer, the escape line routing layer having a plurality of conductive pads electrically connected to respective conductive pillars.

2. The package substrate of claim 1, wherein the noncircular pillar cross-section includes a pillar width dimension and a pillar length dimension, and wherein the pillar width dimension is different than the pillar length dimension.

3. The package substrate of claim 2, wherein the noncircular pillar cross-section is a rectangular cross-section.

4. The package substrate of claim 1, wherein the conductive pillars include respective sidewalls having a height through the dielectric layer, and wherein the sidewalls include respective tapers of less than 5 microns over the height.

5. The package substrate of claim 1, wherein the conductive pads include a noncircular pad cross-section, wherein the conductive pads are arranged in a first series in an axial direction and in a second series in the axial direction, and wherein the conductive pads of the first series are laterally separated from the conductive pads of the second series by a gap.

6. The package substrate of claim 5, wherein the noncircular pad cross-section is a rectangular cross-section having a pad width dimension and a pad length dimension.

7. The package substrate of claim 6, wherein the pad length dimension is at least twice the pad width dimension.

8. The package substrate of claim 5, wherein the escape line routing layer includes a plurality of conductive traces extending from respective conductive pads of the first series and the second series, and wherein the conductive traces extend through the gap in the axial direction.

9. An integrated circuit package, comprising:
a package substrate including
a dielectric layer,
a plurality of conductive pillars extending through the dielectric layer, wherein the conductive pillars include a noncircular pillar cross-section, and
an escape line routing layer over the dielectric layer, the escape line routing layer having a plurality of conductive pads electrically connected to respective conductive pillars; and
an integrated circuit mounted on the package substrate and having a plurality of pins electrically connected to the conductive pillars.

10. The integrated circuit package of claim 9, wherein the noncircular pillar cross-section has a pillar width dimension and a pillar length dimension, and wherein the pillar width dimension is different than the pillar length dimension.

11. The integrated circuit package of claim 10, wherein the noncircular pillar cross-section is a rectangular cross-section.

12. The integrated circuit package of claim 9, wherein the conductive pillars include respective sidewalls having a height through the dielectric layer, and wherein the sidewalls include respective tapers of less than 5 microns over the height.

13. The integrated circuit package of claim 9, wherein the conductive pads include a noncircular pad cross-section, wherein the conductive pads are arranged in a first series in an axial direction and in a second series in the axial direction, and wherein the conductive pads of the first series are laterally separated from the conductive pads of the second series by a gap.
14. The integrated circuit package of claim 13, wherein the noncircular pad cross-section is a rectangular cross-section.

15. The integrated circuit package of claim 13, wherein the escape line routing layer includes a plurality of conductive traces extending from respective conductive pads of the first series and the second series, and wherein the conductive traces extend through the gap in the axial direction.

16. A method, comprising:
forming an escape line routing layer on a conductive seed layer, wherein the conductive seed layer is over a dielectric layer of a package substrate, and wherein the escape line routing layer includes a conductive pad;
applying a photoresist over the conductive pad, wherein the photoresist includes a hole having a noncircular cross-section over the conductive pad; and
filling the hole to form a conductive pillar having the noncircular cross-section, wherein the conductive pillar is electrically connected to the conductive pad.

17. The method of claim 16, further comprising:
removing the photoresist; and
etching the conductive seed layer to expose the dielectric layer around the conductive pad and the conductive pillar.

18. The method of claim 17, further comprising:
laminating a second dielectric layer over the conductive pad and the conductive pillar; and
planarizing the second dielectric layer to remove the second dielectric layer over the conductive pillar to expose an end of the conductive pillar, wherein the end has the noncircular cross-section.

19. The method of claim 16, wherein the noncircular cross-section is a rectangular cross-section.

20. The method of claim 16, wherein the conductive pillar includes a sidewall having a height through the second dielectric layer, and wherein the sidewall includes a taper of less than 5 microns over the height.
PACKAGE SUBSTRATE HAVING NONCIRCULAR INTERCONNECTS

TECHNICAL FIELD

Embodiments described herein generally relate to the field of integrated circuit packages and, in particular, to package substrates having escape line routing layers electrically connected to vertical interconnects.

BACKGROUND

An integrated circuit package is used for protecting an integrated circuit chip or die, and also for providing the chip or die with a physical and electrical interface to external circuitry. The integrated circuit package may include the die mounted on a package substrate having escape line routing layers, e.g., strip line escape layers and microstrip escape layers. More particularly, the die may be electrically connected to external circuitry through the escape line routing layers and vertical interconnects of the package substrate. For example, the escape line routing layers may be electrically connected to other conductive layers of the package substrate by microvias.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a sectional view of an escape line routing layer of a package substrate including microvias and capture pads having circular cross-sections.

Figure 2 is a sectional view of an integrated circuit package, in accordance with an embodiment.

Figure 3 is a sectional view, taken about line A-A of Figure 2, of a package substrate having an escape line routing layer electrically connected to conductive pillars having noncircular cross-sections, in accordance with an embodiment.

Figure 4 is a detail view, taken from Detail A of Figure 3, of an escape line routing layer having conductive pads arranged in several series, in accordance with an embodiment.

Figure 5 is a sectional view, taken about line B-B of Figure 3, of a conductive pillar having a noncircular cross-section electrically connected to an escape line routing layer of a package substrate, in accordance with an embodiment.

Figure 6 is a flowchart of a method of fabricating a package substrate including conductive pillars having noncircular cross-sections, in accordance with an embodiment.

Figures 7A-7I are sectional views of several operations of a method of fabricating a package substrate including conductive pillars having noncircular cross-sections, in accordance with an embodiment.

Figure 8 is a schematic of a computer system, in accordance with an embodiment.

DESCRIPTION OF EMBODIMENTS

Package substrates including conductive pillars having noncircular cross-sections when viewed from a top view, and integrated circuit packages incorporating such package substrates, are described. In the following description, numerous specific details are set forth, such as packaging and interconnect architectures, in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known features, such as specific semiconductor fabrication processes, are not described in detail in order to not unnecessarily obscure embodiments of the present invention. Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.

Existing package substrates of integrated circuit packages may include several conductive layers separated by intervening dielectric layer(s), and vertical electrical interconnects may transfer electrical signals through the dielectric layer(s) between the conductive layers.
Such interconnects may be microvias, which are typically formed by laser drilling holes in laminated dielectric build-up materials, and then filling the holes with copper. The laser-drilled holes, and thus the microvias, include circular cross-sections because the laser beam used to drill the holes has a circular spot size. Capture pads are typically patterned over the ends of the microvias using photolithography. Misalignment between the microvias and the capture pads occurs, however, because there is a tolerance stack-up from the series of processes, i.e., a first photolithography operation, the laser drilling operation, and a second photolithography operation.

Referring to Figure 1, a sectional view of an escape line routing layer of a package substrate including microvias and capture pads having circular cross-sections is illustrated. The sectional view is through the package substrate along a plane at a vertical location having microstrip traces 102. More particularly, microstrip escape routing layer 104 may include microstrip traces 102 extending laterally from microstrip capture pads 106. Capture pads 106 may be formed having circular cross-sections that are larger than necessary in order to compensate for misalignment with vertically connected microstrip microvias 108 (shown by hidden lines). More particularly, microstrip microvia 108 extends vertically from microstrip capture pad 106, and the circular cross-sections of these features are misaligned. Thus, to ensure that electrical contact is made between the capture pads and the microvias, the circular capture pads are oversized. For example, an alignment capability of 14 microns may require a 49 micron-diameter microvia and a 77 micron-diameter capture pad to ensure contact; a sketch of this sizing arithmetic follows below. Such dimensions may allow the escape line routing layer to have 9 micron-wide copper lines separated by 12 micron spaces.

Still referring to Figure 1, the oversized capture pad dimensions may require that more substrate routing layers be used than would otherwise be necessary. For example, microstrip escape routing layer 104, which includes microstrip capture pads 106 and microstrip microvias 108 having circular conductive pads, may limit the lateral spacing allotted to conductive traces. That is, the oversized capture pad dimensions may limit the space between capture pads through which conductive traces may be routed, such that there is only room for microstrip traces 102, and not for strip line traces (not shown) extending laterally from corresponding strip line capture pads (not shown). Thus, strip line escape routing 110 may include strip line microvias 112 extending to a plane parallel to the plane illustrated in Figure 1, and the parallel plane may include corresponding strip line traces and strip line capture pads. The parallel plane, however, may have limited inter-pad space available for trace routing, such that only strip line traces can be routed laterally between corresponding strip line capture pads. Thus, a package substrate including microvias having circular cross-sections may require at least two routing layers to route microstrip escape routing 104 and strip line escape routing 110.

In an aspect, a package substrate includes conductive pillars and conductive pads formed by plating and build-up lamination operations. More particularly, formation of the package substrate may not require laser drilling. The plating processes may allow the conductive pillars and pads to be aligned with accuracy better than 14 microns, and thus, smaller conductive pillars and/or conductive pads may be used. Furthermore, since the plating processes may form conductive pillars and/or conductive pads having noncircular cross-sections, capture pad size may be minimized in at least one lateral direction, and conductive traces may be routed between capture pads with a higher line density. That is, more capture pads and conductive traces can be fit into a single conductive layer, e.g., strip line escapes and microstrip escapes may be combined into a single escape line routing layer. Therefore, the package substrate may incorporate a minimum number of escape line routing layers.
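As a check on the capture pad oversizing described above, the following sketch computes the minimum circular pad diameter from the microvia diameter and the alignment capability, and estimates how many traces of a given line/space pitch fit across a routing gap. The helper names are illustrative, and the trace-count rule assumes a space is required on each side of every line; none of this code comes from the disclosure.

```python
import math

def min_pad_diameter_um(via_diameter_um: float, alignment_um: float) -> float:
    """Smallest circular pad that still covers the via at worst-case offset."""
    return via_diameter_um + 2 * alignment_um

def traces_fitting_in_gap(gap_um: float, line_um: float = 9.0,
                          space_um: float = 12.0) -> int:
    """Lines of a given width/space pitch that fit across a routing gap,
    assuming a space on each side of every line (including the gap edges)."""
    return max(0, math.floor((gap_um - space_um) / (line_um + space_um)))

# A 14 micron alignment capability with a 49 micron microvia forces a
# 77 micron capture pad, matching the figures cited above.
assert min_pad_diameter_um(49.0, 14.0) == 77.0

# With 9 micron lines and 12 micron spaces, a 75 micron gap fits three traces.
assert traces_fitting_in_gap(75.0) == 3
```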
Referring to Figure 2, a sectional view of an integrated circuit package is illustrated in accordance with an embodiment. An integrated circuit package 200 may include an integrated circuit 202 mounted on a package substrate 204. For example, the integrated circuit 202 may be positioned over the package substrate 204, and an underfill material 206 may couple the integrated circuit 202 to the package substrate 204. As shown, the integrated circuit package 200 may include a wire-bonding package; however, it will be appreciated by one skilled in the art that other non-wire-bonding packages may be used in accordance with the description below. For example, electrical connections between integrated circuit 202 and package substrate 204 may be made by connections extending through underfill material 206.

Package substrate 204 of integrated circuit package 200 may have a laminate structure. For example, conductive layers, e.g., layers having copper pads and traces, may be separated by dielectric layers, e.g., layers having organic epoxy-based dielectric material.

Integrated circuit package 200 may include a chip carrier, such as a ball grid array (BGA) component having a top package portion 208, e.g., a plastic cap, over package substrate 204. The chip carrier may include several electrical contacts, e.g., several solder balls 210, arranged in a ball field. More particularly, solder balls 210 may be arranged in a pattern on a bottom surface of package substrate 204.

Each solder ball 210 may be electrically connected to integrated circuit 202 to provide an electrical function. For example, solder balls 210 may be electrically connected to pins 212, e.g., a signal pin used for I/O of integrated circuit 202, or power and/or ground pins of integrated circuit 202. Furthermore, solder balls 210 may be mounted and attached to a circuit board 214, e.g., a motherboard or another printed circuit board of a computer system, to provide a physical and electrical interface between integrated circuit 202 and circuit board 214.

The electrical connection between solder balls 210 and pins 212 of integrated circuit 202 may be through an interconnect 216 and/or a lead 218. More particularly, lead 218 may electrically connect pins 212 of integrated circuit 202 to one or more bonding pads 220 mounted on a top surface of package substrate 204. Bonding pads 220 mounted on the top surface may be electrically connected to corresponding solder pads 222 on a bottom surface of package substrate 204 through interconnect 216. As described below, interconnect 216 may include horizontal segments, e.g., electrical conductors in a substrate routing layer, and vertical segments, e.g., vertical interconnects between substrate routing layers.
Thus, pins 212 of integrated circuit 202 may be electrically connected to horizontal and vertical segments of interconnect 216.

Referring to Figure 3, a sectional view, taken about line A-A of Figure 2, of a package substrate having an escape line routing layer electrically connected to conductive pillars having noncircular cross-sections is illustrated in accordance with an embodiment. Package substrate 204 acts as a space transformer to expand escape line routing from a small area, e.g., at integrated circuit 202 or bonding pads 220, to a larger area, e.g., at solder pads 222. Escape line routing is a term used to refer to the horizontal and vertical segments of interconnect 216 extending from the small area to the larger area. As described above, escape routing incorporating microvias having circular cross-sections requires a first escape routing layer having microstrip escapes 316, and a second escape routing layer vertically offset from the first escape routing layer and having strip line escapes 314. An improvement in the alignment capability between vertical interconnects and capture pads of the escapes, however, can allow for smaller capture pad dimensions to be used. For example, an alignment accuracy of less than 14 microns, e.g., an alignment capability of 10 microns, may allow microstrip traces and strip line traces to be combined into a single escape routing layer. Thus, a total number of layers in package substrate 204 may be reduced.

Package substrate 204 may include an escape line routing layer 302 extending in a horizontal direction over a dielectric layer 304. More particularly, escape line routing layer 302 may include several conductive pads 306 electrically connected to respective conductive traces 308. That is, conductive traces 308 may extend from respective conductive pads 306 to carry electrical signals within a plane of escape line routing layer 302.

Conductive pads 306 may include a noncircular pad cross-section 310. More particularly, noncircular pad cross-section 310 may have a width dimension that differs from a length dimension. For example, noncircular pad cross-section 310 may be a rectangular cross-section. Alternatively, noncircular pad cross-section 310 may have any other noncircular shape, including an elliptical or a polygonal profile.

In an embodiment, package substrate 204 includes conductive pillars 312 (indicated by hidden lines underneath conductive pad 306) extending vertically through dielectric layer 304. More particularly, conductive pads 306 within escape line routing layer 302 may be electrically connected to respective conductive pillars 312 extending vertically away from escape line routing layer 302. As described below, conductive pads 306 may be formed using a semi-additive process, and noncircular pad cross-sections 310 may be achieved using such processes. Furthermore, conductive pillars 312 may be formed using the same semi-additive process, and thus, alignment between conductive pads 306 and conductive pillars 312 may be maintained in a range of 10 microns or less because several different processes are not required.

The noncircular shape of conductive pads 306 may allow for conductive pads 306 to be arranged in a manner that provides sufficient space to route both strip line escapes 314 and microstrip escapes 316 in the single escape line routing layer 302. In an embodiment, one or more conductive pads 306 of strip line escape 314 may be arranged in a first series 318.
That is, conductive pads 306 of strip line escape 314 may come one after another in spatial succession. The series of conductive pads 306 may be in a sequence extending in an axial direction 320 or a lateral direction 322. Here, lateral direction 322 is used to define any direction orthogonal to or otherwise not parallel with axial direction 320. For example, first series 318 may extend diagonally relative to axial direction 320. One or more conductive pads 306 of microstrip escape 316 may be arranged in a second series 326. That is, conductive pads 306 of microstrip escape 316 may come one after another in spatial succession, and the sequence may be in axial direction 320 or lateral direction 322.

It is not necessary for conductive pads 306 of strip line escape 314 and microstrip escape 316 to be arranged in different series. More particularly, conductive pads 306 of any escape line may be combined into a same series. For example, conductive pads 306 of both strip line escape 314 and microstrip escape 316 may be arranged in first series 318 and/or second series 326. In any case, conductive pads 306 of all escape lines may be combined into the single escape line routing layer 302.

The noncircular cross-section of conductive pads 306 may allow for a higher line density in escape line routing layer 302. More particularly, when conductive pads 306 and/or conductive pillars 312 are formed having noncircular cross-sections and arranged in a manner that forms a gap 328 between conductive pads 306, a greater number of conductive traces 308 may be routed through gap 328 to expand the escape line routing from the smaller area to the larger area of package substrate 204.

Conductive pads 306 of first series 318 may be laterally separated from conductive pads 306 of second series 326 by gap 328. Here, lateral separation is intended to refer to a separation along the plane of escape line routing layer 302, and not necessarily a separation in lateral direction 322. For example, first series 318 and second series 326 may be arranged in parallel in a lateral direction 322, in which case conductive pads 306 of the respective series would be laterally separated in axial direction 320. In an embodiment, conductive pads 306 of first series 318 are arranged in a first direction and conductive pads 306 of second series 326 are arranged in a second direction parallel to the first direction. Thus, gap 328 may provide a routing space between the series, and the routing space may run in the same direction as the first direction and the second direction. Accordingly, conductive traces 308 extending from respective conductive pads 306 of first series 318 and second series 326 may extend through gap 328 in the same direction as the series of conductive pads 306, e.g., in axial direction 320. For example, conductive traces 308 may be routed through the space between several series of pads over a length greater than several width or length dimensions of the sequentially arranged pads.

Referring to Figure 4, a detail view, taken from Detail A of Figure 3, of an escape line routing layer having conductive pads arranged in several series is illustrated in accordance with an embodiment. In an embodiment, a pad width dimension 402 of conductive pads 306 arranged in parallel series may be orthogonal to axial direction 320. More particularly, a pad length dimension 404 of conductive pads 306 may be in the same direction as the sequence of conductive pads 306.
The pad length dimension 404 of noncircular pad cross-section 310 may be at least twice, e.g., three times, the pad width dimension 402 of noncircular pad cross-section 310. For example, conductive pads 306 may have a noncircular profile having pad width dimension 402 different than pad length dimension 404. By way of example, the rectangular pad cross-sections illustrated in Figure 4 may have a pad width dimension 402 less than 20 microns, e.g., 16 microns, and a pad length dimension 404 greater than 60 microns, e.g., 79 microns.

Still referring to Figure 4, package substrate 204 may include conductive pillars 312 having noncircular pillar cross-sections 406 (indicated by hidden lines). The noncircular profile of conductive pillars 312 may be the same as the noncircular profile of conductive pads 306. For example, when conductive pad 306 includes a rectangular profile, conductive pillar 312 may include a rectangular profile, as shown. Alternatively, the noncircular profile of conductive pillars 312 may be different than the noncircular profile of conductive pads 306. For example, when conductive pad 306 includes a rectangular profile, conductive pillar 312 may include an elliptical profile. Similarly, conductive pillar 312 may include a pillar width dimension 408 and a pillar length dimension 410, and the dimensions may be different, e.g., smaller, than pad width dimension 402 and pad length dimension 404. By way of example, a pad dimension may be twice an alignment capability plus a pillar dimension. For example, in a case of a 5 micron misalignment between the pad and pillar, and when pillar width dimension 408 is 10 microns, pad width dimension 402 may be 20 microns. In an embodiment, the rectangular pillar cross-section illustrated in Figure 4 may have a width dimension less than 20 microns, e.g., 16 microns, and a length dimension greater than 60 microns, e.g., 79 microns. Thus, pillar width dimension 408 may be different than pillar length dimension 410, e.g., noncircular pillar cross-section 406 may be a rectangular cross-section.

Referring to Figure 5, a sectional view, taken about line B-B of Figure 3, of a conductive pillar having a noncircular cross-section electrically connected to an escape line routing layer of a package substrate is illustrated in accordance with an embodiment. Typically, laser drilling to form a microvia as described above with respect to Figure 1 results in a tapered microvia structure. For example, a laser drilled hole, and thus a microvia filling the hole, may include a top diameter of 49 microns at one side of a dielectric layer tapering to a bottom diameter of 35 microns at another side of the dielectric layer. Thus, a laser drilled hole may have a 14 micron taper based on diameter. By contrast, the two-operation lithography and plating process described below for forming conductive pads 306 and conductive pillars 312 having noncircular cross-sections may result in no or minimal taper of the conductive elements.

In an embodiment, conductive pillars 312 include respective sidewalls 502 having a height 504 through dielectric layer 304 of package substrate 204. Sidewall 502 may be absolutely or nearly vertical. For example, a taper 506 of sidewall 502 may be less than 5 microns over height 504, based on a difference in pillar dimension at a first end 508 and a second end 510 of conductive pillar 312 (the pad-sizing rule and the taper computation are illustrated in the sketch following this passage).
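The pad-sizing rule (pad dimension = pillar dimension + 2 × alignment capability) and the taper definition above can be restated as a minimal sketch; the function names are hypothetical and the values are the illustrative ones from the text.

```python
# Minimal sketch of the two geometric rules stated above. Function names are
# illustrative assumptions; dimensions are in microns.

def min_pad_dimension(pillar_dim: float, alignment: float) -> float:
    """A pad dimension may be the pillar dimension plus twice the worst-case
    pad-to-pillar misalignment, so the pad still captures the pillar."""
    return pillar_dim + 2 * alignment

def sidewall_taper(dim_first_end: float, dim_second_end: float) -> float:
    """Taper 506: difference in pillar dimension between the two ends."""
    return abs(dim_first_end - dim_second_end)

print(min_pad_dimension(10.0, 5.0))  # 10-micron pillar, 5-micron misalignment -> 20.0
print(sidewall_taper(15.5, 16.0))    # the experimental values quoted below -> 0.5
```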
By way of example, conductive pillar 312 may include a nominal width dimension of 16 microns, and pillar width dimension 408 may vary by less than five microns over height 504. Experimental results have indicated that conductive pillar 312 formed using the processes described below may have pillar width dimension 408 of 15.5 microns at first end 508 and 16.0 microns at second end 510, i.e., taper 506 may be 0.5 micron.

Still referring to Figure 5, axial alignment between conductive pillar 312 and conductive pads 306 above or below conductive pillar 312 may be in a range less than 10 microns, e.g., 6 microns. For example, a central axis 514 running through a geometric center of conductive pad 306 formed on a seed layer 512 may be within 10 microns of a central axis 514 running through a geometric center of conductive pillar 312 formed over such conductive pad 306. Similarly, a central axis 514 running through a geometric center of conductive pad 306 formed over dielectric layer 304, e.g., coplanar with a second dielectric layer 516, may be within 10 microns of a central axis 514 running through a geometric center of conductive pillar 312 formed under such conductive pad 306. Accordingly, conductive pads 306 and conductive pillar 312 of an escape routing may be well-aligned, allowing for smaller conductive pads 306 and conductive pillars 312 to be used such that escape routing may be incorporated in a single escape routing layer.

Referring to Figure 6, a flowchart of a method of fabricating a package substrate is illustrated in accordance with an embodiment. The method may include a two-operation lithography and plating process to form conductive pads 306 and conductive pillars 312 having noncircular cross-sections. Furthermore, the method may be performed without using laser drilling to form holes for microvias. Thus, unlike current substrate architectures, vertical interconnects may include noncircular cross-sections and may be well-aligned with adjacent capture pads. Figures 7A-7I illustrate sectional views of several operations of the method of fabricating the package substrate having noncircular conductive pillars and/or pads. Accordingly, Figures 6 and 7A-7I are referred to intermittently in the following description of the method.

At operation 602, escape line routing layer 302 may be formed on conductive seed layer 512. Referring to Figure 7A, seed layer 512 may be located over dielectric layer 304 of package substrate 204. For example, seed layer 512 may include a layer of copper deposited on a base layer of dielectric material. Copper may be deposited in an electrolytic copper plating process. Escape line routing layer 302 may include conductive pad 306 formed over seed layer 512 using a first lithography operation. For example, a photoresist 702, such as a dry film resist, may be patterned on seed layer 512, and copper may be deposited into photoresist 702 spaces to form conductive pad 306 and/or conductive traces 308. The photoresist 702 spaces may have noncircular profiles such that conductive pad 306 is formed having noncircular pad cross-section 310.

Referring to Figure 7B, photoresist 702 may be stripped or removed during the semi-additive process to expose the upper surfaces of seed layer 512, conductive pad 306, or conductive traces 308.

At operation 604, unlike a typical semi-additive process in which seed layer 512 is flash etched, photoresist 702 may be applied over conductive pad 306 and conductive traces 308 of the lower escape line routing layer 302.
Referring to Figure 7C, photoresist 702 may be laminated such that a hole 704 remains in photoresist 702 over conductive pad 306. Hole 704 may be a space opened through photoresist 702 by an exposure of the photoresist layer. In an embodiment, hole 704 includes a noncircular cross-section, i.e., hole 704 includes the same cross-section as the conductive pillar 312 that is to be formed over conductive pad 306. As described above, noncircular pillar cross-section 406 may be the same as or different than noncircular pad cross-section 310, and thus, hole 704 may have a profile that is the same as or different than a profile of the underlying conductive pad 306. Accordingly, the exposure and development stages of the second dry film resist lamination may determine a shape and dimension of the eventual conductive pillar 312.

At operation 606, the noncircular hole 704 may be filled to form conductive pillar 312 having a noncircular cross-section. Referring to Figure 7D, copper may be electrolytically plated into hole 704 to form conductive pillar 312 electrically connected to conductive pad 306. Conductive pillar 312 may have an outer profile identical to an inner profile of hole 704. Thus, conductive pillar 312 may include the noncircular pillar cross-section 406 and taper 506 characteristics described above.

Referring to Figure 7E, photoresist 702 may be stripped or removed during the semi-additive process to expose the upper surfaces of seed layer 512, conductive pad 306, conductive pillar 312, and conductive traces 308. Referring to Figure 7F, the conductive seed layer 512 may be etched to expose dielectric layer 304 around conductive pad 306 and conductive pillar 312. For example, conductive seed layer 512 may be removed using a flash etch chemistry. The built-up structure may be exposed to a wet chemistry process that removes seed layer 512 entirely without removing conductive pillar 312, conductive pad 306, or conductive trace 308. Thus, after performing the photolithography and plating operations described above, package substrate 204 may include well-aligned conductive pillar 312 and conductive pad 306 above dielectric layer 304. Given that conductive pillar 312 and conductive pad 306 of package substrate 204 may be formed using lithography, the dimensions of the pillar and pad may be limited only by the resolution and alignment of photoresist 702. Furthermore, the shape or profile of the pillar and pad may be limited only by the patterning of photoresist 702. Accordingly, high density routing may be achieved using noncircular pillar and pad geometries as described above. As a result, a cost reduction may be realized through the reduction in substrate layer count, as well as the elimination of the laser drilling and desmear operations inherent in conventional circular-pattern pillar geometries. If necessary, after the flash etch process, a typical dielectric-to-copper adhesion promotion, e.g., CZ, may be applied.

At operation 608, second dielectric layer 516 may be laminated over conductive pad 306 and conductive pillar 312. Referring to Figure 7G, buildup dielectric is laminated on top of the pillar structure after an adhesion promoter is applied to the copper features. In an embodiment, second dielectric layer 516 is built up thicker than dielectric layer 304 to accommodate height 504 of conductive pillar 312. More particularly, second dielectric layer 516 may be laminated over first end 508 of conductive pillar 312.
Accordingly, a planarization operation may be used to remove second dielectric layer 516 to expose the top surface of conductive pillar 312.

At operation 610, planarization may be used to remove second dielectric layer 516 over conductive pillar 312. Planing the second dielectric layer 516 may be a mechanical process, e.g., grinding, and/or a chemical process, e.g., an etching process. Accordingly, planing may include a coarse mechanical polishing operation, a fine plasma or chemical etching operation, etc. Referring to Figure 7H, after removing a portion of second dielectric layer 516 over conductive pillar 312, noncircular pillar cross-section 406 may be exposed at first end 508. That is, a top surface of second dielectric layer 516 may be flush with a top surface of conductive pillar 312.

Referring to Figure 7I, a second escape line routing layer 302 may be formed above second dielectric layer 516 and conductive pillar 312. For example, conductive pads 306 and/or conductive traces 308 may be deposited using the photolithography and plating operations described above. When electroless copper is a desired copper seed layer for the second escape line routing layer 302, a desmear process may be used to roughen the buildup surface. The desmear operation, however, may be used to improve mechanical adhesion in the process rather than to clean microvias, as in conventional package substrate processing. By contrast, when the copper seed layer is to be deposited by sputtering, the desmear operation may be omitted for a further cost reduction opportunity. The two-operation photolithography and plating process described above can be repeated for each escape line routing layer 302 of package substrate 204.

Referring to Figure 8, a schematic of a computer system is illustrated in accordance with an embodiment. The computer system 800 (also referred to as the electronic system 800) as depicted can embody a package substrate including conductive pillars and/or pads having noncircular cross-sections, according to any of the several disclosed embodiments and their equivalents as set forth in this disclosure. The computer system 800 may be a mobile device such as a netbook computer. The computer system 800 may be a mobile device such as a wireless smart phone. The computer system 800 may be a desktop computer. The computer system 800 may be a hand-held reader. The computer system 800 may be a server system. The computer system 800 may be a supercomputer or high-performance computing system.

In an embodiment, the electronic system 800 is a computer system that includes a system bus 820 to electrically couple the various components of the electronic system 800. The system bus 820 is a single bus or any combination of busses according to various embodiments. The electronic system 800 includes a voltage source 830 that provides power to the integrated circuit 810. In some embodiments, the voltage source 830 supplies current to the integrated circuit 810 through the system bus 820.

The integrated circuit 810 is electrically coupled to the system bus 820 and includes any circuit, or combination of circuits, according to an embodiment. In an embodiment, the integrated circuit 810 includes a processor 812 that can be of any type. As used herein, the processor 812 may mean any type of circuit such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor, or another processor.
In an embodiment, the processor 812 includes, or is coupled with, a package substrate including conductive pillars and/or pads having noncircular cross-sections, as disclosed herein. In an embodiment, SRAM embodiments are found in memory caches of the processor. Other types of circuits that can be included in the integrated circuit 810 are a custom circuit or an application-specific integrated circuit (ASIC), such as a communications circuit 814 for use in wireless devices such as cellular telephones, smart phones, pagers, portable computers, two-way radios, and similar electronic systems, or a communications circuit for servers. In an embodiment, the integrated circuit 810 includes on-die memory 816 such as static random-access memory (SRAM). In an embodiment, the integrated circuit 810 includes embedded on-die memory 816 such as embedded dynamic random-access memory (eDRAM).

In an embodiment, the integrated circuit 810 is complemented with a subsequent integrated circuit 811. Useful embodiments include a dual processor 813, a dual communications circuit 815, and dual on-die memory 817 such as SRAM. In an embodiment, the dual integrated circuit 810 includes embedded on-die memory 817 such as eDRAM.

In an embodiment, the electronic system 800 also includes an external memory 840 that in turn may include one or more memory elements suitable to the particular application, such as a main memory 842 in the form of RAM, one or more hard drives 844, and/or one or more drives that handle removable media 846, such as diskettes, compact disks (CDs), digital versatile disks (DVDs), flash memory drives, and other removable media known in the art. The external memory 840 may also be embedded memory 848, such as the first die in a die stack, according to an embodiment.

In an embodiment, the electronic system 800 also includes a display device 850 and an audio output 860. In an embodiment, the electronic system 800 includes an input device such as a controller 870 that may be a keyboard, mouse, trackball, game controller, microphone, voice-recognition device, or any other input device that inputs information into the electronic system 800. In an embodiment, an input device 870 is a camera. In an embodiment, an input device 870 is a digital sound recorder. In an embodiment, an input device 870 is a camera and a digital sound recorder.

As shown herein, the integrated circuit 810 can be implemented in a number of different embodiments, including having a package substrate incorporating conductive pillars and/or pads having noncircular cross-sections, according to any of the several disclosed embodiments and their equivalents; an electronic system; a computer system; one or more methods of fabricating an integrated circuit; and one or more methods of fabricating an electronic assembly that includes a package substrate incorporating conductive pillars and/or pads having noncircular cross-sections, according to any of the several disclosed embodiments as set forth herein in the various embodiments and their art-recognized equivalents. The elements, materials, geometries, dimensions, and sequence of operations can all be varied to suit particular I/O coupling requirements, including array contact count and array contact configuration, for a microelectronic die embedded in a processor mounting substrate according to any of the several disclosed embodiments of package substrates incorporating conductive pillars and/or pads having noncircular cross-sections and their equivalents.
A foundation substrate may be included, as represented by the dashed line of Figure 8. Passive devices may also be included, as is also depicted in Figure 8.

Embodiments of package substrates including conductive interconnects having noncircular cross-sections, and integrated circuit packages incorporating such package substrates, are described above. In an embodiment, a package substrate includes a dielectric layer, several conductive pillars extending through the dielectric layer, and an escape line routing layer over the dielectric layer. The escape line routing layer has several conductive pads electrically connected to respective conductive pillars. The conductive pillars include a noncircular pillar cross-section.

In one embodiment, the noncircular pillar cross-section includes a pillar width dimension and a pillar length dimension. The pillar width dimension is different than the pillar length dimension. In one embodiment, the noncircular pillar cross-section is a rectangular cross-section.

In one embodiment, the conductive pillars include respective sidewalls having a height through the dielectric layer. The sidewalls include respective tapers of less than 5 microns over the height.

In one embodiment, the conductive pads include a noncircular pad cross-section. The conductive pads are arranged in a first series in an axial direction and in a second series in the axial direction. The conductive pads of the first series are laterally separated from the conductive pads of the second series by a gap.

In one embodiment, the noncircular pad cross-section is a rectangular cross-section having a pad width dimension and a pad length dimension.

In one embodiment, the pad length dimension is at least twice the pad width dimension.

In one embodiment, the escape line routing layer includes several conductive traces extending from respective conductive pads of the first series and the second series. The conductive traces extend through the gap in the axial direction.

In an embodiment, an integrated circuit package includes a package substrate and an integrated circuit. The package substrate includes a dielectric layer, several conductive pillars extending through the dielectric layer, and an escape line routing layer over the dielectric layer. The escape line routing layer has several conductive pads electrically connected to respective conductive pillars. The conductive pillars include a noncircular pillar cross-section. The integrated circuit is mounted on the package substrate and has several pins electrically connected to the conductive pillars.

In one embodiment, the noncircular pillar cross-section has a pillar width dimension and a pillar length dimension. The pillar width dimension is different than the pillar length dimension.

In one embodiment, the noncircular pillar cross-section is a rectangular cross-section.

In one embodiment, the conductive pillars include respective sidewalls having a height through the dielectric layer. The sidewalls include respective tapers of less than 5 microns over the height.

In one embodiment, the conductive pads include a noncircular pad cross-section. The conductive pads are arranged in a first series in an axial direction and in a second series in the axial direction. The conductive pads of the first series are laterally separated from the conductive pads of the second series by a gap.
In one embodiment, the noncircular pad cross-section is a rectangular cross-section.

In one embodiment, the escape line routing layer includes several conductive traces extending from respective conductive pads of the first series and the second series. The conductive traces extend through the gap in the axial direction.

In an embodiment, a method of fabricating a package substrate including conductive pillars having noncircular cross-sections includes forming an escape line routing layer on a conductive seed layer. The conductive seed layer is over a dielectric layer of a package substrate. The escape line routing layer includes a conductive pad. The method further includes applying a photoresist over the conductive pad. The photoresist includes a hole having a noncircular cross-section over the conductive pad. The method further includes filling the hole to form a conductive pillar having the noncircular cross-section. The conductive pillar is electrically connected to the conductive pad.

In one embodiment, the method further includes removing the photoresist. The method further includes etching the conductive seed layer to expose the dielectric layer around the conductive pad and the conductive pillar.

In one embodiment, the method further includes laminating a second dielectric layer over the conductive pad and the conductive pillar. The method further includes planing the second dielectric layer to remove the second dielectric layer over the conductive pillar to expose an end of the conductive pillar. The end has the noncircular cross-section.

In one embodiment, the noncircular cross-section is a rectangular cross-section.

In one embodiment, the conductive pillar includes a sidewall having a height through the second dielectric layer. The sidewall includes a taper of less than 5 microns over the height.
Disclosed herein are embodiments related to security in cloudlet environments. In some embodiments, for example, a computing device (e.g., a cloudlet) may include: a trusted execution environment; a Basic Input/Output System (BIOS) to request a Key Encryption Key (KEK) from the trusted execution environment; and a Self-Encrypting Storage (SES) associated with the KEK; wherein the trusted execution environment is to verify the BIOS and provide the KEK to the BIOS subsequent to verification of the BIOS, and the BIOS is to provide the KEK to the SES to unlock the SES for access by the trusted execution environment.
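The unlock sequence summarized above can be illustrated with a minimal sketch, assuming a SHA-256 measurement of the BIOS image and a constant-time comparison; the class and method names are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch (not the disclosed implementation) of the flow above: a
# trusted execution environment (TEE) verifies a BIOS measurement before
# releasing the KEK, and the BIOS then passes the KEK to self-encrypting
# storage (SES) to unlock it. Names and the hash-based check are assumptions.
import hashlib, hmac, secrets

class TEE:
    def __init__(self, expected_bios_hash: bytes, kek: bytes):
        self._expected = expected_bios_hash
        self._kek = kek

    def request_kek(self, bios_image: bytes) -> bytes:
        measured = hashlib.sha256(bios_image).digest()
        if not hmac.compare_digest(measured, self._expected):
            raise PermissionError("BIOS verification failed; KEK withheld")
        return self._kek  # released only after verification

class SES:
    def __init__(self, kek: bytes):
        self._kek, self.unlocked = kek, False

    def unlock(self, kek: bytes) -> None:
        self.unlocked = hmac.compare_digest(kek, self._kek)

bios_image = b"example BIOS firmware"
kek = secrets.token_bytes(32)
tee = TEE(hashlib.sha256(bios_image).digest(), kek)
ses = SES(kek)
ses.unlock(tee.request_kek(bios_image))  # the BIOS forwards the KEK to the SES
assert ses.unlocked
```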
1. A server system usable in association with at least one remote cloud-based computer system, management-related logic, at least one processing resource, and at least one network, the server system comprising:
non-volatile storage hardware associated with at least one encryption key (EK) encrypted using at least one key encryption key (KEK), the non-volatile storage hardware for storing data, the data stored in the non-volatile storage hardware being encrypted based on the at least one EK, the data including operating system code;
hardware circuitry for decrypting and encrypting, based on the at least one EK, one or more corresponding portions of the data as the one or more corresponding portions of the data are read from and written to, respectively, the non-volatile storage hardware;
computing hardware for performing at least one boot operation based on at least a portion of the operating system code read from the non-volatile storage hardware, the computing hardware for executing at least one workload associated with at least one operating system instantiation; and
network hardware for communicating with the at least one remote cloud-based computer system through secure data exchange and with the management-related logic through the at least one network;
wherein:
the at least one KEK is to be obtained for use in generating the at least one EK in response, at least in part, to at least one request to the at least one processing resource;
the at least one processing resource is hardware and/or software at least partially isolated from execution of the operating system code;
the management-related logic is for use in connection with patching and installing, at least in part, at least one software update of the server system and the at least one remote cloud-based computer system;
the server system and/or the at least one remote cloud-based computer system are configurable to provide diagnostic-related and/or log-related data to the management-related logic for use, in association with an application programming interface (API), in monitoring and/or managing the server system and/or the at least one remote cloud-based computer system; and
the at least one remote cloud-based computer system is configurable to execute at least one virtual machine-related and/or container-related workload.
2. The server system of claim 1, wherein:
the at least one network comprises the Internet; and
the at least one workload associated with the at least one operating system instantiation comprises a plurality of workloads executed in association with a plurality of virtual machines and/or containers.
3. The server system of claim 1 or 2, wherein:
the server system comprises at least one mobile edge computing (MEC) server to execute the plurality of workloads; and
the at least one MEC server implements, at least in part, fifth generation mobile network (5G) protocol functionality.
4. The server system of any one of claims 1-3, wherein:
the management-related logic is at least partially remote from the server system.
5. The server system of any one of claims 1-4, wherein:
the server system comprises a platform root of trust component for controlling access to the non-volatile storage hardware.
6. A networked computing system for use in association with at least one network, the networked computing system comprising:
at least one remote cloud-based computer system;
management-related logic;
at least one processing resource; and
at least one server system comprising:
non-volatile storage hardware associated with at least one encryption key (EK) encrypted using at least one key encryption key (KEK), the non-volatile storage hardware for storing data, the data stored in the non-volatile storage hardware being encrypted based on the at least one EK, the data including operating system code;
hardware circuitry for decrypting and encrypting, based on the at least one EK, one or more corresponding portions of the data as the one or more corresponding portions of the data are read from and written to, respectively, the non-volatile storage hardware;
computing hardware for performing at least one boot operation based on at least a portion of the operating system code read from the non-volatile storage hardware, the computing hardware for executing at least one workload associated with at least one operating system instantiation; and
network hardware for communicating with the at least one remote cloud-based computer system through secure data exchange and with the management-related logic through the at least one network;
wherein:
the at least one KEK is to be obtained for use in generating the at least one EK in response, at least in part, to at least one request to the at least one processing resource;
the at least one processing resource is hardware and/or software at least partially isolated from execution of the operating system code;
the management-related logic is to be associated, at least in part, with patching and installing at least one software update for the at least one server system and the at least one remote cloud-based computer system;
the at least one server system and/or the at least one remote cloud-based computer system are configurable to provide diagnostic-related and/or log-related data to the management-related logic for use, in association with an application programming interface (API), in monitoring and/or managing the at least one server system and/or the at least one remote cloud-based computer system; and
the at least one remote cloud-based computer system is configurable to execute at least one virtual machine-related and/or container-related workload.
7. The networked computing system of claim 6, wherein:
the at least one network comprises the Internet; and
the at least one workload associated with the at least one operating system instantiation comprises a plurality of workloads executed in association with a plurality of virtual machines and/or containers.
8. The networked computing system of claim 6 or 7, wherein:
the at least one server system comprises at least one mobile edge computing (MEC) server to execute the plurality of workloads; and
the at least one MEC server implements, at least in part, fifth generation mobile network (5G) protocol functionality.
9. The networked computing system of any of claims 6-8, wherein:
the management-related logic is at least partially remote from the at least one server system.
10. The networked computing system of any one of claims 6-9, wherein:
the at least one server system comprises a platform root of trust component for controlling access to the non-volatile storage hardware.
11. The networked computing system of any of claims 6-10, wherein:
the at least one server system comprises a plurality of server systems.
12. A method implemented using a server system usable in association with at least one remote cloud-based computer system, management-related logic, at least one processing resource, and at least one network, the server system comprising non-volatile storage hardware, hardware circuitry, computing hardware, and network hardware, the method comprising:
decrypting and encrypting, by the hardware circuitry and based on at least one encryption key (EK), one or more corresponding portions of data as the one or more corresponding portions of the data are read from and written to, respectively, the non-volatile storage hardware, the data stored in the non-volatile storage hardware being encrypted based on the at least one EK, the data including operating system code, the at least one EK being associated with the non-volatile storage hardware and encrypted using at least one key encryption key (KEK);
performing, by the computing hardware, at least one boot operation based on at least a portion of the operating system code read from the non-volatile storage hardware;
executing, by the computing hardware, at least one workload associated with at least one operating system instantiation; and
communicating, using the network hardware, with the at least one remote cloud-based computer system through secure data exchange and with the management-related logic through the at least one network;
wherein:
the at least one KEK is to be obtained for use in generating the at least one EK in response, at least in part, to at least one request to the at least one processing resource;
the at least one processing resource is hardware and/or software at least partially isolated from execution of the operating system code;
the management-related logic is to be associated, at least in part, with patching and installing at least one software update of the server system and the at least one remote cloud-based computer system;
the server system and/or the at least one remote cloud-based computer system are configurable to provide diagnostic-related and/or log-related data to the management-related logic for use, in association with an application programming interface (API), in monitoring and/or managing the server system and/or the at least one remote cloud-based computer system; and
the at least one remote cloud-based computer system is configurable to execute at least one virtual machine-related and/or container-related workload.
13. The method of claim 12, wherein:
the at least one network comprises the Internet; and
the at least one workload associated with the at least one operating system instantiation comprises a plurality of workloads executed in association with a plurality of virtual machines and/or containers.
14. The method of claim 12 or 13, wherein:
the server system comprises at least one mobile edge computing (MEC) server to execute the plurality of workloads; and
the at least one MEC server implements, at least in part, fifth generation mobile network (5G) protocol functionality.
15. The method of any of claims 12-14, wherein:
the management-related logic is at least partially remote from the server system.
16. The method of any one of claims 12-15, wherein:
the server system comprises a platform root of trust component for controlling access to the non-volatile storage hardware.
17. One or more computer-readable media storing instructions which, when executed by one or more machines, result in performance of the method of any of claims 12-16.
18. A computer program product comprising instructions which, when executed by one or more machines, cause the one or more machines to perform the method of any one of claims 12-16.
19. A method for operating a cloudlet, the method comprising:
booting a cloudlet remote from a data center, wherein the booting of the cloudlet cannot be tampered with by software executed by an operating system of the cloudlet; and
receiving, at the cloudlet, data from a personal mobile computing device.
20. The method of claim 19, further comprising:
detecting an attempt to tamper with hardware of the cloudlet; and
interrupting the boot process in response to detecting the attempt to tamper with the hardware of the cloudlet.
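A hedged sketch of the behavior recited in claims 19 and 20: the boot sequence proceeds stage by stage and is interrupted if hardware tampering is detected. The tamper-sensor interface and stage names are illustrative assumptions, not the disclosed mechanism.

```python
# Illustrative sketch of a tamper-interrupted boot; detect_tamper() stands in
# for a hardware tamper sensor (an assumption for illustration only).
import sys

def detect_tamper() -> bool:
    """Placeholder for a hardware tamper-detection signal."""
    return False

def boot_cloudlet(stages=("boot block", "BIOS", "OS loader", "OS")) -> None:
    for stage in stages:
        if detect_tamper():
            sys.exit(f"tamper detected before '{stage}'; boot interrupted")
        print(f"verified and launched: {stage}")

boot_cloudlet()
```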
COMPUTING EQUIPMENT

This application is a divisional application; the original application is a patent application filed with the China Patent Office on May 18, 2018 (with an international filing date of November 16, 2016) under application number 201680067500, entitled "Computing Equipment."

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority from U.S. Provisional Patent Application No. 62/269,666, filed on December 18, 2015, and entitled "SECURITY IN CLOUDLET ENVIRONMENTS," and from U.S. Non-Provisional Patent Application No. 15/060,844, filed on March 4, 2016, and entitled "COMPUTING DEVICES," each of which is incorporated herein by reference in its entirety.

Background

Many computing applications are provided to end users via processing and storage resources concentrated in remote data centers the size of rooms or buildings. These data centers provide physical security for those resources, protecting them from physical tampering or theft.

Brief Description of the Drawings

Embodiments will be readily understood from the following detailed description in conjunction with the accompanying drawings. For convenience of description, the same reference numerals denote the same structural elements. Embodiments are shown by way of example, and not by way of limitation, in the figures of the accompanying drawings.

FIG. 1 is a block diagram of a networked computing system including one or more cloudlets, according to various embodiments.
FIG. 2 is a block diagram of a networked computing system including a cloudlet lifecycle manager in one or more cloudlets, according to various embodiments.
FIG. 3 is a block diagram of a networked computing system for mobile edge computing (MEC) including cloudlets, according to various embodiments.
FIG. 4 is a block diagram of a networked computing system for network functions virtualization (NFV) including cloudlets, according to various embodiments.
FIG. 5 illustrates a first stage of a trusted boot process, in accordance with various embodiments.
FIG. 6 illustrates a second stage of a trusted boot process, in accordance with various embodiments.
FIG. 7 illustrates a first stage of a trusted boot process including a root of trust measurement, according to various embodiments.
FIG. 8 illustrates a second stage of a trusted boot process including a root of trust measurement, according to various embodiments.
FIG. 9 is a block diagram of a computing device that may be used to implement various components of the networked computing systems disclosed herein, according to various embodiments.

Detailed Description

Traditional cloud computing systems typically locate the storage and processing resources that serve user devices in centralized data centers remote from those devices. The result of this arrangement is often high latency and high traffic across the network. However, if these storage and processing resources are taken out of the centralized data center and moved closer to the "edge" of the network (where the user equipment is located), they are no longer physically protected and monitored by the centralized data center, and they face an increased risk of physical compromise. In particular, these resources may be stolen and/or tampered with, causing them to behave in undesirable ways that are not easily detectable. For example, a "remote" processing resource may download compromised cloud platform firmware, operating system (OS), or virtualized network function (VNF) software updates and/or patches from a remote site via the Internet, and such compromise may go undetected.
In another example, a hacker might gain physical access to a computing resource and tamper with it so that it operates in a compromised state. Conventional computing systems cannot trust that software (e.g., firmware, OS, etc.) running on remote computing resources has not been compromised.

Disclosed herein are methods and apparatus for providing tamper-resistant security for cloudlets in environments where physical security cannot be guaranteed. The cloudlets disclosed herein can provide a "cloud in a box": a system that provides cloud computing system functionality without the need for hard wiring back to a traditional cloud environment, and that meets the security requirements service providers expect of their conventional data-center-based cloud resources. Various embodiments disclosed herein may involve the creation of hardware-implemented boot integrity schemes and chains of trust throughout the operating platform.

In some embodiments, the cloudlets disclosed herein may enable network functions virtualization (NFV) and software-defined networking (SDN) operators to extend the infrastructure of their cloud services closer to their subscribers, enabling improvements in performance and latency without compromising security and reliability. Various embodiments disclosed herein may be particularly advantageous in mobile edge computing (MEC) (e.g., European Telecommunications Standards Institute (ETSI) MEC), fog computing, and cloud edge computing applications. For example, the cloudlets disclosed herein can support the secure implementation of fifth-generation mobile network (5G) and MEC capabilities and their associated usage scenarios.

In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like reference numerals refer to like parts throughout, and in which embodiments that may be practiced are shown by way of illustration. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made, without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.

Various operations may be described as multiple discrete acts or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order-dependent. In particular, the operations may be performed out of the order of presentation. The described operations may be performed in a different order than the described embodiments. Various additional operations may be performed, and/or described operations may be omitted, in additional embodiments.

For the purposes of this disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of this disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). This description uses the phrases "in one embodiment" or "in an embodiment," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "including," "comprising," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. The drawings are not necessarily drawn to scale.

FIG. 1 is a block diagram of a networked computing system 100 including one or more cloudlets 102, according to various embodiments.
As used herein, a "microcloud" may refer to computing resources (eg, memory, processors, and networking devices) contained within a single enclosure (or a small number of enclosures) to provide data storage, processing, and/or distribution functions. In some embodiments, the micro-cloud can act as a small-scale data center. In some embodiments, as described above, the micro-cloud 102 may provide a substantially fully functional cloud system in a box without the need to connect back to a full cloud environment. The various cloudlets 102 in the system 100 may be deployed in remote environments where physical security of the cloudlets 102 cannot be ensured (eg, in public parks, street corners, shopping malls). System 100 may include a single cloudlet 102 or multiple cloudlets 102 (eg, tens or hundreds of cloudlets 102). Example embodiments of microclouds are discussed below with reference to FIG. 9 .Generally, the microcloud 102 may run virtual functions, applications, workloads, and data storage and collection processes. In some embodiments, one or more of the microclouds 102 may run one or more virtualized network functions (VNFs) 136 . For example, VNF 136 may include one or more VNFs provided by a Long Term Evolution (LTE) communications operator, such as Virtual Evolved Packet Core (vEPC) or Virtual Customer Premise Equipment (vCPE). In some embodiments, one or more of the microclouds 102 may run one or more workload virtual machines (VMs) 138 . As is known in the art, each workload VM 138 may provide an independent instantiation of an operating system (OS) and applications running on top of the operating system. The applications running in workload VM 138 may be any suitable applications, such as video caching, transcoding, and the like. VNF 136 and workload VM 138 may utilize a set of OpenStack services 134 running on host OS/virtual machine manager (VMM) 128, and host OS/VMM 128 may include docker daemon 132 (eg, for container management), As known in the art. One or more containers 117 may also run on the microcloud 102, providing operating system level virtualization, as known in the art (eg, for high performance computing applications). The security techniques disclosed herein can securely enable these capabilities of the microcloud 102 (by, eg, using keys and ciphertexts) without the physical security of a centralized data center.The cloudlet 102 may include a number of security components. For example, the cloudlet 102 may include a manageability engine (ME) 108 . For example, ME 108 may include a Converged Security and Manageability Engine (CSME). ME 108 may be an independent trusted execution environment and may act as a manufacturer's root of trust for microcloud 102 (eg, providing a secure environment for manufacturer-controlled bootstrap processes). For example, a trusted execution environment may provide one or more processors and memory devices that may execute code at a higher level of security than that provided by host OS/VMM 128 . In some embodiments, the trusted execution environment may be isolated from the operating hardware and/or software of the host OS/VMM 128 (eg, through encryption), and thus may execute in isolation from code executing as part of the host OS/VMM 128 code. 
In some embodiments, the trusted execution environment may be a secure enclave of the secure processor 126 in the cloudlet 102, and code executing in the trusted execution environment may be protected from tampering by code executing in the host OS/VMM 128.

In some embodiments, ME 108 may be a secure service processor running a manufacturer-trusted, host-independent OS. ME 108 may utilize various platform protocols and silicon functions, such as the Intelligent Platform Management Interface (IPMI), the Platform Environment Control Interface (PECI), and the Host Embedded Controller Interface (HECI), to connect external management systems to the platform. In some embodiments, ME 108 may interface with various hardware components via a secure fabric (e.g., the Intel System on Chip Fabric (IOSF)). ME 108 may include a Platform Trust Technology (PTT) component 110 and may communicate with a Trusted Platform Module (TPM) 118. As is known in the art, TPM 118 may include a chip (with a processing device) that can securely store data used to authenticate the platform of the cloudlet 102. PTT 110 may provide credential storage and key management functions, and may act as a firmware TPM (fTPM) that provides TPM functionality as an application on ME 108, as is known in the art.

The cloudlet 102 may include an innovation engine (IE) 112. IE 112 may communicate with ME 108 and may be a separate, independent trusted execution environment. In particular, IE 112 may serve as a root of trust for the operator (platform owner) of the cloudlet 102 (e.g., a telecommunications equipment manufacturer (TEM)). IE 112 may be provisioned with operator-specific firmware. In some embodiments, IE 112 may include a secure out-of-band (OOB) service processor running a host-independent OS trusted by the operator. IE 112 may contain boot images and authentication credentials from the operator (stored in, for example, fuses and manifests), and may store operator authorization schemes for executing specific applications or applets within IE 112. IE 112 can utilize various platform protocols and silicon functions, such as IPMI, PECI, and HECI, to connect external management systems to the platform. In some embodiments, IE 112 may interface with various hardware components via a secure fabric (e.g., IOSF). IE 112 may provide an OOB manageability access point to the platform of the cloudlet 102, and may optionally include an fTPM. In some embodiments, IE 112 may have networking capabilities that ME 108 may not have, e.g., an Ethernet interface and associated networking access. IE 112 can also access dedicated platform accelerators, such as field programmable gate arrays (FPGAs).

IE 112 may include a multi-party authorization (MPA) component 116. In use, IE 112 itself can be securely booted with a signed image and signed configuration parameters, and, as described above, can act as a hardware root of trust for the operator's infrastructure (holding the security credentials of IE 112's OS and applications). The MPA component 116 may implement access control and explicit authorization for security applications (e.g., NFV operator access, telemetry, monitoring, updates, etc.) running within IE 112; a minimal sketch of such an authorization check appears below. IE 112 may also be responsible for verifying any UEFI/BIOS signatures (stored, e.g., in fuses) using platform credentials. IE 112 may store a key encryption key (KEK) for the self-encrypting storage (SES) 156; this KEK is represented as SES-KEK 114 in FIG. 1. SES 156 is discussed in further detail below.
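By way of a minimal sketch (an assumption about the general shape of multi-party authorization, not the disclosed MPA design), an application might run within IE 112 only if every required party has explicitly authorized it:

```python
# Illustrative multi-party authorization check: a security application runs
# only if all required parties have approved it. Party and application names
# are hypothetical.

REQUIRED_PARTIES = {"manufacturer", "operator"}

def authorized(app: str, approvals: dict[str, set[str]]) -> bool:
    """approvals maps party name -> set of application names it approved."""
    return all(app in approvals.get(party, set()) for party in REQUIRED_PARTIES)

approvals = {"manufacturer": {"telemetry"}, "operator": {"telemetry", "updates"}}
print(authorized("telemetry", approvals))  # True: both parties approved it
print(authorized("updates", approvals))    # False: the manufacturer has not
```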
ME 108 and IE 112 may include their own processors, cryptographic computing cores, static random access memory (SRAM), and the like.

The cloudlet 102 may include a boot guard component 160. The boot guard component 160 can provide hardware-based boot integrity protection to prevent unauthorized software and malware from taking over the boot blocks of the cloudlet 102. In some embodiments, the boot guard component 160 may be included in an Authenticated Code Module (ACM). The ACM is firmware configured to invoke the appropriate CPU instructions to perform boot guard measurement and verification. The ACM code may be privileged code signed by the manufacturer or another trusted entity. In some embodiments, the ACM may be part of the secure processor 126 discussed below. The boot guard component 160 may provide measured boot, in which the initial boot block is measured into TPM 118 or PTT 110 (a minimal sketch of such measurement chaining appears below), or authenticated boot, in which the initial boot block is cryptographically verified using a boot policy key. The boot guard component 160 may be utilized by the central processing unit (CPU) of the cloudlet 102 to trigger the signing and verification process during boot. ME 108 and IE 112 can be verified by hardware before CPU booting begins.

The cloudlet 102 may include a secure processor 126. Secure processor 126 may be a security-enhanced general-purpose processor. In some embodiments, secure processor 126 may include a Software Guard Extensions (SGX) component (not shown) to provide secure processor 126 with a set of instructions that applications can use to set aside private regions of code and data in "secure enclaves." In some embodiments, the secure processor 126 may include a trusted measurement service to perform attestation to ensure that all system components are authorized. For example, secure processor 126 may include a Trusted Execution Technology (TXT) component (not shown) to create a cryptographically unique identifier for each approved boot-enabled component of the cloudlet 102, and then provide a hardware-based enforcement mechanism to block the launch of code that does not match the approved code. For example, the TXT component may be implemented by the ACM. In some embodiments, secure processor 126 may be an x86 processor.

The cloudlet 102 may include a basic input/output system (BIOS) 122, which in turn may include an option read-only memory (OROM) 124. BIOS 122 may be a Unified Extensible Firmware Interface (UEFI) BIOS, and OROM 124 may be a UEFI OROM. As discussed below, OROM 124 may be implemented as firmware loaded by BIOS 122 and may be used by BIOS 122 to enable ME 108 and IE 112 to read data in SES 156. BIOS 122 can be verified by ME 108. In some embodiments, BIOS 122 may implement signature verification of OROM 124 (e.g., a UEFI OROM), as well as of the OS bootloader and OS images in the cloudlet 102. For example, a UEFI secure boot process may be provisioned by the operator of the cloudlet 102 with OS bootloader and OS signing and verification at boot time, and UEFI authenticated variables (e.g., the platform key (PK), KEK, signature database (DB), and forbidden signature database (DBX)) may be stored in a secure portion of the host storage device 154 (e.g., an anti-rollback partition in an embedded multimedia card (eMMC) or universal flash storage (UFS) device). In some embodiments, OROM 124 may be a UEFI loadable module controlled by IE 112 and stored in SPI flash 150.
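The measured-boot option described above can be illustrated with a minimal sketch of TPM-style PCR extension, in which each boot stage is hashed into a platform configuration register so that the final value attests to the entire chain; this mimics the TPM 2.0 SHA-256 extend rule and is not the Boot Guard ACM itself.

```python
# Illustrative measured-boot chain: each stage's measurement is folded into a
# PCR-like register; the stage names are illustrative assumptions.
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM 2.0 extend rule: new_pcr = SHA-256(old_pcr || measurement)
    return hashlib.sha256(pcr + measurement).digest()

pcr0 = bytes(32)  # PCRs start at all zeros
for stage in (b"initial boot block", b"UEFI BIOS", b"OS bootloader", b"OS kernel"):
    pcr0 = pcr_extend(pcr0, hashlib.sha256(stage).digest())

print(pcr0.hex())  # compared against a known-good value during attestation
```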
In some embodiments, the payload of OROM 124 may be responsible for primary host storage management and/or updates.

BIOS 122 may use a key (e.g., SES-KEK 114) supplied by the operator to the cloudlet 102 as part of its authenticated variables. In some embodiments, BIOS 122 may store authenticated variables in a separate partition of SES 156. In some embodiments, BIOS 122 may store authenticated variables in a secure storage partition of main storage 152 (described below), accessible only through a platform root of trust (e.g., ME 108).

Host OS/VMM 128 may include a Cloud Integrity Technology (CIT) agent 130. The CIT agent 130 may interact with the trusted measurement service (e.g., TXT) of the secure processor 126 to enable boot-time measurement of BIOS 122, the OS and VMM of host OS/VMM 128, and any VNF 136, VM 138, or container 117 that is booted. In some embodiments, the boot guard component 160, the CIT agent 130, and the trusted measurement service (e.g., TXT) of the secure processor 126 may together provide trusted, verified, and measured boot all the way up to the applications or services running on the cloudlet 102.

In some embodiments, as discussed in detail below with reference to FIGS. 5-8, the cloudlet 102 may perform a secure and trusted bootstrap process. The boot process may include releasing the SES-KEK 114 to the SES 156 to complete the boot process. Several of the security components discussed herein, including the boot guard component 160, the BIOS 122, and the OS of host OS/VMM 128, can be utilized during this boot process, as discussed in detail below.

The cloudlet 102 may include one or more network interface controllers (NICs)/switches 120. NIC/switch 120 can communicate with host OS/VMM 128 and IE 112, and can route data to/from the cloudlet 102. In some embodiments, all firmware and configuration information installed to the NIC/switch 120 may be verified by ME 108, IE 112, and/or the trusted measurement service (e.g., SGX) of the secure processor 126. These firmware and configuration elements may be stored in SES 156. In some embodiments, NIC/switch 120 may be part of the main processor of the cloudlet 102 (e.g., in a central processing unit (CPU) north complex) or a chipset (e.g., a platform controller hub (PCH) or south complex). In some embodiments, the NIC/switch 120 may be implemented in an FPGA programmable logic module. In some embodiments, the NIC/switch 120 may be located external to the cloudlet 102 and on a Peripheral Component Interconnect Express (PCIe), optical, or other high-speed bus. In some embodiments, the NIC/switch 120 and the cloudlet 102 may be manufactured by different manufacturers.

The cloudlet 102 may include firmware storage 140 and main storage 152. In some embodiments, firmware storage 140 may include serial peripheral interface (SPI) flash memory 150, but may alternatively or additionally include, e.g., eMMC. SPI flash 150 may include BIOS firmware storage 142 (for BIOS 122), ME firmware storage 144 (for ME 108), IE firmware storage 146 (for IE 112), NIC firmware storage 162 (for NIC/switch 120), and OROM firmware storage 148 (for OROM 124). SPI flash 150 may provide storage for the main platform storage (e.g., to store UEFI platform configuration parameters).

Main storage 152 may include storage 158 for the host OS cloud service and host storage 154 for the host. Main storage 152 may store an image of the host OS, and may store all images stored in main storage 152 in an encrypted manner.
Host storage 154 may include one or more SES 156; although referred to in the singular, SES 156 may include one or more SES devices. SES 156 may include a memory device (e.g., a hard drive) and hardware circuitry that encrypts/decrypts data as it is written to or read from the memory device. Encryption/decryption of data in the memory device is performed using a media encryption key (MEK), which is itself encrypted by a KEK (a minimal sketch of this key hierarchy appears below). For example, the KEK for SES 156 is the SES-KEK 114 in IE 112. SES 156 may be used for the OS. Although shown separately in FIG. 1, in some embodiments, SES 156 may be used for platform firmware. In some embodiments, main storage 152 may have dual redundant partitions such that, if a partition fails, the cloudlet 102 may revert to the redundant partition.

In some embodiments, SES 156 may be divided into partitions, and IE 112 and/or ME 108 may incrementally unlock these partitions (e.g., using different KEKs) as needed. A KEK (e.g., SES-KEK 114) can always be protected within IE 112 and/or ME 108 (or another trusted environment) and programmed into SES 156 as needed. In some embodiments, each partition may have its own unique KEK. In some embodiments, the KEK (e.g., SES-KEK 114) may be securely wrapped by IE 112 and/or ME 108 and passed to the operator's secure command center or to the infrastructure owner of the cloudlet 102. For example, a secure command center can use the wrapped KEK for auditing and escrow.

Main storage 152 and/or firmware storage 140 may be secure storage, such as secure rollback-protected eMMC and/or secure flash partitions. For example, the secure storage may be used to store platform firmware, OS bootloaders, and OS components. In some embodiments, the secure storage of the cloudlet 102 may be used to store platform firmware, OS bootloader, and/or OS identification information that can be used to check whether the correct version is in place. Examples of such OS identifying information include version, security version, OS composition (e.g., OpenStack image, storage, and networking services), authorized signers, and authenticated variables, among others. "Version" may refer to a value that distinguishes different releases of the software. "Security version" may refer to a value that changes when a security policy violation is detected in software, firmware, or other related components. For example, software may have a security version of 1 until a security issue is discovered, at which point the security version may be updated to 2 (and all security versions prior to this new security version may be considered vulnerable). "Authenticated variables" may refer to secure signature database variables, such as signing keys, authorization databases, key hierarchies, update logs, and the like. When BIOS 122 is a UEFI BIOS, these authenticated variables are defined by UEFI. In some embodiments, the secure storage device may be cryptographically bound to a platform hardware root of trust (e.g., ME 108, IE 112, and/or the trusted measurement service (e.g., SGX) of the secure processor 126). The secure storage device may be bound to the platform of the cloudlet 102, and in some embodiments, any physical tampering may render the platform unbootable. In some embodiments, the platform of the cloudlet 102 may not boot without the secure storage.

As shown in FIG. 1, a cloudlet 102 may communicate with one or more additional cloudlets 102. These additional cloudlets 102 may be configured according to any of the embodiments discussed above.
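The MEK/KEK hierarchy described above can be sketched as follows, using the AES key-wrap primitive from the third-party Python `cryptography` package (pip install cryptography); the per-partition layout and key sizes are illustrative assumptions, not the disclosed SES internals.

```python
# Illustrative KEK/MEK hierarchy: the MEK that encrypts the media is stored
# only in wrapped (KEK-encrypted) form; unlocking a partition means
# unwrapping its MEK with that partition's KEK.
import secrets
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

partitions = {}
for name in ("os", "firmware"):
    kek = secrets.token_bytes(32)   # held in IE/ME, never stored on the drive
    mek = secrets.token_bytes(32)   # encrypts the media contents
    partitions[name] = {"kek": kek, "wrapped_mek": aes_key_wrap(kek, mek)}

def unlock(partition: str, kek: bytes) -> bytes:
    """Return the plaintext MEK; raises InvalidUnwrap on a wrong KEK."""
    return aes_key_unwrap(kek, partitions[partition]["wrapped_mek"])

mek = unlock("os", partitions["os"]["kek"])  # incremental unlock of one partition
```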
In some embodiments, a microcloud 102 may not communicate with any other microcloud 102. The microcloud 102 may also communicate with the microcloud management center 106 (which may also be referred to as the microcloud control center) via the Internet 104. The Internet 104 may include network equipment, Internet connections, backbone fibers, or any other network hardware that couples the microcloud 102 to the microcloud management center 106. In some embodiments, one or more microclouds 102 may communicate with one or more network infrastructure components 119 (e.g., top-of-rack switches or routers).
The microcloud management center 106 may provide infrastructure as a service (IaaS) for managing the microclouds 102 in the system 100. Using the microcloud management center 106 to manage the microclouds 102 may allow the system 100 to be implemented with a lower total cost of ownership (TCO) and with large-scale deployment capability. In some embodiments, the microcloud management center 106 may include installation and configuration management circuitry to provide the microcloud 102 with appropriate software and configuration information. The remote management and telemetry circuitry in the microcloud management center 106 may use a dedicated out-of-band mechanism to communicate with the microcloud 102 when the host OS or applications running on the microcloud 102 are to be updated. For example, a port of the NIC/switch 120 may be assigned to operate as the out-of-band mechanism and may provide a secure and reliable channel between the microcloud 102 and the microcloud management center 106. A new image including the update can be pushed down to the microcloud 102 by the microcloud management center 106, and the IE 112 can invoke the OROM 124 to provide the IE 112 with access to the SES 156 in the main storage 152 to store the new image. The host OS/VMM 128 can continue to run while the new image is pushed down to the microcloud 102 via the out-of-band mechanism, thereby minimizing downtime caused by updates. In other embodiments, there may be a direct connection between the IE 112 and the main storage 152, and/or between the ME 108 and the main storage 152 (e.g., the main storage 152 may include multiple communication heads). In this way, the controller for the main storage 152 can allow the host OS/VMM 128, the IE 112, and/or the ME 108 to act as different "agents" that connect to the main storage 152 and use it for reads/writes.
In some embodiments, the OS images on multiple of the microclouds 102 included in the system 100 may be the same, and the identity of each microcloud 102 may be determined by a configuration file hosted on a secure pseudo universal serial bus (USB) (or pseudo PCIe) device. A pseudo device may provide the operation of a set of similar devices without the hardware typically associated with such devices, to enhance the functionality of existing devices or to access subsystems of the microcloud 102. In some embodiments, a pseudo device may be implemented by a pseudo device driver, which may be part of a kernel that acts as a device driver but does not correspond to any "real" device hardware in the microcloud 102. In particular, a secure and trusted boot process (such as the processes discussed below with reference to FIGS. 5-8) can be constructed to expose configuration information as a pseudo device on the USB (or PCIe) bus and to allow the IE 112 to securely update the information on that device.
Such embodiments may include having the OROM 124 mount the associated storage as a USB or PCIe device, with a USB or PCIe redirection controller in the IE 112. The presence of encrypted storage at rest can limit the risk of physical attack.
FIG. 2 is a block diagram of a networked computing system 100 that includes a microcloud lifecycle manager 170 in one or more microclouds 102, according to various embodiments. The microcloud lifecycle manager 170 may be embedded in the microcloud 102. In some embodiments, the microcloud lifecycle manager 170 of the microcloud 102 may be located in the IE 112. As shown in FIG. 2, each microcloud 102 may communicate with the microcloud management center 106. In particular, the microcloud lifecycle manager 170 may communicate with the installation and configuration management circuitry and the remote management and telemetry circuitry of the microcloud management center 106, as discussed above. During operation, the platform telemetry circuitry of the microcloud 102 may communicate with a telemetry hub included in the ME 108 (which may include the firmware TPM 118, as discussed herein), and the ME 108 may in turn communicate with the microcloud lifecycle manager 170 in the IE 112. Each microcloud 102 may also communicate with cloud systems 174 provided by telecommunications companies or other service providers to perform NFV and SDN operations. The cloud system 174 may have its own data center 176, which may take the form of a traditional cloud computing data center. Each microcloud 102 may also communicate with a cloud application distribution device 172, which may provide software for a particular application to the microcloud 102.
The microcloud lifecycle manager 170 may interact with the microcloud management center 106 to allow secure exchanges between the microcloud 102 and the microcloud management center 106 without the possibility of man-in-the-middle or spoofing arrangements. For example, in some embodiments, the microcloud lifecycle manager 170 can emulate a read-only device and can expose the emulated read-only device to a host server (e.g., the microcloud management center 106 or a microcloud 102 in the system 100). The emulated device may include configuration parameters, which may be exposed as files or other data forms known to the application software running on the host server. The microcloud lifecycle manager 170 may expose an application programming interface (API) to the microcloud management center 106 to allow secure updates to the content of the emulated device. The microcloud lifecycle manager 170 can thus provide a node configuration pseudo device.
In another example, in some embodiments, the microcloud lifecycle manager 170 can emulate a logging device and can expose the emulated logging device to the host server. Information written to the device may be securely presented to the microcloud management center 106 by the microcloud lifecycle manager 170 as log or diagnostic information. The microcloud lifecycle manager 170 may filter the log information sent to the microcloud management center 106 based on configuration or policy settings from the microcloud management center 106.
In another example, once the platform of the microcloud 102 has been fully authenticated, the microcloud 102 may expose an out-of-band attestation level to external systems. This out-of-band attestation level may represent the security of the microcloud 102 as measured.
For example, a "five-star" certification level may indicate that the firmware, OS boot, keys, and configuration of the microcloud 102 are as expected. A "four star" certification level may indicate that the cloud 102 is mostly but not entirely as expected (eg, the firmware is an outdated version). A "0-star" attestation level may indicate a complete failure (eg, the measured lead does not match the expected value).In some embodiments, the micro-cloud lifecycle manager 170 may communicate with the remote management and telemetry circuitry of the micro-cloud management center 106 via a RESTful interface. The interface may use the JavaScript Object Notation (JSON) data format and, in some embodiments, may be a Hypertext Transfer Protocol Secure (HTTPS) interface (eg, according to the X.509 standard for client/server authentication).As described above, in some embodiments, the microclouds 102 disclosed herein may be included in a MEC arrangement. 3 is a block diagram of a networked computing system 100 for mobile edge computing (MEC) including microcloud 102, according to various embodiments. In the system 100 of FIG. 3, the user device 178 may represent any end device, such as a smartphone, other personal computing device, Internet of Things (IoT) device, vehicle, or sensor. A single user device 178 is shown for ease of illustration, and system 100 may include multiple user devices 178 . Small cells 180 may communicate with user equipment 178 and may represent small wireless network hubs (eg, Wi-Fi hubs, 3rd Generation Partnership Project (3GPP) antennas, etc.). According to any of the embodiments disclosed herein, small cell 180 may be coupled to MEC platform 182 , and MEC platform 182 may include micro cloud 102 . Termination can be performed at the MEC platform 182 and the micro cloud 102 can provide the VNF 136 for handset termination, signaling, data plane and applications. The MEC platform 182 can communicate with the mobile core 184 , which can have a MEC core node 186 . Communication between the MEC platform 182 and the mobility core 184 may include backhaul links, routers, switches, and any other suitable hardware, as known in the art. Mobile core 184 may include, for example, an LTE backbone network. The MEC core node 186 may then communicate with the Internet 104, which in turn may be coupled with any of a variety of services (not shown) such as content delivery, content analysis, vehicle monitoring, monitoring of other sensors, emergency services, and the like. This architecture can be contrasted with traditional mobile networks, in which small cells 180 are coupled to mobile core 184 via eNBs that do not have the capability to provide cloud computing services.4 is a block diagram of a networked computing system 100 for network functions virtualization (NFV) including microcloud 102, according to various embodiments. In the system 100 of FIG. 4, the micro-cloud 102 may play the role of NFV infrastructure (NFV), and the micro-cloud management center 106 may be included in the NFV management and orchestration (NFV MANO) component. In some embodiments, all components of microcloud 102 of FIG. 1 may be included in NFVI, except OpenStack service 134 , VNF 136 , workload VM 138 , and container 117 .As described above, in some embodiments, the micro cloud 102 may perform a secure and trusted boot process. The boot process may include releasing the SES-KEK 114 to the SES 156 to complete the boot process. 
FIGS. 5 and 6 show the first and second stages, respectively, of a first embodiment of the trusted boot process, while FIGS. 7 and 8 show the first and second stages, respectively, of a second embodiment of the trusted boot process.
During the trusted boot processes of FIGS. 5-8, the SES-KEK 114 associated with the SES 156 is protected by the ME 108 and the IE 112 and may be passed to the BIOS 122 as applicable. Once successfully authenticated and authorized, the SES-KEK 114 can be provided to the SES 156 for self-decryption and unlocking. The BIOS 122 may have to pass signature verification checks originating from the ME 108 and/or the IE 112, as well as measurement checks, before receiving the SES-KEK 114. The BIOS 122 may include mechanisms for accessing and unlocking the SES 156. In some embodiments, the BIOS operations described above may be performed through a UEFI BIOS system management interrupt (SMI)-based system management mode (SMM). In some such embodiments, code executing in SMM may be trusted and verified by the ME 108 and/or the IE 112 as a root of trust.
Turning to FIG. 5, a first stage 500 of a trusted boot process is shown in accordance with various embodiments. As discussed below, the first stage 500 may be a measurement and verification stage for the hardware and BIOS. After the system is powered up, at 502, the microcode may verify and measure the authenticated code module (ACM) of the boot guard (BtG) 160. The result can be written to a platform configuration register (PCR). At 504, the ACM of the boot guard 160 can verify the BIOS 122 and can write the result to a PCR. At 506, the ACM may verify and measure the initialization code of the BIOS 122. The result can be written to a PCR; if the verification fails, the process can be aborted. At 508, the trusted measurement service (e.g., TXT) of the security processor 126 and its memory can be initialized, and the SMM can be loaded. At 510, the SMM and other trusted code can be measured, and the results can be written to a PCR. At 512, the configuration of the trusted measurement service (e.g., TXT) and its memory can be locked by providing the ENTERACCS:LockConfig instruction. At 514, non-critical code can be executed. At 516, the BIOS 122 may communicate with the IE 112 to obtain the SES-KEK 114 for the locked SES 156.
The second stage 600 shown in FIG. 6 may be a measurement stage for various other components (e.g., Trusted Boot (TBOOT), the OS, the Docker engine, etc.). For example, TBOOT can be a "pre-kernel" component that can invoke TXT instructions to measure the OS or a VM. Turning to FIG. 6, at 602, the BIOS 122 may provide the SES-KEK 114 to the SES 156. At 604, the SES 156 may use the SES-KEK 114 to decrypt the MEK of the SES 156, thereby unlocking the SES 156. If the unlocking of the SES 156 fails, the process can be aborted. At 606, SINIT and OS code can be loaded, and a SENTER instruction can be provided (as part of a TXT process known in the art). At 608, the microcode can verify the SINIT of 606, and the result can be written to a PCR. At 610, SINIT can measure TBOOT, and the result can be written to a PCR. At 612, SINIT can measure the OS kernel and initrd, and can write the results to a PCR. At 614, tboot-xm can measure applications, configuration data, Docker daemons, and/or other OS components, and can write the results to a PCR. The components measured at 614 may be configurable. At 616, the OS can be started.
The trusted boot process shown in FIGS. 5 and 6 may provide remote secure access to the platform of the microcloud 102, including authorization credentials that enable the ME 108 and/or the IE 112 to unlock the SES 156. The SES-KEK 114 is never visible outside of protected firmware, nor extractable, under normal circumstances. In some embodiments, the KEK of the microcloud 102 may be retrieved using a highly privileged authorization for operator compliance. For example, the IE 112 and/or ME 108 may be pre-provisioned with authorization credentials that may be used to securely deliver the KEK to a management entity (e.g., the NFV virtualized infrastructure manager shown in FIG. 4).
FIGS. 7 and 8 show the first and second stages, respectively, of the second embodiment of the trusted boot process. During the trusted boot process shown in FIGS. 7 and 8, the roots of trust (e.g., the ME 108 and the IE 112) are also measured. This can be applied for security auditing and compliance to ensure that the platform of the microcloud 102 boots with a known set of root-of-trust firmware/OS and a known root-of-trust configuration.
As described below, the first stage 700 may be a measurement and verification stage for the hardware and BIOS. Turning to the first stage 700 of FIG. 7, after the system is powered up, at 702, ME ROM boot (e.g., of the ME 108) and hardware initialization may be performed, and measurements may be stored in internal SRAM (e.g., while the TPM 118 is not yet ready). At 704, IE ROM boot (e.g., of the IE 112) and multi-party authorization (e.g., by the multi-party authorization component 116) can be performed, and measurements can be stored in internal SRAM (e.g., while the TPM 118 is not yet ready). At 706, the microcode can verify and measure the ACM of the BIOS 122, and can write the results to a PCR. At 708, the ACM may verify and measure the initialization code of the BIOS 122. The result can be written to a PCR; if the verification fails, the process can be aborted. At 710, the trusted measurement service (e.g., TXT) of the security processor 126 and its memory may be initialized, and system management mode (SMM) may be loaded. As known in the art, SMM may be a mode in which OS execution is suspended and trusted firmware is executed. At 712, the SMM and other trusted code can be measured, and the results can be written to a PCR. At 714, the configuration of the trusted measurement service (e.g., TXT) and its memory may be locked, and the ENTERACCS:LockConfig instruction may be provided. At 716, non-critical code can be executed. At 718, the BIOS 122 may communicate with the IE 112 to obtain the SES-KEK 114 for the locked SES 156.
The second stage 800 shown in FIG. 8 may be a measurement stage for various other components (e.g., TBOOT, the OS, the Docker engine, etc.). Turning to FIG. 8, at 802, the BIOS 122 may provide the SES-KEK 114 to the SES 156. At 804, the SES 156 may use the SES-KEK 114 to decrypt the MEK of the SES 156, thereby unlocking the SES 156. If the unlocking of the SES 156 fails, the process can be aborted. At 806, SINIT and OS code can be loaded, and a SENTER instruction can be provided. At 808, the microcode can verify the SINIT of 806, and the result can be written to a PCR. At 810, SINIT can measure TBOOT, and the result can be written to a PCR. At 812, SINIT can measure the OS kernel and initrd, and can write the results to a PCR. At 814, tboot-xm can measure applications, configuration data, and Docker data, and can write the results to a PCR. The components measured at 814 may be configurable.
At 816, the OS can be started.
FIG. 9 is a block diagram of a computing device 900 that may be used to implement various components of the networked computing systems disclosed herein, according to various embodiments. For example, some or all of the components of the computing device 900 may be included in the microcloud 102, the microcloud management center 106, the user device 178, or the cloud application distribution device 172. A number of elements are shown in FIG. 9 as included in the computing device 900, but any one or more of these elements may be omitted or duplicated, as suitable for the application.
Additionally, in various embodiments, the computing device 900 may not include one or more of the elements shown in FIG. 9, but the computing device 900 may include interface circuitry for coupling to the one or more elements. For example, the computing device 900 may not include a display device 906, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 906 may be coupled. In another set of examples, the computing device 900 may not include an audio input device 924 or an audio output device 908, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 924 or an audio output device 908 may be coupled.
The computing device 900 may include a processing device 902 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device, or portion of a device, that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processing device 902 may include one or more digital signal processors (DSPs), application-specific integrated circuits (ASICs), central processing units (CPUs), graphics processing units (GPUs), cryptoprocessors, server processors, or any other suitable processing devices. For example, the processing device 902 may include the security processor 126 and the separate processors included in the ME 108 and the IE 112 of the microcloud 102.
The computing device 900 may include a memory 904, which may itself include one or more memory devices, such as volatile memory (e.g., dynamic random access memory (DRAM)), non-volatile memory (e.g., read-only memory (ROM)), flash memory, solid-state memory, an SES, and/or a hard drive. For example, the memory 904 may include the firmware storage 140 and the main storage 152 of the microcloud 102.
In some embodiments, the computing device 900 may include a communication chip 912 (e.g., one or more communication chips). For example, the communication chip 912 may be included in the NIC/switch 120 of the microcloud 102. The communication chip 912 may be configured, for example, to manage wireless communications for the transfer of data to and from the computing device 900. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium.
The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
The communication chip 912 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards such as Wi-Fi (the IEEE 802.11 family) and the IEEE 802.16 standards (e.g., the IEEE 802.16-2005 amendment), and the Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., the LTE-Advanced project, the Ultra Mobile Broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16-compatible broadband wireless access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 912 may operate in accordance with a Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 912 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 912 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols designated as 3G, 4G, 5G, and beyond. In other embodiments, the communication chip 912 may operate in accordance with other wireless protocols. The computing device 900 may include an antenna 922 to facilitate wireless communications and/or to receive other wireless communications, such as AM or FM radio transmissions.
In some embodiments, the communication chip 912 may manage wired communications, such as electrical, optical, or any other suitable communication protocol (e.g., Ethernet). As noted above, the communication chip 912 may include multiple communication chips. For instance, a first communication chip 912 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 912 may be dedicated to longer-range wireless communications such as Global Positioning System (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 912 may be dedicated to wireless communications, and a second communication chip 912 may be dedicated to wired communications.
The computing device 900 may include battery/power circuitry 914. The battery/power circuitry 914 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling elements of the computing device 900 to an energy source separate from the computing device 900 (e.g., AC line power).
The computing device 900 may include a display device 906 (or corresponding interface circuitry, as discussed above). The display device 906 may include any visual indicator, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display, for example.
The computing device 900 may include an audio output device 908 (or corresponding interface circuitry, as discussed above).
For example, the audio output device 908 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds.
The computing device 900 may include an audio input device 924 (or corresponding interface circuitry, as discussed above). The audio input device 924 may include any device that generates a signal representative of a sound, such as a microphone, a microphone array, or a digital instrument (e.g., an instrument having a musical instrument digital interface (MIDI) output).
The computing device 900 may include a global positioning system (GPS) device 918 (or corresponding interface circuitry, as discussed above). The GPS device 918 may be in communication with a satellite-based system and may receive a location of the computing device 900, as known in the art.
The computing device 900 may include other output devices 910 (or corresponding interface circuitry, as discussed above). Examples of the other output devices 910 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.
The computing device 900 may include other input devices 920 (or corresponding interface circuitry, as discussed above). Examples of the other input devices 920 may include an accelerometer, a gyroscope, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a barcode reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.
Although particular examples of trusted execution environments (e.g., the ME 108 and the IE 112) are discussed herein, this is for illustration only, and the embodiments disclosed herein may be implemented using any desired trusted partition or environment (e.g., SGX or SMM mode).
In some embodiments of the microcloud 102, the ME 108, the IE 112, the security processor 126 (e.g., using SGX and/or TXT), the SES 156, the boot guard 160, and the CIT agent 130 may be used together to secure the microcloud 102, such that the firmware and OS bootloader operations of the platform of the microcloud 102 are protected (e.g., by the ME 108 and the IE 112) and the SES-KEK 114 (and any other KEKs) is stored and protected (e.g., by the ME 108 and the IE 112).
The result is trusted and authenticated boot, and hardware-protected, authenticated key access.
In some embodiments of the microcloud 102, the ME 108, the IE 112, the security processor 126 (e.g., using SGX), UEFI secure boot (in which the firmware of the microcloud 102 checks the system bootloader against a database of authorized keys), secure fuses (in which the keys required for boot (e.g., an initial set of public key hashes) are permanently burned into hardware to provide a hardware root of trust), secure packaging (in which packaging techniques are used that do not allow disclosure of the keys stored in the secure fuses), and secure eMMC/storage (e.g., the use of storage with anti-rollback protection in eMMC, such as a replay protected memory block (RPMB)) may be used together to ensure that the platform of the microcloud 102 boots and operates in a trusted environment, to ensure that configuration information exposed as a pseudo USB (or PCIe) device on the microcloud 102 can be securely accessed and updated, and to ensure that this configuration information is protected by the ME 108 and the IE 112.
In some embodiments of the microcloud 102, the ME 108, the IE 112, the security processor 126 (e.g., using SGX and/or TXT), the boot guard 160, and the CIT agent 130 may be used together to provide a measured boot and chain of trust, to ensure that the attestation of the microcloud 102 (including the ME 108, the IE 112, and the static and dynamic chains of trust) is secure (e.g., not compromised), and to ensure that an out-of-band attestation level can be exposed to external systems.
In some embodiments of the microcloud 102, the ME 108 and the IE 112 may be used together to host an embedded microcloud lifecycle manager. The embedded microcloud lifecycle manager can emulate a read-only device and can expose the emulated device to a host server. Additionally or alternatively, the embedded microcloud lifecycle manager can emulate a logging device and can expose the emulated device to the host server.
Various embodiments disclosed herein may provide one or more advantages over conventional approaches. Some embodiments may provide hardware-enforced integrity and a chain of trust for the entire operating platform in environments where physical security cannot be assured. Some embodiments may provide a secure and tamper-resistant microcloud that remains secure, trusted, and attested through the various stages of its platform lifecycle, without requiring the physical security of a data center. Some embodiments may provide non-spoofable visibility into the operational status and attested trust level of the microcloud. Some embodiments may allow "open platform"-based NFV and SDN solutions to be deployed at remote, unmanned, and unprotected sites in a secure manner. Such solutions can enable many operator use cases, such as MEC and 5G, that can benefit from remote, secure, distributed, independent data processing.
Some embodiments may support 5G and/or IoT.
The following paragraphs provide examples of various embodiments disclosed herein.
Example 1 is a computing device including: a trusted execution environment; a basic input/output system (BIOS) to request a key encryption key (KEK) from the trusted execution environment; and a self-encrypting storage device (SES) associated with the KEK; wherein the trusted execution environment is to verify the BIOS and to provide the KEK to the BIOS after verification of the BIOS, and the BIOS is to provide the KEK to the SES to unlock the SES for access by the trusted execution environment.
Example 2 may include the subject matter of Example 1, and may further specify that the trusted execution environment is a root of trust for the computing device.
Example 3 may include the subject matter of any of Examples 1-2, and may further specify that the trusted execution environment includes operation in a mode in which execution of an operating system of the computing device is suspended.
Example 4 may include the subject matter of any of Examples 1-3, and may further specify that the computing device is a microcloud.
Example 5 may include the subject matter of any of Examples 1-4, and may further specify that the trusted execution environment is to communicate with a remote management computing device to receive updates.
Example 6 may include the subject matter of Example 5, and may further specify that the trusted execution environment includes a lifecycle manager to communicate with the remote management computing device via a RESTful interface.
Example 7 may include the subject matter of Example 6, and may further specify that the lifecycle manager is to emulate a read-only device that exposes configuration parameters to another computing device.
Example 8 may include the subject matter of Example 6, and may further specify that the lifecycle manager is to emulate a read-only device that exposes log or diagnostic information to another computing device.
Example 9 may include the subject matter of any of Examples 1-8, further including virtualized network function (VNF) logic.
Example 10 may include the subject matter of any of Examples 1-9, further including virtual machine (VM) logic.
Example 11 is a networked computing system including: a microcloud, including a trusted execution environment, a basic input/output system (BIOS) to request a key encryption key (KEK) from the trusted execution environment, and a self-encrypting storage device (SES) associated with the KEK, wherein the trusted execution environment is to provide the KEK to the BIOS, and the BIOS is to provide the KEK to the SES to unlock the SES for access by the trusted execution environment; and a microcloud management center, remote from the microcloud, in communication with the trusted execution environment.
Example 12 may include the subject matter of Example 11, and may further specify that the networked computing system is a mobile edge computing (MEC) system.
Example 13 may include the subject matter of any of Examples 11-12, and may further specify that the networked computing system is a fifth-generation mobile network (5G) system.
Example 14 may include the subject matter of any of Examples 11-13, and may further include a plurality of microclouds in communication with the microcloud management center.
Example 15 may include the subject matter of any of Examples 11-14, and may further specify that the trusted execution environment is an operator root of trust.
Example 16 may include the subject matter of any of Examples 11-15, and may further specify that the trusted execution environment is a manufacturer root of trust.
Example 17 may include the subject matter of any of Examples 11-16, and may further specify that the trusted execution environment is to receive an update image from the microcloud management center while an operating system of the microcloud continues to execute.
Example 18 is a method for secure storage access, including: verifying, by a trusted execution environment of a computing device, a basic input/output system (BIOS) of the computing device; in response to verification of the BIOS, providing, by the trusted execution environment to the BIOS, a key encryption key (KEK) for a self-encrypting storage device (SES) of the computing device; and providing, by the BIOS, the KEK to the SES to unlock the SES.
Example 19 may include the subject matter of Example 18, and may further specify that the SES includes a hard drive.
Example 20 may include the subject matter of any of Examples 18-19, wherein platform firmware is stored in the SES.
Example 21 is one or more non-transitory computer-readable media having instructions thereon that, in response to execution by a basic input/output system (BIOS) of a computing device, cause the computing device to: request, from a trusted execution environment of the computing device, a key encryption key (KEK) for a self-encrypting storage device (SES) of the computing device; receive the KEK from the trusted execution environment in response to verification of the BIOS; and provide the KEK to the SES to unlock the SES.
Example 22 may include the subject matter of Example 21, and may further specify that the SES is partitioned and that providing the KEK to unlock the SES includes providing the KEK to decrypt a partition of the SES associated with the KEK.
Example 23 may include the subject matter of any of Examples 21-22, and may further specify that firmware configuration information is stored in the SES.
Example 24 may include the subject matter of any of Examples 21-23, and may further specify that the SES is to use the KEK to unlock a media encryption key (MEK), and that the MEK encrypts data stored in the SES.
Example 25 may include the subject matter of any of Examples 21-24, and may further specify that the computing device is an edge server in a mobile edge computing (MEC) network.
Example 26 is a computing device including: a trusted execution environment; a BIOS to request a KEK from the trusted execution environment; and an SES associated with the KEK; wherein the trusted execution environment is to verify the BIOS and to provide the KEK to the BIOS after verification of the BIOS, and the BIOS is to provide the KEK to the SES to unlock the SES for access by the trusted execution environment.
Example 27 may include the subject matter of Example 26, and may further specify that the trusted execution environment includes an ME and/or an IE.
Example 28 may include the subject matter of any of Examples 26-27, and may further specify that the trusted execution environment includes an SMM.
Example 29 may include the subject matter of any of Examples 26-28, and may further specify that the computing device is a microcloud.
Example 30 may include the subject matter of any of Examples 26-29, and may further specify that the computing device is to communicate with a microcloud management center.
Example 31 may include the subject matter of any of Examples 26-30, and may further include a lifecycle manager.
Example 32 may include the subject matter of Example 31, and may further specify that the lifecycle manager is to emulate a read-only device that exposes configuration parameters to another computing device.
Example 33 may include the subject matter of any of Examples 31-32, and may further specify that the lifecycle manager is to emulate a read-only device that exposes log or diagnostic information to another computing device.
Example 34 may include the subject matter of any of Examples 26-33, and may further specify that the computing device is to execute one or more VNFs.
Example 35 may include the subject matter of any of Examples 26-34, and may further specify that the computing device includes one or more workload VMs.
Example 36 is a networked computing system including the computing device of any of Examples 26-35.
Example 37 may include the subject matter of Example 36, and may further specify that the networked computing system is a MEC system.
Example 38 may include the subject matter of Example 36, and may further specify that the networked computing system is a 5G system.
Example 39 is a method for secure storage access, including: verifying, by a trusted execution environment of a computing device, a BIOS of the computing device; in response to verification of the BIOS, providing, by the trusted execution environment to the BIOS, a KEK for an SES of the computing device; and providing, by the BIOS, the KEK to the SES to unlock the SES.
Example 40 may include the subject matter of Example 39, and may further specify that the computing device is any of the computing devices of Examples 1-10 or Examples 26-35.
Example 41 is an apparatus including means for performing the method of any of Examples 18-20, any of Examples 39-40, any of Examples 43-45, or any other method disclosed herein.
Example 42 is one or more computer-readable media (e.g., non-transitory computer-readable media) having instructions thereon that, in response to execution by one or more processing devices of a computing device, cause the computing device to perform the method of any of Examples 18-20, any of Examples 39-40, any of Examples 43-45, or any other method disclosed herein.
Example 43 is a method for operating a microcloud, including: booting a microcloud remote from a data center, wherein the boot of the microcloud cannot be tampered with by software executed by an operating system of the microcloud; and receiving, at the microcloud, data from a personal mobile computing device.
Example 44 may include the subject matter of Example 43, and may further include: detecting an attempt to tamper with the hardware of the microcloud; and interrupting the boot process in response to detecting the attempt to tamper with the hardware of the microcloud.
Example 45 may include the subject matter of any of Examples 43-44, and may further include performing, by the microcloud, a virtualized network function (VNF) using the data received at the microcloud.
Example 46 is a microcloud including: a security processor; a BIOS in communication with the security processor; an ME and an IE in communication with the BIOS; and an SES in communication with the BIOS; wherein the BIOS is to request a key encryption key (KEK) from the IE, the IE is to verify the BIOS and to provide the KEK to the BIOS after verification of the BIOS, the BIOS is to provide the KEK to the SES to unlock the SES for access by the IE, and the security processor is to run a virtual process after the IE accesses the SES.
Example 47 may include the subject matter of any of Examples 1-42, and may further specify that the trusted execution environment includes processing resources that are hardware- and software-isolated from execution of an operating system on the computing device.
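To tie the pieces of Examples 18-20 together, the following hedged sketch shows the claimed control flow: the trusted execution environment verifies the BIOS, releases the KEK only on success, and the BIOS hands the KEK to the SES to unlock it. `verify_bios` and the untyped `ses` object are hypothetical stand-ins for the platform services described above, not the patent's implementation.

```python
import hashlib, hmac

def verify_bios(image: bytes, expected_digest: bytes) -> bool:
    # Stand-in for the ME/IE signature and measurement checks; a real platform
    # verifies a signature chained to keys fused into hardware.
    return hmac.compare_digest(hashlib.sha256(image).digest(), expected_digest)

class TrustedExecutionEnvironment:
    def __init__(self, ses_kek: bytes, expected_bios: bytes):
        self._ses_kek = ses_kek              # protected inside the ME/IE
        self._expected_bios = expected_bios

    def release_kek(self, bios_image: bytes) -> bytes:
        if not verify_bios(bios_image, self._expected_bios):
            raise PermissionError("BIOS verification failed; boot aborted")
        return self._ses_kek                 # released only after verification

def secure_boot(tee: TrustedExecutionEnvironment, bios_image: bytes, ses) -> None:
    kek = tee.release_kek(bios_image)        # Example 18: verify, then provide KEK
    if not ses.unlock(kek):                  # Example 18: KEK unlocks the SES
        raise RuntimeError("SES unlock failed; boot aborted")
    # Boot continues: load SINIT/OS, measure TBOOT, the kernel, etc.
```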
An electronic device (100) has a conductive shield (105) between first and second regions (196, 198) in a multi-level metallization structure (103). The multi-level metallization structure (103) includes the first region (196) and a capacitor (104) in the first region (196), the capacitor (104) having first and second terminals (106, 108), the first terminal (106) laterally overlapping the second terminal (108) by an overlap distance (139) of 1.0 μm to 6.0 μm, the conductive shield (105) including a first metal line (136) surrounding the first terminal (106), and the first metal line (136) being spaced apart from the first terminal (106) by a gap distance (135) of 0.5 μm to 1.0 μm.
1. An electronic device, comprising:
a multi-level metallization structure over a semiconductor layer, the multi-level metallization structure having a first region, a second region, a pre-metal level over the semiconductor layer, and metallization structure levels over the pre-metal level, the pre-metal level and the metallization structure levels extending in respective planes of orthogonal first and second directions and arranged in a stack along a third direction orthogonal to the first and second directions, and the metallization structure levels including a first metallization structure level and a second metallization structure level;
a capacitor in the first region of the multi-level metallization structure, the capacitor having a first terminal and a second terminal, the first terminal forming a first capacitor plate in the first metallization structure level, the second terminal forming a second capacitor plate in the second metallization structure level, and the first terminal overlapping the second terminal in the first and second directions by an overlap distance, the overlap distance being 1.0 μm to 6.0 μm; and
a conductive shield between the first region and the second region in the multi-level metallization structure, the conductive shield having interconnected metal lines and vias in respective metallization structure levels at least partially surrounding the first region of the multi-level metallization structure, the conductive shield electrically connected to the semiconductor layer, the conductive shield including a first metal line surrounding the first terminal in the first metallization structure level, the first metal line spaced apart from the first terminal in the first and second directions by a gap distance, and the gap distance being 0.5 μm to 1.0 μm.
2. The electronic device of claim 1, wherein the overlap distance is 2.0 μm to 5.0 μm.
3. The electronic device of claim 2, wherein the gap distance is 0.55 μm to 0.75 μm.
4. The electronic device of claim 1, wherein the gap distance is 0.55 μm to 0.75 μm.
5. The electronic device of claim 1, further comprising a bilayer structure in one of the metallization structure levels, the bilayer structure having a silicon oxynitride layer and a silicon nitride layer on the silicon oxynitride layer, the second terminal being on a portion of the silicon nitride layer.
6. The electronic device of claim 5, further comprising a trench in the bilayer structure spaced laterally outward from the second terminal in the first and second directions, the trench extending through the silicon nitride layer along the third direction and partially into the silicon oxynitride layer along the third direction, and the trench filled with a dielectric material of the second metallization structure level.
7. The electronic device of claim 1, further comprising a bilayer structure in one of the metallization structure levels, the bilayer structure having a silicon oxynitride layer and a silicon nitride layer on the silicon oxynitride layer, the second terminal being on a portion of the silicon nitride layer.
8. The electronic device of claim 1, further comprising a transistor on or in the semiconductor layer.
9.
A packaged electronic device, comprising:
a semiconductor die, including:
a semiconductor layer;
a multi-level metallization structure over the semiconductor layer, the multi-level metallization structure having a first region, a second region, a pre-metal level over the semiconductor layer, and metallization structure levels over the pre-metal level, the pre-metal level and the metallization structure levels extending in respective planes of orthogonal first and second directions and arranged in a stack along a third direction orthogonal to the first and second directions, and the metallization structure levels including a first metallization structure level and a second metallization structure level;
a capacitor in the first region of the multi-level metallization structure, the capacitor having a first terminal and a second terminal, the first terminal forming a first capacitor plate in the first metallization structure level, the second terminal forming a second capacitor plate in the second metallization structure level, the first terminal overlapping the second terminal in the first and second directions by an overlap distance, the overlap distance being 1.0 μm to 6.0 μm, and the second terminal including an exposed side; and
a conductive shield between the first region and the second region in the multi-level metallization structure, the conductive shield having interconnected metal lines and vias in respective metallization structure levels at least partially surrounding the first region of the multi-level metallization structure, the conductive shield coupled to the semiconductor layer, the conductive shield including a first metal line surrounding the first terminal in the first metallization structure level, the first metal line spaced apart from the first terminal in the first and second directions by a gap distance, and the gap distance being 0.5 μm to 1.0 μm;
an electrical connection including an end joined to the exposed side of the second terminal;
a package structure that encloses the semiconductor die and the electrical connection; and
conductive leads exposed along one or more sides of the package structure.
10. The packaged electronic device of claim 9, further comprising a second semiconductor die that includes a conductive feature, the electrical connection including a second end joined to the conductive feature of the second semiconductor die, and the package structure enclosing the second semiconductor die.
11. The packaged electronic device of claim 9, wherein the overlap distance is 2.0 μm to 5.0 μm.
12. The packaged electronic device of claim 11, wherein the gap distance is 0.55 μm to 0.75 μm.
13. The packaged electronic device of claim 9, wherein the gap distance is 0.55 μm to 0.75 μm.
14. The packaged electronic device of claim 9, further comprising a bilayer structure in one of the metallization structure levels, the bilayer structure having a silicon oxynitride layer and a silicon nitride layer on the silicon oxynitride layer, the second terminal being on a portion of the silicon nitride layer.
15. The packaged electronic device of claim 14, further comprising a trench in the bilayer structure spaced laterally outward from the second terminal in the first and second directions, the trench extending through the silicon nitride layer along the third direction and partially into the silicon oxynitride layer along the third direction, and the trench filled with a dielectric material of the second metallization structure level.
16.
A method of forming an integrated circuit, the method comprising:
forming a multi-level metallization structure over a semiconductor layer, the multi-level metallization structure having: a first region; a second region; a capacitor in the first region; a pre-metal level over the semiconductor layer; metallization structure levels over the pre-metal level; and a conductive shield between the first region and the second region; wherein the pre-metal level and the metallization structure levels extend in respective planes of orthogonal first and second directions and are arranged in a stack along a third direction orthogonal to the first and second directions, and the metallization structure levels include a first metallization structure level and a second metallization structure level; the capacitor has a first terminal and a second terminal, the first terminal forming a first capacitor plate in the first metallization structure level, and the second terminal forming a second capacitor plate in the second metallization structure level; the first terminal overlaps the second terminal in the first and second directions by an overlap distance, the overlap distance being 1.0 μm to 6.0 μm; and the conductive shield has interconnected metal lines and vias in respective metallization structure levels at least partially surrounding the first region of the multi-level metallization structure, the conductive shield is coupled to the semiconductor layer, the conductive shield includes a first metal line surrounding the first terminal in the first metallization structure level, the first metal line is spaced apart from the first terminal in the first and second directions by a gap distance, and the gap distance is 0.5 μm to 1.0 μm;
separating a semiconductor die that includes the semiconductor layer and the multi-level metallization structure from a wafer;
forming an electrical connection to the second terminal of the capacitor; and
enclosing the semiconductor die and the electrical connection in a package structure with conductive leads exposed along one or more sides of the package structure.
17. The method of claim 16, wherein the overlap distance is 2.0 μm to 5.0 μm.
18. The method of claim 16, wherein the gap distance is 0.55 μm to 0.75 μm.
19. The method of claim 16, wherein forming the multi-level metallization structure comprises:
forming a bilayer structure in one of the metallization structure levels, the bilayer structure having a silicon oxynitride layer and a silicon nitride layer on the silicon oxynitride layer, the second terminal being on a portion of the silicon nitride layer; and
forming a trench in the bilayer structure, the trench spaced laterally outward from the second terminal in the first and second directions, the trench extending through the silicon nitride layer along the third direction and partially into the silicon oxynitride layer along the third direction, and the trench filled with a dielectric material of the second metallization structure level.
20.
An integrated circuit, comprising:
an upper capacitor plate in an upper metal level above a semiconductor substrate, and a lower capacitor plate in a lower metal level above the semiconductor substrate, the upper capacitor plate overlapping the lower capacitor plate such that the lower capacitor plate extends completely laterally beyond the upper capacitor plate by an overlap distance in the range from 1.0 μm to 6.0 μm; and
a ground ring in the lower metal level, the ground ring surrounding the lower capacitor plate, and the ground ring spaced laterally from the lower capacitor plate by a gap distance in the range from 0.5 μm to 1.0 μm.
21. The integrated circuit of claim 20, wherein the overlap distance is 2.0 μm to 5.0 μm.
22. The integrated circuit of claim 20, wherein the gap distance is 0.55 μm to 0.75 μm.
23. The integrated circuit of claim 22, wherein the overlap distance is 2.0 μm to 5.0 μm.
24. The integrated circuit of claim 20, wherein the gap distance is about 0.55 μm and the overlap distance is about 3.0 μm.
25. The integrated circuit of claim 20, wherein the gap distance is 0.55 μm ± 0.25 μm and the overlap distance is 3.0 μm ± 0.15 μm.
INTEGRATED ISOLATION CAPACITOR WITH REINFORCED BOTTOM PLATE
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application Serial No. 63/234,388, filed August 18, 2021 and entitled "Bottom Plate Electric Field Optimization for Enhanced Performance of a Galvanic Isolation Capacitor," the contents of which are hereby incorporated by reference in their entirety.
BACKGROUND
An electronic device may have circuits and/or components in multiple voltage domains, such as low-voltage logic circuitry in a low-voltage domain and communication driver circuitry in a second, high-voltage domain. In normal operation, high-voltage digital isolators provide a communication channel between the different voltage domains while protecting the low-voltage circuits from harmful currents or voltages on the high-voltage domain that could degrade the device. In galvanic isolation capacitor systems used for single-die reinforced (SDR) isolation, the isolation capability may be limited by dielectric breakdown near the capacitor plates, which occurs when a peak electric field (Epk) exceeds or approaches the dielectric strength of the surrounding dielectric.
SUMMARY
In one aspect, an electronic device includes a semiconductor layer, a multi-level metallization structure over the semiconductor layer, a capacitor, and a conductive shield. The multi-level metallization structure has a first region, a second region, a pre-metal level on the semiconductor layer, and metallization structure levels above the pre-metal level. The pre-metal level and the metallization structure levels extend in respective planes of orthogonal first and second directions and are arranged in a stack along a third direction orthogonal to the first and second directions. The metallization structure levels include a first metallization structure level and a second metallization structure level. The capacitor is in the first region of the multi-level metallization structure and has first and second terminals that form respective first and second capacitor plates in the first and second metallization structure levels. The first terminal overlaps the second terminal in the first and second directions by an overlap distance of 1.0 μm to 6.0 μm. The conductive shield is between the first and second regions and is coupled to the semiconductor layer. The conductive shield has interconnected metal lines and vias in respective metallization structure levels at least partially surrounding the first region. The conductive shield includes a first metal line surrounding the first terminal in the first metallization structure level, and the first metal line is spaced apart from the first terminal by a gap distance of 0.5 μm to 1.0 μm.
In another aspect, a packaged electronic device includes a semiconductor die having a semiconductor layer, a multi-level metallization structure over the semiconductor layer, a capacitor, and a conductive shield, as well as an electrical connection, a package structure, and conductive leads. The multi-level metallization structure has a first region, a second region, a pre-metal level on the semiconductor layer, and metallization structure levels above the pre-metal level. The pre-metal level and the metallization structure levels extend in respective planes of orthogonal first and second directions and are arranged in a stack along a third direction orthogonal to the first and second directions. The metallization structure levels include a first metallization structure level and a second metallization structure level.
The capacitor is in the first region of the multilevel metallization structure and has first and second terminals forming respective first and second capacitor plates in the first and second metallization levels . The first terminal overlaps the second terminal by an overlapping distance of 1.0 μm to 6.0 μm in the first and second directions. A conductive shield is between the first and second regions and coupled to the semiconductor layer. The conductive shield has interconnecting metal lines and vias in the respective metallization levels at least partially surrounding the first region. The conductive shield includes a first metal line surrounding the first terminal in the first metallization level, and the first metal line is spaced apart from the first terminal by a gap of 0.5 μm to 1.0 μm distance. The electrical connector has an end soldered or bonded to the exposed side of the second terminal, the package structure encloses the semiconductor die and the electrical connector, and the conductive leads are along the package One or more sides of the structure are exposed.In another aspect, a method includes forming: a multilevel metallization structure over a semiconductor layer, wherein the multilevel metallization structure is over the semiconductor layer; a capacitor; and a conductive shield. The multi-level metallization structure has a first region, a second region, a pre-metal level on the semiconductor layer, and a metallization structure above the pre-metal level. The pre-metal level and the metallization structure level extend in respective planes of orthogonal first and second directions and are arranged in a stack along a third direction orthogonal to the first and second directions. The metallization structure level includes a first metallization structure level and a second metallization structure level. The capacitor is in the first region of the multilevel metallization structure and has first and second terminals forming respective first and second capacitor plates in the first and second metallization levels . The first terminal overlaps the second terminal by an overlapping distance of 1.0 μm to 6.0 μm in the first and second directions. A conductive shield is between the first and second regions and coupled to the semiconductor layer. The conductive shield has interconnecting metal lines and vias in the respective metallization levels at least partially surrounding the first region. The conductive shield includes a first metal line surrounding the first terminal in the first metallization level, and the first metal line is spaced apart from the first terminal by a gap of 0.5 μm to 1.0 μm distance. The method also includes: separating a semiconductor die including the semiconductor layer and the multilevel metallization structure from a wafer; forming an electrical connection to the second terminal of the capacitor; and separating the semiconductor die The sheet and the electrical connections are enclosed in a packaging structure with conductive leads exposed along one or more sides of the packaging structure.Description of drawings1 is a partial cross-sectional side view of an isolated portion of an electronic device in which an isolation capacitor in a first region is surrounded by a conductive shield in a multilevel metallization structure above a semiconductor layer.FIG. 1A is a partial cross-sectional top plan view taken along line 1A-1A in the electronic device of FIG. 
FIG. 1B is a partial cross-sectional top plan view taken along line 1B-1B in the electronic device of FIG. 1.

FIG. 1C is a partial cross-sectional top plan view taken along line 1C-1C in the electronic device of FIG. 1.

FIG. 1D is a partial cross-sectional side view of an active region of an electronic device.

FIG. 2A is a schematic diagram of one example of a packaged electronic device including a high voltage capacitor on the device of FIGS. 1-1D and high voltage capacitors on first and second additional semiconductor dies.

FIG. 2B is a schematic diagram of another example of a packaged electronic device with a high voltage capacitor on the device of FIGS. 1-1D.

FIG. 2C is a schematic diagram of another example of a packaged electronic device including high voltage capacitors on first and second additional semiconductor dies.

FIG. 3 is a flowchart of a method of manufacturing a packaged electronic device.

FIGS. 4 to 23 are partial cross-sectional side views of the device of FIGS. 1 to 1D undergoing a metallization structure fabrication process according to the method of FIG. 3.

FIG. 24 is a partial top plan view showing a portion of a leadframe with an attached semiconductor die after undergoing wire bonding.

FIG. 25 is a perspective view of a packaged electronic device.

FIG. 26 is a graph of the ratio of the bottom plate electric field to the top plate electric field as a function of the distance of the bottom plate from the ground ring of the integrated Faraday shield, for various amounts of bottom plate and top plate overlap.

DETAILED DESCRIPTION

In the drawings, like reference numerals designate like elements throughout, and various features are not necessarily drawn to scale. Also, the term "couple" or "couples" includes indirect or direct electrical or mechanical connections or combinations thereof. For example, if a first device is coupled to or with a second device, that connection may be through a direct electrical connection or through an indirect electrical connection through one or more intervening devices and connections. One or more operational characteristics of various circuits, systems, and/or components are described below in the context of functionality that, in some cases, results from the configuration and/or interconnection of various structures when the circuitry is powered and operating.

Referring first to FIGS. 1 to 1D, FIG. 1 shows a partial cross-sectional side view of an electronic device 100 having a semiconductor layer 101, and FIG. 1D shows a partial cross-sectional side view of the active region of the electronic device 100 with transistors T1 and T2 formed on or in the semiconductor layer 101. In one example, the semiconductor layer 101 is or includes a p-type semiconductor material with isolation structures (e.g., shallow trench isolation or STI structures) formed on or in the top side of the semiconductor layer 101. In one example, the semiconductor layer 101 is a silicon layer, a silicon germanium layer, a silicon-on-insulator (SOI) structure, or another or additional layer having a semiconductor material. In some examples, the semiconductor layer 101 may be a semiconductor wafer (e.g., a handle wafer) or a semiconductor layer on a wafer, e.g., an epitaxial layer on a handle wafer. In various examples, the semiconductor layer 101 may be referred to as a semiconductor substrate.

The electronic device 100 includes a multilevel metallization structure 103 disposed over (e.g., disposed on and directly contacting) the top side of the semiconductor layer 101.
In addition, the electronic device 100 includes a capacitor 104 and a conductive shield 105 in the multilevel metallization structure 103. The multilevel metallization structure 103 has a first region 196, a second region 198, a pre-metal level 110 on the semiconductor layer 101, and metallization structure levels 120, 130, 140, 150, 160, 170, and 180. The pre-metal level 110 and the metallization structure levels 120, 130, 140, 150, 160, 170, and 180 extend in respective planes of orthogonal first and second directions X and Y and are arranged in a stack along a third direction Z orthogonal to the first and second directions X and Y.

As discussed further below, the capacitor 104 is in the first region 196 of the multilevel metallization structure 103, and the conductive shield 105 is formed from interconnected metal lines and trench contacts and vias of the multilevel metallization structure 103 to provide a Faraday cage around the first region 196 and the capacitor 104. The conductive shield 105 separates the first region 196 from the outer second region 198 of the multilevel metallization structure 103. As shown in FIG. 1D, in one embodiment, the electronic device 100 further includes other circuitry (such as low-voltage logic circuitry), for example transistors T1 and T2 formed on and/or in the semiconductor layer 101 below the outer second region 198 of the multilevel metallization structure 103. The capacitor 104 in FIG. 1 comprises a first (e.g., lower or bottom) terminal 106 (e.g., a first capacitor plate) spaced apart from the semiconductor layer 101 by a distance 107 (e.g., 2.8 μm). The capacitor 104 also includes a second (e.g., upper or top) terminal 108 (e.g., a second capacitor plate). In one example shown in FIG. 1C, the conductive shield 105 includes a gap in one of the constituent metal layers, and a metal routing feature 125 connects the low voltage logic circuitry of the second region to the first terminal 106 of the capacitor 104.

In this or another example, the electronic device includes two or more isolation capacitors 104 each having a circular first terminal 106 and second terminal 108, for example in the first region 196. In other examples, one or both of the first and second capacitor terminals 106 and 108 may have different shapes. In another embodiment, the first plates 106 of two capacitors 104 are electrically coupled to each other to form a series combination of the two capacitors 104, to isolate high and low voltage circuits of the electronic device 100 through circuit connections to the respective second terminals 108 of the series-connected capacitors 104. The first terminal 106 and the second terminal 108 are substantially parallel to each other, although this is not a strict requirement for all possible implementations. Additionally, the first terminal 106 and the second terminal 108 are separated from each other by a distance 109 (e.g., 17.5 to 20.5 μm), wherein the layer of dielectric material between the terminals 106 and 108 forms the capacitor dielectric of the capacitor 104. In the illustrated example, the second terminal 108 of the capacitor 104 includes an exposed top side 192 that allows a bond wire or another connection to the second terminal 108 for electrical coupling to a high voltage domain terminal of a second semiconductor die (e.g., FIG. 2A below).
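For intuition about the plate geometry, the structure behaves to first order as a parallel-plate capacitor. The following is a minimal sketch (not part of the patent) estimating the capacitance from the stated plate separation; the circular plate radius and the nominal relative permittivity of SiO2 (about 3.9) are assumptions of this sketch, not values from the text.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R_SIO2 = 3.9   # nominal relative permittivity of SiO2 (assumed)

def parallel_plate_capacitance(radius_um: float, separation_um: float) -> float:
    """Estimate the capacitance (in farads) of a circular parallel-plate capacitor."""
    area = math.pi * (radius_um * 1e-6) ** 2
    return EPS0 * EPS_R_SIO2 * area / (separation_um * 1e-6)

# Illustrative plate radius of 50 um (assumed); separation 109 is 17.5 to 20.5 um per the text.
for d in (17.5, 20.5):
    c = parallel_plate_capacitance(radius_um=50.0, separation_um=d)
    print(f"d = {d} um -> C ~ {c * 1e15:.2f} fF")
```

With these assumed dimensions the estimate lands in the tens of femtofarads, the scale typical of on-die isolation capacitors.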
The multi-level metallization structure 103 includes the pre-metal level 110 and an integer number N of metallization structure levels, where N is greater than two. The example of FIGS. 1-1D includes N=7 metallization structure levels. In the illustrated example, pairs of double-stacked trench via loops are included in certain metallization structure levels to increase the capacitor dielectric thickness in an integrated fabrication process that provides transistor circuitry and high voltage isolation capacitors in active and isolation regions. The electronic device 100 incorporates various features detailed further below to provide integrated series-connected high voltage isolation capacitors in a multilevel metallization structure 103 with a thickness of about 12 to 13 μm tailored for low voltage active circuitry, wherein reinforced isolation is achieved by series capacitors 104 having a combined capacitor dielectric (e.g., SiO2) thickness of up to about 20 μm, and/or single-capacitor reinforced isolation is achieved by a single capacitor 104 having a capacitor dielectric (e.g., SiO2) thickness of up to about 20.5 μm on a single die. The electronic device 100 in FIG. 1 uses dual stacked vias in the pair of metallization structure levels 160 and 170 and in the pair of metallization structure levels 140 and 150. The conductive metal routing features provide five routing levels or layers in this example, indicated in FIG. 1 as M1 in metallization structure level 120, M2 in metallization structure level 130, M3 in metallization structure level 140, M4 in metallization structure level 160, and M5 in metallization structure level 180. In some examples, the illustrated electronic device 100 also includes enhanced high voltage isolation capability through the use of a silicon oxynitride/silicon nitride bilayer beneath the second terminal 108. The bilayer includes an upper silicon nitride layer 182 and a silicon oxynitride layer 181, and is sometimes referred to as the SO bilayer 181, 182 or simply the "SO bilayer". A trench in the SO bilayer 181, 182 is spaced laterally outward from the second terminal 108 and extends through the upper silicon nitride layer 182, partially extending into and stopping in the silicon oxynitride layer 181.

The pre-metal level 110 includes a pre-metal dielectric (PMD) layer 111 on the semiconductor layer 101. In one example, the PMD layer 111 is or includes silicon dioxide (SiO2) having a thickness of about 1.0 μm. The pre-metal level 110 includes conductive cylindrical pre-metal contacts 114 and a pre-metal trench contact 118 on the semiconductor layer 101. The contacts 114 and 118 are or include tungsten in one example but can be or include one or more other conductive metals. The contacts 114 and 118 extend through the PMD layer 111 along the vertical (e.g., Z) direction in FIG. 1. The trench contact 118 surrounds a central first portion of the PMD layer 111 in the pre-metal level 110 without a gap. In one embodiment, the multilevel metallization structure 103 includes aluminum conductive wiring features or traces patterned according to the device design and tungsten contacts in levels with corresponding silicon dioxide (e.g., TEOS oxide or SiO2) layers. In other implementations, different dielectrics and/or conductive metal materials such as copper may be used.
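Returning to the series-connected reinforced isolation described above: two equal capacitors in series halve the capacitance while their dielectric thicknesses add, which is why two capacitors whose individual dielectrics sum to about 20 μm provide the combined dielectric thickness noted in the text. A minimal sketch (the per-capacitor values below are illustrative assumptions, not figures from the patent):

```python
def series_capacitance(c1_ff: float, c2_ff: float) -> float:
    """Series combination of two capacitors: 1/C = 1/C1 + 1/C2."""
    return (c1_ff * c2_ff) / (c1_ff + c2_ff)

# Two equal capacitors, each with an illustrative ~10 um dielectric (assumed values).
c_each_ff = 100.0   # fF, illustrative
d_each_um = 10.0    # um, per-capacitor dielectric thickness, illustrative
print(series_capacitance(c_each_ff, c_each_ff))  # 50.0 fF -> half the single capacitance
print(d_each_um + d_each_um)                     # 20.0 um -> combined dielectric thickness
```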
In the illustrated example, the pre-metal trench contact 118 is connected to the semiconductor layer 101 between the STI structures 102 shown in FIG. 1 to form a ground connection, providing a lower section of the conductive shield 105 that operates as a grounded Faraday cage surrounding the capacitor 104 and protecting surrounding circuitry in the second region 198 from adverse effects of the high electric field present in the first region 196. The pre-metal level 110 also includes trench contacts 118 in a region 119 near the periphery of the illustrated portion of the electronic device 100, for example, to provide protection against cracks and mechanical stress on the device 100 and to provide a barrier against ingress of external ionic contamination at the edge of the die, although this is not a strict requirement for all possible implementations.

As shown in FIGS. 1 and 1C, the multilevel metallization structure 103 also includes an initial metallization structure level 120 (e.g., labeled M1 in FIG. 1) on the pre-metal level 110. The metallization structure level 120 includes a first inter-level dielectric (ILD) layer 121 and conductive metal wiring lines 122 (such as aluminum or copper) and cylindrical wiring vias 124 (such as tungsten) in a second portion of the multilevel metallization structure 103. The metallization structure level 120 also includes a metal routing feature 125 in a first portion of the multilevel metallization structure 103, a first metal line 126, and a routing via 127 on the metal routing feature 125. The first metal line 126 is or includes aluminum metal having a thickness of about 0.61 μm along the Z direction in FIG. 1 in one example. In addition, the metallization structure level 120 includes a first trench via 128 on the first metal line 126. The first ILD layer 121 extends over the PMD layer 111, the metal routing feature 125, and the first metal line 126. The first metal line 126 extends at least partially over the pre-metal trench contact 118, and the first trench via 128 extends over the first metal line 126. The first trench via 128 and the first metal line 126 surround another portion of the first region 196 of the multilevel metallization structure 103 in the first metallization structure level 120. The first ILD layer 121 is or includes silicon dioxide (SiO2) having a thickness (e.g., along the Z direction in FIG. 1) of about 1.0 μm in one example.

As shown in FIG. 1C, the metal routing feature 125 extends from the first region through a gap G in the first metal line 126 to the second region of the first metallization structure level 120 in one example. The metallization structure level 120 also includes a first trench via 128 in the region 119 near the periphery of the illustrated portion of the electronic device 100, for example, to provide crack suppression during die singulation and to provide a barrier against ingress of external ionic contamination at the edge of the die, although this is not a strict requirement for all possible implementations. The metallization structure level 120 and the other metallization structure levels in the examples of FIGS. 1-1D include contacts 114 and trench contacts 118 that are or include tungsten, although this is not a requirement for all possible implementations. Additionally, the first metallization structure level 120 and the other metallization structure levels of the multilevel metallization structure 103 include metal lines that are or include aluminum or copper, although this is not a requirement for all possible implementations.
Metallization structure level 130 (labeled M2 in FIG. 1) extends over metallization structure level 120 in the multilevel metallization structure 103. Metallization structure level 130 includes a second ILD layer 131 and conductive metal lines 132 and cylindrical tungsten vias 134 in the second region 198 of the multilevel metallization structure 103. The second ILD layer 131 is or includes silicon dioxide having a thickness along Z of about 1.2 μm in one example. The metallization structure level 130 further includes a second metal line 136 at least partially on the first trench via 128 and a second trench via 138 on the second metal line 136. The second metal line 136 is or includes aluminum metal having a thickness of about 0.61 μm along the Z direction in FIG. 1 in one example. Metallization structure level 130 also includes the first terminal 106 of the capacitor 104, laterally spaced from the second metal line 136 by a gap distance 135 in the first and second directions X and Y. The gap distance 135 is 0.5 μm to 1.0 μm in one example. In another embodiment, the gap distance 135 is 0.55 μm to 0.75 μm. Herein, "lateral", "laterally", and similar terms refer to dimensions or directions in the X-Y plane marked in FIG. 1, e.g., parallel to the top surface of the semiconductor layer 101.

The first terminal 106 of the capacitor 104 extends at least partially over and contacts the routing via 127 on the metal routing feature 125 in a central first portion of the metallization structure level 130. The second ILD layer 131 extends over the first ILD layer 121, the second metal line 136, and the first terminal 106 in this example. The second trench via 138 extends over the second metal line 136. The second trench via 138 and the second metal line 136 surround a second portion of the first region 196 of the metallization structure level 130. As shown in FIG. 1, the second metal line 136 includes lateral edges (left and right along the X direction in FIGS. 1 to 1D) that are spaced from corresponding lateral edges of the second terminal 108 by a corresponding non-zero separation distance 137. Metallization structure level 130 also includes trench vias 138 in the region 119 near the periphery of the illustrated portion of the electronic device 100, although this is not a strict requirement for all possible implementations.

Metallization structure level 140 (labeled M3 in FIG. 1) extends over metallization structure level 130 and includes a third ILD layer 141 and conductive metal lines 142 and cylindrical tungsten vias 144 in the second region 198 of the multilevel metallization structure 103. The third ILD layer 141 is or includes silicon dioxide having a thickness of about 3 μm along the Z direction in one example. The metallization structure level 140 further includes a third metal line 146 at least partially on the second trench via 138 and a third trench via 148 on the third metal line 146. The third metal line 146 is or includes aluminum metal having a thickness of about 1.3 μm along the Z direction in FIG. 1 in one example. The third ILD layer 141 extends over the second ILD layer 131 and the third metal line 146 in this example. The third trench via 148 extends through the third ILD layer 141 on the third metal line 146. The third trench via 148 and the third metal line 146 surround a portion of the first region 196 of the metallization structure level 140. As shown in FIG. 1, the third metal line 146 includes lateral edges that are spaced apart from respective lateral edges of the second terminal 108 by a non-zero separation distance 147 along the X direction.
The conductive shield 105 thus includes the stepped shape shown in FIG. 1, with the non-zero separation distance 147 (e.g., about 30 μm) being greater than the separation distance 137 of the previous (e.g., underlying) metallization structure level 130. Metallization structure level 140 also includes a trench via 148 in the region 119 near the periphery of the illustrated portion of the electronic device 100, although this is not a strict requirement for all possible implementations.

Metallization structure level 150 extends over metallization structure level 140 and includes a fourth ILD layer 151 and stacked cylindrical tungsten vias 154 in the second region 198 of the multilevel metallization structure 103. The fourth ILD layer 151 is or includes silicon dioxide having a thickness of about 3 μm along the Z direction in one example. Metallization structure level 150 further includes a fourth trench via 158 stacked on the previous trench via 148. The fourth ILD layer 151 extends over the third ILD layer 141 in this example. The fourth trench via 158 extends through the fourth ILD layer 151 and surrounds a portion of the first region 196 of the metallization structure level 150. Metallization structure level 150 also includes a trench via 158 in the region 119 near the periphery of the illustrated portion of the electronic device 100, although this is not a strict requirement for all possible implementations.

Metallization structure level 160 (labeled M4 in FIG. 1) extends over metallization structure level 150 in the multilevel metallization structure 103. Metallization structure level 160 includes a fifth ILD layer 161 and conductive metal lines 162 and 166 and cylindrical tungsten vias 164 in the second portion of the multilevel metallization structure 103. The fifth ILD layer 161 is or includes silicon dioxide having a thickness of about 3 μm along the Z direction in one example. The metallization structure level 160 further includes a fifth metal line 166 at least partially on the fourth trench via 158 and a fifth trench via 168 on the fifth metal line 166. The fifth metal line 166 is or includes aluminum metal having a thickness of about 1.3 μm along the Z direction in FIG. 1 in one example. The fifth ILD layer 161 extends over the fourth ILD layer 151 and the fifth metal line 166 in this example. The fifth trench via 168 extends through the fifth ILD layer 161 on the fifth metal line 166. The fifth trench via 168 and the fifth metal line 166 surround a portion of the first region of the fifth metallization structure level 160. As shown in FIG. 1, the fifth metal line 166 includes lateral edges that are spaced along the X direction from corresponding lateral edges of the second terminal 108 by a non-zero spacing distance 167 (e.g., about 50 μm) that is greater than the spacing distance 147 of the lower metallization structure level 140. Metallization structure level 160 also includes trench vias 168 in the region 119 near the periphery of the illustrated portion of the electronic device 100, although this is not a strict requirement for all possible implementations.

The (sixth) metallization structure level 170 extends over metallization structure level 160 in the multilevel metallization structure 103. Metallization structure level 170 includes a sixth ILD layer 171 and stacked cylindrical tungsten vias 174 in the second region 198 of the multilevel metallization structure 103.
The sixth ILD layer 171 is or includes silicon dioxide having a thickness along the Z direction of about 3 μm in one example. Metallization structure level 170 further includes a sixth trench via 178 on the trench via 168. The sixth ILD layer 171 extends over the fifth ILD layer 161 in this example. The sixth trench via 178 surrounds a portion of the first region 196 of the metallization structure level 170. Metallization structure level 170 also includes trench vias 178 in the region 119 near the periphery of the illustrated portion of the electronic device 100, although this is not a strict requirement for all possible implementations.

The example multilevel metallization structure 103 in FIGS. 1-1D has N metallization structure levels, where N=7. An uppermost or top (e.g., Nth or seventh) metallization structure level 180 (labeled M5 in FIG. 1) extends over metallization structure level 170 in the multilevel metallization structure 103. The metallization structure level 180 comprises the SO bilayer 181, 182, with a 0.3 μm thick layer 181 that is or includes silicon oxynitride (SiON) and a 0.55 μm thick layer 182 that is or includes silicon nitride (SiN). The metallization structure level 180 includes a dielectric layer 183 (e.g., SiO2) and conductive metal lines 184 in the second region 198 of the multilevel metallization structure 103, some of which provide conductive die pads (not shown) for connection to external components (e.g., conductive features of another die or a leadframe, not shown). A dielectric layer 185 (e.g., silicon oxynitride) extends over portions of the layer 183 and has a thickness of 2.8 μm in one example. The dielectric layers 183 and 185 together form a protective overcoat (PO) stack 189.

In addition, the metallization structure level 180 includes the second terminal 108 of the capacitor 104 and a seventh (e.g., Nth) metal line 186. The second terminal 108 is laterally spaced apart from the Nth metal line 186 shown in FIGS. 1 and 1A by a non-zero spacing distance 187 (e.g., about 75 μm) that is greater than the spacing distance 167 of the previous (e.g., underlying) metallization structure level 160. The dielectric layer 183 is or includes silicon dioxide having a thickness along the Z direction of about 4.5 μm in one example. The seventh metal line 186 extends at least partially over the sixth trench via 178. The seventh metal line 186 is or includes aluminum metal having a thickness of about 1.3 μm along the Z direction in FIG. 1 in one example. The dielectric layer 183 extends in this example over the SO bilayer 181, 182 and over the sixth ILD layer 171 in the trench 193 of the SO bilayer. In addition, the dielectric layer 183 extends over a portion of the second terminal 108 and over the seventh metal line 186. The seventh metal line 186 surrounds the upper portion of the first region of the metallization structure level 180 to complete the conductive shield 105.

The example electronic device 100 in FIGS. 1-1D includes a capacitor 104 having a first terminal 106 in metallization structure level 130 and a second terminal 108 in metallization structure level 180. In other implementations, the respective first terminal 106 and second terminal 108 may be in different ones of the metallization structure levels 120, 130, 140, 150, 160, 170, 180 and may (but need not) be in adjacent levels. In the illustrated example, furthermore, the stepped shape of the conductive shield 105 includes gradually increasing separation distances 137, 147, 167, and 187, although this is not a strict requirement for all possible implementations.
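As a rough cross-check of the stated plate separation, the example dielectric thicknesses given above for the levels between the first terminal 106 (level 130) and the second terminal 108 (level 180) can simply be summed. The sketch below (not part of the patent) uses the example values from the text; the exact accounting of partial layers is an approximation of this sketch.

```python
# Example per-layer dielectric thicknesses (um) between the capacitor plates,
# taken from the values stated in the text; partial-layer accounting is approximate.
layers_um = {
    "ILD 141 (M3 level)": 3.0,
    "ILD 151": 3.0,
    "ILD 161 (M4 level)": 3.0,
    "ILD 171": 3.0,
    "SiON 181": 0.3,
    "SiN 182": 0.55,
    "oxide 183 (partial)": 4.5,
}
total = sum(layers_um.values())
print(f"approximate plate separation: {total:.2f} um")  # ~17.35 um, near the stated 17.5 to 20.5 um range
```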
Additionally, various embodiments include conductive shields having non-stepped shapes. The illustrated stepped shape advantageously provides a substantially consistent spacing between the second terminal 108 of the capacitor 104 and the conductive shield 105. The conductive shield 105 provides a substantially continuous conductive metal (e.g., copper, tantalum nitride, titanium, titanium nitride, aluminum, tungsten) Faraday cage or shield structure connected to the semiconductor layer 101 by the trench contact 118 to protect the circuitry of the second region 198 from adverse effects of high electric fields in the first region 196. In one example, copper-doped aluminum lines are sandwiched by titanium nitride, and copper lines are encapsulated on three sides by tantalum nitride. The trench vias are or include tungsten, or copper for copper damascene schemes in certain embodiments, to form the entire conductive shield 105 or portions thereof.

In one embodiment, the second terminal 108 of the capacitor 104 is electrically connected to receive a high voltage signal from circuitry (e.g., of a second semiconductor die) in a different voltage domain than the circuitry of the second region 198 of the multilevel metallization structure 103. In one example, the electronic device 100 includes low voltage logic circuitry (e.g., transistors T1 and T2 in FIG. 1D) with connection and wiring structures in the second region 198 of the multilevel metallization structure 103. In the illustrated embodiment, the conductive shield 105 consists of the interconnected metal lines 126, 136, 146, 166, and 186 and the trench contacts/vias 118, 128, 138, 148, 158, 168, and 178 surrounding the first region 196 of the multilevel metallization structure 103 in the respective metallization structure levels 120, 130, 140, 150, 160, 170, and 180, with only one or more small gaps G (e.g., FIG. 1C) through which the first terminal 106 is electrically connected to the low voltage circuitry of the second region 198 of the multilevel metallization structure 103. In another embodiment, the first terminals 106 of two or more of the capacitors 104 are connected together in the first region 196 without a gap in the conductive shield 105. In another embodiment, the trench contacts/vias 118, 128, 138, 148, 158, 168, and 178 may be discontinuous arrays of contacts/vias 114, 124, 134, 144, 154, 164, and 174.

The electronic device 100 includes, in one example, two or more capacitors 104 (e.g., capacitors with top plates or second terminals 108 as seen in FIG. 1A). In one implementation, one or more pairs of capacitors 104 are provided in the first region 196 of the multilevel metallization structure 103, and the conductive shield 105 provides a single grounded Faraday cage structure surrounding all of the capacitors 104. In an alternative embodiment, a plurality of conductive shields 105 are created in the multilevel metallization structure 103 to provide a plurality of grounded Faraday cage structures that individually surround one or more associated capacitors 104 in respective first regions 196. In the example of FIGS. 1-1D, the individual capacitors 104 are laterally spaced apart from each other in the first region of the multilevel metallization structure 103 and individually include first and second terminals 106 and 108 in different ones of the metallization structure levels (e.g., in the illustrated example, in the metallization structure levels 130 and 180).
In the example device 100, furthermore, each of the capacitors 104 includes an associated metal routing feature 125 extending through a corresponding gap G in the first metal line 126.

As shown in FIG. 1, in an example the capacitor plate or second terminal 108 may be wire bonded or otherwise electrically connected to another circuit (e.g., of a high voltage domain or of a different voltage domain). FIG. 1 shows one example in which a bond wire 188 has a first end bonded to the exposed top side 192 of the second terminal 108 to facilitate an electrical connection to a conductive feature of another die (e.g., as further shown and described below in connection with FIG. 2A). The example electronic device 100 also includes a 10 μm thick polyimide layer 190 extending over portions of the PO stack 189. The polyimide layer 190 provides a stress buffer in one example to relieve mechanical stress on the semiconductor layer 101 and the multilevel metallization structure 103 after encapsulation in a molded packaging structure, for example to mitigate mechanical stress between the overlying mold compound and the surface of the silicon oxynitride layer 185, which might otherwise delaminate after a certain number of temperature cycling events.

The polyimide layer 190 and the PO stack 189 in this example include a gap exposing the top side 192 of the second terminal 108. The silicon nitride layer 182 in this example includes a gap having a width 191 completely surrounding the second terminal 108. Additionally, the PO stack 189 includes a recess or gap spaced laterally along the X direction by a distance 194 (e.g., 273 μm) from the bond wire opening. This recess or gap is located between the outermost conductive lines 184 in one example in order to terminate cut-induced cracks before they enter the die. The distance 194 varies in different implementations depending on which external circuitry is present around the capacitor or capacitors 104, and the recess or gap completely surrounds the die in one example. As shown in phantom in FIG. 1, the conductive shield 105 provides a grounded Faraday cage with a stepped structure that surrounds the capacitor 104 and separates the first region 196 (e.g., associated with the high voltage domain) from the second region 198 (e.g., associated with a lower or different voltage domain).

The first capacitor terminal 106 is formed in the illustrated example in metallization structure level 130, referred to as the first metallization level, although it need not be the lowest metallization level. The second capacitor terminal 108 is formed in metallization structure level 180 in this example and is referred to herein as the second metallization level, although it need not be the second level in the stacked arrangement. The first and second metallization levels may be adjacent in the stacked arrangement in other examples. In the orientation illustrated in FIG. 1, the first terminal 106 forms the bottom or lower first capacitor plate in the first metallization level 130, and the second terminal 108 forms the top or upper second capacitor plate in the metallization level 180.

As shown in FIGS. 1 and 1A, the first terminal 106 overlaps the second terminal 108 in the first and second directions X and Y by an overlap distance 139, where the overlap distance 139 is 1.0 μm to 6.0 μm in one implementation. In another example, the overlap distance 139 is 2.0 μm to 5.0 μm, e.g., about 3 μm.
The conductive shield 105 extends between the first region 196 and the second region 198 in the multilevel metallization structure 103 and at least partially surrounds the first region 196 of the multilevel metallization structure 103. The conductive shield 105 is coupled to the semiconductor layer 101 in one example. Additionally, the conductive shield 105 in FIGS. 1-1D includes the metal line 136 surrounding the first terminal 106 in the first metallization structure level 130. The metal line 136 is spaced laterally from the first terminal 106 in the first and second directions X and Y by the gap distance 135 shown in FIGS. 1 and 1B. The gap distance 135 is 0.5 μm to 1.0 μm in one example. In another example, the gap distance 135 is 0.55 μm to 0.75 μm.

Furthermore, in the illustrated electronic device 100, the multilevel metallization structure 103 includes the SO bilayer having the layers 181 and 182. In another example, the SO bilayer is formed in a different one of the metallization structure levels below the second capacitor terminal 108. The SO bilayer has the silicon oxynitride layer 181 and the silicon nitride layer 182 on the silicon oxynitride layer 181, and the second terminal 108 extends over (e.g., and contacts) portions of the silicon nitride layer 182, as seen in FIG. 1. The example of FIG. 1 also includes the trench 193 in the SO bilayer. As seen in FIG. 1, the trench 193 is spaced laterally outward from the second terminal 108 in the first and second directions X and Y, and the trench 193 extends through the silicon nitride layer 182 along the third direction Z. As shown in FIG. 1, the trench 193 extends a distance 195 along the third direction Z partly into the silicon oxynitride layer 181, leaving a non-zero thickness 197 of the silicon oxynitride layer 181 at the bottom of the trench 193. The trench 193 is filled with the dielectric layer 183 (e.g., SiO2) of the metallization structure level 180.

Referring also to FIGS. 2A, 2B, and 2C, FIG. 2A schematically illustrates an example packaged electronic device 200 including the high voltage capacitor on the electronic device 100 described above and high voltage capacitors on first and second additional semiconductor dies. In this example, the packaged electronic device 200 includes an example of the electronic device 100 on a first die representing the singulated or separated semiconductor die depicted and described above in connection with FIGS. 1-1D. The device 100 of the first die has a multilevel metallization structure 103 with the conductive shield 105 between the previously described isolated first and second regions 196 and 198 and the capacitor 104. The first semiconductor die or electronic device 100 is packaged together with one or more additional semiconductor dies to create a packaged electronic assembly having conductive leads or terminals 201, 202, 203, 204, 205, 206, and 208 associated with a first (e.g., low voltage) voltage domain, and conductive leads or terminals 209, 210, 211, 214, 215, and 216 associated with one or more additional (e.g., higher voltage) voltage domains.

As shown schematically in FIG. 2A, the electronic device 100 (e.g., the first semiconductor die) includes a pair of capacitors 104 each having a first terminal 106 and a second terminal 108 connected (e.g., wire bonded) to a corresponding bond wire 188.
In a corresponding user application (such as a communication system printed circuit board), the terminals 201 to 206, 208 to 211, and 214 to 216 are soldered to corresponding circuit board traces 221 to 226, 228 to 231, and 234 to 236 to provide electrical interconnection and operation of the associated signal lines or signals INA, INB, VCCI, GND, DIS, DT, VCCI, VSSB, OUTB, VDDB, VSSA, OUTA, and VDDA, respectively. The first die or electronic device 100 includes logic circuitry 240 in this example that provides low voltage first and second communication channel signals to the first terminals 106 of the respective capacitors 104.

The conductive shield 105 of the first semiconductor die electronic device 100 in FIGS. 1 and 2A protects circuitry outside the first region 196 of the multilevel metallization structure 103 from the high voltage associated with the second terminal 108. The capacitors 104 in FIG. 2A provide an isolation barrier between the logic circuit 240 of the packaged electronic device 200 and capacitive coupling circuits of the first and second additional semiconductor dies 251 and 252. In one example, the semiconductor dies 251 and 252 also include a multilevel metallization structure 103 with a conductive shield 105 between isolated first and second regions 196 and 198 and a capacitor 104 having plate terminals 106 and 108, as previously described. As shown in FIG. 2A, respective bond wires 188 are wire bonded to the exposed top sides 192 of the second terminals 108 to provide series-connected capacitive coupling between the logic circuit 240 and respective drivers 253 and 254 of the semiconductor dies 251 and 252. In another example, the second and third semiconductor dies 251 and 252 do not include internal isolation capacitors, and the bond wires 188 are bonded to conductive features of the respective semiconductor dies 251 and 252, for example, to inputs of the respective drivers 253 and 254 (e.g., see FIG. 2C below). The semiconductor dies 251 and 252 are receivers of the packaged electronic device 200 in one example, with outputs from the respective drivers 253 and 254 connected to external circuitry that controls the voltage VSSA at the switch node 234.

A first receiver output channel (e.g., channel "A") in FIG. 2A provides a first channel driver output biased according to the supply voltage VDD received at a supply node 260. The supply node 260 is connected through a boost resistor 262 and a diode 263 to provide a first supply voltage signal VDDA at the circuit board trace 236. The first driver 253 receives the first supply voltage VDDA as an upper rail supply, and the lower rail of the driver 253 is connected to the circuit board trace 234 to operate at the reference voltage VSSA. The external circuitry includes a boost capacitor 264 connected between the terminals 214 and 216, and the output of the driver 253 is connected to the terminal 215 to provide a first gate drive output. A second receiver output channel (e.g., channel "B") includes a second driver 254 of the second semiconductor die 252 biased according to the supply voltage VDD and a ground reference voltage VSSB at the terminals 211 and 209, respectively. The external circuitry also includes a supply voltage capacitor 266 connected between the supply voltage VDD and the ground reference voltage VSSB at a ground reference node 229.
In operation, the drivers 253 and 254 operate according to signals received from the logic circuit 240 through the isolated capacitively coupled channels and provide respective gate drive signals OUTA and OUTB connected to the gates of respective high-side and low-side transistors 271 and 272. The high-side transistor 271 has a drain terminal 270 connected to the high-voltage supply voltage HV, and a capacitor 274 is connected between the drain terminal 270 and the ground reference node 229. The source terminal of the high-side transistor 271 and the drain terminal of the low-side transistor 272 are connected to the switch node 234.

FIG. 2B shows another example of a packaged electronic device 280 with a high voltage capacitor on the device 100. FIG. 2C shows yet another example of a packaged electronic device 290 including high voltage capacitors on the first and second additional semiconductor dies.

Referring also to FIGS. 3-25, FIG. 3 shows a method 300 of fabricating a packaged electronic device including a first die having capacitors and multi-level isolation structures in a multilevel metallization structure. FIGS. 4-25 show partial views of the electronic device 100 (e.g., the first die) of FIGS. 1-1D and 2A undergoing a fabrication process according to the method 300. The method 300 shows steps, such as acts and/or events, associated with construction of the multilevel metallization structure incorporating the capacitor 104 and the conductive shield 105. The described steps may be used concurrently in the fabrication and interconnection of other electronic circuits and/or components (e.g., transistor circuits forming the logic circuit 240, etc.) on a single semiconductor die. The multilevel metallization structure 103 includes, in one example, metal lines, cylindrical contacts and vias, and/or trench contacts and vias that electrically couple the terminals of the capacitor 104 to one or more internal components (not shown).
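To summarize the overall flow of the method 300 before walking through the individual figures, the numbered steps of FIG. 3 can be laid out in order. The sketch below is illustrative only, not from the patent; it simply encodes the step numbers and descriptions stated in this section:

```python
# Ordered steps of method 300 as described in the text (FIG. 3).
METHOD_300 = [
    (302, "front-end processing: fabricate transistors and STI on/in semiconductor layer 101"),
    (304, "deposit pre-metal dielectric (PMD) layer 111, e.g., ~1 um SiO2"),
    (306, "form tungsten contacts 114 and trench contact 118 through the PMD layer"),
    (310, "form metallization structure level 120 (M1)"),
    (320, "form metallization structure level 130 (M2) with first capacitor terminal 106"),
    (330, "form intermediate metallization structure levels 140-170"),
    (340, "form top metallization structure level 180 (M5) with second terminal 108"),
    (350, "singulate the die, die attach, wire bond to exposed top side 192"),
    (360, "mold the packaging structure and separate packaged devices"),
]

for step, description in METHOD_300:
    print(f"{step}: {description}")
```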
The method 300 includes performing front-end processing at 302, e.g., fabricating one or more circuit components on and/or in the semiconductor layer 101 (e.g., the transistors T1 and T2 in FIG. 1D, etc.). For the electronic device 100 of FIG. 1, the front-end processing at 302 includes processing of a starting semiconductor wafer, such as a p-type silicon wafer, an SOI structure with a silicon layer, a silicon germanium layer, or another layer of semiconductor material. The processing at 302 also includes, in one example, fabricating the transistors T1 and T2 on and/or in the semiconductor layer 101 and forming isolation structures, such as the illustrated STI structures 102, on and/or in the top side of the semiconductor layer 101.

FIGS. 4 to 23 show the multilevel metallization structure 103 formed over the semiconductor layer 101 as at 304, 306, 310, 320, 330, and 340. The example method 300 includes forming a pre-metal dielectric layer at 304 and forming associated contacts (e.g., tungsten) at 306 to create the pre-metal level 110. Thereafter, the N metallization structure levels of the multilevel metallization structure 103 are fabricated level by level. FIG. 4 shows one example of the processing at 304, where a deposition process 400 is performed that deposits the PMD layer 111 (e.g., SiO2) on the semiconductor layer 101. In one example, the process 400 deposits silicon dioxide to form the PMD layer 111 to a thickness of about 1 μm.

The method 300 continues at 306 with forming contacts (e.g., the contacts 114 and 118) through the PMD layer 111. FIG. 5 shows one example in which a contact formation process 500 is performed that forms the cylindrical contacts 114 and the pre-metal trench contact 118 through the PMD layer 111 on the semiconductor layer 101. In one example, the process 500 includes a patterned etch (not shown) to form cylindrical holes and trenches for the corresponding cylindrical and trench contacts, one or more deposition steps to deposit a suitable metal (e.g., that is or includes tungsten) in the openings, followed by a planarization step (such as chemical mechanical polishing or CMP) to provide planar top sides of the PMD layer 111 and the formed cylindrical contacts 114 and trench contacts 118. In one example, the trench formation creates a continuous trench around the portion of the first region 196 (FIG. 1) in the pre-metal level 110, extending down onto the semiconductor layer 101 to create the grounded section of the conductive shield 105 described above. In an example implementation, the cylindrical pre-metal level contacts 114 are electrically coupled with one or more electronic circuit components of the electronic device 100 (e.g., for signal routing in the logic circuit 240 of FIG. 2A). Additionally, in the example of FIG. 5, additional trench contacts 118 are formed at 306 in the region 119 (FIG. 1) near the periphery of the illustrated portion of the electronic device 100, for example, to provide crack protection and a barrier against ingress of external ionic contamination at the edge of the die, although this is not a strict requirement for all possible implementations.

The method 300 continues at 310 in FIG. 3 with forming the metallization structure level 120 on the PMD layer 111. FIGS. 6 and 7 show an example implementation in which the first metallization structure level 120 is formed at 312 and 314 of FIG. 3. The metallization structure level 120 includes the metal routing line 122 and the first metal line 126 on the pre-metal trench contact 118, and the metal routing feature 125. In addition, the metallization structure level 120 includes the first ILD layer 121 on the PMD layer 111, the first metal line 126, and the metal routing feature 125, as well as the routing via 127 and the first trench via 128 on the first metal line 126. As discussed above, the first trench via 128 and the first metal line 126 surround the first region 196 at the first metallization structure level 120 within the second region 198 (except in the gap region where the metal routing feature 125 passes through the gap G as shown in FIG. 1C).

At 312 in FIG. 3, a first metal layer (M1) is deposited and patterned. FIG. 6 shows an example in which a process 600 is performed that deposits a metal layer (for example, deposits aluminum to a thickness of 0.57 μm) on the PMD layer 111 and etches exposed portions of the deposited metal using a patterned etch mask (not shown) to form the metal routing line 122 in the second region (region 198 in FIG. 1 above), and to form the metal routing feature 125 and the first metal line 126 in the first region (196 in FIG. 1). At 314, the first ILD layer 121 is deposited on the PMD layer 111. FIG. 7 shows an example in which a process 700 is performed that deposits and planarizes the first ILD layer 121 on the PMD layer 111 (for example, deposits silicon dioxide to a thickness of about 1.0 μm over the first metal layer features and to a thickness of about 1.6 μm over the PMD layer 111, followed by planarization by chemical mechanical polishing (CMP)).
At 316 in FIG. 3, via openings, such as trenches and cylindrical holes, are etched in the deposited first ILD layer 121. FIG. 8 shows one example in which an etch process 800 is performed using a patterned etch mask 802. The etch process 800 forms, in one example, cylindrical holes for the intended vias and trenches for the intended trench vias. The processing at 316 also includes filling the etched cylindrical holes and trenches with a conductive metal, such as tungsten, to form the cylindrical wiring vias 124 and the trench vias 128. FIG. 9 shows an example in which a deposition process 900 is performed to deposit tungsten in the etched via holes and trenches to form the cylindrical wiring vias 124 and the first trench via 128 over the first metal line 126, so that the conductive shield 105 continues in the metallization structure level 120. In one example, the processing at 310 in FIG. 3 further forms the first trench via 128 in the region 119 near the periphery of the illustrated portion of the electronic device 100, although this is not a strict requirement for all possible implementations. In one example, the processing at 310 also includes planarizing after the via holes and trenches are filled.

The method 300 continues at 320 in FIG. 3 with forming the metallization structure level 130 on the metallization structure level 120. One example includes forming and patterning a second metal layer (M2) at 322 to form the second metal line 136 over the first trench via 128 and to form the first terminal 106 of the capacitor 104 laterally spaced from the second metal line 136 by the gap distance 135 and extending over at least a portion of the routing via 127 in the first portion of the multilevel metallization structure 103. FIG. 10 shows the example electronic device 100 after forming the second metallization structure level 130 and the corresponding metallization structure levels 140 and 150 by a process 1000. At 324 in FIG. 3, the second ILD layer 131 is formed extending over the first ILD layer, the second metal line 136, and the first terminal 106 (e.g., silicon dioxide is deposited over the metal 2 features and then planarized to a thickness of about 2.4 μm, or silicon dioxide is deposited over the first ILD layer 121 and then planarized by CMP to a thickness of 2.4 μm + 0.6 μm = 3.0 μm). At 326, trenches and cylindrical via holes are etched in the second ILD layer 131 and filled with tungsten to form the second trench via 138 on the second metal line 136 and the interconnection vias 134 in the second region 198 of the multilevel metallization structure 103. In the illustrated example, the second metallization level processing at 320 further forms the trench via 138 in the region 119 near the periphery of the illustrated portion of the electronic device 100, although this is not a strict requirement for all possible implementations.

One or more additional metallization structure levels are formed at 330 in FIG. 3 in one example. FIG. 10 further shows the metallization structure level 140 on the metallization structure level 130 described above and the process 1000 forming the metallization structure level 150 with the stacked tungsten vias 154 and 158. Any number of intermediate metallization structure levels may be formed at 330, some of which may include respective metal lines, and each of which includes conductive vias, such as the trench vias that surround the first portion 196 of the multilevel metallization structure 103 to form the stepped conductive shield 105.
In the illustrated example, the individual metallization structure levels are created by first depositing and patterning a metal line layer (if included; omitted for levels 150 and 170 in the illustrated example), depositing an ILD layer, chemical mechanical polishing of the ILD layer to remove topography, etching cylindrical via holes and trenches in the ILD layer and filling the holes and trenches with tungsten, followed by chemical mechanical polishing (e.g., CMP) to remove excess tungsten. FIG. 11 shows a process 1100 of forming the example metallization structure level 160 in the electronic device 100 (e.g., at 330 in FIG. 3).

Fabrication of the metallization structure levels 170 and 180 (at 330 and 340 in FIG. 3) is illustrated in FIGS. 12-22 and includes the process 1200 in FIG. 12, in which the sixth ILD layer 171 is deposited over the previous fifth ILD layer 161, along with a sixth level portion of the seal stack 172 and any other sixth metal routing lines (not shown). FIGS. 13 and 14 show further deposition processes 1300 and 1400 for forming the respective dielectric layers 181 and 182 of the SO bilayer. The deposition process 1300 in FIG. 13 deposits the silicon oxynitride layer 181 to a thickness of about 0.3 μm in one example, and the deposition process 1400 in FIG. 14 deposits the silicon nitride layer 182 to a thickness of about 0.65 μm. FIG. 15 shows a process 1500 that forms trenches and cylindrical via holes through the layers 171, 181, and 182 (e.g., a patterned etch using an etch mask, not shown), and FIG. 16 shows a process 1600 that fills the holes and trenches in the multilevel metallization structure 103 with tungsten.

FIG. 17 illustrates an etch process 1700 using an etch mask 1702 after depositing the top metallization layer, which etches exposed portions of the top metallization layer to form the second terminal 108 and the metal lines 184 and 186. Formation of the metallization structure level 180 continues in FIG. 18 with an etch process 1800 using a mask 1802, which etches the layers 182 and 181 to create the trench 193 extending through the layer 182 and stopping in the layer 181. Portions of the protective overcoat stack 189 are deposited by the sequence of deposition processes 1900 in FIG. 19 (e.g., plasma-enhanced chemical vapor deposition (PECVD) deposits silicon dioxide to a thickness of 3.6 μm). Next, in one example, the oxide deposited by the process sequence 1900 is chemical mechanically polished to within 1.4 μm of the metallization structure level 180 to remove topography.

In FIG. 20, a deposition process 2000 is performed that deposits the dielectric layer 185, which together with the dielectric layer 183 forms the protective overcoat stack 189 of the electronic device 100. In one example, the process 2000 deposits the dielectric layer 185 as silicon oxynitride to a thickness of about 2.8 μm. In FIG. 21, an etch process 2100 is performed using an etch mask 2102. The etch process 2100 etches openings in the protective overcoat stack 189, including an opening exposing the top side 192 of the second terminal 108 of the capacitor 104. In FIG. 22, a dispense or masking process 2200 is performed that forms the polyimide layer 190 (e.g., to a thickness of about 10 μm) over portions of the protective overcoat stack 189 to create a stress buffer that eases mechanical stress at the surface of the dielectric layer 185 of the protective overcoat stack 189. As shown in FIG. 22, the polyimide layer 190 has a gap exposing the top side 192 of the second terminal 108.
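The per-level sequence just described is regular enough to express as a loop. The following sketch is illustrative only; the level list and ILD thicknesses are the example values stated in the structure description above, and the helper function is hypothetical:

```python
# Illustrative per-level build sequence for the metallization structure levels,
# following the order described in the text: pattern metal (if any), deposit ILD,
# CMP, etch via holes/trenches, fill with tungsten, CMP.
LEVELS = [
    # (level, routing metal layer present?, example ILD thickness in um)
    (120, True, 1.0),
    (130, True, 1.2),
    (140, True, 3.0),
    (150, False, 3.0),  # metal line layer omitted in the illustrated example
    (160, True, 3.0),
    (170, False, 3.0),  # metal line layer omitted in the illustrated example
]

def build_level(level: int, has_metal: bool, ild_um: float) -> None:
    if has_metal:
        print(f"level {level}: deposit and pattern metal line layer")
    print(f"level {level}: deposit {ild_um} um ILD, CMP to remove topography")
    print(f"level {level}: etch via holes and trenches, fill with tungsten, CMP")

for lvl in LEVELS:
    build_level(*lvl)
```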
Referring also to FIGS. 23-25, the method 300 continues at 350 with separating the first semiconductor die (e.g., the electronic device 100 in FIG. 2A above), including the semiconductor layer 101 and the multilevel metallization structure 103, from the wafer. Additionally, the die is attached to a leadframe, and a wire bonding process 2300 is performed at 350 as shown in FIG. 23 to provide an electrical connection to the second capacitor terminal 108. FIGS. 23 and 24 show the packaged electronic device during the processing at 350, with the electronic device 100 attached (FIG. 24) to a first die attach pad 2401 of a leadframe structure 2400 having the previously described leads or terminals 201-206, 208-211, and 214-216. The die attach process at 350 in this example also includes attaching the dies 251 and 252 (e.g., of FIG. 2A above) to respective die attach pads 2402 and 2403 of the leadframe. Bond wires are connected (e.g., soldered, ultrasonically bonded, etc.) between the conductive features of the dies 100, 251, and 252 and/or to particular ones of the leads 201-206, 208-211, and 214-216. As shown in FIGS. 2A, 23, and 24, the wire bonding also bonds the bond wires 188 to the exposed top sides 192 of the respective second terminals 108 of the capacitors 104. In this example, the second ends of the bond wires 188 are coupled to corresponding second capacitor plates 108 of the dies 251 and 252, respectively, to create series-connected capacitive coupling between the driver output of the electronic device 100 and the circuitry of the dies 251 and 252. Other electrical connection techniques, such as ball grid array or solder ball connections to conductive features of a substrate or the like, may be used at 350 to form an electrical connection between the second terminal 108 of the capacitor 104 and the conductive features of the second semiconductor die.

The method 300 also includes molding and device separation at 360 in FIG. 3. FIG. 25 shows a molded and singulated packaged electronic device 200 comprising a molded packaging structure 2500 (e.g., molding compound) enclosing the dies 100, 251, and 252, the bond wires 188, and portions of the conductive leads or terminals 201, 202, 203, 204, 205, 206, and 208, which are exposed along one or more sides of the package structure 2500. The example of FIG. 25 is a quad flat no-leads (QFN) packaged device 200. In other examples, different packaging types and forms are possible, and the method 300 also includes lead trimming and forming in one example to provide gull-wing leads, J-leads, etc. on the finished packaged electronic device.

FIG. 26 shows a graph 2600 of the bottom plate to top plate electric field ratio for different bottom plate to top plate overlap distances 139 and bottom plate to integrated conductive shield 105 ground ring gap distances 135. The examples show the advantages of controlling the distances 135 and 139 to reduce the ratio of the bottom plate electric field to the top plate electric field below 0.5 to mitigate dielectric breakdown in the electronic devices 100 and 200. A first group of simulated data 2610 to 2614 represents the ratio for a gap distance 135 of 4 μm and includes a value 2610 for an overlap distance 139 of 0 μm, a value 2611 for an overlap distance 139 of 1 μm, a value 2612 for an overlap distance 139 of 2 μm, a value 2613 for an overlap distance 139 of 3 μm, and a value 2614 for an overlap distance 139 of 4 μm.
A second group of simulated data 2620 to 2624 represents the ratios for a gap distance 135 of 3 μm and includes a value 2620 for an overlap distance 139 of 0 μm, a value 2621 for an overlap distance 139 of 1 μm, a value 2622 for an overlap distance 139 of 2 μm, a value 2623 for an overlap distance 139 of 3 μm, and a value 2624 for an overlap distance 139 of 4 μm. A third group of simulated data 2630 to 2634 represents the ratios for a gap distance 135 of 2 μm and includes a value 2630 for an overlap distance 139 of 0 μm, a value 2631 for an overlap distance 139 of 1 μm, a value 2632 for an overlap distance 139 of 2 μm, a value 2633 for an overlap distance 139 of 3 μm, and a value 2634 for an overlap distance 139 of 4 μm. A fourth group of simulated data 2640 to 2644 represents the ratios for a gap distance 135 of 1 μm and includes a value 2640 for an overlap distance 139 of 0 μm, a value 2641 for an overlap distance 139 of 1 μm, a value 2642 for an overlap distance 139 of 2 μm, a value 2643 for an overlap distance 139 of 3 μm, and a value 2644 for an overlap distance 139 of 4 μm. A fifth group of simulated data 2650 to 2654 represents the ratios for a gap distance 135 of 0.75 μm and includes a value 2650 for an overlap distance 139 of 0 μm, a value 2651 for an overlap distance 139 of 1 μm, a value 2652 for an overlap distance 139 of 2 μm, a value 2653 for an overlap distance 139 of 3 μm, and a value 2654 for an overlap distance 139 of 4 μm. A sixth group of simulated data 2660 to 2664 in FIG. 26 represents the ratios for a gap distance 135 of 0.55 μm and includes a value 2660 for an overlap distance 139 of 0 μm, a value 2661 for an overlap distance 139 of 1 μm, a value 2662 for an overlap distance 139 of 2 μm, a value 2663 for an overlap distance 139 of 3 μm, and a value 2664 for an overlap distance 139 of 4 μm.

The graph 2600 identifies an example implementation, e.g., having a gap distance 135 of 0.5 μm to 1.0 μm (e.g., 0.55 μm to 0.75 μm) and an overlap distance 139 of 1.0 μm to 6.0 μm (e.g., 2.0 μm to 5.0 μm, e.g., about 3 μm), to reduce the bottom plate Epk and improve the single die reinforced isolation capability. In one example, the benefit can be enhanced using a gap distance 135 of about 0.55 μm and an overlap distance 139 of about 3 μm. The described examples provide a solution by implementing dual design rules for the bottom plate overlap distance 139 and the gap distance 135, controlling the space between the first terminal 106 and the ground ring to reduce the bottom plate Epk. Using the SO bilayer 181, 182, with or without the trench 193, provides an additional benefit in reducing Epk at or near the top plate 108.

It was found that some test devices formed using these parameters had significantly greater bipolar surge capability than similar baseline devices. For example, a device with a gap distance 135 of 0.55 μm had more than 10% higher bipolar surge capability compared to a baseline device with a gap distance 135 of 4 μm. (Testing was performed according to the IEC/EN 61000-4-2 standard, voltage at 50% failure rate.) This improvement provides significantly improved margin with respect to the 8 kV minimum surge voltage specified by level 4 of the IEC/EN 61000-4-2 standard, increasing device life and improving end user safety.
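A design-rule check capturing the two windows above is straightforward to state. The sketch below is my illustration, not from the patent; the function name is hypothetical, while the numeric windows are the gap distance 135 and overlap distance 139 ranges described above:

```python
def check_bottom_plate_rules(gap_um: float, overlap_um: float) -> list[str]:
    """Check the dual design rules described above: bottom-plate-to-ground-ring
    gap distance 135 of 0.5-1.0 um and plate overlap distance 139 of 1.0-6.0 um."""
    violations = []
    if not (0.5 <= gap_um <= 1.0):
        violations.append(f"gap distance {gap_um} um outside 0.5-1.0 um window")
    if not (1.0 <= overlap_um <= 6.0):
        violations.append(f"overlap distance {overlap_um} um outside 1.0-6.0 um window")
    return violations

# Example: the enhanced-benefit point noted above passes; a baseline-like layout fails.
print(check_bottom_plate_rules(gap_um=0.55, overlap_um=3.0))  # [] -> passes both rules
print(check_bottom_plate_rules(gap_um=4.0, overlap_um=0.0))   # two violations
```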
The above examples merely illustrate a few possible implementations of the various aspects of the disclosure; equivalent alterations and/or modifications will occur to others skilled in the art upon reading and understanding this specification and the accompanying drawings. Modifications are possible in the described examples, and other implementations are possible within the scope of the claims.
A system and method are disclosed for remote management, including systems and methods for hosting web applications within remote management hardware and/or firmware. In one embodiment, a system includes a microcontroller to configure a processor, the microcontroller including a memory. The system further includes a network interface coupled to the microcontroller, the network interface to send and receive communications with an external device. The system further includes a non-volatile memory to store computer executable instructions to be executed by the microcontroller, and a power supply to provide power to the microcontroller, the network interface, and the non-volatile memory regardless of the power state of the processor, wherein the microcontroller is to provide a web server to receive and process Hypertext Transfer Protocol (HTTP) requests from the external device.
CLAIMS
What is claimed is:
1. A system comprising:
a microcontroller to configure a processor, the microcontroller comprising a memory;
a network interface coupled to the microcontroller, the network interface to send and receive communications with an external device;
a non-volatile memory to store computer executable instructions to be executed by the microcontroller; and
a power supply to provide power to the microcontroller, the network interface, and the non-volatile memory regardless of the power state of the processor;
wherein the microcontroller is to provide a web server to receive and process Hypertext Transfer Protocol (HTTP) requests from the external device.
2. The system of claim 1, wherein the HTTP requests are to instruct the microcontroller to configure the processor.
3. The system of any one of claims 1 to 2, wherein the HTTP requests are to specify management operations to be performed by the microcontroller.
4. The system of any one of claims 1 to 2, wherein the power state is sleep.
5. The system of any one of claims 1 to 2, wherein the power state is soft-off.
6. The system of any one of claims 1 to 2, wherein the power state is not active.
7. The system of any one of claims 1 to 2, wherein the web server is to accept and to process a request to push content into the memory, and wherein, in response to at least one request to get a web page, the web server is to dynamically generate a responsive web page reflecting the content stored in the memory.
8. The system of any one of claims 1 to 2, the microcontroller further comprising a cache memory to store data for use in the dynamically generated responsive web page.
9. The system of any one of claims 1 to 2, wherein the web server is to support a web socket bidirectional connection with the remote computer.
10. The system of any one of claims 1 to 2, wherein the computer executable instructions are to fit within the amount of memory space contained in the non-volatile memory.
11. A method comprising:
providing sufficient power to a microcontroller, a network interface, and a flash memory to allow them to operate regardless of the power state of a processor; and
using instructions read from a non-volatile memory by the microcontroller to implement a web server to receive and process HTTP requests from a remote computer.
12. The method of claim 11, wherein the HTTP requests are to instruct the microcontroller to configure the processor.
13. The method of any one of claims 11 to 12, wherein the HTTP requests are to specify management operations to be performed by the microcontroller.
14. The method of any one of claims 11 to 12, wherein the power state is sleep.
15. The method of any one of claims 11 to 12, wherein the power state is soft-off.
16. The method of any one of claims 11 to 12, wherein the web server is to accept and to process a request to push content into the memory, and wherein, in response to at least one request to get a web page, the web server is to dynamically generate a responsive web page reflecting the content stored in the memory.
17. The method of any one of claims 11 to 12, wherein the computer executable instructions are to fit within the amount of memory space contained in the non-volatile memory.
18. A non-transitory computer-readable medium containing computer executable instructions that, when executed by a microcontroller comprising a memory, the microcontroller coupled to a processor, a network interface, a non-volatile memory, and a power supply, wherein the power supply is to provide sufficient power to the microcontroller, the network interface, and the non-volatile memory to allow the microcontroller to operate regardless of the power state of the processor, cause the microcontroller to perform a process of: reading computer-executable instructions from the non-volatile memory; and executing the instructions to provide a web server to receive and process HTTP requests from an external device.
19. The non-transitory computer-readable medium of claim 18, wherein the power state is sleep.
20. The non-transitory computer-readable medium of claim 18, wherein the power state is soft-off.
21. A method comprising:
providing sufficient power to a microcontroller, a network interface, and a flash memory to allow them to operate when the power state of a processor is at least one of sleep and soft-off; and
using instructions read from the flash memory by the microcontroller to implement a web server to receive and process HTTP requests from a remote computer.
22. The method of claim 21, wherein the HTTP requests are to instruct the microcontroller to configure the processor.
23. The method of any one of claims 21 to 22, wherein the HTTP requests are to specify management operations to be performed by the microcontroller.
24. The method of any one of claims 21 to 22, wherein the web server is to accept and to process a request to push content into the memory, and wherein, in response to at least one request to get a web page, the web server is to dynamically generate a responsive web page reflecting the content stored in the memory.
25. The method of any one of claims 21 to 22, wherein the computer executable instructions are to fit within the amount of memory space contained in the flash memory.
SYSTEMS AND METHODS FOR HOSTING WEB APPLICATIONS WITHIN REMOTE MANAGEMENT HARDWARE AND/OR FIRMWARE
TECHNICAL FIELD
[0001] Embodiments described herein generally relate to remote management of computers. In particular, embodiments described herein generally relate to systems and methods for hosting web applications within remote management hardware and/or firmware.
BACKGROUND
[0002] Remote management of computers may be enabled by hardware and/or firmware included in them. Remote management allows computers, including large groups of computers, to be updated, reconfigured, internationalized, and branded. However, remote management systems are more likely to be used if they are relatively easy to set up and use, include beneficial tools, and do not require complicated third party software to be installed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The various advantages of the embodiments disclosed herein will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the drawings, in which:
[0004] Figure 1 is a block diagram illustrating an embodiment of an out-of-band remote management hardware and/or firmware system;
[0005] Figure 2 is a block diagram illustrating another embodiment of a remote out-of-band management platform using remote management hardware and/or firmware;
[0006] Figure 3 is a block flow diagram illustrating a process to load remote management hardware and/or firmware with configuration data to be used in subsequently served web pages;
[0007] Figure 4 is a flow diagram illustrating an embodiment of a process to use an application hosted within remote management hardware and/or firmware and a web browser to remotely manage a PC;
[0008] Figure 5 is an embodiment of a web page of a remote management application loaded into a web browser from remote management hardware and/or firmware;
[0009] Figure 6 is a block flow diagram illustrating an embodiment of a process to use a web application to establish a two-way connection with remote management hardware and/or firmware;
[0010] Figure 7 is a flow diagram illustrating an embodiment of a process to use a web application to establish a two-way connection with remote management hardware and/or firmware; and
[0011] Figure 8 is an embodiment of a process to remotely manage multiple computers using an application loaded into a web browser from remote management hardware and/or firmware.
DETAILED DESCRIPTION
[0012] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
[0013] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment need not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment.
Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0014] Embodiments disclosed relate to remotely managed hardware and software (e.g., processors and computer systems). One challenge posed by a remote management system is the degree of complexity in administering the system. For example, setting up and configuring one or more third party software applications may discourage administrators from using a remote management system. Embodiments disclosed herein allow for remote administration of computer systems using a web browser.
[0015] This browser-based approach allows for remote connection to and operation of components of a computer system having a processor in a sleep or soft-off state. As used herein, a "soft-off" state is when the user sessions in a computer system are shut down. In some embodiments, during a soft-off, user sessions are torn down and restarted on the next boot. In some embodiments, a soft-off occurs when a system restart is requested.
[0016] In some embodiments, a web socket is established between the browser and the remote computer system.
[0017] Configuration information is pushed onto the remote computer system without involving the computer system's primary processor (e.g., CPU) or operating system. Embodiments disclosed herein utilize a secondary processor and/or firmware that implements a web server and allows for remote configuration of components of the computer system using a web browser. Embodiments disclosed herein allow for configuration of a single remote computer or of multiple computers in a datacenter.
[0018] Remote management embodiments disclosed herein include a microcontroller (secondary processor) coupled with and able to configure components of a computer system including, for example, a primary processor. The disclosed microcontroller includes a network interface to allow it to communicate with a remote computer, for example an administrator's computer. It also includes a memory to store executable instructions, and a memory to store content. It is coupled to a power supply so that it receives power even when the primary processor is asleep or in a soft-off state. In operation, the microcontroller, by executing the executable instructions, implements a web server to receive and process a set of at least two types of Hypertext Transfer Protocol (HTTP) requests from the remote computer, the set of requests to cause the microcontroller to administer the primary processor independently of an operating system associated with the processor, and independently of a power state of the primary processor.
[0019] To the extent that the microcontroller operates independently of the processor and the processor's operating system, it is referred to herein as an out-of-band (OOB) microcontroller.
[0020] Figure 1 is a block diagram illustrating an embodiment of a remote out-of-band management platform using a remote management hardware and/or firmware system. Embodiments of this platform have a network connection, such as a network interface card (NIC) 120.
NIC 120 may be used to communicate with a remote computer, such as a management computer operated by an administrator.
[0021] As shown, platform 100 includes a primary processor 102 (e.g., a CPU), which is connected to random access memory 106 via a memory controller hub (MCH) 104. In some embodiments, not shown, some or all portions of the MCH are incorporated into the processor. Processor 102 may be any type of processor capable of executing software, such as a microprocessor, digital signal processor, microcontroller, or the like. In some embodiments, processor 102 is the main (primary) processor used to run an operating system and to control a computer. As illustrated, remote management hardware and/or firmware 110 allows remote management of processor 102 using a web browser. In some embodiments, remote management hardware and/or firmware 110 allows management of other interfaces on the platform. For example, remote management hardware and/or firmware 110 allows configuration of other processors and controllers included in the platform 100 and coupled to remote management hardware and/or firmware 110.
[0022] Though Figure 1 shows only one processor 102, some embodiments include at least one additional processor in the platform 100, and at least one of the processors includes multiple threads, multiple cores, or the like. Some embodiments include many computers, such as all of the computers of a corporate facility or datacenter, in which case each computer includes platform 100, and each computer is managed using a web browser on a remote computer.
[0023] As illustrated, processor 102 is further connected to I/O devices via an input/output controller hub (ICH) 108. The ICH may be coupled to various devices, such as a super I/O controller (SIO), keyboard controller (KBC), or trusted platform module (TPM), via a bus 128. In an embodiment, ICH 108 is coupled to non-volatile flash memory 122 via bus 130. In the illustrated embodiment, remote management hardware and/or firmware 110 connects to ICH 108 via bus 132. Remote management hardware and/or firmware 110 is coupled to non-volatile flash memory 122 via bus 130. In some embodiments, processor 102 uses an embedded controller instead of an SIO controller.
[0024] Remote management hardware and/or firmware 110 may be likened to a "miniature" processor. In some embodiments, like a full capability processor, remote management hardware and/or firmware 110 includes microcontroller 112, which is coupled to a cache memory 114, random access memory (RAM) 116, read-only memory (ROM) 118, and flash memory 122. Cache memory 114 and RAM 116 are volatile memories used by microcontroller 112 to store temporary data at run-time.
[0025] ROM 118 and flash memory 122, on the other hand, are non-volatile memories, which in some embodiments are loaded with computer-executable instructions to be executed by microcontroller 112. When remote management hardware and/or firmware 110 operates out-of-band, it does not have access to the computer's operating system, its processor, or its system storage. At least some of the instructions it is to execute, in other words its firmware, are thus to be stored in ROM 118 and/or flash memory 122. In some embodiments, microcontroller 112's firmware is stored in ROM 118. In some embodiments, ROM 118 stores micro-instructions that make up microcontroller 112's instruction set architecture. In alternate embodiments, microcontroller 112's firmware is to be stored in a portion of flash memory 122 labeled as firmware 126.
Flash memory 122 also stores a Built-in Operating System (BIOS) 124 for use by microcontroller 112.
[0026] In some embodiments, firmware 126 and the BIOS 124 are reprogrammed as needed and with little difficulty. In alternate embodiments, the BIOS and firmware within flash memory 122 are updated when needed. In some embodiments, an administrator operating a web browser on a remote computer reprograms firmware 126 securely using a Transport Layer Security (TLS) protocol or Secure Socket Layer (SSL) protocol.
[0027] The storage space afforded by flash memory 122 is not unlimited. The combined memory size of the firmware 126 and the BIOS 124 is small enough to fit on the flash memory 122. In an exemplary embodiment, flash memory 122 includes 8 Megabits of storage, and the size of the code to implement a web server is less than 60 Kbytes.
[0028] OOB μController 112 includes a network interface, which in some embodiments is a network interface 120. OOB μController 112 is further connected to a power supply 128, which provides power to allow out-of-band communication even when the in-band processor 102 is not active or fully booted.
[0029] In some embodiments, OOB μController 112 uses a basic input output system (BIOS) 124 stored in non-volatile memory 122. In other embodiments, OOB μController 112 boots using instructions stored on and received from a different device (not shown). Remote management hardware and/or firmware 110 may have access to all of the contents of the non-volatile memory 122, including the BIOS portion 124 and a protected portion 126 of the non-volatile memory. In some embodiments, the protected portion 126 of memory is for use by remote management hardware and/or firmware.
[0030] OOB μController 112 in some embodiments uses the protected portion 126 of flash memory 122 to securely store certificates, keys and signatures that are inaccessible by the BIOS, firmware or operating system.
[0031] Figure 2 is a block diagram illustrating another embodiment of a remote out-of-band management platform using remote management hardware and/or firmware. Embodiments of this platform have a network connection, such as a network interface card (NIC) 230. NIC 230 may be used to communicate with a remote computer, such as a management computer operated by an administrator.
[0032] As shown, platform 200 includes a primary processor 202 (e.g., a CPU), which is connected to dynamic random access memory (DRAM) 210 via a DRAM interface (DRAM I/F) 208. Processor 202 may be any type of processor capable of executing software, such as a microprocessor, digital signal processor, microcontroller, or the like. In some embodiments, processor 202 is the main (primary) processor used to run an operating system and to control a computer. Processor 202 also includes graphics processing unit (GPU) 204 and peripheral component interconnect (PCI) express for graphics (PEG) 206.
[0033] As shown, processor 202 uses desktop management interface (DMI) 244 to connect to platform controller hub (PCH) 212, which includes virtualization engine (VE) 214, random access memory (RAM) 216, remote management hardware and/or firmware 218, host input/output (I/O) interface 224, and input/output interface (I/O) 226. In some embodiments, PCH 212 does not include VE 214.
[0034] In the illustrated embodiment, remote management hardware and/or firmware 218 further includes OOB μController 220 and compression block 222. Remote management hardware and/or firmware 218 allows remote management of processor 202 using a web browser.
In some embodiments, remote management hardware and/or firmware 218 allows management of other devices on the platform. For example, remote management hardware and/or firmware 218 allows configuration of other processors and controllers included in the platform 200 and coupled to remote management hardware and/or firmware 218. For example, remote management hardware and/or firmware 218 allows management and configuration of other processors or controllers included in PCH 212.
[0035] Though Figure 2 shows only one processor 202, some embodiments include at least one additional processor in the platform 200, and at least one of the processors includes multiple threads, multiple cores, or the like. Some embodiments include many computers, such as all of the computers of a corporate facility or datacenter, in which case each computer includes platform 200, and each computer is managed using a web browser on a remote computer.
[0036] As illustrated, PCH 212 is connected to I/O device interfaces, including a super I/O controller (SIO), keyboard controller (KBC), or trusted platform module (TPM), via a bus 242. In an embodiment, PCH 212 is coupled to non-volatile flash memory 228 via serial peripheral interface (SPI) bus 238. In the illustrated embodiment, PCH 212 is further coupled to network interface card 234 and power supply 236. In the illustrated embodiment, remote management hardware and/or firmware 218 is incorporated within PCH 212 and therefore also has access to flash memory 228, NIC 234, and power supply 236.
[0037] Remote management hardware and/or firmware 218 may be likened to a "miniature" processor, as it includes microcontroller 220 and is coupled to and able to use random access memory (RAM) 216 and flash memory 228. In some embodiments, RAM 216 includes a cache memory.
[0038] When remote management hardware and/or firmware 218 operates out-of-band, it does not have access to the computer's operating system, its processor, or its system storage. At least some of the instructions it is to execute are thus stored in flash memory 228, which receives sufficient power from power supply 236 to operate. In some embodiments, microcontroller 220's firmware is to be stored in a portion of flash memory 228 labeled as firmware 232. Flash memory 228 also stores a Built-in Operating System (BIOS) 230 that in some embodiments is used by microcontroller 220.
[0039] In some embodiments, firmware 232 and the BIOS 230 are reprogrammed as needed. In alternate embodiments, the BIOS and firmware within flash memory 228 are updated when needed. In some embodiments, an administrator operating a web browser on a remote computer reprograms firmware 232 securely using a Transport Layer Security (TLS) protocol or Secure Socket Layer (SSL) protocol.
[0040] The storage space afforded by flash memory 228 is not unlimited. The combined memory size of the firmware 232 and the BIOS 230 is small enough to fit on the flash memory 228. In an exemplary embodiment, flash memory 228 includes 8 Megabits of storage, and the size of the code to implement a web server is less than 60 Kbytes.
The size of flash memory 228 and the size of the web server code are not limited to 8 Megabits and 60 Kbytes; in some embodiments one or both of them are larger, and in other embodiments one or both of them are smaller.
[0041] OOB μController 220, NIC 234, and flash memory 228 are coupled to power supply 236, which in some embodiments provides sufficient power for them to operate out-of-band when processor 202 is in a sleep or soft-off power state.
[0042] In some embodiments, remote management hardware and/or firmware 218 includes a compression block 222, which may use compression algorithms, including any lossy or lossless algorithms, for example. In one embodiment, OOB μController 220 sends the compressed contents to a remote computer via NIC 234.
[0043] Figure 3 is a block flow diagram illustrating a process to load remote management hardware and/or firmware with configuration data to be used in subsequently served web pages. As shown, remote management hardware and/or firmware 300 includes a μController 302 and web storage 304. In some embodiments, web storage 304 allows administrators operating a remote computer to push blocks of data along with HTTP headers that are served back by an HTTP GET request. Web storage 304 acts like a generic web server incorporated within the remote management hardware and/or firmware. In some embodiments, μController 302 is a secondary processor that is included on a PC motherboard and coupled to the processor and other components, for example as shown in Figure 1. As illustrated, the web storage 304 within remote management hardware and/or firmware 300 receives an HTTP PUT request 310 to push content onto web storage 304, at the path labeled "1." In some embodiments, the HTTP PUT request originates from a configuration application 306 that is running on a remote computer and is coupled to remote management hardware and/or firmware 300 over a network. Remote management hardware and/or firmware 300 responds at 312, at the path labeled "2," to acknowledge receipt of the configuration content. Subsequently, remote management hardware and/or firmware 300 receives an HTTP GET request 314 for a web page, at the path labeled "3." In some embodiments, the HTTP GET request is issued by a web browser 308 running on a remote computer. In other embodiments, the HTTP GET request is received from the local operating system of the same machine. The remote management hardware and/or firmware sends a response 316, at the path labeled "4," which serves a web page that reflects the requested content. For example, μController 302 in some embodiments dynamically generates the responsive web page and includes relevant portions of the configuration content. The illustrated process may be repeated without limitation in order to configure an unlimited number of configuration settings.
[0044] Figure 4 is a flow diagram illustrating an embodiment of a process to load remote management hardware and/or firmware with configuration content to be included in subsequently served web pages. At 402, remote management hardware and/or firmware receives an HTTP PUT request to push configuration content onto web storage. In some embodiments, the HTTP PUT request originates from a remote administrator's computer, with the administrator using a web browser to conduct management operations. At 404, remote management hardware and/or firmware responds to acknowledge receipt of the configuration content. At 406, remote management hardware and/or firmware receives an HTTP GET request for a web page. At 408, remote management hardware and/or firmware sends a response, which serves a web page containing the requested content and reflecting the configuration content to the extent that it is relevant. The illustrated process may be repeated without limitation in order to configure an unlimited number of configuration settings. A minimal sketch of this PUT-then-GET flow appears below.
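The following Python sketch is a rough, illustrative model of the flow of Figures 3 and 4, built on the standard-library http.server module. The port number and the in-memory dictionary standing in for web storage are assumptions for illustration, not details of the remote management firmware; a real implementation would also handle authentication, TLS, and the pushed Content-Type and Content-Encoding headers discussed next.

```python
# Minimal sketch of the configuration flow of Figures 3 and 4, assuming a
# plain HTTP PUT pushes content and a GET serves a page reflecting it.
# The port and URL layout are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

web_storage = {}  # path -> pushed configuration content (bytes)

class ConfigHandler(BaseHTTPRequestHandler):
    def do_PUT(self):  # step 402: push configuration content
        length = int(self.headers.get("Content-Length", 0))
        web_storage[self.path] = self.rfile.read(length)
        self.send_response(200)  # step 404: acknowledge receipt
        self.end_headers()

    def do_GET(self):  # step 406: request a web page
        body = web_storage.get(self.path)
        if body is None:
            # step 408: dynamically generate a page reflecting stored content
            listing = "".join(f"<li>{p}</li>" for p in web_storage)
            body = f"<html><body><ul>{listing}</ul></body></html>".encode()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ConfigHandler).serve_forever()
```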
[0045] In some embodiments, at 402, the HTTP PUT request pushes HTTP headers onto the remote management hardware and/or firmware in addition to the content. For example, the HTTP PUT request may include a "content-type" header, or the HTTP PUT request may include a "content-encoding" header. Accordingly, when remote management hardware and/or firmware serves a responsive web page at 408, it applies the content type and content encoding to display the content correctly. At 410, it is determined whether any more requests are to be received. If so, the process returns to 406. If not, the process ends.
[0046] Furthermore, in some embodiments, remote management hardware and/or firmware stores the HTTP headers pushed into it at 402 in a cache memory, so that the headers are quickly and efficiently accessed when remote management hardware and/or firmware serves up web pages.
[0047] In some embodiments, remote management hardware and/or firmware stores predefined web pages in its firmware, such as firmware 126 (Figure 1). For example, remote management hardware and/or firmware may store a "logon.html" web page. Remote management hardware and/or firmware may store an "index.html" web page. Remote management hardware and/or firmware may further store web pages linked to the web browser being used by an administrator at the remote computer. Having these web pages ready to serve helps provide an "out-of-the-box" experience.
[0048] Figure 5 is an embodiment of a remote management web page displayed on a remote computer and used to administer a computer that incorporates remote management hardware and/or firmware according to embodiments disclosed herein. In some embodiments, web page 502 is generated and served by the remote management hardware and/or firmware microcontroller's web server. In some embodiments, web page 502 is a static web page stored in the remote microcontroller's firmware. In yet other embodiments, web page 502 is dynamically generated by the microcontroller. As illustrated, web page 502 includes a title bar 504, a menu 506, and computer configuration information 508 for three computers, 510, 512, and 514. In some embodiments, at least part of the computer configuration information 510, 512, and 514 consists of configuration data previously pushed into the remote management hardware and/or firmware web storage.
[0049] In some embodiments, the web server processes a wide variety of HTTP methods, as defined in various Requests for Comments (RFCs) promulgated by the Internet Engineering Task Force (IETF). For example, the microcontroller's web server may process HTTP methods selected from the following list, which includes a reference to the IETF RFC that details the methods:
RFC 2616:
HEAD: The HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining meta information about the entity implied by the request without transferring the entity-body itself.
This method is often used for testing hypertext links for validity, accessibility, and recent modification.
POST: The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line. POST is designed to allow a uniform method to cover the following functions:
• Annotation of existing resources;
• Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
• Providing a block of data, such as the result of submitting a form, to a data-handling process;
• Extending a database through an append operation.
PUT: The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI. If a new resource is created, the origin server MUST inform the user agent via the 201 (Created) response. If an existing resource is modified, either the 200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful completion of the request. If the resource could not be created or modified with the Request-URI, an appropriate error response SHOULD be given that reflects the nature of the problem. The recipient of the entity MUST NOT ignore any Content-* (e.g., Content-Range) headers that it does not understand or implement and MUST return a 501 (Not Implemented) response in such cases.
DELETE: The DELETE method requests that the origin server delete the resource identified by the Request-URI. This method MAY be overridden by human intervention (or other means) on the origin server. The client cannot be guaranteed that the operation has been carried out, even if the status code returned from the origin server indicates that the action has been completed successfully. However, the server SHOULD NOT indicate success unless, at the time the response is given, it intends to delete the resource or move it to an inaccessible location.
TRACE: The TRACE method is used to invoke a remote, application-layer loop-back of the request message. The final recipient of the request SHOULD reflect the message received back to the client as the entity-body of a 200 (OK) response. The final recipient is either the origin server or the first proxy or gateway to receive a Max-Forwards value of zero (0) in the request (see section 14.31).
A TRACE request MUST NOT include an entity.
CONNECT: This specification reserves the method name CONNECT for use with a proxy that can dynamically switch to being a tunnel.
RFC 2518:
RFC 3253:
REPORT: Unlike a resource property, which has a single value, the value of a report can depend on additional information specified in the REPORT request body and in the REPORT request headers.
CHECKOUT: A CHECKOUT request can be applied to a checked-in version-controlled resource to allow modifications to the content and dead properties of that version-controlled resource.
CHECKIN: A CHECKIN request can be applied to a checked-out version-controlled resource to produce a new version whose content and dead properties are copied from the checked-out resource.
UNCHECKOUT: An UNCHECKOUT request can be applied to a checked-out version-controlled resource to cancel the CHECKOUT and restore the pre-CHECKOUT state of the version-controlled resource.
MKWORKSPACE: A MKWORKSPACE request creates a new workspace resource.
UPDATE: The UPDATE method modifies the content and dead properties of a checked-in version-controlled resource (the "update target") to be those of a specified version (the "update source") from the version history of that version-controlled resource.
LABEL: A LABEL request can be applied to a version to modify the labels that select that version. The case of a label name MUST be preserved when it is stored and retrieved. When comparing two label names to decide if they match or not, a server SHOULD use a case-sensitive URL-escaped UTF-8 encoded comparison of the two label names.
MERGE: The MERGE method performs the logical merge of a specified version (the "merge source") into a specified version-controlled resource (the "merge target"). If the merge source is neither an ancestor nor a descendant of the DAV:checked-in or DAV:checked-out version of the merge target, the MERGE checks out the merge target (if it is not already checked out) and adds the URL of the merge source to the DAV:merge-set of the merge target. It is then the client's responsibility to update the content and dead properties of the checked-out merge target so that it reflects the logical merge of the merge source into the current state of the merge target. The client indicates that it has completed the update of the merge target by deleting the merge source URL from the DAV:merge-set of the checked-out merge target and adding it to the DAV:predecessor-set. As an error check for a client forgetting to complete a merge, the server MUST fail an attempt to CHECKIN a version-controlled resource with a non-empty DAV:merge-set.
BASELINE-CONTROL: A collection can be placed under baseline control with a BASELINE-CONTROL request. When a collection is placed under baseline control, the DAV:version-controlled-configuration property of the collection is set to identify a new version-controlled configuration. This version-controlled configuration can be checked out and then checked in to create a new baseline for that collection.
MKACTIVITY: A MKACTIVITY request creates a new activity resource.
A server MAY restrict activity creation to particular collections, but a client can determine the location of these collections from a DAV:activity-collection-set OPTIONS request.
RFC 3648:
ORDERPATCH: The ORDERPATCH method is used to change the ordering semantics of a collection, to change the order of the collection's members in the ordering, or both.
[0050] In alternate embodiments, however, the remote management hardware and/or firmware microcontroller's web server is implemented to support a small number of HTTP methods that allow a minimum set of desired management operations. Web server embodiments that support a small number of HTTP methods require fewer instructions and more easily fit into the ROM firmware space that is available to the remote management hardware and/or firmware microcontroller.
[0051] Figure 6 is a block diagram illustrating an embodiment of a process to use a web application to establish a two-way connection with the remote management hardware and/or firmware. At 612, at the path labeled "1," an application is served by remote management hardware and/or firmware 602 from web storage 604 to a browser 610 running on a remote computer and operated by an administrator. In some embodiments, the application is stored ahead of time on a flash memory as part of the firmware and shipped with it, allowing a manufacturer to customize the application for different types of systems or customers. In some embodiments, remote management hardware and/or firmware is later used to update the application to a newer version. Browser 610 in the illustrated embodiment runs JavaScript code, and at 614 the application running in the browser makes AJAX (Asynchronous JavaScript and XML) calls back to the remote management hardware and/or firmware over the path labeled "2." The AJAX calls are received by the remote management hardware and/or firmware's management API WSMAN server 606. As used herein, WSMAN server 606 provides methods for creating a session and enables a socket to be established between browser 610 and remote management hardware and/or firmware 602. Creating a socket provides the benefit of allowing a fully remote two-way connection between the browser 610 and remote management hardware and/or firmware 602. Having established a socket, the application at 616 makes web socket calls to KVM over the path labeled "3." As used herein, KVM refers to Keyboard Video Mouse, a remote desktop solution that allows a remote management console to remotely manage a system using remote management hardware and/or firmware 602 even when the processor and its operating system are not functional. KVM allows remote manipulation of BIOS settings. In some embodiments, the implementation of web sockets in the remote management hardware and/or firmware 602 allows out-of-band management of the processor with a KVM session using a web browser on the remote computer with no additional software installed.
[0052] Figure 7 is a flow diagram illustrating an embodiment of a process to use a web application to establish a two-way connection with remote management hardware and/or firmware. At 702, an application is served by remote management hardware and/or firmware from a web storage to a browser on a remote computer. At 704, the browser, which in this embodiment runs JavaScript code, makes AJAX calls back to the remote management hardware and/or firmware, requesting to create a socket. At 706, the AJAX calls are received by the remote management hardware and/or firmware's management API WSMAN server. At 708, a socket is created between the browser and the remote management hardware and/or firmware. Creating a socket provides the benefit of allowing a fully remote two-way connection between the browser and the remote management hardware and/or firmware. Having established a socket, the application at 710 makes web socket calls to the remote management hardware and/or firmware's KVM. At 712, if there are any more calls to be made, the process returns to 710. Otherwise, the process ends. A rough console-side sketch of this handshake follows.
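The console side of the handshake of Figures 6 and 7 might look like the following Python sketch, using the standard-library urllib for the session-creating call and the third-party websockets package for the two-way channel. The /wsman and /kvm endpoint paths, the JSON payloads, and the session field names are invented for illustration; a real deployment would follow the firmware's actual WS-Management schema and authenticate the session.

```python
# Rough sketch of the two-way connection flow of Figures 6 and 7, seen from
# the management console. Endpoint paths and payload shapes are hypothetical.
import asyncio
import json
import urllib.request

import websockets  # third-party: pip install websockets

def create_session(host: str) -> str:
    # Steps 704-708: an HTTP call to the management server creates a session.
    req = urllib.request.Request(
        f"http://{host}/wsman",
        data=json.dumps({"op": "create-session"}).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["session_id"]

async def kvm_channel(host: str, session_id: str) -> None:
    # Step 710: with the socket established, exchange KVM messages both ways.
    async with websockets.connect(f"ws://{host}/kvm?session={session_id}") as ws:
        await ws.send(json.dumps({"type": "key", "code": "F2"}))  # e.g. enter BIOS setup
        frame = await ws.recv()  # screen update pushed by the firmware
        print(f"received {len(frame)} bytes of KVM data")

if __name__ == "__main__":
    session = create_session("192.0.2.10")  # documentation-range address
    asyncio.run(kvm_channel("192.0.2.10", session))
```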
[0053] Figure 8 is an embodiment of a process of using a web browser to remotely manage each computer in a cloud of computers. As shown, a remote computer 802, operated for example by an administrator, runs a web browser application to administer a cloud of computers 806, each of the computers incorporating embodiments of remote management using remote management hardware and/or firmware as disclosed herein. The administrator uses a browser to administer an unlimited number of computers in the cloud. Furthermore, in some embodiments, the computers in cloud 806 implement the remote management embodiments disclosed herein and are therefore ready to use as soon as they are "out of the box." The administrator uses the browser to perform management operations and does not load or utilize any third party software.
[0054] Accordingly, some embodiments offer the benefit of an out-of-the-box experience, insofar as computers incorporating the disclosed embodiments are administered and managed out-of-the-box using a web browser running on a remote computer. The disclosed embodiments allow computers to be updated, reconfigured, internationalized, and branded remotely using a web browser. The disclosed embodiments also enable a real-time, two-way socket to be established using a web browser on a remote computer.
[0055] Examples
[0056] Example 1 provides a system including a microcontroller to configure a processor, the microcontroller including a memory; a network interface coupled to the microcontroller, the network interface to send and receive communications with an external device; a non-volatile memory to store computer executable instructions to be executed by the microcontroller; and a power supply to provide power to the microcontroller, the network interface, and the non-volatile memory regardless of the power state of the processor. The microcontroller is further to provide a web server to receive and process Hypertext Transfer Protocol (HTTP) requests from the external device.
[0057] Example 2 includes the subject matter of example 1. In this example, the HTTP requests are to instruct the microcontroller to configure the processor.
[0058] Example 3 includes the subject matter of example 1. In this example, the HTTP requests are to specify management operations to be performed by the microcontroller.
[0059] Example 4 includes the subject matter of example 1. In this example, the power state of the processor is sleep.
[0060] Example 5 includes the subject matter of example 1. In this example, the power state of the processor is soft-off.
[0061] Example 6 includes the subject matter of example 1. In this example, the power state of the processor is not active.
[0062] Example 7 includes the subject matter of example 1.
In this example, the web server is to accept and to process a request to push content into the memory, and, in response to at least one request to get a web page, the web server is to dynamically generate a responsive web page reflecting the content stored in the memory.
[0063] Example 8 includes the subject matter of example 7. In this example, the microcontroller further includes a cache memory to store data for use in the dynamically generated responsive web page.
[0064] Example 9 includes the subject matter of example 1. In this example, the web server is to support a web socket bidirectional connection with the remote computer.
[0065] Example 10 includes the subject matter of example 1. In this example, the computer executable instructions are to fit within the amount of memory space contained in the non-volatile memory.
[0066] Example 11 is a system for remotely administering a processor. The system includes a microcontroller to configure the processor, the microcontroller including a memory; a network interface coupled to the microcontroller, the network interface to send and receive communications with an external device; a non-volatile memory to store computer executable instructions to be executed by the microcontroller; and means for providing power to the microcontroller, the network interface, and the non-volatile memory to allow them to operate regardless of the power state of the processor. The microcontroller in this example is to provide a web server to receive and process Hypertext Transfer Protocol (HTTP) requests from the external device.
[0067] Example 12 includes the subject matter of example 11. In this example, the HTTP requests are to instruct the microcontroller to configure the processor.
[0068] Example 13 includes the subject matter of any one of examples 11 to 12. In this example, the HTTP requests are to specify management operations to be performed by the microcontroller.
[0069] Example 14 includes the subject matter of any one of examples 11 to 13. In this example, the power state of the processor is sleep.
[0070] Example 15 includes the subject matter of any one of examples 11 to 13. In this example, the power state of the processor is soft-off.
[0071] Example 16 includes the subject matter of any one of examples 11 to 13. In this example, the power state of the processor is not active.
[0072] Example 17 includes the subject matter of any one of examples 11 to 16. In this example, the web server is to accept and to process a request to push content into the memory, and, in response to at least one request to get a web page, the web server is to dynamically generate a responsive web page reflecting the content stored in the memory.
[0073] Example 18 includes the subject matter of example 17. In this example, the microcontroller further includes a cache memory to store data for use in the dynamically generated responsive web page.
[0074] Example 19 includes the subject matter of any one of examples 11 to 18. In this example, the web server is to support a web socket bidirectional connection with the remote computer.
[0075] Example 20 includes the subject matter of any one of examples 11 to 19. In this example, the computer executable instructions are to fit within the amount of memory space contained in the non-volatile memory.
[0076] Example 21 is a method for remotely managing a processor.
The method includes providing sufficient power to a microcontroller, a network interface, and a flash memory to allow them to operate regardless of the power state of the processor, and using instructions read from a non-volatile memory by the microcontroller to implement a web server to receive and process HTTP requests from a remote computer.
[0077] Example 22 includes the subject matter of example 21. In this example, the HTTP requests are to instruct the microcontroller to configure the processor.
[0078] Example 23 includes the subject matter of any one of examples 21 to 22. In this example, the HTTP requests are to specify management operations to be performed by the microcontroller.
[0079] Example 24 includes the subject matter of any one of examples 21 to 23. In this example, the power state of the processor is sleep.
[0080] Example 25 includes the subject matter of any one of examples 21 to 23. In this example, the power state of the processor is soft-off.
[0081] Example 26 includes the subject matter of any one of examples 21 to 25. In this example, the web server is to accept and to process a request to push content into the memory, and, in response to at least one request to get a web page, the web server is to dynamically generate a responsive web page reflecting the content stored in the memory.
[0082] Example 27 includes the subject matter of any one of examples 21 to 26. In this example, the computer executable instructions are to fit within the amount of memory space contained in the non-volatile memory.
[0083] Example 28 provides a non-transitory computer-readable medium containing computer executable instructions that, when executed by a microcontroller including a memory, the microcontroller coupled to a processor, a network interface, a non-volatile memory, and a power supply, wherein the power supply is to provide sufficient power to the microcontroller, the network interface, and the non-volatile memory to allow the microcontroller to operate regardless of the power state of the processor, cause the microcontroller to perform a process of: reading computer-executable instructions from the non-volatile memory, and executing the instructions to provide a web server to receive and process HTTP requests from an external device.
[0084] Example 29 includes the subject matter of example 28. In this example, the power state of the processor is sleep.
[0085] Example 30 includes the subject matter of example 28. In this example, the power state of the processor is soft-off.
[0086] Example 31 is a method for remotely configuring a processor. The method includes steps for providing sufficient power to a microcontroller, a network interface, and a flash memory to allow them to operate when the power state of the processor is at least one of sleep and soft-off, and using instructions read from the flash memory by the microcontroller to implement a web server to receive and process HTTP requests from a remote computer.
[0087] Example 32 includes the subject matter of example 31. In this example, the HTTP requests are to instruct the microcontroller to configure the processor.
[0088] Example 33 includes the subject matter of any one of examples 31 to 32. In this example, the HTTP requests are to specify management operations to be performed by the microcontroller.
[0089] Example 34 includes the subject matter of any one of examples 31 to 33.
In this example, the web server is to accept and to process a request to push content into the memory, and, in response to at least one request to get a web page, the web server is to dynamically generate a responsive web page reflecting the content stored in the memory.
[0090] Example 35 includes the subject matter of any one of examples 31 to 34. In this example, the computer executable instructions are to fit within the amount of memory space contained in the flash memory.
[0091] The above examples include specific combinations of features. However, the above examples are not limited in this regard and, in various implementations, may include undertaking only a subset of such features, undertaking such features in a different order, undertaking a different combination of such features, and/or undertaking additional features beyond those explicitly listed. For example, all features described with respect to the example methods may be implemented with respect to the example apparatus, the example systems, and/or the example articles, and vice versa.
[0092] Embodiments of the invention may include various steps, which have been described above. The steps may be embodied in machine-executable instructions which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.
[0093] In the foregoing specification, specific exemplary embodiments have been disclosed. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
[0094] Although some embodiments disclosed herein involve data handling and distribution in the context of hardware execution units and logic circuits, other embodiments can be accomplished by way of data or instructions stored on a non-transitory machine-readable, tangible medium, which, when performed by a machine, cause the machine to perform functions consistent with at least one embodiment. In one embodiment, functions associated with embodiments of the present disclosure are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the at least one embodiment. Embodiments of the present invention may be provided as a computer program product or software which may include a machine- or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to the at least one embodiment. Alternatively, steps of embodiments may be performed by specific hardware components that contain fixed-function logic for performing the steps, or by any combination of programmed computer components and fixed-function hardware components.
[0095] Instructions used to program logic to perform the at least one embodiment can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media.
Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the non-transitory computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
A streaming multiprocessor (SM) included within a parallel processing unit (PPU) is configured to suspend a thread group executing on the SM and to save the operating state of the suspended thread group. A load-store unit (LSU) within the SM re-maps local memory associated with the thread group to a location in global memory. Subsequently, the SM may re-launch the suspended thread group. The LSU may then perform local memory access operations on behalf of the re-launched thread group with the re-mapped local memory that resides in global memory.
1. A computer-implemented method for saving an operational state associated with a thread group executing on a processor, the method comprising:
determining that a first portion of memory allocated to a first thread group resides within a first memory region;
allocating a second portion of memory within a second memory region;
copying the first portion of the memory to the second portion of the memory; and
recording a pointer to the second portion of the memory,
wherein a processing engine is configured to perform a memory access operation associated with the first thread group based on the pointer to the second portion of the memory.
2. The computer-implemented method of claim 1, wherein the processing engine includes a table having a plurality of different entries, wherein each entry corresponds to a different thread group, and wherein recording the pointer to the second portion of the memory comprises:
identifying a first entry in the table that corresponds to the first thread group;
updating the first entry to reflect that the first portion of the memory has been copied to the second portion of the memory; and
updating the first entry to indicate the pointer to the second portion of the memory.
3. The computer-implemented method of claim 2, further comprising:
retrieving the first entry in the table;
determining that the first entry reflects that the first portion of the memory has been copied to the second portion of the memory;
accessing the pointer to the second portion of the memory included in the first entry; and
performing a memory access operation associated with the first thread group based on the pointer.
4. The computer-implemented method of claim 1, wherein the first memory region comprises a local memory resource managed by the processing engine.
5. The computer-implemented method of claim 1, wherein the second memory region comprises a global memory resource.
6. The computer-implemented method of claim 5, wherein the second portion of the memory comprises a buffer allocated and managed by a software application executing on the processing engine.
7. The computer-implemented method of claim 6, wherein the pointer to the second portion of the memory corresponds to a reference address within the buffer, and a given thread within the thread group is configured to access a portion of the buffer corresponding to the given thread based on the reference address and based on an offset associated with the given thread.
8. The computer-implemented method of claim 1, wherein the processing engine is included within a series of processing engines residing within a parallel processing unit and configured to execute one or more thread groups simultaneously.
9. A computing device configured to save an operational state associated with a thread group executing on a processing engine, comprising:
the processing engine, configured to:
determine that a first portion of memory allocated to a first thread group resides within a first memory region;
allocate a second portion of memory within a second memory region;
copy the first portion of the memory to the second portion of the memory; and
record a pointer to the second portion of the memory,
wherein the processing engine is configured to perform a memory access operation associated with the first thread group based on the pointer to the second portion of the memory.
10. The computing device of claim 9, wherein the processing engine includes a table comprising a plurality of different entries, wherein each entry
corresponds to a different set of threads, and said processing engine records a second to said memory by the following step Part of the pointer:Identifying a first entry in the table that corresponds to the first thread group;Updating the first entry to reflect that the first portion of the memory is copied to the storageThe second part of the device;An entry indicating the pointer to the second portion of the memory is updated.
Techniques for saving and restoring the operational state of a thread group

Technical Field

The present invention generally relates to single instruction, multiple data (SIMD) processing and, more particularly, to techniques for saving and restoring the operational state of a thread group.

Background

In a conventional SIMD architecture, a parallel processing unit (PPU) can execute multiple thread groups simultaneously, with each thread within a group executing the same instructions on different portions of the input data. A given thread typically relies on various memory resources when executing instructions, including local memory, shared memory, registers, and the like. The state of these memory resources is referred to as the "operational state" of the thread.

In some circumstances, the PPU may save the operational state of a given thread group in order to reallocate the memory resources consumed by those threads to another thread group. When such a situation occurs, a conventional PPU simply copies the operational state of each thread in the thread group to memory. The PPU can later reinitialize the thread group by copying the operational state of each thread from memory back to the corresponding memory resources. With this approach, the PPU can "pause" a thread group mid-execution in order to launch another thread group that consumes the same resources as the "suspended" thread group.

However, this approach is problematic because the speed at which a given thread's operational state can be copied depends on the size of that operational state. When a given thread group includes a large number of threads and the operational state of each thread is relatively large, copying the operational state of every thread within the thread group may consume a significant amount of computing resources.
Consequently, the total processing throughput of the PPU may drop dramatically. Accordingly, what is needed in the art is a more efficient technique for saving and restoring the operational states associated with different thread groups in a parallel processing system.

Summary of the Invention

One embodiment of the present invention sets forth a computer-implemented method for saving an operational state associated with a thread group executing on a processor, comprising determining that a first portion of memory allocated to a first thread group resides within a first memory region, allocating a second portion of memory within a second memory region, copying the first portion of the memory to the second portion of the memory, and recording a pointer to the second portion of the memory, wherein a processing engine is configured to perform a memory access operation associated with the first thread group based on the pointer to the second portion of the memory.

One advantage of the disclosed technique is that, when restoring the operational state of a thread group, the processing engine is not required to copy the local memory associated with the thread group back to the local memory resource previously associated with that thread group, thereby conserving computing resources associated with the processing engine. A second advantage is that subsequent save operations can reuse the second portion of the memory, avoiding redundant copies during those operations.

Brief Description of the Drawings

So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.

FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the present invention;

FIG. 2 is a block diagram of a parallel processing subsystem for the computer system of FIG. 1, according to one embodiment of the present invention;

FIG. 3 is a block diagram of a portion of a streaming multiprocessor within the general processing cluster of FIG. 2, according to one embodiment of the present invention;

FIG. 4 is a conceptual diagram illustrating the streaming multiprocessor of FIG. 3 in greater detail, according to one embodiment of the present invention;

FIG. 5 is a flow diagram of method steps for saving the operational state of a thread group, according to one embodiment of the present invention; and

FIG. 6 is a flow diagram of method steps for restoring the operational state of a thread group, according to one embodiment of the present invention.

Detailed Description

In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without one or more of these specific details.

System Overview

FIG. 1 is a block diagram illustrating a computer system 100 configured to implement one or more aspects of the present invention. Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 communicating via an interconnection path that may include a memory bridge 105. Memory bridge 105, which may be, e.g., a Northbridge chip, is connected via a bus or other communication path 106 (e.g., a HyperTransport link) to an I/O (input/output) bridge 107. I/O bridge 107, which may be, e.g., a Southbridge chip, receives user input from one or more user input devices 108 (e.g., keyboard, mouse) and forwards the input to CPU 102 via communication path 106 and memory bridge 105.
Parallel processing subsystem 112 is coupled to memory bridge 105 via a bus or second communication path 113 (e.g., a Peripheral Component Interconnect (PCI) Express, Accelerated Graphics Port, or HyperTransport link). In one embodiment, parallel processing subsystem 112 is a graphics subsystem that delivers pixels to a display device 110, which may be any conventional cathode ray tube, liquid crystal display, light emitting diode display, or the like. A system disk 114 is also connected to I/O bridge 107 and may be configured to store content and applications and data for use by CPU 102 and parallel processing subsystem 112. System disk 114 provides non-volatile storage for applications and data and may include fixed or removable hard disk drives, flash memory devices, and CD-ROM (compact disc read-only memory), DVD-ROM (digital versatile disc ROM), Blu-ray, HD-DVD (high definition DVD), or other magnetic, optical, or solid state storage devices.

A switch 116 provides connections between I/O bridge 107 and other components such as a network adapter 118 and various add-in cards 120 and 121. Other components (not explicitly shown), including universal serial bus (USB) or other port connections, compact disc (CD) drives, digital versatile disc (DVD) drives, film recording devices, and the like, may also be connected to I/O bridge 107. The various communication paths shown in FIG. 1, including the specifically named communication paths 106 and 113, may be implemented using any suitable protocols, such as PCI Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol(s), and, as is known in the art, connections between different devices may use different protocols.

In one embodiment, parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). In another embodiment, parallel processing subsystem 112 incorporates circuitry optimized for general-purpose processing, while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, parallel processing subsystem 112 may be integrated with one or more other system elements in a single subsystem, such as joining memory bridge 105, CPU 102, and I/O bridge 107 to form a system on chip (SoC).

It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, may be modified as desired. For example, in some embodiments, system memory 104 is connected to CPU 102 directly rather than through a bridge, and other devices communicate with system memory 104 via memory bridge 105 and CPU 102. In other alternative topologies, parallel processing subsystem 112 is connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105. In still other embodiments, I/O bridge 107 and memory bridge 105 might be integrated into a single chip instead of existing as one or more discrete devices. Large embodiments may include two or more CPUs 102 and two or more parallel processing subsystems 112. The particular components shown herein are optional; for instance, any number of add-in cards or peripheral devices might be supported.
In some embodiments, switch 116 is eliminated, and network adapter 118 and add-in cards 120, 121 connect directly to I/O bridge 107.

FIG. 2 illustrates a parallel processing subsystem 112, according to one embodiment of the present invention. As shown, parallel processing subsystem 112 includes one or more parallel processing units (PPUs) 202, each of which is coupled to a local parallel processing (PP) memory 204. In general, a parallel processing subsystem includes a number U of PPUs, where U ≥ 1. (Herein, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed.) PPUs 202 and parallel processing memories 204 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible fashion.

Referring again to FIG. 1 as well as FIG. 2, in some embodiments, some or all of PPUs 202 in parallel processing subsystem 112 are graphics processors with rendering pipelines that can be configured to perform various operations related to generating pixel data from graphics data supplied by CPU 102 and/or system memory 104 via memory bridge 105 and the second communication path 113, interacting with local parallel processing memory 204 (which can be used as graphics memory including, e.g., a conventional frame buffer) to store and update pixel data, delivering pixel data to display device 110, and the like. In some embodiments, parallel processing subsystem 112 may include one or more PPUs 202 that operate as graphics processors and one or more other PPUs 202 that are used for general-purpose computations. The PPUs 202 may be identical or different, and each PPU 202 may have one or more dedicated parallel processing memory device(s) or no dedicated parallel processing memory device(s). One or more PPUs 202 in parallel processing subsystem 112 may output data to display device 110, or each PPU 202 in parallel processing subsystem 112 may output data to one or more display devices 110.

In operation, CPU 102 is the master processor of computer system 100, controlling and coordinating operations of other system components. In particular, CPU 102 issues commands that control the operation of PPUs 202. In some embodiments, CPU 102 writes a stream of commands for each PPU 202 to a data structure (not explicitly shown in either FIG. 1 or FIG. 2) that may be located in system memory 104, parallel processing memory 204, or another storage location accessible to both CPU 102 and PPU 202. A pointer to each data structure is written to a pushbuffer to initiate processing of the stream of commands in the data structure. PPU 202 reads command streams from one or more pushbuffers and then executes commands asynchronously relative to the operation of CPU 102. Execution priorities may be specified for each pushbuffer by an application program via device driver 103 to control scheduling of the different pushbuffers.

Referring back now to FIG. 2 as well as FIG. 1, each PPU 202 includes an I/O (input/output) unit 205 that communicates with the rest of computer system 100 via communication path 113, which connects to memory bridge 105 (or, in one alternative embodiment, directly to CPU 102). The connection of PPU 202 to the rest of computer system 100 may also be varied.
In some embodiments, parallel processing subsystem 112 is implemented as an add-in card that can be inserted into an expansion slot of computer system 100. In other embodiments, a PPU 202 can be integrated on a single chip with a bus bridge, such as memory bridge 105 or I/O bridge 107. In still other embodiments, some or all elements of PPU 202 may be integrated on a single chip with CPU 102.

In one embodiment, communication path 113 is a PCI Express link, in which dedicated lanes are allocated to each PPU 202, as is known in the art. Other communication paths may also be used. An I/O unit 205 generates packets (or other signals) for transmission on communication path 113 and also receives all incoming packets (or other signals) from communication path 113, directing the incoming packets to appropriate components of PPU 202. For example, commands related to processing tasks may be directed to a host interface 206, while commands related to memory operations (e.g., reading from or writing to parallel processing memory 204) may be directed to a memory crossbar unit 210. Host interface 206 reads each pushbuffer and outputs the command stream stored in the pushbuffer to a front end 212.

Each PPU 202 advantageously implements a highly parallel processing architecture. As shown in detail, PPU 202(0) includes a processing cluster array 230 that includes a number C of general processing clusters (GPCs) 208, where C ≥ 1. Each GPC 208 is capable of executing a large number (e.g., hundreds or thousands) of threads concurrently, where each thread is an instance of a program. In various applications, different GPCs 208 may be allocated for processing different types of programs or for performing different types of computations. The allocation of GPCs 208 may vary depending on the workload arising for each type of program or computation.

GPCs 208 receive processing tasks to be executed from a work distribution unit within a task/work unit 207. The work distribution unit receives pointers to processing tasks that are encoded as task metadata (TMD) and stored in memory. The pointers to TMDs are included in the command stream that is stored as a pushbuffer and received by the front end unit 212 from the host interface 206. Processing tasks that may be encoded as TMDs include indices of data to be processed, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). Task/work unit 207 receives tasks from the front end 212 and ensures that GPCs 208 are configured to a valid state before the processing specified by each one of the TMDs is initiated. A priority may be specified for each TMD that is used to schedule execution of the processing task. Processing tasks can also be received from the processing cluster array 230. Optionally, the TMD can include a parameter that controls whether the TMD is added to the head or the tail of a list of processing tasks (or a list of pointers to the processing tasks), thereby providing another level of control over priority.

Memory interface 214 includes a number D of partition units 215 that are each directly coupled to a portion of parallel processing memory 204, where D ≥ 1. As shown, the number of partition units 215 generally equals the number of dynamic random access memories (DRAMs) 220. In other embodiments, the number of partition units 215 may not equal the number of memory devices. Persons of ordinary skill in the art will appreciate that DRAM 220 may be replaced with other suitable storage devices and can be of generally conventional design.
A detailed description is therefore omitted. Render targets, such as frame buffers or texture maps, may be stored across DRAMs 220, allowing partition units 215 to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processing memory 204.

Any one of GPCs 208 may process data to be written to any of the DRAMs 220 within parallel processing memory 204. Crossbar unit 210 is configured to route the output of each GPC 208 to the input of any partition unit 215 or to another GPC 208 for further processing. GPCs 208 communicate with memory interface 214 through crossbar unit 210 to read from or write to various external memory devices. In one embodiment, crossbar unit 210 has a connection to memory interface 214 to communicate with I/O unit 205, as well as a connection to local parallel processing memory 204, thereby enabling the processing cores within the different GPCs 208 to communicate with system memory 104 or other memory that is not local to PPU 202. In the embodiment shown in FIG. 2, crossbar unit 210 is directly connected with I/O unit 205. Crossbar unit 210 may use virtual channels to separate traffic streams between the GPCs 208 and partition units 215.

Additionally, GPCs 208 can be programmed to execute processing tasks relating to a wide variety of applications, including but not limited to linear and nonlinear data transforms, filtering of video and/or audio data, modeling operations (e.g., applying laws of physics to determine position, velocity, and other attributes of objects), image rendering operations (e.g., tessellation shader, vertex shader, geometry shader, and/or pixel shader programs), and so on. PPUs 202 may transfer data from system memory 104 and/or local parallel processing memories 204 into internal (on-chip) memory, process the data, and write result data back to system memory 104 and/or local parallel processing memories 204, where such data can be accessed by other system components, including CPU 102 or another parallel processing subsystem 112.

A PPU 202 may be provided with any amount of local parallel processing memory 204, including no local memory, and may use local memory and system memory in any combination. For instance, a PPU 202 can be a graphics processor in a unified memory architecture (UMA) embodiment. In such embodiments, little or no dedicated graphics (parallel processing) memory would be provided, and PPU 202 would use system memory exclusively or almost exclusively. In UMA embodiments, a PPU 202 may be integrated into a bridge chip or processor chip, or provided as a discrete chip with a high-speed link (e.g., PCI Express) connecting the PPU 202 to system memory via a bridge chip or other communication means.

As noted above, any number of PPUs 202 can be included in a parallel processing subsystem 112. For instance, multiple PPUs 202 can be provided on a single add-in card, or multiple add-in cards can be connected to communication path 113, or one or more of the PPUs 202 can be integrated into a bridge chip. The PPUs 202 in a multi-PPU system may be identical to or different from one another. For instance, different PPUs 202 might have different numbers of processing cores, different amounts of local parallel processing memory, and so on. Where multiple PPUs 202 are present, those PPUs may be operated in parallel to process data at a higher throughput than is possible with a single PPU 202.
Systems incorporating one or more PPUs 202 may be implemented in a variety of configurations and form factors, including desktop, laptop, or handheld personal computers, servers, workstations, game consoles, embedded systems, and the like.

Multiple processing tasks may be executed concurrently on the GPCs 208, and a processing task may generate one or more "child" processing tasks during execution. The task/work unit 207 receives the tasks and dynamically schedules the processing tasks and child processing tasks for execution by the GPCs 208.

FIG. 3 is a block diagram of a streaming multiprocessor (SM) 310 within a GPC 208 of FIG. 2, according to one embodiment of the present invention. Each GPC 208 may be configured to execute a large number of threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the GPCs 208. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons of ordinary skill in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.

Operation of GPC 208 is advantageously controlled via a pipeline manager (not shown) that distributes processing tasks to one or more streaming multiprocessors (SMs) 310, where each SM 310 is configured to process one or more thread groups. Each SM 310 includes an instruction L1 cache 370 that is configured to receive instructions and constants from memory via an L1.5 cache (not shown) within the GPC 208. A warp scheduler and instruction unit 312 receives instructions and constants from the instruction L1 cache 370 and controls a local register file 304 and the SM 310 functional units according to those instructions and constants. The SM 310 functional units include N exec (execution or processing) units 302 and P load-store units (LSUs) 303. The SM functional units may be pipelined, allowing a new instruction to be issued before a previous instruction has finished. Any combination of functional execution units may be provided. In one embodiment, the functional units support a variety of operations, including integer and floating point arithmetic (e.g., addition and multiplication), comparison operations, Boolean operations (AND, OR, XOR), bit-shifting, and computation of various algebraic functions (e.g., planar interpolation, trigonometric, exponential, and logarithmic functions, etc.); and the same functional unit hardware can be leveraged to perform different operations.

As defined previously herein, the series of instructions transmitted to a particular GPC 208 constitutes a thread, and the collection of a certain number of concurrently executing threads across the parallel processing engines (not shown) within an SM 310 is referred to herein as a "warp" or "thread group".
As used herein, "thread group" refers to a group of threads that concurrently execute the same program on different input data, one thread of which is assigned to a different processing engine within SM 310. A thread group may include fewer threads than the number of processing engines within the SM 310, in which case some processing engines will be idle during the period in which the thread group is being processed. The thread group may also include more threads than the number of processing engines within the SM 310, in which case processing will occur in successive clock cycles. Because each SM 310 can concurrently support up to G thread groups, the result is that up to G*M thread groups can be executed in GPC 208 including M stream multiprocessors 310 at any given time.In addition, multiple related thread groups can be active simultaneously within the SM 310 (at different stages of execution). This set of thread groups is referred to herein as a "cooperative thread array" ("CTA") or "thread array." The size of a particular CTA is equal to m*k, where k is the number of concurrent execution threads in the thread group and is typically an integer multiple of the number of parallel processing engines within SM 310, and m is the number of concurrently active thread groups within SM 310. The size of the CTA is typically determined by the programmer and the amount of hardware resources available to the CTA, such as memory or registers.In an embodiment of the invention, it may be desirable to use a PPU 202 of a computing system or other processor to perform general purpose computations using a thread array. Each thread in the thread array is assigned a unique thread identifier ("thread ID") that is accessible to the thread during execution of the thread. Thread IDs, which can be defined as one-dimensional or multi-dimensional values, control aspects of thread processing behavior. For example, the thread ID can be used to determine which portion of the input data set the thread will process and/or determine which portion of the output data set the thread will generate or write.The per-thread instruction sequence can include at least one instruction that defines a cooperative behavior between a representative thread of the thread array and one or more other threads. For example, a per-thread instruction sequence may include an instruction to suspend operation for a representative thread at a particular point in the sequence until an instruction such as one or more times of other threads arrives at the particular point, for representative threads to data Instructions stored in one or more shared memory of other threads, for a representative thread to atomically read and update one or more shared memories that are accessed by other thread-based thread IDs Instructions for data in and so on. The CTA program may also include instructions to calculate an address in the shared memory from which the data will be read, which is a function of the thread ID. By defining the appropriate function and providing synchronization techniques, one thread of the CTA can write data to a given location in shared memory and read data from that location by different threads of the same CTA in a predictable manner. Therefore, any desired mode of data sharing between threads can be supported, and any thread in the CTA can share data with any other thread in the same CTA. 
If there is data sharing among threads of a CTA, its extent is determined by the CTA program; thus, it is to be understood that in a particular application that uses CTAs, the threads of a CTA might or might not actually share data with each other, depending on the CTA program, and the terms "CTA" and "thread array" are used synonymously herein.

SM 310 provides on-chip (internal) data storage with different levels of accessibility. Special registers (not shown) are readable but not writable by LSU 303 and are used to store parameters defining each thread's "position". In one embodiment, the special registers include one register per thread (or per exec unit 302 within SM 310) that stores a thread ID; each thread ID register is accessible only by a respective one of the exec units 302. Special registers may also include additional registers, readable by all threads (or by all LSUs 303) that execute the same processing task represented by a TMD 322, that store a CTA identifier, the CTA dimensions, the dimensions of a grid to which the CTA belongs (or queue position if the TMD 322 encodes a queue task instead of a grid task), and an identifier of the TMD 322 to which the CTA is assigned.

If the TMD 322 is a grid TMD, execution of the TMD 322 causes a fixed number of CTAs to be launched and executed to process the fixed amount of data stored in the queue 525. The number of CTAs is specified as the product of the grid width, height, and depth. The fixed amount of data may be stored in the TMD 322, or the TMD 322 may store a pointer to the data that will be processed by the CTAs. The TMD 322 also stores a starting address of the program that is executed by the CTAs.

If the TMD 322 is a queue TMD, then a queue feature of the TMD 322 is used, meaning that the amount of data to be processed is not necessarily fixed. Queue entries store data for processing by the CTAs assigned to the TMD 322. The queue entries may also represent a child task that is generated by another TMD 322 during execution of a thread, thereby providing nested parallelism. Typically, execution of the thread, or the CTA that includes the thread, is suspended until execution of the child task completes. The queue may be stored in the TMD 322 or separately from the TMD 322, in which case the TMD 322 stores a queue pointer to the queue. Advantageously, data generated by the child task may be written to the queue while the TMD 322 representing the child task is executing. The queue may be implemented as a circular queue so that the total amount of data is not limited to the size of the queue.

CTAs that belong to a grid have implicit grid width, height, and depth parameters indicating the position of the respective CTA within the grid. The special registers are written during initialization in response to commands received via front end 212 from device driver 103 and do not change during execution of a processing task. The front end 212 schedules each processing task for execution. Each CTA is associated with a specific TMD 322 for concurrent execution of one or more tasks. Additionally, a single GPC 208 may execute multiple tasks concurrently.

A parameter memory (not shown) stores runtime parameters (constants) that can be read but not written by any thread within the same CTA (or any LSU 303). In one embodiment, device driver 103 provides these parameters to the parameter memory before directing SM 310 to begin execution of a task that uses the parameters. Any thread within any CTA (or any exec unit 302 within SM 310) can access global memory through memory interface 214.
Portions of the global memory may be stored in the L1 cache 320.

A local register file 304 is used by each thread as scratch space; each register is allocated for the exclusive use of one thread, and data in any portion of the local register file 304 is accessible only to the thread to which the register is allocated. The local register file 304 can be implemented as a register file that is physically or logically divided into P lanes, each having some number of entries (where each entry might store, e.g., a 32-bit word). One lane is assigned to each of the N exec units 302 and the P load-store units (LSUs) 303, and corresponding entries in different lanes can be populated with data for different threads executing the same program to facilitate SIMD execution. Different portions of the lanes can be allocated to different ones of the G concurrent thread groups, so that a given entry in the local register file 304 is accessible only to a particular thread. In one embodiment, certain entries within the local register file 304 are reserved for storing thread identifiers, implementing one of the special registers. Additionally, a uniform L1 cache 375 stores uniform or constant values for each of the N exec units 302 and the P load-store units (LSUs) 303.

Shared memory 306 is accessible to threads within a single CTA; in other words, any location in shared memory 306 is accessible to any thread within the same CTA (or to any processing engine within the SM 310). Shared memory 306 can be implemented as a shared register file or shared on-chip cache memory with an interconnect that allows any processing engine to read from or write to any location in the shared memory. In other embodiments, shared state space might map onto a per-CTA region of off-chip memory and be cached in L1 cache 320. The parameter memory can be implemented as a designated section within the same shared register file or shared cache memory that implements shared memory 306, or as a separate shared register file or on-chip cache memory to which the LSUs 303 have read-only access. In one embodiment, the area that implements the parameter memory is also used to store the CTA ID and task ID, as well as CTA and grid dimensions or queue position, implementing portions of the special registers. Each LSU 303 in SM 310 is coupled to a unified address mapping unit 352 that converts an address provided for load and store instructions specified in a unified memory space into an address in each distinct memory space. Consequently, an instruction may be used to access any of the local, shared, or global memory spaces by specifying an address in the unified memory space.

The L1 cache 320 in each SM 310 can be used to cache private per-thread local data and also per-application global data. In some embodiments, per-CTA shared data may be cached in the L1 cache 320. The LSUs 303 are coupled to shared memory 306 and L1 cache 320 via a memory and cache interconnect 380.

It should be understood that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., SMs 310, may be included within a GPC 208. Further, as shown in FIG. 2, a PPU 202 may include any number of GPCs 208 that are advantageously functionally similar to one another, so that execution behavior does not depend on which GPC 208 receives a particular processing task.
Further, each GPC 208 advantageously operates independently of the other GPCs 208, using separate and distinct processing units and L1 caches to execute tasks for one or more application programs.

Persons of ordinary skill in the art will understand that the architecture described in FIGS. 1-3 in no way limits the scope of the present invention, and that the techniques taught herein may be implemented on any properly configured processing unit, including, without limitation, one or more CPUs, one or more multi-core CPUs, one or more PPUs 202, one or more GPCs 208, one or more graphics or special-purpose processing units, or the like, without departing from the scope of the present invention.

As described above, SM 310 is configured to support the execution of a plurality of related thread groups included within a given CTA, where each thread group includes a plurality of threads. As also described, each thread within a given thread group is configured to perform processing operations using private, per-thread memory resources. The private, per-thread memory resources for the threads within a given thread group, among other memory resources, are referred to collectively herein as the "local memory" associated with that thread group and may include the local register file 304. The local memory for a given thread group resides at a base memory address associated with that thread group, referred to herein as a "local memory base" or, alternatively, an "LMEM base". In one embodiment, local memory resides by default within a hardware-managed memory resource.

In certain situations, SM 310 may suspend the operation of a CTA and then launch a new CTA in place of the suspended CTA, i.e., one that makes use of similar functional resources. In doing so, SM 310 is configured to save the operational state of the suspended CTA by re-mapping the local memory associated with each thread group within the suspended CTA to global memory. SM 310 is also configured to update the LMEM base associated with each such thread group to reflect the location of the re-mapped local memory. SM 310 manages the re-mapping of the local memory associated with the thread groups within the suspended CTA using a pointer table that stores the updated LMEM base for each thread group.

SM 310 is also configured to subsequently restore the operational state of the suspended CTA by re-launching the thread groups within that CTA on the functional units within SM 310. SM 310 is also configured to retrieve the updated LMEM base for each re-launched thread group and to then perform memory access operations for those thread groups using the updated LMEM bases, as described in greater detail below in conjunction with FIGS. 4-6.

Saving and Restoring Thread Group State

FIG. 4 is a conceptual diagram illustrating the SM 310 of FIG. 3 in greater detail, according to one embodiment of the present invention. As shown, SM 310 includes one or more exec units 302 coupled to one or more LSUs 303, similar to those shown in FIG. 3. The exec units 302 and LSUs 303 may be coupled together via a local register file, such as the local register file 304 shown in FIG. 3. Thread groups executing within a CTA on SM 310 may utilize the exec units 302 to perform various processing operations and may utilize the LSUs 303 to perform various memory access operations.

As previously described, SM 310 is configured to suspend a CTA executing on SM 310 and to save the operational state of the suspended CTA by re-mapping the local memory associated with each thread group within the suspended CTA to global memory.
SM 310 is also configured to manage the re-mapped local memory using a pointer table, such as the pointer table 402 shown in FIG. 4. As shown, pointer table 402 includes rows 410, where each row 410 includes information associated with a different thread group. Pointer table 402 also includes columns 404, 406, and 408. Column 404 includes indices for the thread groups configured to execute on SM 310, column 406 includes bits indicating whether the local memory associated with those thread groups has been re-mapped to global memory, and column 408 includes pointers to locations in global memory for thread groups whose local memory has been re-mapped.

For a given row 410, column 404 includes the index of a particular thread group, and column 406 includes a bit indicating whether the local memory associated with that thread group has been re-mapped. When the local memory associated with the thread group has been re-mapped, column 408 includes a pointer to the location in global memory where the re-mapped local memory resides. For example, row 410-2 includes, within column 404, an index "2" that uniquely identifies a particular thread group. Row 410-2 includes a "1" within column 406, indicating that the local memory associated with that thread group has been re-mapped, and includes, within column 408, a pointer, "0X60", to the location in global memory where the re-mapped local memory resides.

When a CTA is suspended and the operational state of that CTA is saved, the LSUs 303 within SM 310 are configured to re-map the local memory associated with each thread group within the CTA to global memory, as previously described. For a given thread group, an LSU 303 is configured to first determine whether the local memory associated with that thread group has already been re-mapped to global memory. In some situations, the local memory for a given thread group may have been previously re-mapped, e.g., in a situation where the operational state of the thread group was previously saved. The LSU 303 may determine whether the local memory associated with the thread group has been re-mapped based on column 406 of the particular row 410 associated with that thread group.

In situations where the local memory associated with the thread group has not yet been re-mapped, the LSU 303 initializes an allocator for the thread group that allocates a portion of global memory for the local memory associated with the thread group. In one embodiment, the allocator is a software program derived from the device driver 103 shown in FIG. 1 and is configured to allocate a portion of a software-managed buffer that resides within global memory. The allocator for the thread group returns a pointer to the allocated portion of global memory. The LSU 303 may then copy the local memory associated with the thread group to that allocated portion of global memory. The LSU 303 is configured to update pointer table 402 for the thread group to indicate that the local memory associated with the thread group has been re-mapped, e.g., by setting the bit in column 406 of the row 410 corresponding to the thread group to "1". The LSU 303 is also configured to set column 408 of the row 410 corresponding to the thread group to include the pointer returned by the allocator. The LSU 303 may repeat this process for each thread group within the CTA.

Subsequently, when the operational state of the CTA is restored, SM 310 is configured to re-launch the thread groups within that CTA on the functional units within SM 310.
For a given re-launched thread group, the LSU 303 is configured to retrieve, from the row 410 associated with that thread group, the pointer to the location in global memory where the re-mapped local memory associated with the thread group resides. When performing load and store operations on behalf of the re-launched thread group, the LSU 303 is configured to perform those memory access operations using the updated LMEM base.

By implementing the techniques described above, SM 310 can suspend a CTA and save the operational state of that CTA, and then restore the operational state of the CTA at a later time. When restoring the operational state of a given thread group within the CTA, SM 310 is not required to copy the local memory associated with that thread group back to the original location of the local memory, thereby reducing the latency with which the operational state of the thread group can be restored. Accordingly, SM 310 can swap the operational states of different CTAs more efficiently than in prior approaches.

FIG. 5 is a flow diagram of method steps for saving the operational state of a thread group, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of FIGS. 1-4, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.

As shown, a method 500 begins at step 502, where one of the LSUs 303 within the SM 310 of FIG. 3 determines whether the local memory associated with a given thread group has already been re-mapped to global memory. The thread group may be included within a CTA executing on SM 310. If the LSU 303 determines that the local memory associated with the thread group has already been re-mapped to global memory, then the method 500 ends. Otherwise, if the LSU 303 determines that the local memory associated with the thread group has not been re-mapped to global memory, then the method 500 proceeds to step 504.

At step 504, the LSU 303 initializes an allocator for the thread group. In one embodiment, the allocator is a software program derived from the device driver 103 shown in FIG. 1. The allocator is configured to allocate a region within global memory capable of storing the contents of the local memory associated with the thread group. In one embodiment, the allocator allocates to the thread group a portion of a buffer that resides in global memory and is associated with SM 310. At step 506, the LSU 303 receives a pointer for the thread group from the allocator. The pointer represents the base address of the portion of global memory allocated to the thread group by the allocator at step 504.

At step 508, the LSU 303 copies the local memory associated with the thread group to the location in global memory corresponding to the pointer returned by the allocator at step 506. At step 510, the LSU 303 updates the pointer table within SM 310, such as the pointer table 402 shown in FIG. 4, to indicate that the local memory associated with the thread group has been re-mapped to global memory. The LSU 303 is also configured to update the pointer table at step 510 to reflect the location in global memory to which the local memory was re-mapped. The method 500 then ends.

By implementing the method 500, the LSU 303 is configured to re-map the local memory associated with a thread group to global memory when the operational state of the CTA that includes the thread group is saved.
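To make the save and restore flows concrete, the following is a minimal C sketch of both paths; it is an illustration only, not the hardware implementation described herein. The struct fields mirror columns 404, 406, and 408 of pointer table 402, while all function and constant names (e.g., lmem_save, alloc_global, LMEM_BYTES) are hypothetical:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define LMEM_BYTES 4096   /* hypothetical size of a thread group's local memory */

/* One row 410 of the pointer table: index (column 404), remapped bit
   (column 406), and pointer into global memory (column 408). */
typedef struct {
    uint32_t group_index;   /* column 404 */
    uint8_t  remapped;      /* column 406: 1 if local memory was re-mapped */
    void    *global_ptr;    /* column 408: location of re-mapped local memory */
} PointerTableRow;

/* Stand-in for the software allocator derived from the device driver:
   reserves a region of a global-memory buffer and returns its base. */
static void *alloc_global(size_t bytes) { return malloc(bytes); }

/* Method 500 (save): re-map one thread group's local memory to global memory. */
void lmem_save(PointerTableRow *row, const void *local_mem) {
    if (row->remapped)                        /* step 502: already re-mapped? done. */
        return;
    void *dst = alloc_global(LMEM_BYTES);     /* steps 504/506: allocate, get pointer */
    memcpy(dst, local_mem, LMEM_BYTES);       /* step 508: copy local -> global      */
    row->global_ptr = dst;                    /* step 510: record the pointer ...    */
    row->remapped = 1;                        /* ... and set the remapped bit.       */
}

/* Method 600 (restore): after the thread group is re-launched, local memory
   accesses are serviced directly from the re-mapped region -- no copy back. */
void *lmem_access(const PointerTableRow *row, size_t offset) {
    /* steps 604/606: fetch the updated LMEM base and apply the per-access offset */
    return (char *)row->global_ptr + offset;
}
```

The point the sketch makes explicit is that the restore path is pure pointer arithmetic against the recorded LMEM base rather than a copy back into the original resource, which is where the latency savings described above come from.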
In practice, an LSU 303 may perform the method 500 for each different thread group within the CTA when the operational state of the CTA is saved.

FIG. 6 is a flow diagram of method steps for restoring the operational state of a thread group, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of FIGS. 1-4, persons of ordinary skill in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.

As shown, a method 600 begins at step 602, where SM 310 re-launches a thread group on the functional units within SM 310. SM 310 may have previously suspended a CTA that includes the thread group and, in doing so, saved the operational state of the thread group by re-mapping the local memory associated with the thread group, e.g., by performing the method 500 described above in conjunction with FIG. 5.

At step 604, one of the LSUs 303 within SM 310 retrieves, from a row within the pointer table 402 shown in FIG. 4, a pointer to the re-mapped local memory associated with the re-launched thread group. At step 606, the LSU 303 performs local memory access operations for the re-launched thread group using an LMEM base updated to reflect the pointer to the re-mapped local memory. The method 600 then ends.

By implementing the method 600, SM 310 is configured to re-launch a thread group within a CTA and to restore the operational state of that thread group by causing the thread group to perform memory access operations using the re-mapped local memory. In practice, different LSUs 303 may perform the method 600 for each different thread group within the CTA when the operational state of the CTA is restored.

In sum, a streaming multiprocessor (SM) included within a parallel processing unit (PPU) is configured to suspend a thread group executing on the SM and to save the operational state of the suspended thread group. A load-store unit (LSU) within the SM re-maps the local memory associated with the thread group to a location in global memory. Subsequently, the SM may re-launch the suspended thread group. The LSU may then perform local memory access operations on behalf of the re-launched thread group using the re-mapped local memory that resides in global memory.

Advantageously, when restoring the operational state of the thread group, the LSU is not required to copy the local memory associated with the thread group back to the SM, thereby conserving computing resources associated with the SM.

One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define the functions of the embodiments, including the methods described herein, and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer, such as compact disc read-only memory (CD-ROM) disks readable by a CD-ROM drive, flash memory, read-only memory (ROM) chips, or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive, or any type of solid-state random-access semiconductor memory) on which alterable information is stored.

The invention has been described above with reference to specific embodiments.
However, persons skilled in the art will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Therefore, the scope of embodiments of the present invention is set forth in the claims that follow.
The invention relates to accelerated network packet processing. Devices and techniques for accelerated packet processing are described herein. The device can match an action to a portion of a network data packet and accelerate the packet-processing pipeline for the network data packet through the machine by processing the action.
1. A method comprising: accessing, by a single root input/output virtualized network interface controller, a set of classification rules, each classification rule in the set of classification rules including a rule capable of specifying at least one header field from at least one L2 (layer 2), L3 (layer 3), or L4 (layer 4) packet header; determining, by the single root input/output virtualized network interface controller, an action for a packet based on at least a portion of the set of classification rules and the packet; sending, by the single root input/output virtualized network interface controller, the action for the packet to a virtual switch; and decapsulating, by the single root input/output virtualized network interface controller, an encapsulated virtual extensible local area network (VXLAN) packet from within the packet.

2. The method of claim 1, wherein the set of classification rules is programmed by the virtual switch external to the single root input/output virtualized network interface controller.

3. The method of claim 2, wherein sending, by the single root input/output virtualized network interface controller, the action for the packet to the virtual switch comprises sending via a single root input/output virtualized physical function of the single root input/output virtualized network interface controller.

4. The method of claim 2, wherein sending, by the single root input/output virtualized network interface controller, the action for the packet to the virtual switch comprises appending the action to the packet.

5. The method of claim 2, further comprising storing, by the single root input/output virtualized network interface controller, the set of classification rules.

6. The method of claim 5, wherein storing, by the single root input/output virtualized network interface controller, the set of classification rules comprises storing, by the single root input/output virtualized network interface controller, the set of classification rules in a ternary content addressable memory (TCAM).

7. The method of claim 2, wherein the virtual switch comprises an Open Virtual Switch (OVS).

8. The method of claim 2, wherein the virtual switch switches packets between virtual machines.

9. The method of claim 2, wherein the virtual switch switches packets between containers.

10. The method of claim 2, wherein the classification rules comprise rules expressed in a P4 (Programming Protocol-Independent Packet Processors) language.

11. The method of claim 2, wherein each classification rule in the set of classification rules includes a rule capable of specifying at least one header field including a virtual local area network (VLAN) field.

12. The method of claim 2, wherein each classification rule in the set of classification rules includes a rule capable of specifying at least one header field including a Multiprotocol Label Switching (MPLS) field.

13. The method of claim 2, further comprising receiving, by the virtual switch, the action for the packet from the single root input/output virtualized network interface controller, and performing virtual switching of the packet based on the action.

14. A network interface controller comprising: a first interface capable of receiving a packet; a second interface capable of coupling the network interface controller to a multi-core processor system; and circuitry to: access a set of classification rules, each classification rule in the set of classification rules including a rule capable of specifying at least one header field from at least one L2 (layer 2), L3 (layer 3), or L4 (layer 4) packet header; determine an action for the packet based on at least a portion of the set of classification rules and the packet, wherein the action includes decapsulating an encapsulated virtual extensible local area network (VXLAN) packet from within the packet based on programming of the network interface controller by a virtual switch; and perform the action.

15. The network interface controller of claim 14, wherein the set of classification rules is to be programmed by a virtual switch external to the network interface controller.

16. The network interface controller of claim 14, wherein the network interface controller comprises an Ethernet controller.

17. The network interface controller of claim 14, wherein the network interface controller comprises a network interface controller with single root input/output virtualization capabilities.

18. The network interface controller of claim 17, wherein the circuitry includes circuitry to send the action for the packet to the virtual switch via a single root input/output virtualized physical function of the network interface controller.

19. The network interface controller of claim 14, wherein the circuitry includes circuitry to append the action to the packet.

20. The network interface controller of claim 14, wherein the circuitry includes circuitry to include the action in the packet.

21. The network interface controller of claim 14, wherein the circuitry includes circuitry to store the set of classification rules.

22. The network interface controller of claim 21, wherein the circuitry includes circuitry to store the set of classification rules in a ternary content addressable memory (TCAM).

23. The network interface controller of claim 14, wherein the virtual switch comprises an Open Virtual Switch (OVS).

24. The network interface controller of claim 14, wherein the classification rules comprise rules expressed in a P4 (Programming Protocol-Independent Packet Processors) language.

25. The network interface controller of claim 14, further comprising a processor to execute instructions.

26. The network interface controller of claim 25, further comprising a memory.

27. A multi-core server comprising: a multi-core processor capable of implementing a virtual switch that programs a network interface controller with classification rules, each of which is capable of specifying at least one header field from at least one L2 (layer 2), L3 (layer 3), or L4 (layer 4) packet header; and a network interface controller coupled to the multi-core processor, the network interface controller including circuitry to: access the classification rules; determine an action for a packet based on at least a portion of the classification rules and the packet, wherein the action includes decapsulating an encapsulated virtual extensible local area network (VXLAN) packet from within the packet based on programming of the network interface controller by the virtual switch; and perform the action.

28. The server of claim 27, wherein the network interface controller comprises an Ethernet controller.

29. The server of claim 27, wherein the network interface controller comprises a network interface controller with single root input/output virtualization capabilities.

30. The server of claim 27, wherein the circuitry includes circuitry to send the action for the packet to the virtual switch, via a single root input/output virtualized physical function of the network interface controller, for virtual switching of the packet.

31. The server of claim 27, wherein the circuitry includes circuitry to associate the action with the packet, and the virtual switch is further to access the action.

32. The server of claim 31, wherein the circuitry includes circuitry to append the action to the packet.

33. The server of claim 31, wherein the circuitry includes circuitry to store the set of classification rules.

34. The server of claim 33, wherein the circuitry includes circuitry to store the set of classification rules in a ternary content addressable memory (TCAM).

35. The server of claim 31, wherein the virtual switch comprises an Open Virtual Switch (OVS).

36. The server of claim 31, further comprising software instructions for the virtual switch.

37. The server of claim 31, wherein the classification rules comprise rules expressed in a P4 (Programming Protocol-Independent Packet Processors) language.

38. A computer program product disposed on a non-transitory computer readable medium comprising instructions to cause a network interface controller to: access a set of classification rules, each classification rule in the set of classification rules including a rule capable of specifying at least one header field from at least one L2 (layer 2), L3 (layer 3), or L4 (layer 4) packet header; decapsulate an encapsulated virtual extensible local area network (VXLAN) packet from within the packet; determine an action for the packet based on at least a portion of the set of classification rules and the packet; and send the action for the packet to a virtual switch, via a single root input/output virtualized physical function of the network interface controller, for virtual switching of the packet.

39. The computer program product of claim 38, the set of classification rules to be programmed by the virtual switch external to the network interface controller.
Accelerated network packet processing

This application is a divisional application; the title of the parent application is "Accelerated network packet processing," the filing date is November 17, 2016, and the application number is 201680075637.4. This application claims the benefit of priority from US Patent Application Serial No. 14/977,810, filed on December 22, 2015, which is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

Embodiments described herein generally relate to the processing of data packets sent or received over a network. Some embodiments relate to hardware acceleration of data packet processing.

BACKGROUND

When combined with specialized hardware functions, hardware switches provide networking capabilities including packet switching, security, deep packet inspection, and other capabilities. More recently, there has been a trend toward providing virtual switches and virtual functions for execution on high-capacity computer architectures. The industry has been working to improve coordination between virtual switches to best utilize the throughput benefits provided by hardware switches along with the flexibility and power of virtual switches.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example but not of limitation, the various embodiments discussed in this document.

FIG. 1 illustrates components of a single system deployed for implementing multiple switching platforms for accelerated network packet processing, in accordance with some embodiments.

FIG. 2 illustrates components of a system deployed for implementing a virtual environment for accelerated network packet processing, in accordance with some embodiments.

FIG. 3 illustrates a control device used to accelerate network processing, in accordance with some embodiments.

FIG. 4 illustrates a method for accelerating network packet processing, in accordance with some embodiments.

FIG. 5 illustrates a system for accelerating network packet processing, in accordance with some embodiments.

DETAILED DESCRIPTION

Some network packet processing solutions have focused on hardware, using top-of-rack (ToR) switches and dedicated-function hardware to provide network functions (including packet switching, security, deep packet inspection, and others). However, customers may experience reduced functionality caused by hardware limitations such as limited memory, limited ternary content-addressable memory (TCAM), a reduced total number of supported data streams, and so on. Furthermore, hardware switches may be overly restrictive in terms of packet parsing, and hardware switches may exhibit a general lack of platform flexibility and configurability.

Therefore, the trend in the industry has been to provide software-defined networking (SDN) for decoupling network functions from the underlying hardware, which can help increase agility and reduce costs. Similarly, network functions virtualization (NFV) can replace fixed-function hardware with an implementation fully deployed in software that runs more cost-effectively on general-purpose, standards-based, high-volume servers. However, such software-defined systems may not take advantage of some desirable properties of hardware switches.

These software-based solutions include various software-based abstractions of the underlying physical architecture.
For example, a virtual switch allows one or more virtual machines (VMs) to communicate with each other. A virtual network function (VNF) may include one or more VMs (running different operating systems (OSs)) executing on one or more high-capacity hardware server fabrics, hardware switches, hardware storage, and/or cloud infrastructure. VNF processing is used to provide specialized network processing in place of custom network equipment.

A wide variety of application programming interfaces (APIs) and software platforms exist in the industry for permitting network automation utilizing virtual switches. A significant benefit of these approaches is the ability to define and customize packet processing rules and corresponding actions at the user level of the operating system (OS). One problem with these approaches is under-utilization of the underlying physical hardware switches, since the bulk of the rule matching and action identification of packet processing is done within the kernel space of the OS and not on the underlying hardware switches, which have significantly better processing throughput.

One approach associated with better utilization of the underlying switch fabric is Single Root Input/Output (I/O) Virtualization (SR-IOV). With SR-IOV, an interface is provided that allows a device adapter to separate hardware resources between packet processing functions. However, this is a binary approach: the feature is either on or off. Furthermore, SR-IOV activation changes the configuration and management of the architecture. Therefore, SR-IOV is a packet processing "offload" solution, not a packet acceleration solution.

Embodiments provide a way to accelerate existing virtual switch solutions to better utilize the underlying physical switch hardware without changing or modifying how existing virtual switch solutions interact with upper layers of device management and configuration.

Packet processing coordinates and manages multiple data plane components in a fine-grained manner to take advantage of desirable characteristics of both hardware switching and SDN/NFV usage. Control plane components include mechanisms to determine where traffic (e.g., data packets or flows) should be directed, while data plane components include mechanisms to forward traffic to those destinations. Embodiments provide control plane methods, systems, and apparatus for accelerating packet processing for multiple data plane components. The data plane components may include, by way of non-limiting example, Data Plane Development Kit (DPDK) components, field-programmable gate array (FPGA) components, and Red Rock Canyon (RRC)/FM10K switch components available from Intel of Santa Clara, CA, among other components. Methods according to various embodiments may coordinate the utilization of these and other components in a dynamic and flexible manner, based on user-defined and user-configured actions, to reduce or minimize energy consumption or to increase speed and performance. In an embodiment, the control plane can offload simple fast packet processing pipelines from software-based switches or virtual switches onto the switch hardware, while providing more complex processing on a CPU-based software data plane.
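Where the control plane makes this split concrete is a per-rule placement decision: rules the switch silicon can express go into the hardware match action tables, and everything else stays in the software data plane. A minimal C sketch of such a decision follows; the rule summary and the two capability predicates are illustrative assumptions for this sketch, not an API defined by this disclosure.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical one-line summary of a match action rule. */
struct rule {
    uint32_t match_fields;   /* bitmask of header fields the rule matches on */
    uint32_t action;         /* forward, drop, decapsulate, mirror, ... */
    bool     needs_stateful; /* connection tracking, deep inspection, ... */
};

/* Capability predicates; stubbed here, since the real answers depend on
 * the switch silicon (TCAM width, supported actions, table sizes). */
static bool hw_supports_match(uint32_t match_fields) { (void)match_fields; return true; }
static bool hw_supports_action(uint32_t action)      { (void)action;       return true; }

enum placement { PLACE_HARDWARE, PLACE_SOFTWARE };

/* Offload simple fast-path rules to the NIC/switch silicon; keep anything
 * the hardware cannot express in the CPU-based software data plane. */
static enum placement place_rule(const struct rule *r)
{
    if (!r->needs_stateful &&
        hw_supports_match(r->match_fields) &&
        hw_supports_action(r->action))
        return PLACE_HARDWARE;
    return PLACE_SOFTWARE;
}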
FIG. 1 illustrates components of a single network 100 deploying multiple switching platforms for implementing methods in accordance with some embodiments. The description of the embodiment presents only those components necessary for understanding the depicted embodiment, such that other components are foreseeable without departing from the teachings herein.

The system 100 implements techniques with enhanced match action acceleration in existing software-based packet processing pipelines (series of packet processing stages) for increased bandwidth, lower latency and jitter, and lower central processing unit (CPU) consumption. Match action processing refers to the mechanism by which the system 100 achieves packet processing acceleration; match action resources found in system 100 perform this acceleration. Existing software-based control planes 101 and 102 (such as OpenStack, OpenDaylight, etc.) do not require modification in order to achieve the acceleration presented herein. Furthermore, existing virtualization constructs, such as paravirtualization (Virtio) 113 and 133 (providing virtual environment management), need not be modified to use the architecture underlying system 100, which includes optimizations for packet acceleration. There are no additional interfaces or control planes that a user needs to learn in order to implement the optimizations presented herein for accelerated packet processing/communication between devices, such as the VMs managed by VNFs 110, 120, and 130.

Specifically, a network interface controller (NIC) 140 provides one or more novel physical functions (PF 141) and one or more virtual functions (VFs 142 and 143). These functions 141-143 are responsive to one or more novel match action (e.g., P4) tables and cooperate with the existing control planes 101 and 102 and their existing API commands to offload software-based match action network packet processing to the underlying switch fabric of NIC 140, performing enhanced network packet processing between the VNFs (110, 120, and 130). This processing occurs below the operating system (OS) kernel stack 115, within the TCAM on the NIC 140, PF 141, and/or VFs 142 and 143.

Functions 141-143 provide: 1) virtual switch (vSwitch) acceleration 118; 2) vSwitch offload 126; 3) VNF acceleration 124; and 4) paravirtualized (VM) acceleration 135. Note that while system 100 depicts functions 141-143 supporting four types of acceleration (118, 124, 126, and 135), this is not necessarily the case in every instance. That is, in other embodiments, the NIC may implement and/or be configured to support one of the four types of acceleration (118, 124, 126, and 135) or various combinations thereof.

The functions (141, 142, and 143) handled by match action processing and the four types of network packet acceleration (118, 124, 126, and 135) supported are as follows (with reference to system 100 of FIG. 1).

Note that VNF 110 is programmed (on the underlying hardware of system 100) with a wide variety of software, such as, by way of example, a network application 111 (performing one or more dedicated network packet processing operations), a Data Plane Development Kit (DPDK) API 112, and virtualization services (paravirtualization 113). VNF 120 includes a network application 121, a DPDK API option 122, and a NIC VF driver 123. VNF 130 includes a network application 131, a DPDK API option 132, and virtualization services (paravirtualization 133).

VNFs 110, 120, and 130 sit above vSwitch 114, and vSwitch 114 sits above kernel stack 115. vSwitch 114 accesses PF 141 for match processing through link 117.
Access to VF 142 by the NIC VF driver 123 of VNF 120 is through link 125 (for direct TLV lookups through VF 142). Access to VF 143 by the paravirtualized instance 133 of VNF 130 is through links 134 and 137 up to 136: the paravirtualized TLV driver lookup table 134 and the matching TLV lookup table 137 go directly to VF 143.

vSwitch acceleration. Acceleration of the packet processing pipeline for vSwitch 114 occurs by employing metadata generated in response to match action tables (e.g., P4 file(s)). P4 files describe the capabilities of a match action pipeline. In one embodiment, the structure of the match action table is a P4 file (defined by the p4.org open-source programming language and format), which is: [Parser]-[Ingress Quality of Service (QoS)]-[Filtering]-[Tunnel/Network Address Translation (NAT)]-[Replication]-[Egress QoS]. In one embodiment, the parser is also in P4 format, which also provides the frame format (packet payload format) for matching.

Additionally, P4 provides mechanisms for defining match files. The non-linear layout of such a file can appear as follows:

L1 | L2 | L3 | L4 | >L5
Source Port | Destination Media Access Control (DMAC) | Source Internet Protocol (IP) (4/6) | Destination L4 | Virtual Network Index (VNI)
Source Virtual Port | Source Media Access Control (SMAC) | Destination IP (4/6) | Source L4 | Network Service Header (NSH) Path
 | Ethernet Protocol | | Transmission Control Protocol (TCP) | NSH Service
 | Outermost Virtual Local Area Network (VLAN) | | L4 TCAM |
 | Second Outermost VLAN | | |
 | Outermost Multiprotocol Label Switching (MPLS) | | |
 | Second MPLS | | |

The parse tree also identifies the format of the frame used for inner-header tunnel encapsulation:

Inner Header | Inner L2 | Inner L3 | Inner L4
VNI | DMAC | Source IP (4/6) | Destination L4
 | SMAC | Destination IP (4/6) | Source L4
 | Ethernet Protocol | |
 | Outermost VLAN | |

P4 also provides a mechanism for specifying supported actions and which tables support which actions. An example, non-limiting set of actions could be as follows:

Basic Actions | Modify Actions | Tunnel/NAT Actions | Replication
Count | Set VLAN/VPRI | Encapsulate Virtual Extensible Local Area Network (VXLAN) | Mirror
Permit/Deny/Drop | Push VLAN | Decapsulate VXLAN | Multicast
Forward to Port | Pop VLAN | Encapsulate NSH | Sample
Forward to Virtual Port | Set Differentiated Services Code Point (DSCP) | Decapsulate NSH | Cross-Port Propagation
Route | Set_soft_id | |

vSwitch acceleration using metadata. The host Ethernet controller (e.g., NIC 140) first processes a received frame on the system 100, and then passes the processed frame up through the various software layers of the system 100 for further processing. On the receive path, the Ethernet controller can preprocess the frame and associate it with additional frame metadata. The semantics of this preprocessing are based on match actions. The following examples illustrate a process by which the pipeline processing of vSwitch 114 (such as Open vSwitch (OvS)) is accelerated using the embodiments presented herein.

TCAM pre-classification. The host controller pushes the received packet frame to the TCAM in the pipeline. The vSwitch 114 programs a set of match rules into the TCAM (using an existing vSwitch 114 API, such as those associated with OvS). This results in a certain amount of metadata being set, including the results of the TCAM lookup. On a match, PF 141 appends the result (extra metadata, carried either in an encapsulating VLAN or in the soft_id_value) to the received frame. A sketch of programming such a rule follows below.
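A hedged C sketch of the TCAM pre-classification just described: a masked 5-tuple entry is written to the NIC so that a hit attaches a soft_id result to the frame. The structures and the nic_tcam_program() call are invented for illustration; a real vSwitch would program this through its existing APIs (e.g., those of OvS).

#include <stdint.h>
#include <string.h>

/* Masked 5-tuple, stored in the TCAM as a value/mask pair. */
struct tcam_key {
    uint32_t src_ip, dst_ip;
    uint16_t l4_src, l4_dst;
    uint8_t  proto;
};

struct tcam_rule {
    struct tcam_key value;
    struct tcam_key mask;   /* set bits participate in the match */
    uint16_t soft_id;       /* result the hardware appends to matching frames */
};

/* Hypothetical driver entry point; a prototype stands in for the NIC. */
int nic_tcam_program(int port, const struct tcam_rule *r);

/* Example: tag all TCP traffic to 10.0.0.0/8 with soft_id 7, so the
 * software pipeline can skip its own TCAM lookup for these frames. */
int preclassify_example(int port)
{
    struct tcam_rule r;
    memset(&r, 0, sizeof(r));
    r.value.dst_ip = 0x0A000000u;  /* 10.0.0.0 */
    r.mask.dst_ip  = 0xFF000000u;  /* /8 */
    r.value.proto  = 6;            /* TCP */
    r.mask.proto   = 0xFF;
    r.soft_id      = 7;
    return nic_tcam_program(port, &r);
}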
The software pipeline uses this result, embedded in the metadata of the received frame, and can avoid having to implement the TCAM lookup within the software pipeline. In one embodiment, and in the case of OvS 114, an enhancement patch for OvS 114 detects the added metadata in the packet. If no metadata is present in the packet header, OvS 114 continues with its normal processing path. However, when metadata is present in the packet header, OvS 114 skips 117 its typical software-based TCAM lookup processing within its pipeline (PF 141 having previously appended the TCAM lookup results on NIC 140, before OvS 114 processes the packet).

Tunnel decapsulation. Host controller 140 receives the frame, and vSwitch 114 programs rules that map the outer VXLAN, VXLAN Generic Protocol Extension (GPE)+NSH, Generic Network Virtualization Encapsulation (Geneve), or Network Virtualization using Generic Routing Encapsulation (NVGRE) header to metadata. In one embodiment, PF 141 processes these rules to: 1) match rules on the outer headers (L2, L3, VNI, and service headers); 2) decapsulate matched outer headers; and 3) add additional metadata signaling the removed header. The pipeline process detects the removed outer header and processes the inner header accordingly. In the case of OvS 114, PF 141 uses the decapsulation process in conjunction with the TCAM pre-classification process to provide metadata about the outer header and metadata about the rules matched in the TCAM. When used in combination, the TCAM pre-classification rules are applied to the inner headers. A receive-path sketch combining both mechanisms follows below.
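On the software side, the receive path can consult the NIC-supplied metadata and skip the stages the hardware already performed, as described above for both TCAM pre-classification and tunnel decapsulation. A minimal sketch with assumed field names; the helper prototypes stand in for the surrounding OvS-style pipeline.

#include <stdbool.h>
#include <stdint.h>

/* Per-frame metadata as the NIC might deliver it; field names are
 * assumptions for this sketch. */
struct frame_meta {
    bool     tcam_hit;      /* hardware pre-classification matched */
    uint16_t soft_id;       /* TCAM lookup result */
    bool     outer_removed; /* outer tunnel header was decapsulated */
    uint32_t outer_vni;     /* VNI recovered from the removed header */
};

struct frame; /* opaque payload handle */

/* Prototypes standing in for the surrounding software pipeline. */
void software_tcam_lookup(struct frame *f, uint16_t *soft_id_out);
void software_decap(struct frame *f, uint32_t *vni_out);
void process_inner(struct frame *f, uint16_t soft_id, uint32_t vni);

void rx_frame(struct frame *f, const struct frame_meta *m)
{
    uint16_t soft_id = 0;
    uint32_t vni = 0;

    if (m->tcam_hit)
        soft_id = m->soft_id;              /* reuse the hardware result */
    else
        software_tcam_lookup(f, &soft_id); /* normal software path */

    if (m->outer_removed)
        vni = m->outer_vni;                /* hardware already decapsulated */
    else
        software_decap(f, &vni);

    process_inner(f, soft_id, vni);        /* rules now apply to inner headers */
}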
vSwitch tap. A tap interface can be used to monitor packet processing by copying some or all of the frames associated with an interface and sending the copies to a different location. This unduly stresses the software and hardware resources of the system 100 and is intrusive to vSwitch 114 processing, so the teachings presented herein are particularly beneficial in this scenario. The system 100 implements the copying as a function (141-143) process within the NIC 140, using match rules mapped to mirror actions in the match action table. In one embodiment, the vSwitch 114 sends the frame, and the accelerated multicast replication, tunnel encapsulation, and monitoring in the pipeline occur after the frame is sent; in this way, packet acceleration is performed during transmission of the packet.

vSwitch offload. vSwitch offload 126 is an extension of vSwitch acceleration 118 that applies when the following conditions are true for a given set of traffic (network packets handled by system 100):

1) The acceleration pipeline has a direct connection 125 to the VM (associated with the traffic managed within VNF 120); and

2) The acceleration pipeline has the capability to fully handle the traffic (otherwise handled by the vSwitch 114 data plane) using all of the same rules.

Although identified as "offload," rules within vSwitch 114 never leave vSwitch 114; rather, processing pushes copies of the rules into the acceleration pipeline and pushes statistics for those rules back to vSwitch 114, so that the rules are retained in the software tables. A common configuration in which vSwitch offload 126 can occur is when OvS 114 is used to enforce rules applied by separate control plane software layers, e.g., OpenStack and OpenDaylight (third-party APIs 101 and 102). The control flow is as follows:

1) The tunnel manager enumerates the virtual switches (multiple instances of vSwitch 114 are not shown in FIG. 1) and creates tunnels between them in a full mesh topology (each vSwitch connected to every other vSwitch). Note that, in some embodiments, creation of a tunnel occurs only when the vSwitches need to talk to each other (lazy handling).

2) When two VMs/containers want to connect to each other (from VNFs 110-130), forwarding rules are populated into the VMs'/containers' corresponding vSwitches. Some systems populate these rules via virtual L2 learning; other systems specify the rules directly from a centralized controller.

In addition to tunneling rules (as discussed above), vSwitch 114 may implement access control lists (ACLs), service function classification, service function forwarding, basic ingress QoS, and connection tracking.

VNF acceleration. In both vSwitch acceleration 118 and vSwitch offload 126, vSwitch 114 executes in host system 100 and has full control over the underlying devices associated with system 100 for acceleration processing. Here, an attached VM/container does not have any control over any kind of acceleration "from the network," and has no control over what happens inside vSwitch 114. VNF 120 uses the same semantics as vSwitch 114 for discovering which tables are available, how many entries are available in each table, and which match action rules to apply. However, VNF 120 is constrained in the following ways:

1) Host system 100 must trust VNF 120 for acceleration (otherwise, host system 100 cannot grant requests from VNF 120).

2) Application of the VNF's rules occurs only on traffic sent to or received from VNF 120; VNF acceleration 124 therefore focuses on grooming traffic before it is sent to VNF 120.

3) VNF 120 has no visibility beyond its own interface. Therefore, VNF 120 cannot add rules that affect traffic to/from other VNFs (110 and/or 130), PF 141, VFs 142 and 143, or the physical uplink ports.

4) VNF 120 may not have the same set of actions available to it. That is, available functions are generally limited to VNF 120-related functions such as ACLs (drop, count, and police), directing packets to queues, prioritizing packets into queues, and marking packets with metadata for preprocessing.

In one embodiment, a VNF 120 has multiple queues associated with it. The VNF 120 is programmed in the following manner (using, e.g., the Data Plane Development Kit (DPDK) 122 API; a configuration sketch follows this list):

1) Configure default Receive Side Scaling (RSS) rules to spread traffic across the multiple queues. Here, VNF 120 may associate one CPU core per queue to scale packet processing across multiple flows.

2) Configure a set of FlowDirector® (Intel® packet-steering product) rules that act as exceptions to the default RSS spreading rules, placing specific flow types and megaflows into specific queues or giving them specific priorities. This helps handle traffic that cannot be spread efficiently with RSS, as well as high-priority traffic.

3) Configure a set of filtering rules that drop or police traffic. This can protect the VNF 120 from unwanted traffic, or from receiving traffic at too high a rate.

4) In a manner similar to vSwitch acceleration 118, VNF 120 may also correlate 16-bit software identifiers (IDs) on flows based on match action rules. This can speed up certain forwarding paths within VNF 120 (such as TCAM processing).
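The default RSS spreading in item 1 of this list corresponds to ordinary DPDK multi-queue configuration. The following sketch shows roughly what that looks like against the real DPDK ethdev API; error handling is trimmed, and the symbol names follow older DPDK releases (newer releases use RTE_ETH_-prefixed equivalents).

#include <rte_ethdev.h>

int configure_rss(uint16_t port, uint16_t nq, struct rte_mempool *mp)
{
    struct rte_eth_conf conf = {0};
    int socket = rte_eth_dev_socket_id(port);

    conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
    conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP;

    /* nq receive queues (one per core), a single transmit queue. */
    if (rte_eth_dev_configure(port, nq, 1, &conf) < 0)
        return -1;

    for (uint16_t q = 0; q < nq; q++)
        if (rte_eth_rx_queue_setup(port, q, 512, socket, NULL, mp) < 0)
            return -1;

    if (rte_eth_tx_queue_setup(port, 0, 512, socket, NULL) < 0)
        return -1;

    return rte_eth_dev_start(port);
}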
Paravirtualized acceleration. Paravirtualized acceleration 135 is a variant of vSwitch offload 126: it still requires that the hardware of system 100 be able to fully forward and process frames, but there is no direct connection between VNF 130/VM and the underlying hardware (134 and 137). Instead, VNF 130 interfaces with a paravirtualized driver 136 (via software). In this scenario, there is a software loop (this could be OvS using a DPDK netdev, or some other entity) that copies frames to/from VF 143 and the paravirtualized queues within VNF 130/VM. The scenario presented here, utilizing OvS implemented with a DPDK netdev, is as follows. A discussion of paravirtualized acceleration is provided with reference to FIG. 2. Although the embodiment of FIG. 2 is within the context of paravirtualization, other embodiments may be deployed utilizing any available virtualization management system.

FIG. 2 illustrates components of a system 200 deployed for implementing a virtual environment (such as paravirtualization) that accelerates inter-device communication, according to some embodiments. The description of the embodiment presents only those components necessary for understanding the depicted embodiment, such that other components are foreseeable without departing from the teachings herein.

Again, system 200 illustrates a number of VNFs (210, 220, and 230), each with a network application (211, 221, and 231), a DPDK options API (212, 222, and 232), and an instance of paravirtualization (virtualization management services 213, 223, and 233). Each paravirtualized instance (213, 223, and 233) connected to the corresponding vHost (241, 242, and 243) has the capability for paravirtualized TLV lookup via 214, 224, and 234. The vHosts (241, 242, and 243) reside in the OS with vSwitch 240 and services provided by DPDK 244. Kernel stack 250 resides in the OS under vSwitch 240 and DPDK 244. Kernel stack 250 has direct access to PF 261, and DPDK 244 and vSwitch 240 have direct access to VFs 262-265. Matching TLV lookup configuration happens through 266-268. Link 251 provides offload operations to PF 261, which is part of NIC 260 along with VFs 262-265.

Under OvS, some number N of DPDK netdevs (Linux virtual network devices, providing functions for obtaining state and capabilities) are instantiated, each corresponding to a VF within the hardware (such as VFs 262-265). The arrangement of ports under OvS then looks like this:

1) PF ports are attached as regular, non-DPDK accelerated ports to the OvS kernel datapath. These interfaces are used when a stream needs to be processed by the kernel (requiring TCP, IP tables, etc.).

2) Default-rule VF(s): between 0 and 4 VFs are reserved as the hardware path for sending frames that the hardware cannot fully forward to the DPDK userspace pipeline for processing.
These VFs have functionality similar to the PF ports, except that the packet processing data plane is in DPDK (higher performance, but without the kernel stack). In one embodiment, if the underlying Red Rock Canyon (RRC) switch components available from Intel® of Santa Clara, CA can support 50G bandwidth on a VF, additional bandwidth can be allocated by connecting multiple VFs to the paravirtualized queues.

3) Paravirtualized alias VFs: the remaining VFs are used as paravirtualized alias ports and are under the control of the DPDK userspace poll mode driver (PMD).

When a new VNF 130/VM attaches to OvS, it is assigned a paravirtualized alias VF. When this happens, OvS with the DPDK netdev implements a set of "hardware offload virtual (HOV)" paths between its virtual host (vHost) implementation and the paravirtualized alias VF. The software logic proceeds as follows:

1) If all of the rules for packets from this VNF 130/VM have been put into hardware, frames are zero-copied from the vHost directly to the corresponding paravirtualized alias VF.

2) Otherwise, if part of the set of rules for packets from this VNF 130/VM has been put into hardware, the frame is processed through the OvS userspace pipeline and pushed into the corresponding paravirtualized alias VF on completion.

3) Otherwise, the packet is passed through the full OvS packet processing pipeline. Packets can be sent into the network via the PF netdevs (forwarding frames directly out of the port), or via a DPDK VF (in this case picked up by the hardware and either forwarded using the embedded switch below it or pushed into the kernel via the Kernel NIC Interface (KNI) transmit uplink).

4) In the other direction, all packets received on the paravirtualized alias VF are sent directly from the hardware into the software paravirtualized queues (via vHost) on the HOV path (in one embodiment, a third-party application may provide a zero-copy implementation that does so).

In hardware, the reverse path looks similar:

1) If all rules for packets destined to this VNF 130/VM are applied, the frame is forwarded to the VF corresponding to this VM. The PMD picks up the frame and pushes it to the VM via the HOV path.

2) If only a partial set of rules applies to the packet, or if the packet misses in the hardware tables, the frame is forwarded to the default-rule VF for processing. These frames are picked up by the PMD and processed in the DPDK userspace pipeline.

3) Optionally, specific flows requiring kernel processing are steered to the standard kernel-processing PF.

Rules that divide traffic between the paravirtualized alias, default userspace, and kernel data paths must preserve session ordering when programming the hardware. That is, a frame with a particular 5-tuple (destination IP, source IP, L4 destination, L4 source, protocol) is never forwarded via one path while another frame with the same 5-tuple is forwarded via another path. The inverse processing is used on the software side: the vHost arbitrates between frames arriving directly from the paravirtualized alias, the DPDK userspace, and the kernel datapath. Since these are already in session order, only a scheduling function is needed: paravirtualized aliases have the highest processing priority, followed by DPDK userspace, and then the kernel path. A sketch of this per-flow path pinning follows below.
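The path-selection and session-ordering rules above can be pictured as classifying a flow once and then pinning every frame of that 5-tuple to the chosen path. A C sketch under stated assumptions: classify_flow() is a hypothetical stand-in for querying which tables hold the flow's rules, and collision handling in the tiny flow table is omitted.

#include <stdint.h>
#include <stddef.h>

struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t l4_src, l4_dst;
    uint8_t  proto;
};

enum path { PATH_ALIAS_VF, PATH_USERSPACE, PATH_KERNEL };

/* Hypothetical: reports which data path currently holds all of the rules
 * for this flow (hardware alias VF), only part of them (userspace), or
 * whether the flow needs kernel services (TCP state, IP tables, ...). */
enum path classify_flow(const struct five_tuple *ft);

#define FLOW_TABLE_SZ 4096
static enum path pinned[FLOW_TABLE_SZ];
static uint8_t   occupied[FLOW_TABLE_SZ];

/* FNV-1a over the key; any stable hash works. Callers should zero the
 * key first so padding bytes hash deterministically. */
static size_t flow_hash(const struct five_tuple *ft)
{
    const uint8_t *p = (const uint8_t *)ft;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < sizeof(*ft); i++)
        h = (h ^ p[i]) * 16777619u;
    return h % FLOW_TABLE_SZ;
}

/* A flow is classified once and then pinned, so frames of one 5-tuple
 * never alternate between paths and session ordering is preserved. */
enum path dispatch(const struct five_tuple *ft)
{
    size_t slot = flow_hash(ft);
    if (!occupied[slot]) {
        pinned[slot] = classify_flow(ft);
        occupied[slot] = 1;
    }
    return pinned[slot];
}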
In this scenario, VNF 130 may request acceleration by sending type-length-value (TLV) records via paravirtualization. If the VNF 130 has an aliased VF, these TLVs are converted into VNF acceleration requests through the VF mailbox (a sketch of such a request follows below). For this reason, and in one embodiment, the configuration provides support for multi-queue paravirtualization, as this allows VNF 130 to enable RSS and FlowDirector® to spread traffic to multiple queues within the VM. For VNFs using multiple cores, this allows the hardware to spread traffic to multiple cores, so that the connection between VNF 130 and the VF appears to be a direct connection.
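A hedged sketch of what such a paravirtualized TLV acceleration request might look like; the type codes, struct layout, and mailbox call are invented for illustration, as this document does not define the message format.

#include <stdint.h>

/* Invented type codes for illustration only. */
enum accel_tlv_type {
    ACCEL_TLV_RSS_ENABLE  = 1, /* spread flows across the VM's queues */
    ACCEL_TLV_FLOW_STEER  = 2, /* pin a flow type to a specific queue */
    ACCEL_TLV_FILTER_DROP = 3, /* drop or police unwanted traffic */
};

struct accel_tlv {
    uint16_t type;    /* one of accel_tlv_type */
    uint16_t length;  /* byte length of value[] */
    uint8_t  value[]; /* type-specific payload */
};

/* Hypothetical host-side call: the host validates a TLV arriving from a
 * trusted, aliased VNF and converts it into an acceleration request via
 * the VF mailbox, subject to the constraints listed for VNF acceleration. */
int vf_mailbox_send(int vf_id, const struct accel_tlv *tlv);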
In scenarios where the DPDK or kernel datapath fully handles this traffic, processing by system 100 occurs in one of two ways:

1) Inline: the first data path to process a frame (hardware for packets from the network, software for packets from the vHost) minimally processes the frame and sends it to the main data plane (userspace, kernel, or hardware). This is a very efficient mechanism for choosing between data paths, as it does not "bounce" frames between data paths, but it does require a certain amount of processing in the first data path, which may be redundant overhead relative to the main data path that does the processing.

2) Bounce: in this scenario, software "fast-paths" the frame into the hardware; if the hardware cannot process the frame itself, it loops the frame back to DPDK or the kernel. This has a lower amount of software overhead, but consumes additional Peripheral Component Interconnect Express (PCIe) bandwidth as the frame bounces off the hardware back into the software.

In one embodiment, the above-mentioned techniques are implemented as methods, apparatus, and systems for computing device architectures that implement accelerated inter-device communications or operations, such as on-chip VM communications for a single device architecture. This provides acceleration for one, all, or a combination of: 1) vSwitch acceleration using metadata; 2) vSwitch offload for paravirtualization-connected VMs; 3) vSwitch offload for NSH service chaining; and/or 4) multi-layer traffic pre-classification (VNF acceleration and NSH service chaining). These and other embodiments of accelerating inter-device communications are presented below with reference to FIGS. 3-5.

FIG. 3 illustrates a control device 300 used to accelerate network processing in accordance with some embodiments. The control device 300 includes a memory 301, a processor 302 with instructions 303, a switch interface 304, and one or more data plane interfaces 305. The control device 300 interacts with switching silicon 310 (through the switch interface 304) and with one or more data plane processors 320 (through the data plane interface(s) 305). In one embodiment, the control device 300 is a NIC. In one embodiment, the control device 300 is the NIC 140 of FIG. 1. In one embodiment, the control device 300 is the NIC 260 of FIG. 2. In one embodiment, the control device 300 is integrated in and/or interfaced to a multi-core processor; in one embodiment, the multi-core processor is a server. In one embodiment, the control device 300 operates in multiple heterogeneous and virtualized processing environments with a wide variety of VMs, operating systems, and the like.

The control device 300 includes a processor 302 for performing the functions described herein. It will be appreciated that any or all of the functions performed by processor 302 may be performed using hardware, software, firmware, or any combination thereof, on one or more processing cores (e.g., an Intel® architecture core or a core of the control device 300). In one embodiment, the processor 302 performs the processing described above with respect to PF 141 and VFs 142 and 143 of FIG. 1. In one embodiment, the processor 302 performs the processing described above with respect to PF 261 and VFs 262-265. In one embodiment, the processor 302 matches an action reference from a table to a portion of the data in a network packet.

In one embodiment, the processor 302 is programmed at least in part through an application programming interface (API). In one embodiment, the API is provided in a format supported by one or more of DPDK, OvS, OpenDaylight, and OpenStack. In one embodiment, the processor 302 is programmed to respond to one or more tables or files that identify the actions to which the processor 302 responds for the purpose of performing predefined processing based on the particular identified action. In one embodiment, a user interface allows a user to access the API to populate the table or file with actions. In one embodiment, the file or table is the match action file discussed above with reference to FIGS. 1-2.

The processor 302 may match an action reference from the file/table to a portion of the packet header of a received network packet. The processor 302 cooperates with the memory 301 to process the action identified by the action reference, thereby accelerating the packet processing pipeline for the network packet. In one embodiment, the memory 301 is random access memory (RAM) on the control device 300. A processed action is not necessarily a single operation; rather, an action may trigger the processor 302 to perform a series of predefined operations.

In one embodiment, the processor 302, when processing an action, performs a ternary CAM lookup on certain packet data. This is discussed above with reference to FIGS. 1-2, and relates specifically to acceleration using the packet pre-classification processing performed at the NIC. In one embodiment, the processor 302, when processing an action, decapsulates the outer tunnel encapsulation header of a network packet, removes the header, and adds metadata to the packet indicating that the header has been removed. This process is discussed above with reference to FIG. 1.

In one embodiment, the processor 302, when processing an action, copies the packet payload (frame) from a network packet and then sends the copied payload to a network location that is independent of the resources defined or identified within the packet processing pipeline. This scenario describes network traffic mirroring and monitoring operations with significantly improved processing throughput, as the parsing, processing, and sending of mirrored packets occur on the control device 300 rather than at upper layers within the network architecture (such as the OS layers of specific VMs). A sketch of such a mirror action follows below.

In one embodiment, the processor 302, when processing an action, copies network packet forwarding rules from a first virtual switch to a second virtual switch; this scenario refers to the discussion of vSwitch offload acceleration and FIG. 1.
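The mirror action described above can be sketched in a few lines of C: duplicate the frame and transmit the copy toward the monitoring destination while normal forwarding continues. The frame structure and send_to_port() are stand-ins; real NIC silicon would replicate the frame without a heap copy.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct frame {
    uint8_t *data;
    uint32_t len;
};

void send_to_port(const struct frame *f, int port); /* stand-in for TX */

int mirror_frame(const struct frame *f, int monitor_port, int normal_port)
{
    struct frame copy = { .data = malloc(f->len), .len = f->len };
    if (!copy.data)
        return -1;
    memcpy(copy.data, f->data, f->len);

    send_to_port(&copy, monitor_port); /* tap/monitor location */
    send_to_port(f, normal_port);      /* normal forwarding */
    free(copy.data);
    return 0;
}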
The processor 302 is configured to communicate with the underlying switch silicon 310 via the switch interface 304.

FIG. 4 illustrates a method 400 for accelerating network packet processing in accordance with some embodiments. The method 400 is implemented as executable instructions represented by one or more software modules (hereinafter referred to as a "packet accelerator") and executed by one or more hardware processors from a non-transitory computer-readable storage medium. In one embodiment, executable instructions representing the packet accelerator are stored on a non-transitory computer-readable storage medium and, when executed by one or more computing devices, perform the method 400.

In one embodiment, the packet accelerator is configured for execution as firmware on a NIC. In one embodiment, the packet accelerator is PF 141 of FIG. 1. In one embodiment, the packet accelerator is VF(s) 142 and/or 143 of FIG. 1. In one embodiment, the packet accelerator is PF 261 of FIG. 2. In one embodiment, the packet accelerator is VF(s) 262-265 of FIG. 2. In one embodiment, the packet accelerator is configured to execute on one or more virtual switches, one or more physical switches, one or more device interfaces, one or more virtual device interfaces, and/or one or more systems. In one embodiment, the packet accelerator is configured for execution within one or more independent and virtual environments (executing on system 100 and/or system 200 of FIGS. 1 and 2, respectively). In one embodiment, the packet accelerator is configured for execution by the control device 300 of FIG. 3.

At 410, the packet accelerator matches a portion of a network data packet to an action in a match action table. The processing and file structure of match action files, and matching them to network packets, are discussed above with reference to FIG. 1.

At 420, the packet accelerator accelerates processing of the network packet through the machine by performing the action as part of a packet processing pipeline for the network data packet. That is, action processing accelerates packet processing in a number of ways, which can include: 1) offloading processing from software-based resources to hardware-based resources; 2) reorganizing the processing of software-based resources; and/or 3) reorganizing how software-based resources access hardware-based resources.

According to one embodiment, at 421, the packet accelerator, when processing the action, inserts metadata into the network data packet as an indication that part of the packet processing has been handled. The pre-classification and TCAM processing presented above with reference to FIGS. 1-2 provide a discussion related to this processing. In one embodiment, at 422, the packet accelerator copies a packet frame of the network data packet and sends the copied packet frame to a location independent of the location associated with the packet processing pipeline. This is the packet mirroring case discussed above with reference to FIG. 1. In one embodiment, at 423, the packet accelerator assigns the network data packet to a queue associated with a particular processing core. This may entail the additional configuration dependencies and processing presented above in the discussion of paravirtualized acceleration processing of FIG. 2.
In one embodiment, at 423 and 424, the packet accelerator filters network packets in response to filtering rules. In one embodiment, at 424 and 425, the packet accelerator sets a resource identifier on the network data packet that identifies a resource for processing the network data packet as it passes through the packet processing pipeline. Thus, the tables, metadata, and/or packet processing pipeline control structures provide mechanisms by which different heterogeneous software and/or hardware resources help accelerate network data packets through the packet processing pipeline of the network(s)/device(s); these resources are identified by resource identifiers carried on the network data packets. The packet accelerator is configured to identify the resource identifiers and cause the corresponding resources to process the packets, accelerating pipeline processing of packets through the network/device(s).

In one embodiment, at 426, the packet accelerator performs various alternative processes depending on whether the traffic rules are completely offloaded to hardware or other conditions, as discussed with reference to the paravirtualization scenario of FIG. 2. Thus, the packet accelerator can: 1) zero-copy network data packets directly into an aliased virtualization function when all rules for processing them are in hardware; 2) process frames of the network data packets through the user space of the OS and then push the frames into the aliased virtualization function; or 3) pass the network data packets through the user space pipeline of the OS.

Examples as discussed herein may include, or may operate on, logic or a number of components, modules, or mechanisms. A module is a tangible entity (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged as a module in a specified manner (e.g., internally or with respect to external entities such as other circuits). In an example, one or more processors or one or more computer systems (e.g., a standalone, client, or server computer system) of control device 300, or at least a portion of a computer system, may be configured by firmware or software (e.g., instructions 303 (FIG. 3), an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on at least one machine-readable medium. In an example, when executed by the underlying hardware of a module (e.g., control device 300), the software may include instructions 303 (FIG. 3) to cause the hardware to perform the specified operations.

The term "module" is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform at least part of any operation described herein. Considering examples in which modules are temporarily configured, a module need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
The term "application, process, or service" or variations thereof may be used herein to include routines, program modules, programs, components, etc., broadly, and may be used in various system configurations (including single-processor or multi-processor systems, microprocessor-based electronics, single-core or multi-core systems, combinations thereof, etc.). Thus, the term "application, process or service" may be used to refer to an embodiment of software or hardware arranged to perform at least a portion of any of the operations described herein.Although a machine-readable medium may include a single medium, the term "machine-readable medium" may include a single medium or multiple media (eg, centralized or distributed databases, and/or associated caches and servers).The term "machine-readable medium" may include any one or more of the techniques capable of storing, encoding, or carrying instructions 303 for execution by a machine (eg, control device 300 or any other module) and causing the machine to perform the disclosure Or any medium capable of storing, encoding or carrying the data structures used by or associated with such instructions. In other words, the processor 302 (FIG. 3) may include instructions and thus may be referred to as a computer-readable medium in the context of various embodiments. Other non-limiting examples of machine-readable media may include solid-state memory, and optical and magnetic media. Specific examples of machine-readable media may include: non-volatile memory, such as semiconductor memory devices (eg, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), and flash memory devices ); magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.Further progress through a communications network may be performed using a transmission medium utilizing any of a number of transport protocols such as Frame Relay, Internet Protocol (IP), TCP, User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), etc. The instruction is transmitted or received 303 . Example communication networks may include local area networks (LANs), wide area networks (WANs), packet data networks (eg, the Internet), mobile telephone networks (eg, including code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA) ) and Orthogonal Frequency Division Multiple Access (OFDMA) and cellular networks (such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), CDMA2000 1x* standards and channel access methods for Long Term Evolution (LTE)), plain old Telephony (POTS) networks, and wireless data networks (including, for example, IEEE 802.11 standards (WiFi), IEEE 802.16 standards (WiMax®), and other Institute of Electrical and Electronics Engineers (IEEE) 802 family of standards), peer-to-peer (P2P) networks , or other protocols now known or later developed.The term "transmission medium" should be understood to include any intangible medium capable of storing, encoding or carrying instructions for execution by hardware processing circuitry and including digital or analog communication signals, or other means used to facilitate communication of such software Intangible medium.5 illustrates a system 500 for accelerating network packet processing in accordance with some embodiments. 
The system 500 is shown in a greatly simplified format, with only certain components illustrated that are necessary for understanding the system 500.

In one embodiment, the system 500 is a multi-core server 510. In one embodiment, the multi-core server 510 is configured to perform vSwitch acceleration in the manner discussed above with reference to FIG. 1. In one embodiment, the multi-core server 510 is configured to perform vSwitch offload acceleration in the manner discussed above with reference to FIG. 1. In one embodiment, the multi-core server 510 is configured to perform VNF acceleration in the manner discussed above with reference to FIG. 1. In one embodiment, the multi-core server 510 is configured to perform paravirtualized acceleration in the manner discussed above with reference to FIGS. 1 and/or 2. In one embodiment, the multi-core server 510 is configured to selectively perform network packet acceleration for: vSwitch acceleration, vSwitch offload acceleration, VNF acceleration, and/or paravirtualized acceleration.

The system 500 includes means for matching a portion of a network packet to an action and means for processing the action to accelerate a packet processing pipeline, and, optionally, means for configuring the means for matching and the means for processing.

The means for matching is for matching a portion of a network packet to an action during processing of the network packet. In one embodiment, the means for matching includes a match action table 521 or file, such as the P4 file discussed above with reference to FIG. 1. In one embodiment, the means for matching includes a memory, such as memory 301 of FIG. 3. In one embodiment, the means for matching includes volatile or non-volatile memory on a NIC, such as NIC 140 or 260 of FIGS. 1 and 2, respectively. In one embodiment, the means for matching includes volatile or non-volatile memory accessible on one or more devices representing the system 500. In one embodiment, the means for matching includes a combination of memory and storage accessible on one or more devices representing the system 500.

The means for processing is configured to process the action (obtained from the means for matching). The action is processed to accelerate the packet processing pipeline associated with or assigned to the network packet. In one embodiment, the means for processing is one or more of: one or more device driver interfaces 540, one or more virtual interfaces 540, one or more virtual switches 550, one or more OS kernel processes 560, and/or a NIC with a physical switch 570. In one embodiment, the means for processing is various combinations of the component devices and modules illustrated in FIGS. 1, 2, and/or 3.

In one embodiment, the system 500 includes means for configuring the means for processing with user-defined (or user-customized) actions. In one embodiment, the means for configuring is the API 520. In one embodiment, the API 520 is an instance of a specific configuration of the components illustrated in FIGS. 1 and 2 using APIs provided by OvS, DPDK, OpenStack, OpenDaylight, and/or paravirtualization. In one embodiment, the means for configuring also provides means for configuring the means for matching.
In one embodiment, the API 520 is a means for configuring that allows the creation of actions in the match action table/file 521.

Additional Notes and Examples:

Example 1 includes subject matter (such as a control device, interplane control device, control plane processor, computer device, and/or any other electrical apparatus, device, or processor) including a memory and processing circuitry. The processing circuitry is configured to match an action reference from a table to a portion of data in a network data packet. The processing circuitry is further configured to process the action identified by the action reference, in cooperation with the memory, to accelerate a packet processing pipeline for the network data packet.

In Example 2, the subject matter of Example 1 can optionally include wherein, when the processing circuitry processes the action, the processing circuitry is further configured to perform a ternary CAM lookup on the portion of the data and insert results from the lookup into the network data packet according to a pre-classification of the network data packet.

In Example 3, the subject matter of any of Examples 1-2 can optionally include wherein, when the processing circuitry processes the action, the processing circuitry is further configured to decapsulate an outer tunnel-encapsulated header of the network data packet, remove the tunnel-encapsulated header from the network data packet, and add metadata to the header of the network data packet indicating that the tunnel-encapsulated header was removed from the network data packet.

In Example 4, the subject matter of any of Examples 1-3 can optionally include wherein, when the processing circuitry processes the action, the processing circuitry is further configured to copy a packet payload from the network data packet and send the copied packet payload to a location independent of a location of the packet processing pipeline.

In Example 5, the subject matter of Examples 1-4 can optionally include wherein, when the processing circuitry processes the action, the processing circuitry is further configured to copy data packet forwarding rules from a first virtual switch to a second virtual switch.

In Example 6, the subject matter of any of Examples 1-5 can optionally include wherein the processing circuitry is a physical function integrated into the control device.

In Example 7, the subject matter of any of Examples 1-5 can optionally include wherein the processing circuitry is a virtualized function programmed into the control device.

In Example 8, the subject matter of any of Examples 1-7 can optionally include a data plane interface configured to forward network data packets to one of: an OS kernel stack, a virtual switch, and a device driver.

In Example 9, the subject matter of any of Examples 1-8 can optionally include wherein the control device is a network interface controller (NIC).

In Example 10, the subject matter of any of Examples 1-9 can optionally include wherein the control device is interfaced with and integrated into a multi-core hardware server.

Example 11 includes subject matter such as a machine-readable medium including instructions that, when executed on a machine (such as a control device, interplane control device, control plane processor, computing device, NIC card, etc.), cause the machine to match a portion of a network data packet to an action in a match action table and to
accelerate processing of the network data packet by executing the action as part of a packet processing pipeline for the network data packet.

In Example 12, the subject matter of Example 11 can optionally include wherein the instructions to accelerate further include instructions to insert, into the network data packet, metadata indicating that a portion of the packet processing pipeline was handled when the action was processed.

In Example 13, the subject matter of any of Examples 11-12 can optionally include wherein the instructions to accelerate further include instructions to copy a packet frame of the network data packet and send the copied packet frame to a location independent of a location associated with the packet processing pipeline.

In Example 14, the subject matter of any of Examples 11-13 can optionally include wherein the instructions to accelerate further include instructions to assign the network data packet to a queue associated with a particular processing core of the machine.

In Example 15, the subject matter of Example 14 can optionally include wherein the instructions to accelerate further include instructions to filter network data packets in response to filtering rules.

In Example 16, the subject matter of Example 15 can optionally include wherein the instructions to accelerate further include instructions to set, when the network data packet is processed through the packet processing pipeline, a resource identifier on the network data packet, the resource identifier identifying a resource for processing the network data packet.

Example 17 includes a system (e.g., a server, computer, group of cooperating computers, etc.) comprising means for matching a portion of a network packet to an action, and means for processing the action to accelerate pipeline processing of packets associated with the network packet based on processing the action.

Example 18 includes the subject matter of Example 17, and optionally wherein a network packet is sent from a first virtual machine (VM) to a second VM, each of the VMs executing on the same multi-core server that executes the means for matching and the means for processing.

Example 19 includes the subject matter of any of Examples 16-17, and optionally further includes means for configuring the means for processing with a user-defined action.

Example 20 includes the subject matter of any of Examples 16-17, and optionally wherein the means for processing is one of: a virtual network switch, a hardware switch, a kernel process, a device driver, and a virtualization interface.

The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, examples including only the elements shown or described are also contemplated. Moreover, examples using any combination or permutation of those elements (or one or more aspects thereof) shown or described, either with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein, are also contemplated.

The publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference.
In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) is supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.

In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," "third," and so forth are used merely as labels and are not intended to impose numerical requirements on their objects.

The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with others. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein, as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Methods, systems, and devices related to content-addressable memory for signal development caching are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). The memory device may also include storage, such as a content-addressable memory, configured to store a mapping between addresses of the signal development cache and addresses of the memory array. In various examples, accessing the memory device may include determining and storing a mapping between addresses of the signal development cache and addresses of the memory array, or determining whether to access the signal development cache or the memory array based on such a mapping.
CLAIMS What is claimed is: 1. A method, comprising: receiving, at a memory device comprising a memory array, a read command from a requesting device indicating a first address, the first address associated with an address of the memory array; determining that the first address of the memory array corresponds to a second address, the second address associated with an address of a signal development cache of the memory device; and accessing the signal development cache to retrieve information associated with the read command based at least in part on the determining. 2. The method of claim 1, further comprising: identifying an association between a bitmap corresponding to an address of the signal development cache and a bitmap corresponding to the first address, wherein determining that the first address corresponds to the second address comprises identifying the association. 3. The method of claim 2, wherein the identifying occurs at a content-addressable memory of the memory device. 4. The method of claim 1, wherein accessing the signal development cache to retrieve the information associated with the read command comprises: coupling a plurality of cache elements of the signal development cache with a plurality of sense amplifiers; and sensing, at the plurality of sense amplifiers, logic signals based at least in part on the coupling. 5. The method of claim 1, wherein accessing the signal development cache is performed without accessing the memory array to retrieve the information. 6. The method of claim 1, further comprising: receiving, at the memory device, a second read command from the requesting device indicating a third address of the memory array; determining that the third address does not correspond to an address of the signal development cache of the memory device; and accessing the memory array to retrieve information associated with the second read command. 7. The method of claim 6, wherein accessing the memory array to retrieve the information associated with the second read command comprises: determining a mapping between a plurality of memory cells of the memory array that correspond to the third address and a plurality of cache elements of the signal development cache; storing the mapping between the plurality of memory cells and the plurality of cache elements based at least in part on the determining; transferring the information of the plurality of memory cells to the plurality of cache elements based at least in part on storing the mapping; and sensing, at a plurality of sense amplifiers, logic signals associated with the information and from the cache elements based at least in part on transferring the information to the plurality of cache elements. 8. The method of claim 7, wherein accessing the memory array to retrieve the information associated with the second read command comprises: coupling the plurality of memory cells with the plurality of cache elements based at least in part on determining the mapping between the plurality of memory cells of the memory array that correspond to the third address and the plurality of cache elements of the signal development cache; and coupling the plurality of cache elements with the plurality of sense amplifiers. 9. The method of claim 7, wherein storing the mapping comprises: storing an association between a bitmap corresponding to a fourth address of the plurality of cache elements and a bitmap corresponding to the third address. 10.
The method of claim 1, further comprising: outputting, to the requesting device, the information retrieved from the signal development cache without accessing the memory array. 11. The method of claim 1, further comprising: outputting an indicator to the requesting device, the indicator indicating whether information is being retrieved from the signal development cache or from the memory array, based at least in part on a mapping between the memory array and the signal development cache. 12. The method of claim 1, further comprising: performing content verification for the read command based at least in part on the accessing. 13. A method, comprising: receiving, at a memory device, a write command from a requesting device, the write command associated with a plurality of logic states for storing in the memory device; storing, at one or more addresses of a signal development cache of the memory device, a plurality of signal states associated with the plurality of logic states; determining a mapping between the one or more addresses of the signal development cache and one or more addresses of a memory array of the memory device; writing the plurality of logic states to the one or more addresses of the memory array based at least in part on the plurality of signal states and the mapping; and storing the mapping between the one or more addresses of the signal development cache and the one or more addresses of the memory array. 14. The method of claim 13, wherein the memory array comprises a plurality of domains each comprising a respective plurality of word lines, and wherein determining the mapping comprises: determining a first mapping between a first of the one or more addresses of the signal development cache and an address of a word line of a first domain of the plurality of domains; and determining a second mapping between a second of the one or more addresses of the signal development cache and an address of a word line of a second domain of the plurality of domains. 15. The method of claim 13, further comprising: receiving a read command from the requesting device after receiving the write command; determining that the read command is associated with information stored in at least one of the one or more addresses of the signal development cache; and accessing the signal development cache to retrieve information associated with the read command based at least in part on determining that the read command is associated with the at least one of the one or more addresses of the signal development cache. 16. The method of claim 15, wherein accessing the signal development cache is performed without accessing the memory array to retrieve the information associated with the read command. 17. The method of claim 13, wherein storing the mapping comprises: storing the mapping at a content-addressable memory of the memory device. 18. An apparatus, comprising: a memory array comprising a plurality of memory cells; a signal development cache comprising a plurality of cache elements different than the plurality of memory cells and configured to store signaling associated with information exchange with a sense amplifier array; and a content-addressable memory configured to store a mapping between one or more addresses of the signal development cache and one or more addresses of the memory array. 19.
The apparatus of claim 18, wherein, to store the mapping, the content-addressable memory is configured to store an association between a bitmap corresponding to an address of the signal development cache and a bitmap corresponding to an address of the memory array. 20. The apparatus of claim 18, wherein: the memory array comprises a plurality of word lines each associated with a respective subset of the plurality of memory cells; the signal development cache comprises a plurality of cache lines each associated with a respective subset of the plurality of cache elements; and to store the mapping, the content-addressable memory is configured to store a mapping between a cache line of the plurality of cache lines and a respective word line of the plurality of word lines. 21. The apparatus of claim 18, wherein: the memory array comprises a plurality of domains each associated with a respective subset of a plurality of word lines; the signal development cache comprises a plurality of cache lines each associated with a respective subset of the plurality of cache elements; and to store the mapping, the content-addressable memory is configured to store a mapping between a cache line of the plurality of cache lines and a respective domain of the plurality of domains and one of the respective plurality of word lines. 22. The apparatus of claim 18, further comprising: a first selection component operable to selectively couple the memory array with the signal development cache based at least in part on the mapping. 23. The apparatus of claim 18, further comprising: the sense amplifier array comprising a plurality of sense amplifiers, each sense amplifier of the plurality of sense amplifiers configured to output a logic state based at least in part on sensing an input signal from the signal development cache. 24. The apparatus of claim 23, further comprising: a second selection component operable to selectively couple the signal development cache with the sense amplifier array based at least in part on the mapping. 25. The apparatus of claim 18, wherein the content-addressable memory is configured to: receive a read command from a requesting device; and access the signal development cache based at least in part on the read command and the mapping. 26. The apparatus of claim 25, wherein the content-addressable memory is configured to output an indicator, the indicator indicating whether information is being retrieved from the signal development cache or from the memory array, based at least in part on a mapping between the memory array and the signal development cache. 27. The apparatus of claim 18, wherein the content-addressable memory is configured to determine the mapping based at least in part on a write command received from a requesting device. 28. The apparatus of claim 18, wherein the content-addressable memory is configured to determine the mapping based at least in part on a read command received from a requesting device. 29. The apparatus of claim 18, wherein the content-addressable memory is configured to determine the mapping based at least in part on transferring signaling associated with logic states of the memory array to the signal development cache. 30.
An apparatus, comprising: a memory array comprising a plurality of memory cells; a signal development cache comprising a plurality of cache elements different than the plurality of memory cells and configured to temporarily store signaling associated with information exchange with a sense amplifier array; and a controller configured to: receive a read command from a requesting device indicating a first address, the first address associated with an address of the memory array; determine that the first address of the memory array corresponds to a second address, the second address associated with an address of the signal development cache of the apparatus; and access the signal development cache to retrieve information associated with the read command based at least in part on the determining. 31. An apparatus, comprising: a memory array comprising a plurality of memory cells; a signal development cache comprising a plurality of cache elements different than the plurality of memory cells and configured to temporarily store signaling associated with information exchange with a sense amplifier array; and a controller configured to: receive a write command from a requesting device, the write command associated with a plurality of logic states for storing in the memory array; store, at one or more addresses of the signal development cache, a plurality of signal states associated with the plurality of logic states; determine a mapping between the one or more addresses of the signal development cache and one or more addresses of the memory array; write the plurality of logic states to the one or more addresses of the memory array based at least in part on the plurality of signal states and the mapping; and store the mapping between the one or more addresses of the signal development cache and the one or more addresses of the memory array.
CONTENT-ADDRESSABLE MEMORY FOR SIGNAL DEVELOPMENT CACHING IN A MEMORY DEVICE CROSS REFERENCE [0001] The present Application for Patent claims priority to U.S. Provisional Patent Application No. 62/783,388 by Yudanov et al., entitled “MULTIPLEXED SIGNAL DEVELOPMENT IN A MEMORY DEVICE” and filed December 21, 2018, which is assigned to the assignee hereof and is expressly incorporated by reference in its entirety. BACKGROUND [0002] The following relates generally to memory systems and more specifically to content-addressable memory for signal development caching in a memory device. [0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming different states of a memory device. For example, binary memory devices have two logic states, often denoted by a logic “1” or a logic “0”. In other memory devices, more than two logic states may be stored. To access the stored information, a component of the electronic device may read, or sense, the stored logic state in the memory device. To store information, a component of the electronic device may write, or program, the logic state in the memory device. [0004] Various types of memory devices and memory cells exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), self-selecting memory, chalcogenide memory technologies, and others. Memory cells may be volatile or non-volatile. BRIEF DESCRIPTION OF THE DRAWINGS [0005] FIG. 1 illustrates an example memory device that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. [0006] FIG. 2 illustrates an example circuit that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. [0007] FIG. 3 illustrates an example circuit that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. [0008] FIGs. 4A and 4B illustrate examples of read operations that support content-addressable memory for signal development caching in accordance with examples as disclosed herein. [0009] FIGs. 5A and 5B illustrate examples of write operations that support content-addressable memory for signal development caching in accordance with examples as disclosed herein. [0010] FIG. 6 illustrates an example of a signal development component that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. [0011] FIG. 7 illustrates an example of a sense amplifier that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. [0012] FIGs. 8A through 8C show block diagrams of systems that support content-addressable memory for signal development caching in accordance with examples as disclosed herein. [0013] FIG. 9 illustrates an example of a system that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. [0014] FIG. 10 illustrates an example of a process flow that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. [0015] FIG.
11 shows a block diagram of a memory device that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. [0016] FIG. 12 shows a block diagram of a memory device that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. [0017] FIGs. 13 and 14 show flowcharts illustrating a method or methods that support content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. DETAILED DESCRIPTION [0018] Different latencies associated with different components used in a memory access operation, or different latencies otherwise associated with portions of a memory access operation, may cause delays in performing the memory access operation. For example, when a latency associated with developing a signal based on accessing a memory cell (e.g., an operation that includes coupling a memory cell with a signal development component) is longer in duration than a latency associated with generating an output signal at a sense amplifier (e.g., a sensing or latching operation at the sense amplifier), a memory device may be able to generate output signals more quickly than it can perform underlying signal development operations upon which the output signals are based. For a memory device that has a single signal development component for each sense amplifier (e.g., a 1:1 mapping of signal development components and sense amplifiers), the throughput of the memory device may therefore be limited by the latency or cycle duration associated with the signal development component or signal development operations, which may affect latency-sensitive applications. [0019] In accordance with examples as disclosed herein, a memory device may include a signal development cache having a set of cache elements (e.g., signal storage elements) that may be selectively coupled with or decoupled from sense amplifiers of the memory device. For example, an array of sense amplifiers may be coupled with a selection component (e.g., a multiplexer (MUX), a transistor network, a transistor array, a switching network, a switching array), and the selection component may be coupled with a set of signal development cache elements that may each be associated with one or more memory cells of the memory device. In some examples, cell access signals (e.g., cell read signals, cell write signals) may be developed (e.g., based at least in part on a coupling with or other accessing of a respective memory cell) at each of the signal development cache elements independently from others of the signal development cache elements. As used herein, a “set” may include one or more elements (e.g., one element, two elements, three elements, and so on). [0020] In some examples (e.g., in a read operation), signal development cache elements may each be coupled with a respective memory cell or access line during overlapping time intervals, such that multiple cell access signals (e.g., multiple cell read signals associated with the respective memory cell or access line of each of the respective signal development components) may be generated during the overlapping time intervals. A signal development cache element may subsequently be coupled with the sense amplifier via the selection component to generate a sense or latch signal (e.g., an output signal of the sense amplifier, based on a respective cell access signal), which may be associated with a particular logic state that was stored by a respective memory cell (e.g., associated with the respective cell access signal). In examples where cell access signals have been developed at multiple signal development cache elements, the multiple signal development cache elements may be coupled with the sense amplifier in a sequential manner to generate sense or latch signals in a sequential manner.
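The throughput idea in the preceding paragraph can be made concrete with a small timing sketch. The following Python fragment is illustrative only; the timing constants and function names are assumptions chosen for the example and are not taken from this disclosure:

T_DEVELOP = 10  # assumed duration of signal development (the slow step)
T_LATCH = 2     # assumed duration of a sense amplifier latch operation

def serialized_time(n_cells: int) -> int:
    # One signal development component per sense amplifier: each cell
    # must finish developing and latching before the next access begins.
    return n_cells * (T_DEVELOP + T_LATCH)

def multiplexed_time(n_cells: int) -> int:
    # Cache elements develop their cell signals during overlapping time
    # intervals; the shared sense amplifier then latches them one by one.
    return T_DEVELOP + n_cells * T_LATCH

print(serialized_time(8), multiplexed_time(8))  # 96 versus 26 time units

Under these assumed durations, overlapping development amortizes the slow signal development step across a sequence of fast latch operations, which is the effect the multiplexed arrangement is intended to achieve.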
[0021] To organize information (e.g., signal states, cache states) stored at a signal development cache, a memory device may include a storage component configured to store a mapping between addresses of a memory array and addresses of a signal development cache. For example, such a storage component may include a mapping of a row of memory cells, or portion thereof, to a row of signal development cache elements. Such a storage component may include or otherwise be referred to as a content-addressable memory (CAM), which may include storage elements of various architectures that may be the same as, or different than, a storage architecture used in the associated memory array or a storage architecture used in the signal development cache. In response to access commands, a memory device may refer to such a stored mapping to determine whether or how to access information (e.g., signal states, cached signals) of a signal development cache or an array of memory cells. For example, storage that includes a mapping between addresses of a signal development cache and a memory array may be used to evaluate whether data corresponding to an address of an access command (e.g., a read command, a write command) is stored in the signal development cache. In some examples, accessing data in a signal development cache instead of accessing the same data in the memory array may decrease latency of performing the access operation.
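As a software analogy for such a mapping store, consider the following minimal Python sketch. The class and method names are hypothetical conveniences rather than terminology from this disclosure, and a hardware CAM would compare a queried address (e.g., as a bitmap) against all stored entries in parallel rather than through a dictionary lookup:

class CamMapping:
    """Sketch of a store of associations between memory array addresses
    and signal development cache addresses."""

    def __init__(self):
        self._array_to_cache = {}  # array address -> cache address

    def store(self, array_addr: int, cache_addr: int) -> None:
        # Record that the signal states for array_addr are held at
        # cache_addr of the signal development cache.
        self._array_to_cache[array_addr] = cache_addr

    def lookup(self, array_addr: int):
        # Return the cache address on a hit, or None on a miss (in which
        # case the memory array itself would be accessed).
        return self._array_to_cache.get(array_addr)

    def invalidate(self, array_addr: int) -> None:
        # Remove a stale association, e.g., when a cache line is reused.
        self._array_to_cache.pop(array_addr, None)

cam = CamMapping()
cam.store(array_addr=42, cache_addr=3)
print(cam.lookup(42))  # 3 -> hit: the signal development cache may be read
print(cam.lookup(43))  # None -> miss: the memory array must be accessed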
[0022] Features of the disclosure introduced above are further described with reference to FIGs. 1 through 3 in the context of memory arrays and memory circuits that support content-addressable memory for signal development caching. Specific examples are then described with reference to FIGs. 4A through 5B, which illustrate particular read operations and write operations that support content-addressable memory for signal development caching. Further examples of circuits, components, and arrangements that may support the described operations are described with reference to FIGs. 6 through 8. These and other features of the disclosure are further described with respect to FIGs. 9 through 14, which illustrate apparatus diagrams, system diagrams, and flowcharts that support content-addressable memory for signal development caching. [0023] FIG. 1 illustrates an example memory device 100 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The memory device 100 may also be referred to as an electronic memory apparatus. The memory device 100 may include memory cells 105 that are programmable to store different states such as memory states, which may be referred to herein as logic states. In some cases, a memory cell 105 may be programmable to store two logic states, denoted a logic 0 and a logic 1. In some cases, a memory cell 105 may be programmable to store more than two logic states. Additionally or alternatively, a memory cell 105 may be programmable to store a memory state based on an analog or stochastic operation (e.g., related to a neural network), where the memory state corresponds to information other than a logic 0 or a logic 1. In some examples, the memory cells 105 may include a capacitive memory element, a ferroelectric memory element, a material memory element, a resistive element, a self-selecting memory element, a thresholding memory element, or any combination thereof. [0024] The set of memory cells 105 may be part of a memory section 110 of the memory device 100 (e.g., including an array of memory cells 105), where in some examples a memory section 110 may refer to a contiguous tile of memory cells 105 (e.g., a contiguous set of elements of a semiconductor chip). In some examples, a memory section 110 may refer to the smallest set of memory cells 105 that may be biased in an access operation, or a smallest set of memory cells 105 that share a common node (e.g., a common plate line, a set of plate lines that are biased to a common voltage). Although a single memory section 110 of the memory device 100 is shown, various examples of a memory device in accordance with examples as disclosed herein may have a set of memory sections 110. In one illustrative example, a memory device 100, or a subsection thereof (e.g., a core of a multi-core memory device 100, a chip of a multi-chip memory device) may include 32 “banks” and each bank may include 32 sections. Thus, a memory device 100, or subsection thereof, according to the illustrative example may include 1,024 memory sections 110. [0025] In some examples, a memory cell 105 may store an electric charge representative of the programmable logic states (e.g., storing charge in a capacitor, capacitive memory element, capacitive storage element). In one example, a charged and uncharged capacitor may represent two logic states, respectively. In another example, a positively charged and negatively charged capacitor may represent two logic states, respectively. DRAM or FeRAM architectures may use such designs, and the capacitor employed may include a dielectric material with linear or para-electric polarization properties as an insulator. In some examples, different levels of charge of a capacitor may represent different logic states (e.g., supporting more than two logic states in a respective memory cell 105). In some examples, such as FeRAM architectures, a memory cell 105 may include a ferroelectric capacitor having a ferroelectric material as an insulating (e.g., non-conductive) layer between terminals of the capacitor. Different levels of polarization of a ferroelectric capacitor may represent different logic states (e.g., supporting two or more logic states in a respective memory cell 105). In some examples, ferroelectric materials have non-linear polarization properties. [0026] In some examples, a memory cell 105 may include a material portion, which may be referred to as a memory element, a memory storage element, a self-selecting memory element, or a self-selecting memory storage element. The material portion may have a variable and configurable electrical resistance or other characteristic that is representative of different logic states.
For example, a material that can take the form of a crystalline atomic configuration or an amorphous atomic configuration (e.g., able to maintain either a crystalline state or an amorphous state over an ambient operating temperature range of the memory device 100) may have different electrical resistances depending on the atomic configuration. A more-crystalline state of the material (e.g., a single crystal, a collection of relatively large crystal grains that may be substantially crystalline) may have a relatively low electrical resistance, and may alternatively be referred to as a “SET” logic state. A more-amorphous state of the material (e.g., an entirely amorphous state, some distribution of relatively small crystal grains that may be substantially amorphous) may have a relatively high electrical resistance, and may alternatively be referred to as a “RESET” logic state. Thus, a voltage applied to such a memory cell 105 may result in different current flow depending on whether the material portion of the memory cell 105 is in the more-crystalline or the more-amorphous state. Accordingly, the magnitude of the current resulting from applying a read voltage to the memory cell 105 may be used to determine a logic state stored by memory cell 105. [0027] In some examples, a memory element may be configured with various ratios of crystalline and amorphous areas (e.g., varying degrees of atomic order and disorder) that may result in intermediate resistances, which may represent different logic states (e.g., supporting two or more logic states in a respective memory cell 105). Further, in some examples, a material or a memory element may have more than two atomic configurations, such as an amorphous configuration and two different crystalline configurations. Although described herein with reference to an electrical resistance of different atomic configurations, a memory device may use some other characteristic of a memory element to determine a stored logic state corresponding to an atomic configuration, or combination of atomic configurations. [0028] In some cases, a memory element in a more-amorphous state may be associated with a threshold voltage. In some examples, electrical current may flow through a memory element in the more-amorphous state when a voltage greater than the threshold voltage is applied across the memory element. In some examples, electrical current may not flow through a memory element in the more-amorphous state when a voltage less than the threshold voltage is applied across the memory element. In some cases, a memory element in a more-crystalline state may not be associated with a threshold voltage (e.g., may be associated with a threshold voltage of zero). In some examples, electrical current may flow through a memory element in the more-crystalline state in response to a non-zero voltage across the memory element. [0029] In some cases, a material in both the more-amorphous state and the more-crystalline state may be associated with threshold voltages. For example, self-selecting or thresholding memory may be based on differences in a threshold voltage of a memory cell between different programmed states (e.g., by way of different compositional distributions).
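The read behavior described in the two preceding paragraphs can be sketched schematically as follows; the voltage values below are arbitrary illustrations, not device parameters from this disclosure:

V_TH_RESET = 1.2  # assumed threshold voltage of the more-amorphous state
V_READ = 0.8      # assumed read voltage chosen between the two behaviors

def conducts(state: str, v_applied: float) -> bool:
    # A more-crystalline (SET) element conducts at a non-zero voltage; a
    # more-amorphous (RESET) element conducts only above its threshold.
    if state == "SET":
        return v_applied > 0.0
    if state == "RESET":
        return v_applied > V_TH_RESET
    raise ValueError(state)

def sensed_state(current_detected: bool) -> str:
    # Current above the threshold current implies SET at this read
    # voltage; little or no current implies RESET.
    return "SET" if current_detected else "RESET"

for programmed in ("SET", "RESET"):
    assert sensed_state(conducts(programmed, V_READ)) == programmed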
The logic state of a memory cell 105 having such a memory element may be set by biasing or heating the memory element to a temperature profile over time that supports forming a particular atomic configuration, or combination of atomic configurations. [0030] A memory device 100 may include a three-dimensional (3D) memory array, where a plurality of two-dimensional (2D) memory arrays (e.g., decks, levels) are formed on top of one another. In various examples, such arrays may be divided into a set of memory sections 110, where each memory section 110 may be arranged within a deck or level, distributed across multiple decks or levels, or any combination thereof. Such arrangements may increase the number of memory cells 105 that may be placed or created on a single die or substrate as compared with 2D arrays, which in turn may reduce production costs or increase the performance of a memory device 100, or both. The decks or levels may be separated by an electrically insulating material. Each deck or level may be aligned or positioned so that memory cells 105 may be approximately aligned with one another across each deck, forming a stack of memory cells 105. [0031] In the example of memory device 100, each row of memory cells 105 of the memory section 110 may be coupled with one of a set of first access lines 120 (e.g., a word line (WL), such as one of WL1 through WLM), and each column of memory cells 105 may be coupled with one of a set of second access lines 130 (e.g., a digit line (DL), such as one of DL1 through DLN). In some examples, a row of memory cells 105 of a different memory section 110 (not shown) may be coupled with one of a different plurality of first access lines 120 (e.g., a word line different from WL1 through WLM), and a column of memory cells 105 of the different memory section 110 may be coupled with one of a different plurality of second access lines 130 (e.g., a digit line different from DL1 through DLN). In some cases, first access lines 120 and second access lines 130 may be substantially perpendicular to one another in the memory device 100 (e.g., when viewing a plane of a deck of the memory device 100, as shown in FIG. 1). References to word lines and bit lines, or their analogues, are interchangeable without loss of understanding or operation. [0032] In general, one memory cell 105 may be located at the intersection of (e.g., coupled with, coupled between) an access line 120 and an access line 130. This intersection, or an indication of this intersection, may be referred to as an address of a memory cell 105. A target or selected memory cell 105 may be a memory cell 105 located at the intersection of an energized or otherwise selected access line 120 and an energized or otherwise selected access line 130. In other words, an access line 120 and an access line 130 may be energized or otherwise selected to access (e.g., read, write, rewrite, refresh) a memory cell 105 at their intersection. Other memory cells 105 that are in electronic communication with (e.g., connected to) the same access line 120 or 130 may be referred to as untargeted or non-selected memory cells 105. [0033] In some architectures, the logic storing component (e.g., a capacitive memory element, a ferroelectric memory element, a resistive memory element, other memory element) of a memory cell 105 may be electrically isolated from a second access line 130 by a cell selection component, which, in some examples, may be referred to as a switching component or a selector device.
A first access line 120 may be coupled with the cell selection component (e.g., via a control node or terminal of the cell selection component), and may control the cell selection component of or associated with the memory cell 105. For example, the cell selection component may be a transistor and the first access line 120 may be coupled with a gate of the transistor (e.g., where a gate node of the transistor may be a control node of the transistor). Activating the first access line 120 of a memory cell 105 may result in an electrical connection or closed circuit between the logic storing component of the memory cell 105 and its corresponding second access line 130. The second access line 130 may then be accessed to read or write the memory cell 105. [0034] In some examples, memory cells 105 of the memory section 110 may also be coupled with one of a plurality of third access lines 140 (e.g., a plate line (PL), such as one of PL1 through PLN). Although illustrated as separate lines, in some examples, the plurality of third access lines 140 may represent or be otherwise functionally equivalent with a common plate line, a common plate, or other common node of the memory section 110 (e.g., a node common to each of the memory cells 105 in the memory section 110), or other common node of the memory device 100. In some examples, the plurality of third access lines 140 may couple memory cells 105 with one or more voltage sources for various sensing and/or writing operations including those described herein. For example, when a memory cell 105 employs a capacitor for storing a logic state, a second access line 130 may provide access to a first terminal or a first plate of the capacitor, and a third access line 140 may provide access to a second terminal or a second plate of the capacitor (e.g., a terminal associated with an opposite plate of the capacitor as opposed to the first terminal of the capacitor, a terminal otherwise on the opposite side of a capacitance from the first terminal of the capacitor). In some examples, memory cells 105 of a different memory section 110 (not shown) may be coupled with one of a different plurality of third access lines 140 (e.g., a set of plate lines different from PL1 through PLN, a different common plate line, a different common plate, a different common node), which may be electrically isolated from the illustrated third access lines 140 (e.g., plate lines PL1 through PLN). [0035] The plurality of third access lines 140 may be coupled with a plate component 145, which may control various operations such as activating one or more of the plurality of third access lines 140, or selectively coupling one or more of the plurality of third access lines 140 with a voltage source or other circuit element. Although the plurality of third access lines 140 of the memory device 100 are shown as substantially parallel with the plurality of second access lines 130, in other examples, a plurality of third access lines 140 may be substantially parallel with the plurality of first access lines 120, or in any other configuration. [0036] Although the access lines described with reference to FIG. 1 are shown as direct lines between memory cells 105 and coupled components, access lines may be associated with other circuit elements, such as capacitors, resistors, transistors, amplifiers, voltage sources, switching components, selection components, and others, which may be used to support access operations including those described herein.
In some examples, an electrode may be coupled with (e.g., between) a memory cell 105 and an access line 120, or with (e.g., between) a memory cell 105 and an access line 130. The term electrode may refer to an electrical conductor, or other electrical interface between components, and in some cases, may be employed as an electrical contact to a memory cell 105. An electrode may include a trace, wire, conductive line, conductive layer, conductive pad, or the like, that provides a conductive path between elements or components of memory device 100. [0037] Access operations such as reading, writing, rewriting, and refreshing may be performed on a memory cell 105 by activating or selecting a first access line 120, a second access line 130, and/or a third access line 140 coupled with the memory cell 105, which may include applying a voltage, a charge, or a current to the respective access line. Access lines 120, 130, and 140 may be made of conductive materials, such as metals (e.g., copper (Cu), silver (Ag), aluminum (Al), gold (Au), tungsten (W), titanium (Ti)), metal alloys, carbon, or other conductive or semi-conductive materials, alloys, or compounds. Upon selecting a memory cell 105, a resulting signal (e.g., a cell access signal, a cell read signal) may be used to determine the logic state stored by the memory cell 105. For example, a memory cell 105 with a capacitive memory element storing a logic state may be selected, and the resulting flow of charge via an access line and/or resulting voltage of an access line may be detected, converted, or amplified to determine the programmed logic state stored by the memory cell 105. [0038] Accessing memory cells 105 may be controlled through a row component 125 (e.g., a row decoder), a column component 135 (e.g., a column decoder), or a plate component 145 (e.g., a plate driver), or a combination thereof. For example, a row component 125 may receive a row address from the memory controller 170 and select or activate the appropriate first access line 120 based on the received row address. Similarly, a column component 135 may receive a column address from the memory controller 170 and select or activate the appropriate second access line 130. Thus, in some examples, a memory cell 105 may be accessed by selecting or activating a first access line 120 and a second access line 130. In some examples, such access operations may be accompanied by a plate component 145 biasing one or more of the third access lines 140 (e.g., biasing one of the third access lines 140 of the memory section 110, biasing all of the third access lines 140 of the memory section, biasing a common plate line of the memory section 110 or the memory device 100, biasing a common node of the memory section 110 or the memory device 100), which may be referred to as “moving the plate” of memory cells 105, the memory section 110, or the memory device 100. In various examples, any one or more of the row component 125, the column component 135, or the plate component 145 may be referred to as, or otherwise include, access line drivers or access line decoders. [0039] In some examples, the memory controller 170 may control the operation (e.g., read operations, write operations, rewrite operations, refresh operations, discharge operations, dissipation operations, equalization operations) of memory cells 105 through the various components (e.g., row component 125, column component 135, plate component 145, sense component 150).
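A minimal sketch of this row/column decoding flow, with an assumed section geometry (the address split and names are illustrative, not from this disclosure):

N_COLS = 1024  # hypothetical number of digit lines per memory section

def decode(address: int) -> tuple:
    # Split a flat cell address into a row (word line) part and a
    # column (digit line) part.
    return address // N_COLS, address % N_COLS

def access_cell(address: int) -> None:
    row, col = decode(address)
    # In hardware, the row component drives WL[row] and the column
    # component selects DL[col]; the target cell sits at their
    # intersection, possibly with the plate component biasing PL.
    print(f"activate WL[{row}] and DL[{col}]")

access_cell(2050)  # -> activate WL[2] and DL[2]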
In some cases, one or more of the row component 125, the column component 135, the plate component 145, and the sense component 150 may be co-located or otherwise included with the memory controller 170. In some examples, any one or more of a row component 125, a column component 135, or a plate component 145 may also be referred to as a memory controller or circuit for performing access operations of the memory device 100. In some examples, any one or more of a row component 125, a column component 135, or a plate component 145 may be described as controlling or performing operations for accessing a memory device 100, or controlling or performing operations for accessing the memory section 110 of the memory device 100. [0040] The memory controller 170 may generate row and column address signals to activate a desired access line 120 and access line 130. The memory controller 170 may also generate or control various voltages or currents used during the operation of memory device 100. Although a single memory controller 170 is shown, a memory device 100 may have more than one memory controller 170 (e.g., a memory controller 170 for each of a set of memory sections 110 of a memory device 100, a memory controller 170 for each of a number of subsets of memory sections 110 of a memory device 100, a memory controller 170 for each of a set of chips of a multi-chip memory device 100, a memory controller 170 for each of a set of banks of a multi-bank memory device 100, a memory controller 170 for each core of a multi-core memory device 100, or any combination thereof), where different memory controllers 170 may perform the same functions and/or different functions. [0041] Although the memory device 100 is illustrated as including a single row component 125, a single column component 135, and a single plate component 145, other examples of a memory device 100 may include different configurations to accommodate a memory section 110 or a set of memory sections 110. For example, in various memory devices 100 a row component 125 may be shared among a set of memory sections 110 (e.g., having subcomponents common to all of the set of memory sections 110, having subcomponents dedicated to respective ones of the set of memory sections 110), or a row component 125 may be dedicated to one memory section 110 of a set of memory sections 110. Likewise, in various memory devices 100, a column component 135 may be shared among a set of memory sections 110 (e.g., having subcomponents common to all of the set of memory sections 110, having subcomponents dedicated to respective ones of the set of memory sections 110), or a column component 135 may be dedicated to one memory section 110 of a set of memory sections 110. Additionally, in various memory devices 100, a plate component 145 may be shared among a set of memory sections 110 (e.g., having subcomponents common to all of the set of memory sections 110, having subcomponents dedicated to respective ones of the set of memory sections 110), or a plate component 145 may be dedicated to one memory section 110 of a set of memory sections 110. [0042] In general, the amplitude, shape, or duration of an applied voltage, current, or charge may be adjusted or varied, and may be different for the various operations discussed in operating the memory device 100. Further, one, multiple, or all memory cells 105 within memory device 100 may be accessed simultaneously.
For example, multiple or all memory cells 105 of memory device 100 may be accessed simultaneously during a reset operation in which all memory cells 105, or a group of memory cells 105 (e.g., the memory cells 105 of a memory section 110), are set to a single logic state. [0043] A memory cell 105 may be read (e.g., sensed) by a sense component 150 when the memory cell 105 is accessed (e.g., in cooperation with the memory controller 170) to determine a logic state stored by the memory cell 105. For example, the sense component 150 may be configured to sense a current or charge through the memory cell 105, or a voltage resulting from coupling the memory cell 105 with the sense component 150 or other intervening component (e.g., a signal development component between the memory cell 105 and the sense component 150), responsive to a read operation. The sense component 150 may provide an output signal indicative of (e.g., based at least in part on) the logic state stored by the memory cell 105 to one or more components (e.g., to the column component 135, the input/output component 160, the memory controller 170). In various memory devices 100, a sense component 150 may be shared among a set or bank of memory sections 110 (e.g., having subcomponents common to all of the set or bank of memory sections 110, having subcomponents dedicated to respective ones of the set or bank of memory sections 110), or a sense component 150 may be dedicated to one memory section 110 of a set or bank of memory sections 110. [0044] In some examples, during or after accessing a memory cell 105, the logic storage portion of memory cell 105 may discharge, or otherwise permit electrical charge or current to flow via its corresponding access lines 120, 130, or 140. Such charge or current may result from biasing, or applying a voltage, to the memory cell 105 from one or more voltage sources or supplies (not shown) of the memory device 100, where such voltage sources or supplies may be part of a row component 125, a column component 135, a plate component 145, a sense component 150, a memory controller 170, or some other component (e.g., a biasing component). In some examples, a discharge of a memory cell 105 may cause a change in the voltage of the access line 130, which the sense component 150 may compare to a reference voltage to determine the stored state of the memory cell 105. In some examples, a voltage may be applied to a memory cell 105 (e.g., using the corresponding access line 120 and access line 130) and the presence or magnitude of a resulting current may depend on the applied voltage and the resistance state of a memory element of the memory cell 105, which the sense component 150 may use to determine the stored state of the memory cell 105. [0045] In some examples, when a read signal (e.g., a read pulse, a read current, a read voltage) is applied across a memory cell 105 with a material memory element storing a first logic state (e.g., a SET state, associated with a more-crystalline atomic configuration), the memory cell 105 conducts current due to the read pulse exceeding a threshold voltage of the memory cell 105. In response to, or based at least in part on this, the sense component 150 may therefore detect a current through the memory cell 105 as part of determining the logic state stored by the memory cell 105.
When a read pulse is applied to the memory cell 105 with the memory element storing a second logic state (e.g., a RESET state, associated with a more-amorphous atomic configuration), which may occur before or after the application of a read pulse across a memory cell 105 with a memory element storing a first logic state, the memory cell 105 may not conduct current due to the read pulse not exceeding the threshold voltage of the memory cell 105. The sense component 150 may therefore detect little or no current through the memory cell 105 as part of determining the stored logic state. [0046] In some examples, a threshold current may be defined for sensing the logic state stored by a memory cell 105. The threshold current may be set above a current that may pass through the memory cell 105 when the memory cell 105 does not threshold in response to the read pulse, but equal to or below an expected current through the memory cell 105 when the memory cell 105 does threshold in response to the read pulse. For example, the threshold current may be higher than a leakage current of the associated access lines 120, 130, or 140. In some examples, a logic state stored by a memory cell 105 may be determined based at least in part on a voltage (e.g., across a shunt resistance) resulting from the current driven by a read pulse. For example, the resulting voltage may be compared relative to a reference voltage, with a resulting voltage less than the reference voltage corresponding to a first logic state and a resulting voltage greater than the reference voltage corresponding to a second logic state. [0047] In some examples, more than one voltage may be applied when reading a memory cell 105 (e.g., multiple voltages may be applied during portions of a read operation). For example, if an applied read voltage does not result in current flow, one or more other read voltages may be applied (e.g., until a current is detected by sense component 150). Based at least in part on assessing the read voltage that resulted in current flow, the stored logic state of the memory cell 105 may be determined. In some cases, a read voltage may be ramped (e.g., smoothly increasing higher in magnitude) until a current flow or other condition is detected by a sense component 150. In other cases, predetermined read voltages may be applied (e.g., a predetermined sequence of read voltages that increase higher in magnitude in a stepwise manner) until a current is detected. Likewise, a read current may be applied to a memory cell 105 and the magnitude of the voltage required to create the read current may depend on the electrical resistance or the total threshold voltage of the memory cell 105.
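A hedged sketch of the stepwise variant just described, in which predetermined read voltages of increasing magnitude are applied until current is detected (the voltage steps and the cell model are illustrative assumptions, not values from this disclosure):

READ_STEPS = [0.4, 0.8, 1.2, 1.6]  # hypothetical ascending read voltages

def stepwise_read(cell_threshold: float) -> float:
    # Return the first read voltage at which current flows; a sense
    # component could then map this voltage to a stored logic state.
    for v in READ_STEPS:
        if v > cell_threshold:  # the cell thresholds and conducts
            return v
    raise RuntimeError("no current detected at any read voltage")

print(stepwise_read(0.6))  # -> 0.8 (current first flows at the second step)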
[0048] A sense component 150 may include various switching components, selection components, multiplexers, transistors, amplifiers, capacitors, resistors, voltage sources, or other components to detect, convert, or amplify a difference in sensing signals (e.g., a difference between a read voltage and a reference voltage, a difference between a read current and a reference current, a difference between a read charge and a reference charge), which, in some examples, may be referred to as sensing or latching or generating a sense or latch signal. In some examples, a sense component 150 may include a collection of components (e.g., circuit elements, circuitry) that are repeated for each of a set of access lines 130 connected to the sense component 150. For example, a sense component 150 may include a separate sensing circuit or circuitry (e.g., a separate sense amplifier, a separate signal development component) for each of a set of access lines 130 coupled with the sense component 150, such that a logic state may be separately detected for a respective memory cell 105 coupled with a respective one of the set of access lines 130. In some examples, a reference signal source (e.g., a reference component) or generated reference signal may be shared between components of the memory device 100 (e.g., shared among one or more sense components 150, shared among separate sensing circuits of a sense component 150, shared among access lines 120, 130, or 140 of a memory section 110). [0049] The sense component 150 may be included in a device that includes the memory device 100. For example, the sense component 150 may be included with other read and write circuitry, decoding circuitry, or register circuitry of the memory that may be coupled with or to the memory device 100. In some examples, the detected logic state of a memory cell 105 may be output through a column component 135 or an input/output component 160 as an output. In some examples, a sense component 150 may be part of a column component 135, a row component 125, or a memory controller 170. In some examples, a sense component 150 may be connected to or otherwise in electronic communication with a column component 135, a row component 125, or memory controller 170. [0050] Although a single sense component 150 is shown, a memory device 100 (e.g., a memory section 110 of a memory device 100) may include more than one sense component 150. For example, a first sense component 150 may be coupled with a first subset of access lines 130 and a second sense component 150 may be coupled with a second subset of access lines 130 (e.g., different from the first subset of access lines 130). In some examples, such a division of sense components 150 may support parallel (e.g., simultaneous) operation of multiple sense components 150. In some examples, such a division of sense components 150 may support matching sense components 150 having different configurations or characteristics to particular subsets of the memory cells 105 of the memory device (e.g., supporting different types of memory cells 105, supporting different characteristics of subsets of memory cells 105, supporting different characteristics of subsets of access lines 130). [0051] Additionally or alternatively, two or more sense components 150 may be coupled (e.g., selectively coupled) with a same set of access lines 130 (e.g., for component redundancy). In some examples, such a configuration may support maintaining functionality to overcome a failure or otherwise poor or degraded operation of one of the redundant sense components 150. In some examples, such a configuration may support the ability to select one of the redundant sense components 150 for particular operational characteristics (e.g., as related to power consumption characteristics, as related to access speed characteristics for a particular sensing operation, as related to operating memory cells 105 in a volatile mode or a non-volatile mode). [0052] In some memory architectures, accessing a memory cell 105 may degrade or destroy a logic state stored by one or more memory cells 105 of the memory section 110, and rewrite or refresh operations may be performed to return the original logic state to the memory cells 105.
In DRAM or FeRAM, for example, a capacitor of a memory cell 105 may be partially or completely discharged or depolarized during a sense operation, thereby corrupting the logic state that was stored in the memory cell 105. In PCM, for example, sense operations may cause a change in the atomic configuration of a memory cell 105, thereby changing the resistance state of the memory cell 105. Thus, in some examples, the logic state stored in a memory cell 105 may be rewritten after an access operation. Further, activating a single access line 120, 130, or 140 may result in the discharge of all memory cells 105 coupled with the activated access line 120, 130, or 140. Thus, several or all memory cells 105 coupled with an access line 120, 130, or 140 associated with an access operation (e.g., all cells of an accessed row, all cells of an accessed column) may be rewritten after the access operation. [0053] In some examples, reading a memory cell 105 may be non-destructive. That is, the logic state of the memory cell 105 may not need to be rewritten after the memory cell 105 is read. For example, in non-volatile memory such as PCM, accessing the memory cell 105 may not destroy the logic state and, thus, the memory cell 105 may not require rewriting after accessing. However, in some examples, refreshing the logic state of the memory cell 105 may or may not be needed in the absence or presence of other access operations. For example, the logic state stored by a memory cell 105 may be refreshed at periodic intervals by applying an appropriate write, refresh, or equalization pulse or bias to maintain the stored logic state. Refreshing the memory cell 105 may reduce or eliminate read disturb errors or logic state corruption due to a charge leakage or a change in an atomic configuration of a memory element over time. [0054] A memory cell 105 may be set or written or refreshed by activating the relevant first access line 120, second access line 130, and/or third access line 140 (e.g., via a memory controller 170). In other words, a logic state may be stored in the memory cell 105 (e.g., via a cell access signal, via a cell write signal). Row component 125, column component 135, or plate component 145 may accept data, for example, via input/output component 160, to be written to the memory cells 105. In some examples, a write operation may be performed at least in part by a sense component 150, or a write operation may be configured to bypass a sense component 150. [0055] In the case of a capacitive memory element, a memory cell 105 may be written by applying a voltage to a capacitor, and then isolating the capacitor (e.g., isolating the capacitor from a voltage source used to write the memory cell 105, floating the capacitor) to store a charge in the capacitor associated with a desired logic state. In the case of ferroelectric memory, a ferroelectric memory element (e.g., a ferroelectric capacitor) of a memory cell 105 may be written by applying a voltage with a magnitude high enough to polarize the ferroelectric memory element (e.g., applying a saturation voltage) with a polarization associated with a desired logic state, and the ferroelectric memory element may be isolated (e.g., floating), or a zero net voltage or bias may be applied across the ferroelectric memory element (e.g., grounding, virtually grounding, or equalizing a voltage across the ferroelectric memory element).
[0056] The sense component 150 may include multiple signal development components that may be selectively coupled with or decoupled from respective ones of a set of sense amplifiers. For example, a sense amplifier of the sense component 150 may be coupled with a selection component of the sense component 150, and the selection component may be coupled with a set of signal development components of the sense component 150 that may be associated with one or more memory cells 105 or one or more access lines (e.g., one or more access lines 130) of the memory device 100. In some examples, cell access signals may be developed at each of the signal development components independently from others of the signal development components.
[0057] In some examples, signal development components of the sense component 150 may each be coupled with a respective memory cell during overlapping time intervals, such that multiple cell access signals (e.g., cell read signals, cell write signals, each associated with the respective memory cell of each of the respective signal development components) may be generated during the overlapping time intervals. In examples where cell access signals have been developed at multiple signal development components (e.g., in read operations of multiple memory cells 105, in a multi-cell read operation), the multiple signal development components may be coupled with the sense amplifier (e.g., in a sequential manner, in a step-wise manner) to generate sense or latch signals of the sense amplifier based at least in part on the cell access signals (e.g., in a sequential manner, in a step-wise manner). In examples where a sequence of sense or latch signals is associated with writing or re-writing a set of memory cells 105 (e.g., in write or refresh operations of multiple memory cells 105, in a multi-cell write or refresh operation), multiple signal development components may be coupled with the sense amplifier (e.g., in a sequential manner, in a step-wise manner) to generate multiple cell access signals based at least in part on the sense or latch signals of the sense amplifier (e.g., in a sequential manner, in a step-wise manner). In some examples, the multiplexed signal development components of the sense component 150 may compensate for parts of a signal development component or portions of an access operation that are associated with different latency, which may reduce the impact of access serialization.
[0058] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using a CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or to access information stored in a memory array. In one example, the memory controller may receive a read command from a requesting device (e.g., a control device) for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.
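The CAM-assisted decision in [0058] can be sketched as a simple lookup. In the hypothetical Python sketch below, a dictionary stands in for the CAM's address-to-slot mappings; the names (cam_mappings, handle_read, and so on) are illustrative only and do not reflect an actual controller interface.

    # Hypothetical sketch of the CAM-assisted lookup described in [0058].
    cam_mappings = {}  # memory array address -> signal development cache slot

    def handle_read(address, memory_array, signal_dev_cache):
        slot = cam_mappings.get(address)
        if slot is not None:
            # Hit: the data for this address is already held in the signal
            # development cache, which is faster to access than the array.
            return signal_dev_cache[slot]
        # Miss: fall back to the slower memory array access.
        return memory_array[address]

    memory_array = {0x1000: 1}
    signal_dev_cache = [0, 1, 0]
    cam_mappings[0x1000] = 1  # the first address is cached in slot 1
    assert handle_read(0x1000, memory_array, signal_dev_cache) == 1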
[0059] FIG. 2 illustrates an example circuit 200 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. Circuit 200 may include a memory cell 105-a and a sense component 150-a, which may be examples of a memory cell 105 and a sense component 150 described with reference to FIG. 1. Circuit 200 may also include a word line 205, a digit line 210, and a plate line 215, which, in some examples, may correspond to a first access line 120, a second access line 130, and a third access line 140, respectively (e.g., of a memory section 110), as described with reference to FIG. 1. In some examples, the plate line 215 may be illustrative of a common plate line, a common plate, or another common node for the memory cell 105-a and another memory cell 105 (not shown) of a same memory section 110. Circuit 200 illustrates circuitry that may support the described techniques for content-addressable memory for signal development caching.
[0060] The sense component 150-a may include a sense amplifier 290 (e.g., an amplifier component, an input/output amplifier, a “latch”), which may include a first node 291 and a second node 292. In various examples, the first node 291 and the second node 292 may be coupled with different access lines of a circuit (e.g., a signal line 285 and a reference line 275 of the circuit 200, respectively), or may be coupled with a common access line of a different circuit (not shown). In some examples, the first node 291 may be referred to as a signal node, and the second node 292 may be referred to as a reference node. The sense amplifier 290 may be associated with (e.g., coupled with, coupled to) one or more input/output (I/O) lines (e.g., I/O line 295), which may include an access line coupled with a column component 135 via input/output component 160 described with reference to FIG. 1. Although the sense amplifier 290 is illustrated as having a single I/O line 295, a sense amplifier in accordance with examples as disclosed herein may have more than one I/O line 295 (e.g., two I/O lines 295). In various examples, other configurations and nomenclature for access lines and/or reference lines are possible in accordance with examples as disclosed herein.
[0061] The memory cell 105-a may include a logic storage component (e.g., a memory element, a storage element, a memory storage element), such as a capacitor 220 that has a first plate, cell plate 221, and a second plate, cell bottom 222. The cell plate 221 and the cell bottom 222 may be capacitively coupled through a dielectric material positioned between them (e.g., in a DRAM application), or capacitively coupled through a ferroelectric material positioned between them (e.g., in a FeRAM application). The cell plate 221 may be associated with a voltage, Vplate, and the cell bottom 222 may be associated with a voltage, Vbottom, as illustrated in the circuit 200. The orientation of cell plate 221 and cell bottom 222 may be different (e.g., flipped) without changing the operation of the memory cell 105-a. The cell plate 221 may be accessed via the plate line 215 and the cell bottom 222 may be accessed via the digit line 210.
As described herein, various logic states may be stored by charging, discharging, or polarizing the capacitor 220.
[0062] The capacitor 220 may be in electronic communication with the digit line 210, and the stored logic state of the capacitor 220 may be read or sensed by operating various elements represented in circuit 200. For example, the memory cell 105-a may also include a cell selection component 225 which, in some examples, may be referred to as a switching component or a selector device coupled with or between an access line (e.g., the digit line 210) and the capacitor 220. In some examples, a cell selection component 225 may be considered to be outside the illustrative boundary of the memory cell 105-a, and the cell selection component 225 may be referred to as a switching component or selector device coupled with or between an access line (e.g., the digit line 210) and the memory cell 105-a.
[0063] The capacitor 220 may be selectively coupled with the digit line 210 when the cell selection component 225 is activated (e.g., by way of an activating logical signal or voltage), and the capacitor 220 can be selectively isolated or decoupled from the digit line 210 when the cell selection component 225 is deactivated (e.g., by way of a deactivating logical signal or voltage). A logical signal or other selection signal or voltage may be applied to a control node 226 (e.g., a control terminal, a selection node, a selection terminal) of the cell selection component 225 (e.g., via the word line 205). In other words, the cell selection component 225 may be configured to selectively couple or decouple the capacitor 220 (e.g., a logic storage component) and the digit line 210 based on a logical signal or voltage applied via the word line 205 to the control node 226.
[0064] Activating the cell selection component 225 may be referred to as selecting the memory cell 105-a in some examples, and deactivating the cell selection component 225 may be referred to as deselecting the memory cell 105-a in some examples. In some examples, the cell selection component 225 is a transistor (e.g., an n-type transistor) and its operation may be controlled by applying an activation or selection voltage to the transistor gate (e.g., a control or selection node or terminal). The voltage for activating the transistor (e.g., the voltage between the transistor gate terminal and the transistor source terminal) may be a voltage greater than the threshold voltage magnitude of the transistor (e.g., a positive activation or selection voltage). The voltage for deactivating the transistor may be a voltage less than the threshold voltage magnitude of the transistor (e.g., a ground or negative deactivation or deselection voltage).
[0065] The word line 205 may be used (e.g., by a row component 125) to activate or deactivate the cell selection component 225. For example, a selection voltage applied to the word line 205 (e.g., a word line logical signal or a word line voltage) may be applied to the gate of a transistor of the cell selection component 225, which may selectively connect or couple the capacitor 220 with the digit line 210 (e.g., providing a conductive path between the capacitor 220 and the digit line 210). A deselection or deactivation voltage applied to the word line 205 may be applied to the gate of the transistor of the cell selection component 225, which may selectively disconnect, decouple, or isolate the capacitor 220 from the digit line 210. In some examples, activating the cell selection component 225 may be referred to as selectively coupling the memory cell 105-a with the digit line 210, and deactivating the cell selection component 225 may be referred to as selectively decoupling or isolating the memory cell 105-a from the digit line 210.
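The activation conditions of [0063] through [0065] reduce, for an idealized n-type selection transistor, to a threshold comparison between the word line voltage and the source. The sketch below is a behavioral illustration with hypothetical names and values, not a transistor model.

    # Hypothetical sketch of the cell selection behavior in [0063]-[0065]:
    # the capacitor couples to the digit line only while the word line
    # voltage exceeds the transistor threshold.
    def cell_selected(v_word_line, v_source, v_threshold):
        # Activation condition for an idealized n-type selection transistor:
        # the gate-to-source voltage must exceed the threshold magnitude.
        return (v_word_line - v_source) > v_threshold

    # A selection voltage applied via the word line couples the cell:
    assert cell_selected(v_word_line=2.5, v_source=0.0, v_threshold=0.7)
    # A ground (deactivation) voltage isolates the cell:
    assert not cell_selected(v_word_line=0.0, v_source=0.0, v_threshold=0.7)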
[0066] In other examples, the positions of the cell selection component 225 and the capacitor 220 in the memory cell 105-a may be switched, such that the cell selection component 225 may be coupled with or between the plate line 215 and the cell plate 221, and the capacitor 220 may be coupled with or between the digit line 210 and the other terminal of the cell selection component 225. In such an example, the cell selection component 225 may remain connected (e.g., in electronic communication) with the digit line 210 through the capacitor 220. This configuration may be associated with alternative timing and biasing for access operations.
[0067] In examples that employ a ferroelectric capacitor 220, the capacitor 220 may or may not fully discharge upon connection to or coupling with the digit line 210. In various schemes, to sense the logic state stored by a ferroelectric capacitor 220, a voltage may be applied to the plate line 215 and/or the digit line 210, and the word line 205 may be biased (e.g., by activating the word line 205) to select the memory cell 105-a. In some cases, the plate line 215 and/or the digit line 210 may be virtually grounded and then isolated from the virtual ground, which may be referred to as a floating condition, an idle condition, or a standby condition, prior to activating the word line 205.
[0068] Operation of the memory cell 105-a by varying the voltage of the cell plate 221 (e.g., via the plate line 215) may be referred to as “moving the cell plate.” Biasing the plate line 215 and/or the digit line 210 may result in a voltage difference (e.g., the voltage of the digit line 210 minus the voltage of the plate line 215) across the capacitor 220. The voltage difference may accompany a change in the stored charge on the capacitor 220, where the magnitude of the change in stored charge may depend on the initial state of the capacitor 220 (e.g., whether the initial logic state stored a logic 1 or a logic 0). In some schemes, the change in the stored charge of the capacitor 220, or some portion of such a charge, may be used by the sense component 150-a to determine the logic state stored by the memory cell 105-a (e.g., in a charge transfer sensing scheme). In some schemes, the change in the stored charge of the capacitor 220 may cause a change in the voltage of the digit line 210, which may be used by the sense component 150-a to determine the logic state stored by the memory cell 105-a. A cell access signal may refer to a signal generated while the memory cell 105-a is selected or activated (e.g., while coupled with the signal development component), which may include a cell read signal in a read operation of the memory cell 105-a, or a cell write signal in a write operation, a rewrite operation, or a refresh operation of the memory cell 105-a. In various examples, a cell access signal may be referred to as a cell coupling signal or a cell charge sharing signal.
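As a numerical illustration of the charge sharing described in [0068], an idealized estimate of the digit line voltage change is the displaced cell charge divided by the net digit line capacitance. The function name and the example values below are hypothetical.

    # Hypothetical numerical sketch of the charge sharing in [0068]: the
    # change in the capacitor's stored charge produces a digit line voltage
    # change that scales inversely with the net digit line capacitance.
    def digit_line_voltage_change(delta_q_cell, c_digit_line_net):
        # delta_q_cell: charge displaced from the cell capacitor (coulombs)
        # c_digit_line_net: net capacitance of the digit line (farads)
        return delta_q_cell / c_digit_line_net

    # E.g., 10 fC shared onto a 50 fF digit line raises it by 0.2 V:
    dv = digit_line_voltage_change(10e-15, 50e-15)
    assert abs(dv - 0.2) < 1e-12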
[0069] In some examples, the digit line 210 may be coupled with additional memory cells 105 (not shown), each of which may be coupled with different word lines 205 (not shown). In other words, different memory cells 105 that are coupled with the digit line 210 may, in some examples, be selected or activated based at least in part on different word line logical signals.
[0070] The digit line 210 may have properties that result in an intrinsic capacitance 230 (e.g., on the order of picofarads (pF), which may in some cases be non-negligible), which may couple the digit line 210 with a voltage source 240-a having a voltage V0. The voltage source 240-a may represent a common ground or virtual ground voltage, or the voltage of an adjacent access line of the circuit 200 (not shown). Although illustrated as a separate element in FIG. 2, the intrinsic capacitance 230 may be associated with properties distributed throughout the digit line 210 or another part of the circuit 200.
[0071] In some examples, the intrinsic capacitance 230 may depend on physical characteristics of the digit line 210, including conductor dimensions (e.g., length, width, thickness) of the digit line 210. The intrinsic capacitance 230 may also depend on characteristics of adjacent access lines or circuit components, proximity to such adjacent access lines or circuit components, or insulation characteristics between the digit line 210 and such access lines or circuit components. Thus, a change in voltage of the digit line 210 after selecting or activating the memory cell 105-a may depend on the net capacitance of (e.g., associated with) the digit line 210. In other words, as charge flows along the digit line 210 (e.g., to the digit line 210, from the digit line 210), some finite charge may be stored along the digit line 210 (e.g., in the intrinsic capacitance 230, in another capacitance coupled with the digit line 210), and the resulting voltage of the digit line 210 may depend on the net capacitance of the digit line 210.
[0072] The circuit 200 (e.g., the sense component 150-a) may include a signal development component 250, which may be an example of a signal development component or signal development circuit coupled with or between the memory cell 105-a and the sense amplifier 290. In some examples, an access line associated with a signal development component 250 (e.g., an access line coupled with an input/output of the signal development component 250, an access line coupled with or between the signal development component 250 and the sense amplifier 290) may be referred to as a signal development line (SDL) (e.g., signal development line 255, a “cacheline” (CL)). The signal development component 250 may amplify or otherwise convert signals (e.g., cell access signals) of the digit line 210 and the signal development line 255. For example, for a read operation, the signal development component 250 may generate or be otherwise associated with generating a cell read signal based at least in part on being coupled with the capacitor 220 (e.g., prior to a sensing operation of the sense amplifier 290), which may include a charge sharing between the signal development component 250 and the capacitor 220. In another example, for a write operation, a rewrite operation, or a refresh operation, the signal development component 250 may generate or be otherwise associated with generating a cell write signal for the capacitor 220 (e.g., based at least in part on being coupled with the sense amplifier 290, in response to a write command, a refresh command, a rewrite command, or a read command), which may include a charge sharing between the signal development component 250 and the capacitor 220.
[0073] In some examples, the signal development component 250 may include a signal storage element such as a capacitor (e.g., a signal development cache element, an integrator capacitor, an amplifier capacitor (AMPCap), which may in some cases alternatively be referred to as a “fast cap”) or another type of charge storage element configured to store a signal or signal state different than a logic state stored at a memory cell 105 (e.g., different than a logic state stored at the memory cell 105-a). Additionally or alternatively, the signal development component 250 may include a transistor, an amplifier, a cascode, or any other charge or voltage conversion or amplification component. For example, the signal development component 250 may include a charge transfer sensing amplifier (CTSA), which in some examples may include a transistor having a gate terminal coupled with a voltage source.
[0074] Although the sense component 150-a is illustrated with a single signal development component 250, the sense component 150-a may include one or more additional signal development components 250 (not shown) to form a set of signal development components 250 (e.g., a signal development cache) in accordance with examples as disclosed herein. Each of the set of signal development components 250 of the sense component 150-a may be associated with (e.g., configured to be selectively coupled with or decoupled from, configured to develop cell access signals for) one or more memory cells 105 or one or more digit lines 210, which may or may not include the memory cell 105-a or the digit line 210. For example, each signal development component 250 of the set of signal development components 250 may be selectively coupled with or decoupled from one or more digit lines 210 of a memory section 110 of a memory array. In examples where a respective one of the signal development components 250 is coupled with more than one memory cell 105 or more than one digit line 210, any of the memory cells 105 or digit lines 210 may be selectively coupled with or decoupled from the respective signal development component 250 by a selection component (e.g., a digit line selection component, a multiplexer, a transistor network, a transistor array, a switching network, a switching array, not shown) between the respective signal development component 250 and the associated memory cells 105 or digit lines 210.
[0075] The sense component 150-a may also include a selection component 280 (e.g., a signal development component selection component, a multiplexer, a transistor network, a transistor array, a switching network, a switching array) coupled with or between a set of signal development components 250 (e.g., with or between a set of signal development lines 255) and the sense amplifier 290. The selection component 280 may be configured to selectively couple or decouple any of the set of signal development components 250 or signal development lines 255 with the sense amplifier 290. The selection component 280 may be associated with an access line, such as the signal line 285, for conveying signals (e.g., voltage, charge, current) between the selection component 280 and the sense amplifier 290. The output of the selection component 280 (e.g., in a read operation), for example, may be an output signal (e.g., a signal conveyed via the signal line 285) that is based at least in part on an input signal (e.g., a signal conveyed from a signal development component 250 selected by the selection component 280, a signal conveyed by a signal development line 255 selected by the selection component 280). In some examples, the output signal of the selection component 280 may be equal to, or substantially equal to, the input signal of the selection component 280 (e.g., where Vsig = VSDL). Although described in the context of an input signal via a signal development line 255 and an output signal via a signal line 285, the interpretation of input and output may be reversed in certain access operations that employ the circuit 200 (e.g., in a write operation, a rewrite operation, a refresh operation).
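In the idealized case where Vsig = VSDL, the selection component 280 of [0075] behaves like a simple multiplexer. The following sketch is a behavioral stand-in with hypothetical names, not a description of the circuit structure.

    # Hypothetical sketch of the selection component 280 in [0075]: a
    # multiplexer that couples exactly one signal development line with
    # the signal line feeding the sense amplifier.
    class SelectionComponent:
        def __init__(self, signal_development_lines):
            self.sdls = signal_development_lines  # list of line voltages (VSDL)
            self.selected = None

        def select(self, index):
            self.selected = index  # couple one SDL; the others stay decoupled

        def signal_line_voltage(self):
            # Idealized pass-through: Vsig equals the selected VSDL.
            return self.sdls[self.selected]

    mux = SelectionComponent([0.30, 0.55, 0.10])
    mux.select(1)
    assert mux.signal_line_voltage() == 0.55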
[0076] In a read operation, the voltage of the signal line 285 after selecting the memory cell 105-a (e.g., a cell read signal, after coupling the memory cell 105-a or the digit line 210 with the signal development component 250, after selecting the signal development component 250 at the selection component 280) may be compared to a reference (e.g., a voltage of the reference line 275) by the sense component 150-a to determine the logic state that was stored in the memory cell 105-a (e.g., to generate a sense or latch signal). In some examples, a voltage of the reference line 275 may be provided by a reference component 270. In other examples, the reference component 270 may be omitted and a reference voltage may be provided, for example, by accessing the memory cell 105-a or the digit line 210 to generate the reference voltage (e.g., in a self-referencing access operation). Other operations may be used to support selecting and/or sensing the memory cell 105-a.
[0077] In some examples, the circuit 200 may include a bypass line 260 that may permit bypassing (e.g., selectively bypassing) the signal development component 250 or some other portion of a circuit between the memory cell 105-a and the sense amplifier 290. In some examples, the bypass line 260 may be selectively enabled or disabled by way of a switching component 265. In other words, when the switching component 265 is activated, the digit line 210 may be coupled with the signal development line 255 or the selection component 280 via the bypass line 260 (e.g., coupling the memory cell 105-a with the selection component 280 or some other portion of a circuit between the memory cell and the sense amplifier 290).
[0078] In some examples, when the switching component 265 is activated, the signal development component 250 may be selectively isolated or decoupled from one or both of the digit line 210 or the signal development line 255 (e.g., by another switching component or selection component, not shown). When the switching component 265 is deactivated, the digit line 210 may be selectively coupled with the signal development line 255 or the selection component 280 via the signal development component 250.
In other examples, one or more additional selection components (not shown) may be used to selectively couple the memory cell 105-a (e.g., the digit line 210) with one of the signal development component 250 (e.g., via the signal development line 255) or the bypass line 260.
[0079] Additionally or alternatively, in some examples, a switching or selection component may be used to selectively couple the selection component 280 with one of the signal development component 250 (e.g., via the signal development line 255) or the bypass line 260. In some examples, a selectable bypass line 260 may support generating a cell access signal (e.g., a cell read signal) for detecting a logic state of the memory cell 105-a by using the signal development component 250, and generating a cell access signal (e.g., a cell write signal) to write a logic state to the memory cell 105-a in a manner that bypasses the signal development component 250.
[0080] Some examples of a memory device that supports multiplexed signal development may share a common access line (not shown) between a memory cell 105 and a sense amplifier 290 to support generating a sense signal and a reference signal from the same memory cell 105. In one example, a common access line between a signal development component 250 and a sense amplifier 290 may be referred to as a “common line,” and the common access line may take the place of the signal line 285 and the reference line 275 illustrated in circuit 200.
[0081] In such examples, the common access line may be connected to the sense amplifier 290 at two different nodes (e.g., a first node 291 and a second node 292, as described herein). In some examples, a common access line may permit a self-referencing read operation to share, in both a signal generating operation and a reference generating operation, components that may exist between the sense amplifier 290 and a memory cell 105 being accessed. Such a configuration may reduce the sensitivity of the sense amplifier 290 to operational variations of various components in a memory device, such as memory cells 105, access lines (e.g., a word line 205, a digit line 210, a plate line 215), signal development circuits (e.g., signal development component 250), transistors, voltage sources 293 and 294, and others.
[0082] Although the digit line 210, the signal development line 255, and the signal line 285 are identified as separate lines, the digit line 210, the signal development line 255, the signal line 285, and any other lines connecting a memory cell 105 with a sense amplifier 290 may be referred to as a single access line in accordance with examples as disclosed herein. Constituent portions of such an access line may be identified separately for the purposes of illustrating intervening components and intervening signals in various example configurations.
[0083] The sense amplifier 290 may include various transistors or amplifiers to detect, convert, or amplify a difference in signals, which may include or otherwise be referred to as generating a sense signal or a latch signal. For example, the sense amplifier 290 may include circuit elements that receive and compare a sense signal voltage (e.g., a cell read signal, Vsig) at the first node 291 with a reference signal voltage (e.g., Vref) at the second node 292.
An output of the sense amplifier 290 (e.g., a sense or latch signal) may be driven to a higher voltage (e.g., a positive voltage) or a lower voltage (e.g., a negative voltage, a ground voltage) based on the comparison at the sense amplifier 290.
[0084] For example, if the first node 291 has a lower voltage than the second node 292, the output of the sense amplifier 290 may be driven to a relatively lower voltage of a low voltage source 293 (e.g., a voltage of VL, which may be a ground voltage substantially equal to V0 or a negative voltage). A sense component 150 that includes the sense amplifier 290, or an I/O component 160 that is coupled with such a sense component 150, may latch the output of the sense amplifier 290 to determine the logic state stored in the memory cell 105-a (e.g., detecting a logic 0 when the first node 291 has a lower voltage than the second node 292).
[0085] If the first node 291 has a higher voltage than the second node 292, the output of the sense amplifier 290 may be driven to the voltage of a high voltage source 294 (e.g., a voltage of VH). A sense component 150 that includes the sense amplifier 290, or an I/O component 160 that is coupled with such a sense component 150, may latch the output of the sense amplifier 290 to determine the logic state stored in the memory cell 105-a (e.g., detecting a logic 1 when the first node 291 has a higher voltage than the second node 292). The latched output of the sense amplifier 290, corresponding to the detected logic state of the memory cell 105-a, may then be output via one or more input/output (I/O) lines (e.g., I/O line 295).
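A minimal sketch of the comparison described in [0083] through [0085], assuming idealized rail-to-rail behavior; the function names and voltage values below are hypothetical.

    # Hypothetical sketch of the latch behavior in [0083]-[0085]: the sense
    # amplifier drives its output to the high or low voltage source
    # depending on the comparison of the signal and reference nodes.
    def sense_amplifier_output(v_sig, v_ref, v_high, v_low):
        # First node (291) carries Vsig; second node (292) carries Vref.
        return v_high if v_sig > v_ref else v_low

    def detected_logic_state(v_sig, v_ref):
        # A logic 1 is detected when the signal node is above the reference.
        return 1 if v_sig > v_ref else 0

    assert sense_amplifier_output(0.6, 0.4, v_high=1.2, v_low=0.0) == 1.2
    assert detected_logic_state(0.2, 0.4) == 0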
[0086] To perform a write operation, rewrite operation, or refresh operation on the memory cell 105-a, a voltage (e.g., a cell write signal) may be applied across the capacitor 220. Various methods may be used. In one example, the cell selection component 225 may be selected or activated through the word line 205 (e.g., by selecting or activating the word line 205) to electrically connect the capacitor 220 to the digit line 210. A voltage may be applied across the capacitor 220 by controlling the voltage of the cell plate 221 (e.g., through the plate line 215) and the cell bottom 222 (e.g., through the digit line 210). In some examples, write operations, rewrite operations, or refresh operations may be based at least in part on a sense or latch signal at the sense amplifier 290, which may be based on a signal received via the I/O line 295 (e.g., a write signal, a refresh signal) or based on a signal generated at the sense amplifier 290 (e.g., a rewrite signal).
[0087] For example, to write a logic 0, the cell plate 221 may be taken high (e.g., applying a positive voltage to the plate line 215), and the cell bottom 222 may be taken low (e.g., grounding the digit line 210, virtually grounding the digit line 210, applying a negative voltage to the digit line 210). The opposite process may be performed to write a logic 1, where the cell plate 221 is taken low and the cell bottom 222 is taken high. In some cases, the voltage applied across the capacitor 220 during a write operation may have a magnitude equal to or greater than a saturation voltage of a ferroelectric material in the capacitor 220, such that the capacitor 220 is polarized, and thus maintains a charge even when the magnitude of the applied voltage is reduced, or if a zero net voltage is applied across the capacitor 220. In some examples, the sense amplifier 290 or the signal development component 250 may be used to perform the write operations, which may include coupling the low voltage source 293 or the high voltage source 294 with the digit line 210. When the sense amplifier 290 is used to perform the write operations, the signal development component 250 may or may not be bypassed (e.g., by applying a write signal via the bypass line 260).
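The write polarities of [0087] can be tabulated directly. The sketch below uses hypothetical names and arbitrary illustrative voltage values.

    # Hypothetical sketch of the write polarities in [0087]: logic 0 takes
    # the cell plate high and the cell bottom low; logic 1 does the opposite.
    def write_biases(logic_state, v_high=1.5, v_low=0.0):
        if logic_state == 0:
            return {"plate_line": v_high, "digit_line": v_low}  # plate high, bottom low
        return {"plate_line": v_low, "digit_line": v_high}      # plate low, bottom high

    assert write_biases(0) == {"plate_line": 1.5, "digit_line": 0.0}
    assert write_biases(1) == {"plate_line": 0.0, "digit_line": 1.5}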
[0088] The circuit 200, including the sense component 150-a, the cell selection component 225, the signal development component 250, the switching component 265, the reference component 270, the selection component 280, or the sense amplifier 290, may include various types of transistors. For example, the circuit 200 may include n-type transistors, where applying a relatively positive voltage to the gate of the n-type transistor that is above a threshold voltage for the n-type transistor (e.g., an applied voltage having a positive magnitude, relative to a source terminal, that is greater than a threshold voltage) enables a conductive path between the other terminals of the n-type transistor (e.g., the source terminal and a drain terminal).
[0089] In some examples, an n-type transistor may act as a switching component, where the applied voltage is a logical signal that is used to selectively enable conductivity through the transistor by applying a relatively high logical signal voltage (e.g., a voltage corresponding to a logic 1 state, which may be associated with a positive logical signal voltage supply), or to selectively disable conductivity through the transistor by applying a relatively low logical signal voltage (e.g., a voltage corresponding to a logic 0 state, which may be associated with a ground or virtual ground voltage, or a negative voltage). In some examples where an n-type transistor is employed as a switching component, the voltage of a logical signal applied to the gate terminal may be selected to operate the transistor at a particular working point (e.g., in a saturation region or in an active region).
[0090] In some examples, the behavior of an n-type transistor may be different (e.g., more complex) than a logical switching, and selective conductivity across the transistor may also be a function of varying source and drain voltages. For example, the applied voltage at the gate terminal may have a particular voltage level (e.g., a clamping voltage, a control voltage) that is used to enable conductivity between the source terminal and the drain terminal when the source terminal voltage is below a certain level (e.g., below the gate terminal voltage minus the threshold voltage). When the voltage of the source terminal or drain terminal rises above the certain level, the n-type transistor may be deactivated such that the conductive path between the source terminal and drain terminal is opened.
[0091] Additionally or alternatively, the circuit 200 may include p-type transistors, where applying a relatively negative voltage to the gate of the p-type transistor that is above a threshold voltage for the p-type transistor (e.g., an applied voltage having a negative magnitude, relative to a source terminal, that is greater than a threshold voltage) enables a conductive path between the other terminals of the p-type transistor (e.g., the source terminal and a drain terminal).
[0092] In some examples, a p-type transistor may act as a switching component, where the applied voltage is a logical signal that is used to selectively enable conductivity by applying a relatively low logical signal voltage (e.g., a voltage corresponding to a logical “1” state, which may be associated with a negative logical signal voltage supply), or to selectively disable conductivity by applying a relatively high logical signal voltage (e.g., a voltage corresponding to a logical “0” state, which may be associated with a ground or virtual ground voltage, or a positive voltage). In some examples where a p-type transistor is employed as a switching component, the voltage of a logical signal applied to the gate terminal may be selected to operate the transistor at a particular working point (e.g., in a saturation region or in an active region).
[0093] In some examples, the behavior of a p-type transistor may be different (e.g., more complex) than a logical switching by the gate voltage, and selective conductivity across the transistor may also be a function of varying source and drain voltages. For example, the applied voltage at the gate terminal may have a particular voltage level that is used to enable conductivity between the source terminal and the drain terminal so long as the source terminal voltage is above a certain level (e.g., above the gate terminal voltage plus the threshold voltage). When the voltage of the source terminal falls below the certain level, the p-type transistor may be deactivated such that the conductive path between the source terminal and drain terminal is opened.
[0094] A transistor of the circuit 200 may be a field-effect transistor (FET), including a metal oxide semiconductor FET, which may be referred to as a MOSFET. These, and other types of transistors, may be formed by doped regions of material on a substrate. In some examples, the transistor(s) may be formed on a substrate that is dedicated to a particular component of the circuit 200 (e.g., a substrate for the sense amplifier 290, a substrate for the signal development component 250, a substrate for the memory cell 105-a), or the transistor(s) may be formed on a substrate that is common for particular components of the circuit 200 (e.g., a substrate that is common for the sense amplifier 290, the signal development component 250, and the memory cell 105-a). Some FETs may have a metal portion including aluminum or other metal, but some FETs may implement other non-metal materials such as polycrystalline silicon, including those FETs that may be referred to as a MOSFET. Further, although an oxide portion may be used as a dielectric portion of a FET, other non-oxide materials may be used as a dielectric material in a FET, including those FETs that may be referred to as a MOSFET.
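The n-type and p-type switching conditions of [0088] through [0093] reduce, in the idealized case, to a sign-sensitive comparison of the gate-to-source voltage against a threshold. The function below is a behavioral sketch with hypothetical names and values, not a device model.

    # Hypothetical sketch of the switching conditions in [0088]-[0093]: an
    # n-type transistor conducts when its gate-to-source voltage exceeds
    # the threshold; a p-type transistor conducts when that voltage is
    # below its (negative) threshold.
    def conducts(v_gate, v_source, v_threshold, channel="n"):
        v_gs = v_gate - v_source
        if channel == "n":
            return v_gs > v_threshold    # positive overdrive enables the path
        return v_gs < v_threshold        # p-type: v_threshold is negative here

    assert conducts(1.2, 0.0, 0.5, channel="n")      # n-type on
    assert not conducts(1.2, 1.0, 0.5, channel="n")  # source risen above the
    # clamping level described in [0090]: the path opens
    assert conducts(0.0, 1.2, -0.5, channel="p")     # p-type on (gate well below source)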
[0095] In some examples, different portions of the circuit 200, or different operations that use portions of the circuit 200, may be associated with different latencies. For example, in one portion of an access operation (e.g., a first sub-operation, a first set of sub-operations), a cell access signal may be developed by coupling the memory cell 105-a with the signal development component 250 (e.g., based at least in part on activating or selecting the cell selection component 225, based at least in part on activating another switching component, isolation component, or selection component between the memory cell 105-a and the signal development component 250). In some examples, the cell access signal may be developed based at least in part on, or may be otherwise associated with, a charge sharing between the memory cell 105-a (e.g., the capacitor 220) and the signal development component 250 (e.g., charge flowing from the capacitor 220 to the signal development component 250, charge flowing from the signal development component 250 to the capacitor 220). In some examples (e.g., in a read operation), the developed cell access signal (e.g., a cell read signal) or charge sharing may be based at least in part on a logic state stored by the memory cell 105-a. In some examples (e.g., in a write operation, a rewrite operation, a refresh operation), the developed cell access signal (e.g., a cell write signal) or charge sharing may be based at least in part on a developed sense or latch signal (e.g., at the sense amplifier 290, at the signal line 285). As disclosed herein, the charge sharing between the memory cell 105-a and the signal development component 250 may be associated with a change in voltage of the digit line 210, or a change in voltage of the signal development line 255, or both.
[0096] The development of a cell access signal for an access operation may be associated with a latency, which may refer to an amount of time (e.g., a duration) for developing the cell access signal, a delay between initiating a cell access signal development operation and a cell access signal reaching a threshold level suitable for subsequent portions of the access operation (e.g., in a read operation), or a delay between initiating a cell access signal development operation and a memory cell 105 being written with a logical value (e.g., in a write operation, a rewrite operation, or a refresh operation). In some examples (e.g., in a read operation), the duration or latency may be referred to as a “row-to-column address delay,” and in some examples (e.g., in a write operation) the duration or latency may be referred to as a “row precharge delay,” which may be longer or shorter than a row-to-column address delay.
[0097] In some examples, the sharing of charge between the memory cell 105-a, the digit line 210 (e.g., the intrinsic capacitance 230), and the signal development component 250 may be associated with a time constant behavior (e.g., a time constant behavior of a change in voltage VDL, a time constant behavior of a change in voltage VSDL), or may otherwise include a logarithmic or exponential behavior. The duration or latency for developing the cell access signal may refer to a duration between a coupling or activation operation (e.g., a selection or activation of the cell selection component 225, a selection or activation of another component configured to selectively couple the memory cell 105-a and the signal development component 250) and the digit line 210 or signal development line 255 reaching a steady state voltage, or the digit line 210 or signal development line 255 reaching a threshold proportion of a steady state voltage (e.g., 95% of a steady state voltage, 99% of a steady state voltage).
[0098] In some examples, the duration or latency for developing a cell access signal may be expressed as a time constant (e.g., a duration of time for reaching 63% of a change between the initial voltage and the steady state voltage), or expressed as a multiple of time constants. For example, the duration or latency for developing the cell access signal may be expressed as a duration of 3 time constants, or a duration otherwise associated with the cell access signal being within 5% of a steady state value. In another example, the duration or latency for developing the cell access signal may be expressed as a duration of 5 time constants, or a duration otherwise associated with the cell access signal being within 1% of a steady state value.
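The time constant arithmetic in [0097] and [0098] follows from exponential settling: after a time t, the settled fraction is 1 - exp(-t/tau), so 3 time constants reach within about 5% of the steady state and 5 time constants within about 1%. A minimal numerical check, with an assumed (hypothetical) time constant value:

    # Hypothetical sketch of the time constant arithmetic in [0097]-[0098].
    import math

    def settled_fraction(t, tau):
        # Fraction of the change between initial and steady state voltage
        # reached after time t, for exponential (RC-like) settling.
        return 1.0 - math.exp(-t / tau)

    tau = 2e-9  # e.g., a 2 ns time constant set by cell and line capacitances
    assert settled_fraction(1 * tau, tau) > 0.63  # one time constant: ~63%
    assert settled_fraction(3 * tau, tau) > 0.95  # within ~5% of steady state
    assert settled_fraction(5 * tau, tau) > 0.99  # within ~1% of steady state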
[0099] In some examples, charge sharing behavior and associated time constants or other latency may be based at least in part on a capacitance of the memory cell 105-a, the signal development component 250, or other capacitance between the memory cell 105-a and the signal development component 250 (e.g., intrinsic capacitance, such as the intrinsic capacitance 230). For example, a relatively high capacitance of the digit line 210 (e.g., a relatively high intrinsic capacitance 230) may be associated with a relatively high latency (e.g., a relatively long duration to develop a cell read signal), and a relatively low capacitance of the digit line 210 may be associated with a relatively low latency (e.g., a relatively short duration to develop a cell read signal). In another example, a relatively high capacitance of the memory cell 105-a (e.g., the capacitor 220) may be associated with a relatively low latency (e.g., a relatively short duration to develop a cell read signal), and a relatively low capacitance of the memory cell 105-a may be associated with a relatively high latency (e.g., a relatively long duration to develop a cell read signal).
[0100] Although described with reference to time constant behavior, a duration or latency associated with developing a cell access signal may additionally or alternatively include other behaviors such as ramped, stepped, or oscillating (e.g., underdamped) behaviors. In some examples, developing a cell access signal may include a set of operations, such as a set of coupling, isolating, activating, deactivating, selecting, or deselecting operations, and a duration or latency associated with developing the cell access signal may include the associated circuit behaviors of each of the set of operations. For example, developing a cell access signal may include activating switching or selection components along the digit line 210 or the signal development line 255, activating switching or selection components between the digit line or signal development line and another component (e.g., selectively coupling a voltage source (not shown) with the digit line 210 or the signal development line 255), or other operations or combinations of operations.
[0101] In another portion of the access operation (e.g., a second sub-operation, a second set of sub-operations), a sense signal (e.g., a latch signal, an output signal, an input/output signal) may be developed by activating the sense amplifier 290 (e.g., based at least in part on selectively coupling the signal development component 250 with the sense amplifier 290, based at least in part on selectively coupling the sense amplifier with one or both of the low voltage source 293 or the high voltage source 294). In some examples, the sense signal may be developed based at least in part on, or may be otherwise associated with, a charge sharing between the signal development component 250 and the sense amplifier 290.
In some examples (e.g., in a read operation), the sense signal or charge sharing may be based at least in part on the developed cell access signal (e.g., at the signal development component 250, at the signal development line 255). As described herein, the charge sharing between the signal development component 250 and the sense amplifier 290 may be associated with a change in voltage of the I/O line 295, which may be based at least in part on a comparison between the voltage Vsig and the voltage Vref (e.g., an output of VL when Vsig is less than Vref, an output of VH when Vsig is greater than Vref).
[0102] The development of a sense or latch signal for an access operation may also be associated with a latency, which may refer to an amount of time for developing the sense or latch signal, or a delay between initiating a sense or latch signal generation operation and a sense or latch signal reaching a threshold level suitable for subsequent portions of the access operation (e.g., an output indicative of a logic state stored by the memory cell 105-a). For example, the sharing of charge between the signal development component 250 and the sense amplifier 290 may also be associated with a time constant behavior (e.g., a time constant behavior of a change in voltage of the I/O line 295), or other logarithmic or exponential behavior. The duration or latency for developing the sense or latch signal may refer to a duration between a coupling or activation operation (e.g., a selection or activation of a switching component or selection component, such as the selection component 280, configured to selectively couple the signal development component 250 with the sense amplifier 290, a coupling of the sense amplifier 290 with one or both of the low voltage source 293 or the high voltage source 294) and the I/O line 295 reaching a steady state voltage, or the I/O line 295 reaching a threshold proportion of a steady state voltage (e.g., 90% of a steady state voltage, 95% of a steady state voltage).
[0103] The duration or latency for developing a sense or latch signal may also be expressed as a time constant, or as a multiple of time constants. Although described with reference to time constant behavior, a duration or latency associated with developing a sense or latch signal may additionally or alternatively include other behaviors such as ramped, stepped, or oscillating (e.g., underdamped) behaviors. In some examples, developing a sense or latch signal may include a set of operations, such as a set of coupling, isolating, activating, deactivating, selecting, or deselecting operations, and a duration or latency associated with developing the sense or latch signal may include the associated circuit behaviors of each of the set of operations.
[0104] In some examples of the circuit 200, a latency associated with developing a cell access signal may be longer in duration than a latency associated with generating a sense or latch signal. For example, a charge sharing between the signal development component 250 and the memory cell 105-a may be associated with a different amount of charge, or a slower transfer of charge, than a charge sharing between the signal development component 250 and the sense amplifier 290. In other words, the signal development component 250 or the memory cell 105-a may be associated with, or be otherwise considered as, relatively high latency portions of the circuit 200, and the sense amplifier 290 may be associated with or considered as a relatively low latency portion of the circuit 200.
In such examples, the circuit 200 may support performing input or output operations more quickly than performing signal development operations.
[0105] In accordance with examples as disclosed herein, a memory device 100 that includes the circuit 200 may couple each of a set of signal development components 250 with a respective memory cell 105 during overlapping time intervals, such that multiple cell access signals (e.g., associated with the respective memory cell 105 of each of the respective signal development components 250) may be generated during the overlapping time intervals. Each of the set of signal development components 250 may be selectively coupled with the sense amplifier 290 via the selection component 280 (e.g., in a sequential order) to generate a sequence of sense or latch signals at the sense amplifier 290, or vice versa. For example, in a read operation or set of read operations, the sequence of sense or latch signals generated at the sense amplifier 290 may be based on respective cell access signals (e.g., cell read signals) developed during overlapping time intervals at the set of signal development components 250, which may be associated with particular logic states stored by respective memory cells 105. Thus, as disclosed herein, a memory device 100 that includes the circuit 200 may include signal development components 250 that are multiplexed via the selection component 280, which in some examples may compensate for portions of an access operation that are associated with different latencies.
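The multiplexed operation of [0105] amounts to a two-phase flow: develop several cell access signals during overlapping intervals (the slow phase), then couple each developed signal with the sense amplifier in sequence (the fast phase). The sketch below is a behavioral illustration only; the function names and the stand-in develop/sense callables are hypothetical.

    # Hypothetical sketch of the multiplexed operation in [0105].
    def multiplexed_read(cells, develop, sense):
        # Phase 1: develop cell read signals concurrently on hardware
        # (modeled here as a simple pass over the cells).
        developed = [develop(cell) for cell in cells]
        # Phase 2: sequentially couple each developed signal with the
        # sense amplifier to produce the sequence of latch signals.
        return [sense(signal) for signal in developed]

    # Illustrative stand-ins for the circuit behavior:
    states = multiplexed_read(
        cells=[0.55, 0.10, 0.62],             # assumed developed VSDL per cell
        develop=lambda v_cell: v_cell,        # identity stand-in for development
        sense=lambda v: 1 if v > 0.4 else 0,  # compare against Vref = 0.4
    )
    assert states == [1, 0, 1]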
[0106] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using a CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or to access information stored in a memory array. In one example, the memory controller may receive a read command from a requesting device (e.g., a control device) for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.
[0107] FIG. 3 illustrates an example circuit 300 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. It is to be understood that circuit 300 is merely one illustrative example, and that many implementations, including other specific circuits and topologies, are possible while adhering to the principles and techniques disclosed herein, as will be appreciated by one of ordinary skill in the art.
[0108] Circuit 300 includes a set of memory cells 105-b (e.g., memory cells 105-b-111 through 105-b-srm) and a sense component 150-b. Although the memory cells 105-b are illustrated as including a capacitor and a cell selection component, memory cells 105-b in accordance with examples as disclosed herein may include various configurations (e.g., with or without cell selection components) and various types of logic storage elements (e.g., a capacitive memory element, a ferroelectric memory element, a material memory element, a resistive memory element, a thresholding memory element, other memory element) to support various types of memory devices (e.g., DRAM memory devices, FeRAM memory devices, PCM devices, chalcogenide memory devices). Circuit 300 illustrates circuitry that may support the described techniques for content-addressable memory for signal development caching.
[0109] The sense component 150-b may include a set of signal development components 250-a (e.g., signal development components 250-a-1 through 250-a-s), each associated with one or more of the memory cells 105-b. The sense component 150-b may also include a selection component 280-a (e.g., a signal development component selection component, a MUX, a transistor network, a transistor array, a switching network, a switching array) that is coupled with the set of signal development components 250-a (e.g., via signal development lines 255-a-1 through 255-a-s). The selection component 280-a may be configured to selectively couple a selected one of the signal development components 250-a (e.g., a selected one of the signal development lines 255-a) with a sense amplifier 290-a of the sense component 150-b (e.g., via signal line 285-a, in response to a logical or selection signal, such as a signal development component multiplexing (SDCM) signal). The sense amplifier 290-a may exchange (e.g., communicate, receive, transmit) input or output signals with other components of a memory device (e.g., an input/output component 160) via the I/O line 295-a.
[0110] In the example of circuit 300, the memory cells 105-b may be arranged according to a set of domains 310-a (e.g., domains 310-a-1 through 310-a-s). In other words, the circuit 300 may illustrate an example of a set of memory cells 105-b that are divided across or otherwise associated with s domains. In the example of circuit 300, each of the domains 310-a may be associated with (e.g., coupled with) one of the signal development components 250-a (e.g., domain 310-a-1 being associated with signal development component 250-a-1). However, in various examples of circuitry that supports the described techniques, a domain 310 may be associated with more than one signal development component 250, or a signal development component 250 may be associated with more than one domain 310, or both.
[0111] Although the example domains 310-a of circuit 300 are described with reference to certain characteristics, alternative definitions or organizations of domains may also be utilized in support of the described techniques.
As one such example, memory cells 105 or access lines (e.g., word lines 205, digit lines 210, plate lines 215) of a domain may be organized or subdivided in a different manner than the domains 310-a illustrated in the circuit 300, or a domain may be defined in a different manner than the domains 310-a illustrated in the circuit 300 (e.g., which components are included within an illustrative boundary of a domain), or domains may be coupled with signal development components 250 or sense amplifiers 290 in a different manner than the domains 310-a illustrated in the circuit 300 (e.g., with different multiplexing organizations or schemes, different selection components).
[0112] In the example of circuit 300, each of the domains 310-a may include memory cells 105-b that are coupled with or between one of a set of digit lines 210-a and one of a set of plate lines 215-a. For example, for domain 310-a-1, each of the set of memory cells 105-b (e.g., each of memory cells 105-b-111 through 105-b-1rm) may be coupled with one of the digit lines 210-a-11 through 210-a-1r and may be coupled with one of the plate lines 215-a-11 through 215-a-1r. In other words, the domains 310-a may illustrate an arrangement of memory cells 105-b that are divided across or otherwise associated with r digit lines 210-a or “columns.” Although the example circuit 300 is illustrated as having separate plate lines 215-a, in some examples, a set of plate lines 215-a (e.g., a set of two or more of the plate lines 215-a-11 through 215-a-1r) may represent or be otherwise functionally equivalent with a common plate line of a domain 310-a (e.g., domain 310-a-1), or may represent or be otherwise functionally equivalent with a common plate line of a portion of a domain 310-a (e.g., a “sub-domain”), or a different set of plate lines 215-a (e.g., a set of two or more of the plate lines 215-a-11 through 215-a-sr) may represent or be otherwise functionally equivalent with a common plate line of a set of domains 310-a (e.g., a set of domains 310-a-1 through 310-a-s).
[0113] Domains 310-a may also illustrate an arrangement of memory cells 105-b that are divided across or otherwise associated with m word lines 205-a or “rows.” For example, domain 310-a-1 may include respective sets of m memory cells 105-b that are coupled with or between each of the digit lines 210-a of the domain 310-a and the plate lines 215-a of the domain (e.g., a set of memory cells 105-b-111 through 105-b-11m coupled with or between the digit line 210-a-11 and the plate line 215-a-11). For a set of memory cells 105-b coupled with a same digit line 210-a and a same plate line 215-a, each of the set may be individually selected or accessed based at least in part on an associated logical signal WL (e.g., for domain 310-a-1, one of the logical signals WL11 through WL1m). Although illustrated as sharing a common set of word lines 205-a in a domain 310-a (e.g., word lines 205-a-11 through 205-a-1m shared across each of the columns of domain 310-a-1), other examples of a memory device may have a different arrangement of word lines 205 in a domain 310.
[0114] In the example of circuit 300, each of the domains 310-a may also include or be otherwise associated with a selection component 320-a (e.g., a digit line selection component, a MUX, a transistor network, a transistor array, a switching network, a switching array) that is coupled with each of the set of digit lines 210-a of the domain 310-a.
For example, the domain 310-a-1 may include a selection component 320-a-1 that is coupled with each of the digit lines 210-a-11 through 210-a-1r. The selection component 320-a-1, for example, may be configured to selectively couple a selected one of the digit lines 210-a-11 through 210-a-1r, or one of the memory cells 105-b-111 through 105-b-11m, with the signal development component 250-a-1 (e.g., in response to a logical or selection signal, such as a digit line multiplexing (DLM) signal DLM1). Accordingly, each of the selection components 320-a-1 through 320-a-s may be associated with a respective one of the signal development components 250-a-1 through 250-a-s.

[0115] In the example of circuit 300, each of the signal development components 250-a may be associated with a respective set of memory cells 105-b or a respective set of digit lines 210-a. In some examples, the selection components 320-a-1 through 320-a-s may be an example of a plurality of second selection components, where each second selection component of the plurality of second selection components is associated with a respective signal development component 250, and is configured to selectively couple any one memory cell 105-b or digit line 210-a of the set with the respective signal development component 250.

[0116] In an illustrative example, each of the domains 310-a may include 1,048,576 memory cells 105-b arranged in 1,024 uniquely addressed rows and 1,024 columns (e.g., where m = 1024 and r = 1024). According to the illustrative example of circuit 300, one signal development component 250-a may be mapped to a particular domain 310-a, but in other examples a set of more than one signal development component 250-a may be mapped to a particular domain 310-a (e.g., to respective sets of digit lines 210-a of a domain 310-a). In some examples, such a mapping may be fixed (e.g., where respective sets of digit lines 210-a are mapped to a respective signal development component 250-a within each domain 310-a), which, in some examples, may reduce multiplexing or selection circuit complexity. In various other examples (not shown), a signal development component 250 may be mapped to more than one domain 310, more than one set of digit lines 210 (e.g., of a domain), or other configurations. Additionally or alternatively, a domain 310 or a set of digit lines 210 may be mapped to more than one signal development component 250. In other words, a memory device may include various configurations of signal development components 250 to support examples of the multiplexed signal development described herein.

[0117] In the example of circuit 300, each of the digit lines 210-a is associated with (e.g., configured to be selectively coupled with) a single one of the signal development components (e.g., via a respective one of the selection components 320-a). For example, the digit line 210-a-11 may be associated with signal development component 250-a-1, but not signal development component 250-a-s. However, in various examples of circuitry that supports the described techniques for content-addressable memory for signal development caching, a particular digit line 210-a may be associated with (e.g., configured to be selectively coupled with) more than one signal development component 250-a, which may include a selection component different from the set of selection components 320-a-1 through 320-a-s illustrated in circuit 300. For example, the digit line 210-a-11 may be associated with (e.g., configured to be selectively coupled with) either the signal development component 250-a-1 or the signal development component 250-a-s, or any other signal development component 250-a of the circuit 300.
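As a non-limiting aside, the row-and-column organization of [0113] and the illustrative 1,024 by 1,024 sizing of [0116] suggest a simple address decomposition. The following Python sketch maps a flat cell index within one domain to a word-line index and a digit-line index; the constant names, the flat index, and the row-major ordering are assumptions introduced here for illustration, not part of the disclosed circuitry:

```python
# Illustrative sketch only: decompose a flat cell index within one domain into
# a word-line (row) index and a digit-line (column) index, assuming row-major
# ordering. The constants echo the illustrative example in [0116].

M_ROWS = 1024      # m word lines per domain
R_COLUMNS = 1024   # r digit lines per domain

def decompose_cell_index(flat_index: int) -> tuple[int, int]:
    """Map a flat index in [0, m*r) to (word_line, digit_line) indices."""
    if not 0 <= flat_index < M_ROWS * R_COLUMNS:
        raise ValueError("cell index out of range for this domain")
    return divmod(flat_index, R_COLUMNS)

# The last of the 1,048,576 cells sits on word line 1023 and digit line 1023.
assert decompose_cell_index(1_048_575) == (1023, 1023)
```

Under this toy ordering, one word line selects a row of r cells, matching the “rows” and “columns” terminology used above.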
[0118] In another illustrative example that supports the described techniques for multiplexed signal development, another circuit may include several domains, each with 1,048,576 memory cells 105 arranged in 1,024 uniquely addressed rows and 1,024 columns, which may refer to an organization of components that is different than the circuit 300. Each of the domains of the other circuit may be arranged such that m = 1024 and r = 1024, and the digit lines 210 of a respective domain of this other circuit may collectively be mapped to an array of 64 signal development components 250 (e.g., according to a many-to-one mapping, according to a many-to-many mapping). In one example of the other circuit, each of the signal development components 250 may be mapped to a respective subset of the digit lines 210 of the domain (e.g., one signal development component 250 may be mapped to 1024 / 64 = 16 digit lines 210 within each domain). In some examples, such a mapping may be fixed (e.g., where groups or subsets of 16 digit lines 210 are mapped to a respective signal development component 250 within each domain), which, in some examples, may reduce multiplexing or selection circuit complexity.

[0119] In this other example, a row of 1024 memory cells 105 (e.g., spanning one domain of the other circuit) may be selected by a single word line 205 in each domain. In other words, with 64 signal development components 250 per domain and r = 1024, the activation of a word line in one domain and the activation of another word line in another domain (e.g., including other independent word lines in other domains) may select memory cells 105 associated with the respective row. With 64 signal development components 250 per domain of such a circuit, 64 of the set of 1,024 memory cells 105 may be accessed at a time in each domain (e.g., by selectively coupling a respective digit line 210 with each of the 64 signal development components 250 via a respective selection component). During such accessing, other digit lines 210 may be selectively isolated from the respective signal development component 250 and other signal development components 250 interfacing the same domain. Further, the other digit lines 210 may be shunted or masked as described herein.

[0120] Thus, examples in accordance with the techniques disclosed herein may include examples in which word lines 205 within a domain, or word lines 205 across multiple domains, or some combination thereof, are independent (e.g., selectable independently of one another). Examples in accordance with the techniques disclosed herein may also include examples in which word lines 205 within a domain, or word lines 205 across multiple domains, or some combination thereof, are locked (e.g., hard-wired) to be selected together (jointly). It is to be understood that in examples in which word lines 205 are independently selectable, such word lines 205 may nevertheless be operated synchronously (e.g., as though locked), at least at certain times or under certain conditions. Further, examples in accordance with the techniques disclosed herein may include examples in which many digit lines 210 are mapped to many signal development components 250 within a domain, as well as examples where many digit lines 210 are mapped to one signal development component 250 within a domain (e.g., a selection component 280 may have many-to-one or many-to-many functionality). Aspects of these and other example variations are described throughout the disclosure, including with reference to FIGs. 8A, 8B, and 8C.
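The fixed grouping described in [0118], where 1,024 digit lines collectively map onto 64 signal development components (16 digit lines each), can be sketched as follows. The integer-division rule and all names are assumptions for illustration, not the disclosed selection circuitry:

```python
# Sketch of the fixed many-to-one mapping discussed in [0118]: 1,024 digit
# lines per domain mapped onto 64 signal development components, 16 each.

DIGIT_LINES_PER_DOMAIN = 1024
SDC_PER_DOMAIN = 64
GROUP_SIZE = DIGIT_LINES_PER_DOMAIN // SDC_PER_DOMAIN  # 16

def sdc_for_digit_line(digit_line: int) -> int:
    """Return the index of the signal development component serving a digit line."""
    if not 0 <= digit_line < DIGIT_LINES_PER_DOMAIN:
        raise ValueError("digit line index out of range")
    return digit_line // GROUP_SIZE

# With this fixed grouping, digit lines 0..15 share component 0, 16..31 share
# component 1, and so on; 64 cells of a 1,024-cell row can be accessed at a time.
assert sdc_for_digit_line(0) == 0
assert sdc_for_digit_line(17) == 1
assert sdc_for_digit_line(1023) == 63
```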
[0121] In some examples, operations associated with word line selection may be time-bounded to prevent loss or corruption of data, which may involve waiting for completion of operations that are in progress with accessed cells. For example, when switching from a first word line 205-a of a domain 310-a to a second word line 205-a of the same domain 310-a, such a switching may need to wait for cell access signal development of the domain 310-a (e.g., of the signal development component 250-a) to be completed before the switching takes place. In examples where a word line 205-a is shared across domains (e.g., a word line 205-a that is shared between domains 310-a-1 and 310-a-s, word line 205-a-11 being functionally equivalent to word line 205-a-s1), when switching from a first shared word line 205-a to a second shared word line 205-a, such a switching may need to wait for cell access signal development of each of the domains 310-a-1 and 310-a-s (e.g., each of the signal development components 250-a-1 and 250-a-s) to be completed before the switching takes place.

[0122] In the example of circuit 300, each of the domains 310-a may also include or be otherwise associated with a set of shunts 330-a (e.g., digit line shunts, digit-to-plate shunts). For example, domain 310-a-1 may include a set of shunts 330-a-11 through 330-a-1r. Each of the shunts 330-a may be coupled with or between a digit line 210-a and a plate line 215-a. For example, for domain 310-a-1, shunt 330-a-11 may be coupled with or between the digit line 210-a-11 and the plate line 215-a-11. The shunt 330-a-11, for example, may be configured to selectively couple the digit line 210-a-11 with the plate line 215-a-11 (e.g., in response to a logical or switching signal DLS11). In some examples, a shunt 330-a may be configured to selectively equalize a bias between a digit line 210-a and a plate line 215-a, or equalize one or more memory cells 105-b that are coupled with or between a digit line 210-a and a plate line 215-a. In some examples, a shunt 330-a may be configured to selectively discharge one or more memory cells 105-b that are coupled with or between a digit line 210-a and a plate line 215-a.

[0123] In some examples, the circuit 300 may be operated according to a shunt mask. For example, when multiplexing is performed on a domain 310-a (e.g., using selection components 320-a), a shunt 330-a of a masked digit line 210-a (e.g., a digit line 210-a that is not associated with an access operation that is being performed) may support a selective coupling with a plate line 215-a to prevent or reduce data loss (e.g., charge leakage) of memory cells 105-b that are associated with the masked digit line 210-a. In other words, a shunt 330-a may turn off bit transfer on masked digit lines 210-a that are not associated with an access operation that is being performed.
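A minimal behavioral sketch of the shunt mask of [0123], under the assumption that every digit line not involved in the current access has its shunt enabled; the data structure and names are invented for illustration:

```python
# Minimal sketch of the shunt-mask idea in [0122]-[0123]: during an access,
# every digit line except the selected ones is shunted to its plate line.
# The DLS naming follows the text; the list-of-booleans model is an assumption.

def shunt_mask(num_digit_lines: int, accessed: set[int]) -> list[bool]:
    """Return per-digit-line shunt enables (True = couple digit line to plate line)."""
    return [index not in accessed for index in range(num_digit_lines)]

# Accessing digit line 0 of a 4-column domain leaves lines 1..3 shunted,
# equalizing the bias across their memory cells to limit charge leakage.
assert shunt_mask(4, {0}) == [False, True, True, True]
```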
[0124] The selection component 280-a and the selection components 320-a may include various configurations of components, and each may be referred to as a multiplexer, a transistor network, a transistor array, a switching network, or a switching array. In one example, the selection component 280-a may include a set of transistors that are each coupled with the sense amplifier 290-a (e.g., each coupled with the signal line 285-a). Each of the set of transistors may also be coupled with a respective one of the signal development components 250-a (e.g., a respective one of the signal development lines 255-a-1 through 255-a-s). Each of the set of transistors may be configured to selectively couple the respective one of the signal development components 250-a with the sense amplifier 290-a, responsive to one of a set of switching or logical signals provided to a gate of the transistor.

[0125] In some examples, a selection component 280-a or a selection component 320-a may include a decoder or other logical or selection signal conversion component. A decoder of the selection component 280-a, for example, may receive a logical or selection signal (e.g., signal SDCM), which may be a digital signal (e.g., a signal having or otherwise representing multiple bits) received over a signal bus. In some examples, the decoder may receive the digital signal as an input to generate a set of binary signals (e.g., switching or logical signals) that may be applied to the gates of a set of transistors configured in a switching arrangement. For example, the decoder of the selection component 280-a may receive a selection signal SDCM as a 4-bit digital input signal, and generate 16 binary (e.g., on/off) switching signals, each applied to the gate of one of a set of 16 transistors configured in a switching arrangement.
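The decoder behavior described in [0125] amounts to converting a multi-bit selection value into one-hot gate signals. The following is a behavioral Python sketch, not a transistor-level design; the function and parameter names are assumptions:

```python
# Sketch of the decoder behavior in [0125]: a 4-bit selection signal is decoded
# into 16 one-hot switching signals, one per transistor gate.

def decode_sdcm(sdcm: int, width: int = 4) -> list[bool]:
    """Decode a `width`-bit selection value into 2**width one-hot gate signals."""
    if not 0 <= sdcm < 2 ** width:
        raise ValueError("selection value does not fit in the given width")
    return [index == sdcm for index in range(2 ** width)]

# SDCM = 5 turns on exactly the sixth transistor, coupling signal development
# component 5 with the sense amplifier and leaving the other 15 decoupled.
gates = decode_sdcm(5)
assert gates[5] and sum(gates) == 1
```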
[0126] In various examples, the selection component 280-a may be configured such that one of the signal development components 250-a-1 through 250-a-s is coupled with (e.g., selectively coupled with) the sense amplifier 290-a at a time, and others of the signal development components 250-a-1 through 250-a-s may be decoupled from (e.g., selectively decoupled from) the sense amplifier 290-a at that time (e.g., the time when the one of the signal development components 250-a-1 through 250-a-s is selectively coupled with the sense amplifier 290-a). In some examples, the selection component 280-a may also be configured to support operations where none of the signal development components 250-a-1 through 250-a-s are coupled with the sense amplifier 290-a at a particular time (e.g., where each of the signal development components 250-a-1 through 250-a-s is selectively isolated from the sense amplifier 290-a). In various examples of the circuit 300, the selection components 320-a may include similar features or sets of features as a selection component 280-a, or the selection components 320-a may include different features or sets of features than a selection component 280-a.

[0127] In some examples of the circuit 300, the signal development components 250-a or the memory cells 105-b may be associated with or be otherwise considered as relatively high latency portions of the circuit 300, and the sense amplifier 290-a may be associated with or considered as a relatively low latency portion of the circuit 300. In accordance with examples as disclosed herein, the sense component 150-b may illustrate an example of dividing memory cell access circuitry into high-latency parts (e.g., signal development components 250-a) and low-latency parts (e.g., sense amplifier 290-a), and coupling a set of high-latency parts with a low-latency part through a multiplexer (e.g., selection component 280-a).

[0128] In the example of circuit 300, the selection component 280-a may provide a first degree of data pipelining, which may reduce the impact of data access serialization due to row buffer conflicts. For example, the selection component 280-a may support overlapping data transfers on different sets of digit lines 210-a (e.g., different domains 310-a). Thus, the sense amplifier 290-a may be free to support read, write, rewrite, or refresh operations (e.g., while coupled with one of the signal development components 250-a) while other signal development components 250-a are involved in data transfer (e.g., while other signal development components 250-a are coupled with digit lines 210-a or memory cells 105-b).

[0129] The set of signal development components 250-a may be considered to be a small, fast local cache (e.g., a signal development cache), where the respective signal development components 250-a may be configured to store a signal state, different than the logic states stored at the memory cells 105-b. Such a configuration may be used to support reducing a rate of row buffer conflicts, increasing internal bandwidth, or other benefits. In some examples, the selection components 320-a may provide further gains by providing a second degree of data pipelining via multiplexed digit lines 210-a. Thus, in accordance with examples as disclosed herein, a memory device 100 that includes the circuit 300 may include signal development components 250-a that are multiplexed via the selection component 280-a, or digit lines 210-a that are multiplexed via one or more selection components 320-a, which may compensate for portions of an access operation or portions of access circuitry that are associated with different latencies.

[0130] Various memory devices (e.g., memory device 100) may include various arrangements of the circuit 300. For example, a memory device 100 may include a set of sense components 150-b, or a sense component 150 may otherwise include a set of sense amplifiers 290-a and corresponding sets of multiplexed signal development components 250-a. In one example, a memory device 100, or portion thereof, may include 16 sense amplifiers 290-a that are multiplexed with 1024 digit lines 210-a, which may or may not include multiplexing via selection components 320-a. In some examples, a set of sense amplifiers 290-a may be included in a composite array where the set of sense amplifiers 290-a is accessed as a single “row” of sense amplifiers of the composite array. In various examples, multiplexed digit lines 210-a may be in the same domain 310-a or different domains 310. In some examples, each of the domains 310-a may be independently controllable, and may be accessed via the same row component 125 or different row components 125.

[0131] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or to access information stored in a memory array. In one example, the memory controller may receive a read command from a requesting device (e.g., a control device) for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.
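As a rough behavioral illustration of the CAM-assisted lookup in [0131], the following sketch models the CAM as a mapping from array addresses to signal development cache entries. The class, method names, and dictionary-based model are assumptions introduced here, not the disclosed hardware:

```python
# Behavioral sketch (assumption-laden) of a CAM that maps array addresses to
# signal development cache entries, and a controller path that prefers the
# lower-latency cache on a CAM hit.

class SignalDevelopmentCacheCam:
    """Toy CAM model: array address -> signal development cache entry index."""

    def __init__(self) -> None:
        self._entries: dict[int, int] = {}

    def insert(self, address: int, cache_entry: int) -> None:
        self._entries[address] = cache_entry

    def lookup(self, address: int) -> int | None:
        """Return the matching cache entry index, or None on a CAM miss."""
        return self._entries.get(address)

def handle_read(cam: SignalDevelopmentCacheCam, address: int) -> str:
    entry = cam.lookup(address)
    if entry is not None:
        return f"read signal development cache entry {entry}"  # fast path
    return f"read memory array address {address:#x}"           # slow path

cam = SignalDevelopmentCacheCam()
cam.insert(address=0x40, cache_entry=3)
assert handle_read(cam, 0x40) == "read signal development cache entry 3"
assert handle_read(cam, 0x41) == "read memory array address 0x41"
```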
[0132] FIG. 4A illustrates an example of a read operation 400 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The read operation 400 may illustrate portions (e.g., time intervals) of an access operation that are associated with generating cell access signals (e.g., cell read signals, cell write signals) and latch signals when accessing a memory cell 105. For example, the read operation 400 may be divided into a read signal development portion 410 (e.g., a cell read portion), a latch signal generation portion 420, and a rewrite signal development portion 430 (e.g., a cell rewrite portion). The read operation 400 may employ circuitry that supports multiplexed signal development, such as the circuit 300 described with reference to FIG. 3. As an illustrative example, the read operation 400 is described with reference to reading a logic state stored by the memory cell 105-b-111 of the circuit 300, but the read operation 400 may be illustrative of operations that may be performed on any one or more of the memory cells 105-b of the circuit 300.

[0133] The read signal development portion 410 may be associated with a charge sharing between the memory cell 105-b-111 (e.g., a capacitive storage element of the memory cell 105-b-111, a linear capacitor or a ferroelectric capacitor), the digit line 210-a-11 (e.g., an intrinsic capacitance 230), and the signal development component 250-a-1. The read signal development portion 410 may be an example of developing a signal (e.g., a signal state, a cache signal) at the signal development component 250-a-1 based at least in part on selectively coupling the signal development component 250-a-1 with the memory cell 105-b-111. In some examples, developing the read signal at the signal development component 250-a-1 is associated with a first latency (e.g., a relatively high latency or long duration). During the read signal development portion 410, the signal development component 250-a-1 may be selectively decoupled from the sense amplifier 290-a.
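The charge sharing described in [0133] can be made concrete with a worked toy calculation. In an ideal lossless share, two capacitances settle to V = (C1*V1 + C2*V2) / (C1 + C2); the capacitor and voltage values below are invented for illustration and are not taken from this disclosure:

```python
# A worked toy example of the charge sharing in the read signal development
# portion: two capacitances at different voltages settle to a shared voltage
# V = (C1*V1 + C2*V2) / (C1 + C2). All values are illustrative assumptions.

def charge_share(c_cell: float, v_cell: float, c_line: float, v_line: float) -> float:
    """Final voltage after an ideal (lossless) charge share of two capacitors."""
    return (c_cell * v_cell + c_line * v_line) / (c_cell + c_line)

# A 10 fF cell at 1.5 V sharing onto a 90 fF digit line precharged to 0 V
# nudges the line to 0.15 V; a cell storing 0 V would leave it at 0 V.
# A sense scheme must reliably distinguish these small differences.
v_logic1 = charge_share(10e-15, 1.5, 90e-15, 0.0)
v_logic0 = charge_share(10e-15, 0.0, 90e-15, 0.0)
assert abs(v_logic1 - 0.15) < 1e-9 and v_logic0 == 0.0
```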
[0134] In some examples of the read signal development portion 410, an access line of the signal development component 250-a-1 (e.g., the signal development line 255-a-1) may be biased with a relatively high voltage, which may be associated with storing a relatively high voltage charge at the signal development component 250-a-1 (e.g., in a signal storage component of the signal development component 250-a-1, such as an integrator capacitor). In some examples, such a biasing may be associated with a “plate-low” read operation where, during the read signal development portion 410, the plate line 215-a-11 associated with the memory cell 105-b-111 being accessed is biased at a lower voltage (e.g., a ground voltage) than the digit line 210-a-11 associated with the memory cell 105-b-111.

[0135] The read signal development portion 410 may also include selectively coupling the memory cell 105-b-111 with the signal development component 250-a-1. In some examples, the read signal development portion 410 may include activating the word line 205-a-11 that is associated with the memory cell 105-b-111 that is being read (e.g., activating the logical signal WL11), which may selectively couple a memory storage element (e.g., a capacitor 220) with the respective digit line 210-a-11 (e.g., via a cell selection component 225 of the memory cell 105-b-111). In some examples, the read signal development portion 410 may include selectively coupling the respective digit line 210-a-11 with the signal development component 250-a-1 (e.g., via selection component 320-a-1, based on a selection signal DLM1, or some other switching component). Charge may accordingly be shared between the memory cell 105-b-111 and the signal development component 250-a-1, and may settle after some time (e.g., according to a time constant behavior), with changes in the voltages of the digit line 210-a-11 and the signal development line 255-a-1 that are based at least in part on the logic state stored by the memory cell 105-b-111.

[0136] In some examples, a read signal development portion 410 may include a delay (e.g., a delay portion, a delay duration) between developing a read signal (e.g., a read signal at a signal development component 250 reaching a steady state, a read signal reaching a maximum value at a signal development component 250) and providing the developed read signal (e.g., as maintained by the signal development component 250) to a sense amplifier 290. In other words, there may be a delay or inactivity period during the read signal development portion 410 before initiating a latch signal generation portion 420, which in some examples may include a decay of a developed read signal (e.g., a decay of a maintained read signal). In some examples, a circuit 300 may be configured such that a duration of such a delay or inactivity period, or an amount of decay of a developed read signal, can be tolerated while still reliably detecting a logic state stored by a memory cell 105. In some examples, such functionality of the circuit 300 may be supported by refreshing operations of signal development components 250 that mitigate decay of developed read signals (e.g., maintaining cache signals at the signal development components 250). These and other configurations may support signal development components 250 performing a caching function (e.g., a caching of a developed read signal or cache signal for some amount of time) in the circuit 300.

[0137] In some examples, the charge sharing of the read signal development portion 410 may be associated with a destructive read operation (e.g., where the originally-stored logic state of the memory cell 105-b-111 is lost or otherwise degraded at the memory cell 105-b-111), and therefore may be followed by rewrite operations (e.g., the rewrite signal development portion 430).
In some examples, a rewrite operation may not immediately follow a read signal development portion 410, such as when stored data is transferred to a signal development component 250, where it may be stored and further read, written, or modified. In various examples, data may be returned to a same memory cell 105 or a different memory cell 105, which may be associated with operations that make the signal development component 250 available for other operations. In some examples, the charge sharing of the read signal development portion 410 may be associated with a non-destructive read operation (e.g., where the originally-stored logic state of the memory cell 105-b-111 is maintained at the memory cell 105-b-111), and therefore may not be followed by rewrite operations (e.g., the rewrite signal development portion 430 may be omitted).

[0138] The charge sharing of the read signal development portion 410 may be associated with a delay or latency known as a row-to-column address delay. In a DRAM application, data may be stored at a memory cell 105 as electrode charge, and may be relatively fast to respond (e.g., having a relatively low latency). In an FeRAM application, data may be stored at a memory cell 105 as a cell state in the form of dipole orientation or polarization. The kinetics of such dipoles may be relatively slow (e.g., having a relatively high latency), which may lead to a longer sense time for FeRAM applications (e.g., longer than for DRAM applications). Thus, in some examples (e.g., in an FeRAM application), the read signal development portion 410 may be associated with a relatively high latency or long duration (e.g., in comparison with a latch signal generation portion 420). In some FeRAM applications, for example, the latency associated with the operations of the read signal development portion 410 may be approximately 50 nanoseconds.

[0139] In some examples of the read signal development portion 410, the shunts 330-a associated with other memory cells 105-b of the domain 310-a-1, such as shunts 330-a-12 (not shown, which may be associated with a digit line 210-a-12 or a plate line 215-a-12) through 330-a-1r, may be selected or activated, which may equalize a bias across memory cells 105-b that are not being accessed (e.g., equalizing a bias between a digit line 210-a-12 and a plate line 215-a-12, equalizing a bias between a digit line 210-a-1r and a plate line 215-a-1r, and so on). In FeRAM applications, for example, such an equalization of bias may prevent or reduce a loss of data (e.g., due to charge leakage) of memory cells 105-b other than the memory cell 105-b-111 that is being accessed during the read signal development portion 410.

[0140] The latch signal generation portion 420 may be associated with a charge sharing between the signal development component 250-a-1 and the sense amplifier 290-a. The latch signal generation portion 420 may be an example of generating an output signal of the sense amplifier 290-a (e.g., an amplifier component) based at least in part on the developed signal at the signal development component 250-a-1 (e.g., the cell read signal). In some examples, generating the latch signal at the sense amplifier 290-a is associated with a second latency (e.g., a relatively low latency or short duration).
The transition from the read signal development portion 410 to the latch signal generation portion 420 may include selectively coupling the signal development component 250-a-1 with the sense amplifier 290-a.

[0141] In some examples, selectively coupling the signal development component 250-a-1 with the sense amplifier 290-a may include a selection via the selection component 280-a, based on a logical selection signal SDCM. In some examples, selectively coupling the signal development component 250-a-1 with the sense amplifier 290-a may include a selective coupling via some other switching component (e.g., an isolation switching component) between the signal development component 250-a-1 and the sense amplifier 290-a. In some examples, the charge sharing of the latch signal generation portion 420 may be relatively rapid, and may take some fraction of the amount of time involved for the charge sharing between the memory cell 105-b-111 and the signal development component 250-a-1. In other words, the latch signal generation portion 420 may be shorter in duration than the read signal development portion 410. In some FeRAM applications, for example, the latency associated with the operations of the latch signal generation portion 420 may be approximately 5 to 10 nanoseconds.

[0142] In some examples, the latch signal generation portion 420 may include “firing” the sense amplifier 290-a, which may include selectively coupling one or more voltage sources with the sense amplifier 290-a (e.g., a low voltage source 293, a high voltage source 294). Thus, an output signal may be generated at the sense amplifier 290-a that is based at least in part on the cell read signal (e.g., based at least in part on the logic state stored by the memory cell 105-b-111). The output signal may be passed from the sense amplifier 290-a to another component of a memory device (e.g., an input/output component 160) via the I/O line 295-a to provide an indication of the data stored by the memory cell 105-b-111. In some examples, the output signal or some other signal associated with the generated latch signal may also be passed back to, or otherwise shared with, the signal development component 250-a-1, which in some examples may support a rewrite operation (e.g., following a destructive read operation). For example, based on the generated latch signal or output signal (e.g., based on whether the memory cell 105-b-111 stored a logic 0 or a logic 1), a rewrite signal may be passed or otherwise shared or generated with the signal development component 250-a-1 (e.g., via the signal development line 255-a-1) as part of the latch signal generation portion 420. In some examples, the generated latch signal or output signal may be passed back to the signal development component 250-a-1 to reinforce a charge or other signal maintained at the signal development component 250-a-1, which may support a rewrite operation on the memory cell 105-b-111.

[0143] In some examples of the latch signal generation portion 420, the shunts 330-a associated with other memory cells 105-b of the domain 310-a-1, such as shunts 330-a-12 (not shown, which may be associated with a digit line 210-a-12 or a plate line 215-a-12) through 330-a-1r, may be selected or activated, which may equalize a bias across memory cells 105-b that are not being accessed (e.g., equalizing a bias between a digit line 210-a-12 and a plate line 215-a-12, equalizing a bias between a digit line 210-a-1r and a plate line 215-a-1r, and so on). In FeRAM applications, for example, such an equalization of bias may prevent or reduce a loss of data (e.g., due to charge leakage) of memory cells 105-b other than the memory cell 105-b-111 that is being accessed during the latch signal generation portion 420.
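A toy model of “firing” the sense amplifier as described in [0142]: once the selected signal development line is coupled in, the amplifier compares the developed signal against a reference and latches toward the corresponding voltage rail. The reference and rail values are assumptions chosen only to be consistent with the toy charge-sharing numbers sketched earlier:

```python
# Toy model of firing the sense amplifier: compare the developed signal
# against a reference and latch to a rail. All values are assumptions.

V_REF = 0.075   # midpoint between the toy 0 V / 0.15 V developed signals
RAIL_HIGH = 1.5
RAIL_LOW = 0.0

def fire_sense_amplifier(v_signal: float) -> tuple[int, float]:
    """Return (latched logic state, output voltage) for a developed signal."""
    if v_signal > V_REF:
        return 1, RAIL_HIGH
    return 0, RAIL_LOW

assert fire_sense_amplifier(0.15) == (1, 1.5)
assert fire_sense_amplifier(0.0) == (0, 0.0)
```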
[0144] The rewrite signal development portion 430 may be associated with a charge sharing between the memory cell 105-b-111, the digit line 210-a-11, and the signal development component 250-a-1. The rewrite signal development portion 430 may be an example of developing a cell access signal (e.g., a cell write signal, a cell rewrite signal) at or using the signal development component 250-a-1. In some cases, developing a cell access signal (e.g., a cell write signal, a cell rewrite signal) at or using the signal development component 250-a-1 may be based at least in part on a latch signal of the sense amplifier 290-a (e.g., as generated during the latch signal generation portion 420). In some examples, a cell access signal (e.g., a cell write signal, a cell rewrite signal) at or using the signal development component 250-a-1 may be based on a charge or voltage maintained at the signal development component 250-a-1 (e.g., based at least in part on the read signal development portion 410), where the charge or voltage maintained at the signal development component 250-a-1 may be indicative of the logic state originally stored by the memory cell 105-b-111. In some examples, the charge or voltage maintained at the signal development component 250-a-1 may be independent of the latch signal at the sense amplifier 290-a, or may be reinforced by the latch signal at the sense amplifier 290-a (e.g., as reinforced during the latch signal generation portion 420).

[0145] In some examples, developing the rewrite signal at the signal development component 250-a-1 is associated with a third latency (e.g., a relatively high latency or long duration), which may or may not be equal to the first latency. The transition from the latch signal generation portion 420 to the rewrite signal development portion 430 may include selectively decoupling or isolating the signal development component 250-a-1 from the sense amplifier 290-a (e.g., via the selection component 280-a or an isolation switching component). Although the rewrite signal development portion 430 may support rewriting a logic state to a memory cell 105 that has been discharged, depolarized, or otherwise destroyed or degraded in a read operation, in examples of non-destructive read operations (e.g., when the memory cell 105-b-111 maintains a stored logic state after the read signal development portion 410), the rewrite signal development portion 430 may be omitted, and the latch signal generation portion 420 may be followed by another access operation (e.g., a read operation, a write operation, a refresh operation).

[0146] In various examples, a rewrite of the memory cell 105-b-111 during the rewrite signal development portion 430 may be performed or modified based on whether a rewrite signal is generated or otherwise provided by the sense amplifier 290-a, or based on whether a rewrite signal is generated or otherwise provided by a signal development component 250-a.
For example, a rewrite operation of the rewrite signal development portion 430 may be performed without relying on a rewrite signal of the sense amplifier 290-a, such as when a signal development component 250-a is configured to locally maintain a charge or other state (e.g., cache state, signal state) associated with the originally-stored logic state of the memory cell 105-b-111 until it is transferred back to the memory cell 105-b-111 (e.g., providing a local caching function as related to rewrite operations). In other words, the read signal development portion 410 or latch signal generation portion 420 may or may not be “destructive” from the perspective of a signal development component 250-a, depending on whether the signal development component 250-a relies on a latch signal of the sense amplifier 290-a for rewriting the memory cell 105-b-111. In some examples (e.g., when a signal development component 250-a is configured to maintain a charge or other state indicative of an originally-stored logic state of the memory cell 105-b-111), the rewrite of the memory cell 105-b-111 may occur after some delay period (e.g., of the rewrite signal development portion 430) depending on a duration that the signal development component 250-a-1 is configured to maintain such a charge or other state, or a type of control logic that implements the write-back (e.g., first-in-first-out (FIFO), least-recently-used (LRU), or others).

[0147] In some examples of a rewrite operation, the circuit 300 may be configured to couple the memory cell 105-b-111 with a high voltage source (e.g., a high voltage rail, via the signal development component 250-a-1), which may be a direct coupling by pull-up or pull-down circuitry (e.g., a transistor or other switching component of the signal development component 250-a-1). In some examples, the signal development component 250-a-1 may be configured with a capacitor or other charge storage component, and the latch signal generation portion 420 or the rewrite signal development portion 430 may include charging or refreshing the capacitor or other charge storage component with a charge that is sufficient to rewrite the memory cell 105-b-111 (e.g., during the rewrite signal development portion 430). Thus, in various examples, the signal development component 250-a-1 may rewrite the logic state to the memory cell 105-b-111, which may be performed while the signal development component 250-a-1 is selectively decoupled from the sense amplifier 290-a, so the sense amplifier 290-a is free to support operations with other signal development components 250-a.

[0148] The charge sharing of the rewrite signal development portion 430 may be associated with a delay or latency known as a row precharge delay, which may include fully or partially rewriting a logic state originally stored at the memory cell 105-b-111. For example, to rewrite a logic 0, the digit line 210-a-11 may be biased to a positive voltage (e.g., 1.5 V) and the plate line 215-a-11 may be biased to a ground or negative voltage (e.g., 0 V). To rewrite a logic 1, the digit line 210-a-11 may be biased to a ground or negative voltage (e.g., 0 V) and the plate line 215-a-11 may be biased to a positive voltage (e.g., 1.5 V). In some cases, the biasing of the digit line 210-a-11 and the plate line 215-a-11 may be based at least in part on the generated latch signal (e.g., prior to the sense amplifier 290-a being selectively isolated from the signal development component 250-a-1). For example, during the rewrite signal development portion 430, the signal development component 250-a-1 or the sense amplifier 290-a may bias the digit line 210-a-11 to either a positive voltage or a ground voltage based at least in part on the latch signal. In some cases, such a bias may be based on a charge or other state maintained at the signal development component 250-a-1, which may be independent of a generated latch signal (e.g., as generated using the sense amplifier 290-a).
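The rail selection in [0148] can be summarized in a few lines. The 1.5 V and 0 V values follow the example voltages in the text, while the function itself is an illustrative model rather than the disclosed circuitry:

```python
# Sketch of the rewrite biasing in [0148]: the digit line and plate line are
# driven to opposite rails depending on the logic state being restored.

V_HIGH = 1.5  # example positive rail from the text
V_LOW = 0.0   # example ground rail from the text

def rewrite_bias(logic_state: int) -> tuple[float, float]:
    """Return (digit_line_voltage, plate_line_voltage) for rewriting a state."""
    if logic_state == 0:
        return V_HIGH, V_LOW   # rewrite a logic 0
    if logic_state == 1:
        return V_LOW, V_HIGH   # rewrite a logic 1
    raise ValueError("logic state must be 0 or 1")

assert rewrite_bias(0) == (1.5, 0.0)
assert rewrite_bias(1) == (0.0, 1.5)
```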
[0149] In a DRAM application, data may be written at a memory cell 105 as electrode charge, and may be relatively fast to respond (e.g., a relatively low latency). In an FeRAM application, data may be written at a memory cell 105 as a cell state in the form of dipole orientation or polarization. The kinetics of such dipoles may be relatively slow (e.g., a relatively high latency), which may lead to a longer write time for FeRAM applications (e.g., longer than for DRAM applications). Thus, in some examples (e.g., in an FeRAM application), the rewrite signal development portion 430 may be associated with a relatively high latency or long duration (e.g., in comparison with a latch signal generation portion 420). At the end of the rewrite signal development portion 430, all of the digit lines 210-a and all of the plate lines 215-a of the domain 310-a-1 may be biased with a ground voltage, effectively equalizing a bias across each of the memory cells 105-b of the domain 310-a-1, which may support maintaining logic states stored by the memory cells 105-b over time.

[0150] In some examples, the shunts 330-a associated with other memory cells 105-b of the domain 310-a-1, such as shunts 330-a-12 (not shown, which may be associated with a digit line 210-a-12 or a plate line 215-a-12) through 330-a-1r, may be selected or activated during the rewrite signal development portion 430, which may equalize a bias across memory cells 105-b that are not being accessed (e.g., equalizing a bias between a digit line 210-a-12 and a plate line 215-a-12, equalizing a bias between a digit line 210-a-1r and a plate line 215-a-1r, and so on). Such an equalization of bias may prevent or reduce a loss of data (e.g., due to charge leakage) of memory cells 105-b other than the memory cell 105-b-111 that is being rewritten during the rewrite signal development portion 430.

[0151] The read operation 400 may be associated with the reading of a single memory cell 105-b-111 having a total duration of tA1 - tA0, which includes the read signal development portion 410, the latch signal generation portion 420, and the rewrite signal development portion 430 for reading the single memory cell 105-b-111. In examples where the read operation 400 does not employ multiplexed signal development techniques (e.g., a sequence of read operations 400 that use the same signal development component 250), a subsequent read operation that employs the sense amplifier 290-a may follow the rewrite signal development portion 430. Thus, performing multiple read operations 400 (e.g., reading multiple memory cells 105-b) using a same signal development component 250 may involve integer multiples of the duration tA1 - tA0 (e.g., at least 2 * (tA1 - tA0) to read two memory cells 105-b). However, multiplexing signal development components 250-a (e.g., via the selection component 280-a) may reduce the amount of time involved for the sense amplifier 290-a to read multiple memory cells 105-b.
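A back-of-the-envelope model of the serialization penalty noted in [0151] follows. The development and latch durations echo the approximate latencies mentioned above for FeRAM applications (about 50 ns signal development, 5 to 10 ns latch generation) but are otherwise assumptions, and the model ignores the gap or delay periods discussed elsewhere in this disclosure:

```python
# Idealized timing model: without multiplexing, n reads through one sense
# amplifier cost n * (tA1 - tA0); with fully overlapped signal development
# across distinct signal development components, only the short latch
# portions serialize. Durations are illustrative assumptions in nanoseconds.

T_DEVELOP = 50.0  # read or rewrite signal development (high latency)
T_LATCH = 10.0    # latch signal generation (low latency)

def serialized_read_time(n_cells: int) -> float:
    """Each read completes (develop + latch + rewrite) before the next begins."""
    return n_cells * (T_DEVELOP + T_LATCH + T_DEVELOP)

def multiplexed_read_time(n_cells: int) -> float:
    """Development portions overlap across components; latch portions serialize."""
    return T_DEVELOP + n_cells * T_LATCH + T_DEVELOP

# Reading four cells: 440 ns serialized versus 140 ns with multiplexing.
assert serialized_read_time(4) == 440.0
assert multiplexed_read_time(4) == 140.0
```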
[0152] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or to access information stored in a memory array. In one example, the memory controller may receive a read command from a requesting device (e.g., a control device) for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.

[0153] FIG. 4B illustrates an example of a read operation 450 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The read operation 450 may illustrate portions (e.g., time intervals) of an access operation (e.g., a multi-cell access operation) that are associated with generating cell access signals (e.g., cell read signals, cell write signals) and latch signals when accessing four memory cells 105 (e.g., via four signal development components 250). For example, the read operation 450 may be divided into read signal development portions 410-a, latch signal generation portions 420-a, and rewrite signal development portions 430-a for each of a set of memory cells 105-b, which may be examples of corresponding portions described with reference to FIG. 4A. The read operation 450 may employ circuitry that supports multiplexed signal development, such as the circuit 300 described with reference to FIG. 3. The read operation 450 illustrates an example of separating signal development operations from input/output operations, which may improve data throughput in a memory device.

[0154] As an illustrative example, the read operation 450 is described with reference to reading a logic state stored by four memory cells 105-b of four different domains 310-a, where each of the different domains is associated with a respective signal development component 250-a that is multiplexed with the sense amplifier 290-a. Read signal development portion 410-a-1, latch signal generation portion 420-a-1, and rewrite signal development portion 430-a-1 may refer to, for example, a read operation of memory cell 105-b-111 (e.g., of a domain 310-a-1, associated with a signal development component 250-a-1). Read signal development portion 410-a-2, latch signal generation portion 420-a-2, and rewrite signal development portion 430-a-2 may refer to, for example, a read operation of a memory cell 105-b-211 (e.g., of a domain 310-a-2, not shown, which may be associated with a signal development component 250-a-2).
Read signal development portion 410-a-3, latch signal generation portion 420-a-3, and rewrite signal development portion 430-a-3 may refer to, for example, a read operation of a memory cell 105-b-311 (e.g., of a domain 310-a-3, not shown, which may be associated with a signal development component 250-a-3). Read signal development portion 410-a-4, latch signal generation portion 420-a-4, and rewrite signal development portion 430-a-4 may refer to, for example, a read operation of a memory cell 105-b-411 (e.g., of a domain 310-a-4, not shown, which may be associated with a signal development component 250-a-4). Each of the signal development components 250-a-1, 250-a-2, 250-a-3, and 250-a-4 may be selectively coupled with the same sense amplifier 290-a via a selection component 280-a (e.g., based on a logical selection signal SDCM).

[0155] Each of the read signal development portions 410-a may be associated with charge sharing between a respective memory cell 105-b, a respective digit line 210-a, and a respective signal development component 250-a, which may occur during overlapping time intervals. The read signal development portions 410-a may be examples of developing a signal (e.g., a cell read signal, a cache signal, a signal state) at a signal development component 250-a of a plurality of signal development components 250-a based at least in part on selectively coupling the signal development component 250-a with a memory cell 105-b of the plurality of memory cells 105-b. The read signal development portion 410-a-1 may be an example of coupling (e.g., via the selection component 280-a, via the selection component 320-a-1), during a first time interval (e.g., and based at least in part on determining to access the memory cell 105-b-111), the memory cell 105-b-111 (e.g., a first memory cell) with the signal development component 250-a-1 (e.g., a first signal development component), and the read signal development portion 410-a-2 may be an example of coupling (e.g., via the selection component 280-a, via a selection component 320-a-2), during a second time interval that overlaps the first time interval (e.g., and based at least in part on determining to access the memory cell 105-b-211), the memory cell 105-b-211 (e.g., a second memory cell) with the signal development component 250-a-2 (e.g., a second signal development component).

[0156] Charge may accordingly be shared between the memory cell 105-b-111 and the signal development component 250-a-1, between the memory cell 105-b-211 and the signal development component 250-a-2, between the memory cell 105-b-311 and the signal development component 250-a-3, and between the memory cell 105-b-411 and the signal development component 250-a-4. In other words, charge may be shared via the signal development components 250-a-1 through 250-a-4 during overlapping time intervals. In some examples, developing the cell read signals at the signal development components 250-a-1 through 250-a-4 is associated with a first latency (e.g., a relatively high latency or long duration).

[0157] In some examples of the read signal development portions 410-a, the shunts 330-a associated with other memory cells 105-b of the respective domain 310-a may be selected or activated, which may equalize a bias across memory cells 105-b that are not being accessed.
For example, for domain 310-a-1, during the read signal development portion 410-a-1, a bias between a digit line 210-a-12 and a plate line 215-a-12 may be equalized via a shunt 330-a-12, a bias between a digit line 210-a-13 and a plate line 215-a-13 may be equalized via a shunt 330-a-13, and so on. In FeRAM applications, for example, such an equalization of bias may prevent or reduce a loss of data (e.g., due to charge leakage) of memory cells 105-b other than the memory cell 105-b that is being accessed during the respective read signal development portions 410.

[0158] The latch signal generation portions 420-a may be associated with a charge sharing between respective ones of the signal development components 250-a and the sense amplifier 290-a, which may occur over non-overlapping time intervals. The latch signal generation portions 420-a may each be an example of generating an output signal of the sense amplifier 290-a based at least in part on the developed signal at the respective signal development component 250-a (e.g., based on the cell read signal, cache signal, or signal state). In some examples, generating the latch signal at the sense amplifier 290-a is associated with a second latency (e.g., a relatively low latency or short duration). The transition from a read signal development portion 410 to the corresponding latch signal generation portion 420-a may include selectively coupling the respective signal development component 250-a with the sense amplifier 290-a.

[0159] The latch signal generation portion 420-a-1 may be an example of coupling (e.g., via the selection component 280-a), during a third time interval subsequent to the first time interval, the signal development component 250-a-1 (e.g., the first signal development component) with the sense amplifier 290-a. In some examples, the third time interval may at least partially overlap the second time interval, or the third time interval may be within the second time interval. The latch signal generation portion 420-a-2 may be an example of coupling (e.g., via the selection component 280-a), during a fourth time interval subsequent to the second time interval (e.g., and subsequent to the third time interval), the signal development component 250-a-2 (e.g., the second signal development component) with the sense amplifier 290-a.

[0160] The latch signal generation portions 420-a-1 through 420-a-4 may be performed according to a sequence, which may be based at least in part on the sequence of signal development components selected or otherwise indicated by the logical selection signal SDCM. In some examples, each of the latch signal generation portions 420-a may be separated by a gap or delay period (e.g., the period between the latch signal generation portion 420-a-1 and the latch signal generation portion 420-a-2), which may be associated with a gap or delay of the selection component 280-a, a gap or delay associated with changing a value of the logical selection signal SDCM, or a period during which no signal development components 250-a are coupled with the sense amplifier 290-a. In other words, an access operation may include a gap or delay period between when one signal development component 250-a is selectively decoupled from the sense amplifier 290-a and another signal development component 250-a is selectively coupled with the sense amplifier 290-a.
In other examples, such decoupling and coupling may be configured to occur simultaneously.

[0161] In some examples, the latch signal generation portions 420-a may include “firing” the sense amplifier 290-a, which may include selectively coupling one or more voltage sources with the sense amplifier 290-a (e.g., a low voltage source 293, a high voltage source 294). Thus, according to the sequence of latch signal generation portions 420-a-1 through 420-a-4, a sequence of output signals may be generated at the sense amplifier 290-a that is based at least in part on the respective sequence of cell read signals (e.g., according to the sequence of read signal development portions 410-a-1 through 410-a-4, based at least in part on the logic states stored by the accessed memory cells 105-b-111 through 105-b-411).

[0162] The output signals may be passed from the sense amplifier 290-a to another component of a memory device (e.g., an input/output component 160) via the I/O line 295-a to provide an indication of the data stored by the memory cells 105-b. In some examples, the output signals or some other signals associated with the generated latch signals may also be passed back to, or otherwise shared with, the signal development components 250-a-1 through 250-a-4, which in some examples may support rewrite operations (e.g., following a destructive read operation). For example, based on the generated latch signal or output signal (e.g., based on whether the memory cells 105-b stored a logic 0 or a logic 1), a rewrite signal may be passed or otherwise shared with the respective one of the signal development components 250-a-1 through 250-a-4 as part of the latch signal generation portions 420.

[0163] In some examples of the latch signal generation portions 420-a, the shunts 330-a associated with other memory cells 105-b of the respective domain 310-a may be selected or activated, which may equalize a bias across memory cells 105-b that are not being accessed. For example, for domain 310-a-1, during the latch signal generation portion 420-a-1, a bias between a digit line 210-a-12 and a plate line 215-a-12 may be equalized via a shunt 330-a-12, a bias between a digit line 210-a-13 and a plate line 215-a-13 may be equalized via a shunt 330-a-13, and so on. In FeRAM applications, for example, such an equalization of bias may prevent or reduce a loss of data (e.g., due to charge leakage) of memory cells 105-b other than the memory cell 105-b that is being accessed during the respective latch signal generation portions 420.

[0164] The rewrite signal development portions 430-a may be associated with a charge sharing between the respective one of the memory cells 105-b, the respective one of the digit lines 210-a, and the respective one of the signal development components 250-a. The rewrite signal development portions 430-a may each be an example of developing a cell access signal (e.g., a cell write signal, a cell rewrite signal) at a signal development component 250-a based at least in part on a latch signal of the sense amplifier 290-a, or may be independent of a latch signal of the sense amplifier 290-a. In some examples, developing the rewrite signals at the signal development components 250-a is associated with a third latency (e.g., a relatively high latency or long duration), which may or may not be equal to the first latency.
The transition from a latch signal generation portion 420-a to a corresponding rewrite signal development portion 430-a may include selectively isolating the respective signal development component 250-a from the sense amplifier 290-a (e.g., via the selection component 280-a or another isolation switching component). Although the rewrite signal development portions 430-a may support rewriting logic states to memory cells 105 that have been discharged, depolarized, or otherwise destroyed or degraded in a read operation, in examples of non-destructive read operations, the rewrite signal development portions 430-a (e.g., associated with a charge sharing between a signal development component and a memory cell) may be omitted.

[0165] In some examples of the rewrite signal development portions 430-a, the shunts 330-a associated with other memory cells 105-b of the respective domain 310-a may be selected or activated, which may equalize a bias across memory cells 105-b that are not being accessed. For example, for domain 310-a-1, during the rewrite signal development portion 430-a-1, a bias between a digit line 210-a-12 and a plate line 215-a-12 may be equalized via a shunt 330-a-12, a bias between a digit line 210-a-13 and a plate line 215-a-13 may be equalized via a shunt 330-a-13, and so on. Such an equalization of bias may prevent or reduce a loss of data (e.g., due to charge leakage) of memory cells 105-b other than the memory cell 105-b that is being accessed during the rewrite signal development portions 430-a.

[0166] Like the read operation 400, the read operation 450 may also be associated with the reading of a single memory cell 105 (e.g., via the sense amplifier 290-a) having a total duration of tA1 - tA0, which may include the read signal development portion 410-a-1, the latch signal generation portion 420-a-1, and the rewrite signal development portion 430-a-1 for reading the single memory cell 105-b-111. However, by employing multiplexed signal development as disclosed herein, performing multiple read operations via the same sense amplifier 290-a may not take an integer multiple of the duration of tA1 - tA0 (e.g., where the integer multiple may correspond to the quantity of memory cells 105-b being accessed in parallel). Rather, by generating cell access signals (e.g., cache signals, signal states) in overlapping time intervals (e.g., the time intervals of read signal development portions 410-a or rewrite signal development portions 430-a of the signal development component 250-a-1 that overlap with the time intervals of read signal development portions 410-a or rewrite signal development portions 430-a of the signal development component 250-a-2, and so on), the multiple memory cells 105-b may be read in a shorter time than such an integer multiple. In other words, in accordance with the described techniques for multiplexed signal development, the sense amplifier 290-a may support reading the four memory cells 105-b in a duration of tA3 - tA2, a duration which may be shorter than 4 * (tA1 - tA0) (e.g., shorter than the corresponding integer multiple of a duration for reading a single memory cell 105-b).

[0167] In one example, the rewrite signal development portions 430-a-1, 430-a-2, 430-a-3, and 430-a-4 of a first set of reads may be followed by read signal development portions 410-a-5, 410-a-6, 410-a-7, and 410-a-8, respectively, of a second set of reads.
[0167] In one example, the rewrite signal development portions 430-a-1, 430-a-2, 430-a-3, and 430-a-4 of a first set of reads may be followed by read signal development portions 410-a-5, 410-a-6, 410-a-7, and 410-a-8, respectively, of a second set of reads. The first set of reads may be associated with a first digit line index (e.g., a value of “1” as indicated by logical selection signals DLM1, DLM2, DLM3, and DLM4), and the second set of reads may be associated with a second digit line index (e.g., a value of “2” as indicated by logical selection signals DLM1, DLM2, DLM3, and DLM4). Or, more generally, the first set of reads and the second set of reads may differ based at least in part on selected digit lines 210-a of the read operations.

[0168] In some examples (e.g., where selection components 320-a across domains 310-a are independently controllable, where logical selection signals DLM across domains 310-a are independently controllable), a new digit line 210-a may be selected for a signal development component 250 (e.g., via a selection component 320-a) as soon as a rewrite signal development portion 430 is complete for the same signal development component 250. In other words, as illustrated in the example of operation 450, a rewrite signal development portion 430-a of a first set of reads may overlap in time with a read signal development portion 410-a of a second set of reads for signal development components 250-a that are multiplexed with the same sense amplifier 290-a (e.g., the read signal development portion 410-a-5 overlapping the rewrite signal development portion 430-a-4). Thus, the periodicity for reading four memory cells 105 in the example of operation 450 where domains 310-a-1 through 310-a-4 are independently controllable may be illustrated by the time tA3 - tA2, which in some examples may be equal to or nearly equal to the time tA1 - tA0, or tA1 - tA0 plus some delay or gap period (e.g., associated with the selection of a new digit line 210-a via a selection component 320-a), or some other duration that is based on the overall duration associated with a read operation (e.g., tA1 - tA0), the respective latencies of sub-operations (e.g., relative durations of read signal development portions 410, latch signal generation portions 420, rewrite signal development portions 430), and the degree of multiplexing (e.g., a quantity of signal development components 250-a that are multiplexed with the sense amplifier 290-a).

[0169] In some examples, a subsequent read may be performed on a memory cell 105-b that is coupled with a different digit line 210-a than a preceding read operation, but is coupled with a same activated word line 205-a, which may reduce latency. For example, maintaining a selected word line 205-a may eliminate a word line deselection operation and a subsequent word line selection operation. Such examples may be accompanied by shunting a digit line 210-a associated with the earlier read operation (e.g., a digit line 210-a that was previously un-shunted), and un-shunting a digit line 210-a associated with the later read operation (e.g., a digit line 210-a that was shunted during the earlier read operation).

[0170] In another example, not shown, a set of reads may be associated with a first common word line (e.g., where logical word lines WL11, WL21, WL31, and WL41 are simultaneously activated), and a second set of reads may be associated with a second common word line (e.g., where logical word lines WL12, WL22, WL32, and WL42 are simultaneously activated). Or, more generally, the first set of reads and the second set of reads may differ based at least in part on a selected common word line 205-a of the read operations. In some examples (e.g., where word lines 205-a across domains 310-a are not independently controllable), a new word line 205-a may be selected as soon as a latch signal generation portion 420 is complete or a rewrite signal development portion 430 is complete for all of the multiplexed signal development components 250-a (e.g., associated with the sense amplifier 290-a, or other set of domains 310-a that are not independently controllable). In other words, in some examples, a latch signal generation portion 420 or a rewrite signal development portion 430 of a first set of reads may not overlap in time with a read signal development portion 410 of a second set of reads for signal development components multiplexed with the same sense amplifier 290-a.
[0171] For example, when word lines 205-a are not independently controllable across domains 310-a-1 through 310-a-4, the read signal development portion 410-a-5 may follow or be otherwise subsequent to the rewrite signal development portion 430-a-4. Thus, the periodicity for reading four memory cells 105 in the example where the domains 310-a are not independently controllable may be equal to or nearly equal to the combined time of one read signal development portion 410-a, each of the latch signal generation portions 420-a-1 through 420-a-4 for the multiplexed signal development components 250-a-1 through 250-a-4, and one rewrite signal development portion 430-a, plus any relevant delay or gap periods (e.g., associated with the selection of a new word line 205-a, or the selection of new signal development components 250-a via a selection component 280-a). Accordingly, in some examples, such a periodicity where domains 310-a are not independently controllable may be longer than the periodicity illustrated by time tA2 - tA0.

[0172] Thus, in accordance with various examples as disclosed herein, the advantages provided by the described signal development multiplexing (e.g., a reduced latency when accessing multiple memory cells 105-b in parallel) may scale with the relative difference in latency (e.g., durations) of read signal development portions 410, latch signal generation portions 420, and rewrite signal development portions 430. The advantages provided by the described signal development multiplexing may also depend on whether domains 310-a are configured to be independently controllable, or are controlled via common access lines or common logical signals.
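A small Python comparison may illustrate this dependence on controllability. The durations and the steady-state formulas are illustrative assumptions: the lockstep case serializes one develop, four latches, and one rewrite per set (plus an assumed word-line-selection gap), while the independent case is bounded by either the serialized latches or a single cell's develop-latch-rewrite path, whichever is longer.

```python
# Illustrative sketch (hypothetical durations, arbitrary units) comparing the
# set-to-set periodicity with and without independently controllable domains.

T_DEVELOP, T_LATCH, T_REWRITE, T_GAP = 10.0, 2.0, 10.0, 0.5

def lockstep_period(n: int) -> float:
    """Shared word lines: a new word line waits for every multiplexed signal
    development component, so each set costs one develop, n serialized
    latches, one rewrite, and an assumed word-line-selection gap."""
    return T_DEVELOP + n * T_LATCH + T_REWRITE + T_GAP

def independent_period(n: int) -> float:
    """Independently controllable domains: each component may begin its next
    develop as soon as its own rewrite completes, so the steady-state period
    is bounded by the serialized latches or by one cell's full read path
    (approximately tA1 - tA0), whichever is longer."""
    return max(n * T_LATCH, T_DEVELOP + T_LATCH + T_REWRITE)

print(lockstep_period(4))     # 28.5
print(independent_period(4))  # 22.0 (~ the single-cell read duration)
```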
[0173] Although the techniques of read operation 450 are described with reference to a single sense amplifier 290-a, the techniques of read operation 450 may be repeated for each sense amplifier 290 of a sense amplifier array, including various operations being performed concurrently (e.g., in parallel, with simultaneous or offset initiation or triggering), to support further pipelining of read operations in a memory device 100. For example, the read operation 450, or another read operation performed concurrently with or offset from the read operation 450, may include signal development operations including read signal development portions 410-b-1, 410-b-2, 410-b-3, and 410-b-4 (not shown) associated with a different sense amplifier 290 (e.g., of a same sense amplifier array). In some examples, a read signal development portion 410-b-1 may be initiated at the same time as, or otherwise performed concurrently with or offset from, the read signal development portion 410-a-1 (e.g., according to a simultaneous accessing of multiple memory cells of a row, a domain, or a subdomain, according to concurrent signal exchange with a cacheline). Likewise, a read signal development portion 410-b-2 may be initiated at the same time as, or otherwise performed concurrently with or offset from, the read signal development portion 410-a-2, and so on.

[0174] Further, the read operation 450, or another read operation performed concurrently with the read operation 450, may include input/output operations including latch signal generation portions 420-b-1, 420-b-2, 420-b-3, and 420-b-4 (not shown) associated with a different sense amplifier 290 (e.g., of a same sense amplifier array). In some examples, a latch signal generation portion 420-b-1 may be initiated at the same time as, or otherwise performed concurrently with or offset from, the latch signal generation portion 420-a-1 (e.g., according to a simultaneous sensing at a sense amplifier array, according to a simultaneous latching at a set of latches of a sense component or I/O component, according to concurrent signal exchange with a cacheline). Likewise, a latch signal generation portion 420-b-2 may be initiated at the same time as, or otherwise performed concurrently with or offset from, the latch signal generation portion 420-a-2, and so on. Although described in the context of two parallel reads associated with two different sense amplifiers 290, the described techniques may be applied to any quantity of parallel reads. For example, to support a 64-bit information transfer scheme, 64 parallel reads may be performed using 64 sense amplifiers 290 in accordance with examples as disclosed herein.

[0175] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using a CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or access information stored in a memory array. In one example, the memory controller may receive, from a requesting device (e.g., a control device), a read command for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.
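The CAM-backed decision may be sketched in Python as follows. The dictionary, the example addresses, and the return strings are hypothetical stand-ins (a hardware CAM would match all entries in parallel rather than via a software lookup); the sketch shows only the control flow of consulting the mapping before choosing between the signal development cache and the memory array.

```python
# Minimal sketch of the CAM-assisted lookup described above (all names and
# addresses are illustrative assumptions, not elements of the disclosure).

cam = {}  # array address -> signal development cache line (the CAM content)

def cache_fill(array_address: int, cache_line: int) -> None:
    """Record a mapping when data is cached in a signal development component."""
    cam[array_address] = cache_line

def read(array_address: int) -> str:
    """Serve a read from the signal development cache on a CAM hit,
    otherwise fall back to the (slower) memory array access."""
    line = cam.get(array_address)  # the CAM match on the received address
    if line is not None:
        return f"fast read from signal development cache line {line}"
    return "slower read via the memory array and a signal development component"

cache_fill(array_address=0x2A4, cache_line=3)
print(read(0x2A4))  # hit: served from the signal development cache
print(read(0x117))  # miss: served from the memory array
```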
[0176] FIG. 5A illustrates an example of a write operation 500 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The write operation 500 may illustrate portions (e.g., time intervals) of an access operation that are associated with generating latch signals and cell access signals (e.g., cell write signals) when accessing a memory cell 105. For example, the write operation 500 may be divided into a latch signal generation portion 510 and a write signal development portion 520 (e.g., a cell write portion). The write operation 500 may employ circuitry that supports multiplexed signal development, such as the circuit 300 described with reference to FIG. 3. As an illustrative example, the write operation 500 is described with reference to writing a logic state to the memory cell 105-b-111 of the circuit 300, but the write operation 500 may be illustrative of operations that may be performed on any one or more of the memory cells 105-b of the circuit 300.

[0177] The latch signal generation portion 510 may be associated with a charge sharing between the signal development component 250-a-1 and the sense amplifier 290-a. The latch signal generation portion 510 may be an example of generating a latch signal at the sense amplifier 290-a or the signal development component 250-a-1 (e.g., a cache signal, a signal state) based at least in part on a write command or write signal (e.g., from an input/output component 160 or a memory controller 170) received via I/O line 295-a. In some examples, generating the latch signal at the sense amplifier 290-a or the signal development component 250-a-1 is associated with a fourth latency (e.g., a relatively low latency or short duration), which may be the same as or different than the second latency of the latch signal generation portions 420 described with reference to read operations 400 and 450.

[0178] The latch signal generation portion 510 may include selectively coupling the signal development component 250-a-1 with the sense amplifier 290-a (e.g., at the beginning of the latch signal generation portion 510, or at another time after other operations of the latch signal generation portion 510, such as after receiving a write command or write signal via I/O line 295-a). In some examples, selectively coupling the signal development component 250-a-1 with the sense amplifier 290-a may include a selection via the selection component 280-a, based on a logical selection signal SDCM. In some examples, selectively coupling the signal development component 250-a-1 with the sense amplifier 290-a may include a selective coupling via some other switching component (e.g., an isolation switching component) between the signal development component 250-a-1 and the sense amplifier 290-a.

[0179] In some examples, the latch signal generation portion 510 may include “firing” the sense amplifier 290-a, which may include selectively coupling one or more voltage sources with the sense amplifier 290-a (e.g., a low voltage source 293, a high voltage source 294). Thus, a latch signal may be generated at the sense amplifier 290-a that is based at least in part on a write command or write signal (e.g., received via the I/O line 295-a). The generated latch signal or some other signal associated with the generated latch signal may be passed to, or otherwise shared with, the signal development component 250-a-1 (e.g., storing a cache signal or signal state at a cache element of the signal development component 250-a-1) to support the writing of the memory cell 105-b-111.
For example, based on the generated latch signal (e.g., based on whether the memory cell 105-b-111 is to store a logic 0 or a logic 1), a write signal may be passed or otherwise shared or generated with the signal development component 250-a-1 (e.g., via the signal development line 255-a-1) as part of the latch signal generation portion 510.

[0180] The write signal development portion 520 may be associated with a charge sharing between the memory cell 105-b-111, the digit line 210-a-11, and the signal development component 250-a-1. The write signal development portion 520 may be an example of developing a cell access signal (e.g., a cell write signal) at or using the signal development component 250-a-1 based at least in part on a latch signal of the sense amplifier 290-a. In some examples, developing the write signal at the signal development component 250-a-1 is associated with a fifth latency (e.g., a relatively high latency or long duration), which may or may not be equal to the third latency of the rewrite signal development portions 430 described with reference to read operations 400 and 450. The transition from the latch signal generation portion 510 to the write signal development portion 520 may include selectively decoupling or isolating the signal development component 250-a-1 from the sense amplifier 290-a (e.g., via the selection component 280-a or an isolation switching component).

[0181] In some examples of a write operation, the circuit 300 may be configured to couple the memory cell 105-b-111 with a high voltage source (e.g., a high voltage rail, via the signal development component 250-a-1), which may be a direct coupling by pull-up or pull-down circuitry (e.g., a transistor or other switching component of the signal development component 250-a-1). In some examples, the signal development component 250-a-1 may be configured with a capacitor or other charge storage component, and the latch signal generation portion 510 or the write signal development portion 520 may include charging or refreshing the capacitor or other charge storage component with a charge that is sufficient to rewrite the memory cell 105-b-111 (e.g., during the write signal development portion 520). Thus, in various examples, the signal development component 250-a-1 may write the logic state to the memory cell 105-b-111, which may be performed while the signal development component 250-a-1 is selectively decoupled from the sense amplifier 290-a, so the sense amplifier 290-a is free to support operations with other signal development components 250-a.

[0182] The charge sharing of the write signal development portion 520 may also be associated with a delay or latency known as a row precharge delay, which may include writing a logic state to the memory cell 105-b-111 based on a write command. For example, to write a logic 0, the digit line 210-a-11 may be biased to a positive voltage (e.g., 1.5 V) and the plate line 215-a-11 may be biased to a ground or negative voltage (e.g., 0 V). To write a logic 1, the digit line 210-a-11 may be biased to a ground or negative voltage (e.g., 0 V) and the plate line 215-a-11 may be biased to a positive voltage (e.g., 1.5 V). The biasing of the digit line 210-a-11 and the plate line 215-a-11 may be based at least in part on the generated latch signal (e.g., prior to the sense amplifier 290-a being selectively isolated from the signal development component 250-a-1). For example, during the write signal development portion 520, the signal development component 250-a-1 may bias the digit line 210-a-11 to either a positive voltage or a ground voltage based at least in part on the latch signal (e.g., based at least in part on a write command). At the end of the write signal development portion 520, all of the digit lines 210-a and all of the plate lines 215-a of the domain 310-a-1 may be biased with a ground voltage, effectively equalizing a bias across each of the memory cells 105-b of the domain 310-a-1, which may support maintaining logic states stored by the memory cells 105-b over time.
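The logic-state-to-bias rule in the example above may be captured in a few lines of Python. The 1.5 V and 0 V levels follow the example values in the preceding paragraph; the function itself and its return convention are illustrative assumptions.

```python
# Sketch of the digit line / plate line biasing described for the write
# signal development portion 520 (function name and tuple convention assumed).

def write_bias(logic_state: int) -> tuple:
    """Return (digit line 210 bias, plate line 215 bias) in volts."""
    if logic_state == 0:
        return (1.5, 0.0)  # logic 0: digit line high, plate line grounded
    if logic_state == 1:
        return (0.0, 1.5)  # logic 1: digit line grounded, plate line high
    raise ValueError("logic state must be 0 or 1")

print(write_bias(0))  # (1.5, 0.0)
print(write_bias(1))  # (0.0, 1.5)
```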
[0183] In some examples, the shunts 330-a associated with other memory cells 105-b of the domain 310-a-1, such as shunts 330-a-12 through 330-a-1r, may be selected or activated during the write signal development portion 520, which may equalize a bias across memory cells 105-b that are not being accessed (e.g., equalizing a bias between a digit line 210-a-12 and a plate line 215-a-12, equalizing a bias between a digit line 210-a-1r and a plate line 215-a-1r, and so on). Such an equalization of bias may prevent or reduce a loss of data (e.g., due to charge leakage) of memory cells 105-b other than the memory cell 105-b-111 that is being written during the write signal development portion 520.

[0184] The write operation 500 may be associated with the writing of a single memory cell 105-b-111 having a total duration of tB1 - tB0, which includes the latch signal generation portion 510 and the write signal development portion 520 for writing the single memory cell 105-b-111. In examples where the write operation 500 does not employ multiplexed signal development techniques (e.g., a sequence of write operations 500 that use the same signal development component 250), a subsequent write operation that employs the sense amplifier 290-a may follow the write signal development portion 520. Thus, performing multiple write operations 500 (e.g., writing multiple memory cells 105-b) using a same signal development component 250 may involve integer multiples of the duration tB1 - tB0 (e.g., at least 2 * (tB1 - tB0) to write two memory cells 105-b). However, multiplexing signal development components 250-a (e.g., via the selection component 280-a) may reduce the amount of time involved for the sense amplifier 290-a to write multiple memory cells 105-b.
[0185] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using a CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or access information stored in a memory array. In one example, the memory controller may receive, from a requesting device (e.g., a control device), a read command for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.

[0186] FIG. 5B illustrates an example of a write operation 550 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The write operation 550 may illustrate portions (e.g., time intervals) of an access operation (e.g., a multi-cell access operation) that are associated with generating latch signals and cell access signals (e.g., cell write signals) when accessing four memory cells 105 (e.g., via four signal development components 250). For example, the write operation 550 may be divided into latch signal generation portions 510-a and write signal development portions 520-a for each of a set of memory cells 105-b, which may be examples of corresponding portions described with reference to FIG. 5A. The write operation 550 may employ circuitry that supports multiplexed signal development, such as the circuit 300 described with reference to FIG. 3. The write operation 550 illustrates an example of separating signal development operations from input/output operations, which may improve data throughput in a memory device.

[0187] As an illustrative example, the write operation 550 is described with reference to writing a logic state to four memory cells 105-b of four different domains 310-a, where each of the different domains is associated with a respective signal development component 250-a that is multiplexed with the sense amplifier 290-a. Latch signal generation portion 510-a-1 and write signal development portion 520-a-1 may refer to, for example, a write operation of memory cell 105-b-111 (e.g., of a domain 310-a-1, associated with a signal development component 250-a-1). Latch signal generation portion 510-a-2 and write signal development portion 520-a-2 may refer to, for example, a write operation of a memory cell 105-b-211 (e.g., of a domain 310-a-2, not shown, associated with a signal development component 250-a-2). Latch signal generation portion 510-a-3 and write signal development portion 520-a-3 may refer to, for example, a write operation of a memory cell 105-b-311 (e.g., of a domain 310-a-3, not shown, associated with a signal development component 250-a-3). Latch signal generation portion 510-a-4 and write signal development portion 520-a-4 may refer to, for example, a write operation of a memory cell 105-b-411 (e.g., of a domain 310-a-4, not shown, associated with a signal development component 250-a-4). Each of the signal development components 250-a-1, 250-a-2, 250-a-3, and 250-a-4 may be selectively coupled with a same sense amplifier 290-a via a selection component 280-a (e.g., based on a logical selection signal SDCM).

[0188] Each of the latch signal generation portions 510-a may be associated with a charge sharing between respective ones of the signal development components 250-a and the sense amplifier 290-a, which may occur over non-overlapping time intervals. The latch signal generation portions 510-a may each be an example of generating a signal (e.g., a cache signal, a signal state) at a signal development component 250-a based at least in part on selectively coupling the signal development component 250-a with the sense amplifier 290-a (e.g., an amplifier component). In some examples, such a signal may be generated based at least in part on a write command or write signal.
In some examples, generating a latch signal, cache signal, or signal state is associated with a fourth latency (e.g., a relatively low latency or short duration).

[0189] The latch signal generation portion 510-a-1 may be an example of coupling (e.g., via the selection component 280-a), during a first time interval and based at least in part on determining to access the memory cell 105-b-111 (e.g., a first memory cell), the signal development component 250-a-1 (e.g., a first signal development component) with the sense amplifier 290-a (e.g., an amplifier component). The latch signal generation portion 510-a-2 may be an example of coupling (e.g., via the selection component 280-a), during a second time interval subsequent to the first time interval and based at least in part on determining to access the memory cell 105-b-211 (e.g., a second memory cell), the signal development component 250-a-2 (e.g., a second signal development component) with the sense amplifier 290-a.

[0190] The latch signal generation portions 510-a-1 through 510-a-4 may be performed according to a sequence, which may be based at least in part on a sequence of memory cell write commands or signals (e.g., as received via I/O line 295-a). Such a sequence may also correspond to the sequence of signal development components 250-a selected or otherwise indicated by the logical selection signal SDCM. In some examples, each of the latch signal generation portions 510-a may be separated by a gap or delay period (e.g., the period between the latch signal generation portion 510-a-1 and the latch signal generation portion 510-a-2), which may be associated with a gap or delay of the selection component 280-a, a gap or delay associated with changing a value of the logical selection signal SDCM, or a period during which no signal development components 250-a are coupled with the sense amplifier 290-a. In other words, an access operation may include a gap or delay period between when one signal development component 250-a is selectively decoupled from the sense amplifier 290-a and another signal development component 250-a is selectively coupled with the sense amplifier 290-a. In other examples, such decoupling and coupling may be configured to occur simultaneously.

[0191] In some examples, the latch signal generation portions 510-a may include “firing” the sense amplifier 290-a, which may include selectively coupling one or more voltage sources with the sense amplifier 290-a (e.g., a low voltage source 293, a high voltage source 294). Thus, according to the sequence of latch signal generation portions 510-a-1 through 510-a-4, a sequence of signals may be generated at the sense amplifier 290-a or signal development components 250-a that is based at least in part on the respective sequence of write commands or signals.

[0192] One or more signals may be transferred between a sense amplifier 290 and a signal development component 250 as part of or in connection with a write operation. For example, the generated latch signals may also be passed back to, or otherwise shared with, the signal development components 250-a-1 through 250-a-4 to support the respective write operations.
For example, based on the generated latch signal (e.g., based on whether the memory cells 105-b are to store a logic 0 or a logic 1), a write signal may be passed or otherwise shared with the respective one of the signal development components 250-a-1 through 250-a-4 as part of the latch signal generation portions 510-a.

[0193] The write signal development portions 520-a may be associated with a charge sharing between a respective one of the memory cells 105-b, a respective one of the digit lines 210-a, and a respective one of the signal development components 250-a. The write signal development portions 520-a may each be an example of developing a cell access signal (e.g., a cell write signal) at a signal development component 250-a based at least in part on a latch signal of the sense amplifier 290-a. The transition from a latch signal generation portion 510 to a corresponding write signal development portion 520-a may include selectively isolating the respective signal development component 250-a from the sense amplifier 290-a (e.g., via the selection component 280-a or another isolation switching component). The write signal development portion 520-a-1 may be an example of coupling, during a third time interval subsequent to the first time interval, the signal development component 250-a-1 (e.g., the first signal development component) with the memory cell 105-b-111 (e.g., the first memory cell). In some examples, the second time interval is within, or at least partially overlaps, the third time interval. The write signal development portion 520-a-2 may be an example of coupling, during a fourth time interval subsequent to the second time interval that overlaps the third time interval, the signal development component 250-a-2 (e.g., the second signal development component) with the memory cell 105-b-211 (e.g., the second memory cell).

[0194] In some examples of the write signal development portions 520-a, the shunts 330-a associated with other memory cells 105-b of the respective domain 310-a may be selected or activated, which may equalize a bias across memory cells 105-b that are not being accessed. For example, for domain 310-a-1, during the write signal development portion 520-a-1, a bias between a digit line 210-a-12 and a plate line 215-a-12 may be equalized via a shunt 330-a-12, a bias between a digit line 210-a-13 and a plate line 215-a-13 may be equalized via a shunt 330-a-13, and so on. Such an equalization of bias may prevent or reduce a loss of data (e.g., due to charge leakage) of memory cells 105-b other than the memory cell 105-b that is being accessed during the write signal development portions 520-a.

[0195] Like the write operation 500, the write operation 550 may also be associated with the writing of a single memory cell 105 (e.g., via the sense amplifier 290-a) having a total duration of tB1 - tB0, which may include the latch signal generation portion 510-a-1 and the write signal development portion 520-a-1 for writing the single memory cell 105-b-111. However, by employing multiplexed signal development in accordance with examples as disclosed herein, performing multiple write operations via the same sense amplifier 290-a may not take an integer multiple of the duration of tB1 - tB0 (e.g., where the integer multiple may correspond to the quantity of memory cells 105-b being written in parallel).
Rather, by generating cell access signals in overlapping time intervals (e.g., the time intervals of write signal development portions 520-a of the signal development component 250-a-1 that overlap with the time intervals of write signal development portions 520-a of the signal development component 250-a-2, and so on), the multiple memory cells 105-b may be written in a shorter time than such an integer multiple. In other words, in accordance with the described techniques for multiplexed signal development, the sense amplifier 290-a may support writing the four memory cells 105-b in a duration of tB2 - tB0, a duration which may be shorter than 4 * (tB1 - tB0) (e.g., shorter than the corresponding integer multiple of a duration for writing a single memory cell 105-b).

[0196] In one example, the write signal development portions 520-a-1, 520-a-2, 520-a-3, and 520-a-4 of a first set of writes may be followed by latch signal generation portions 510-a-5, 510-a-6, 510-a-7, and 510-a-8, respectively, of a second set of writes. The first set of writes may be associated with a first digit line index (e.g., a value of “1” as indicated by logical selection signals DLM1, DLM2, DLM3, and DLM4), and the second set of writes may be associated with a second digit line index (e.g., a value of “2” as indicated by logical selection signals DLM1, DLM2, DLM3, and DLM4). Or, more generally, the first set of writes and the second set of writes may differ based at least in part on selected digit lines 210-a of the write operations. In some examples (e.g., where selection components 320-a across domains 310-a are independently controllable, where logical selection signals DLM across domains 310-a are independently controllable), a new digit line 210-a may be selected for a signal development component 250 (e.g., via a selection component 320-a) as soon as a write signal development portion 520-a is complete for the same signal development component 250. In other words, as illustrated in the example of operation 550, a write signal development portion 520-a of a first set of writes may overlap in time with a latch signal generation portion 510-a of a second set of writes for signal development components 250-a that are multiplexed with the same sense amplifier 290-a (e.g., the latch signal generation portion 510-a-5 overlapping the write signal development portion 520-a-4). Thus, the periodicity for writing four memory cells 105 in the example of operation 550 where domains 310-a-1 through 310-a-4 are independently controllable may be illustrated by the time tB2 - tB0, which may be based on the overall duration associated with a write operation (e.g., tB1 - tB0), the respective latencies of sub-operations (e.g., relative durations of latch signal generation portions 510-a and write signal development portions 520-a), and the degree of multiplexing (e.g., a quantity of signal development components 250-a that are multiplexed with the sense amplifier 290-a).
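This write pipelining may be sketched numerically in Python. The durations are hypothetical; the scheduling rules, latches serialized on the one sense amplifier, develops overlapping, and the next set starting once both the amplifier and the first signal development component are free, follow the description above.

```python
# Illustrative schedule (hypothetical durations, arbitrary units) for writes
# multiplexed on one sense amplifier: latch portions 510 serialize on the
# amplifier, write signal development portions 520 overlap, and the next
# set's first latch (e.g., 510-a-5) may begin while the previous set's last
# develop (e.g., 520-a-4) is still in flight.

T_LATCH, T_DEVELOP = 2.0, 10.0

def write_set_events(n: int, start: float = 0.0):
    """Return (cell label, latch start, develop end) for each of n writes."""
    events = []
    for i in range(n):
        latch_start = start + i * T_LATCH  # serialized on the sense amplifier
        develop_end = latch_start + T_LATCH + T_DEVELOP
        events.append((f"cell {i + 1}", latch_start, develop_end))
    return events

first_set = write_set_events(4)
# The amplifier frees up after the last latch (4 * T_LATCH); each signal
# development component frees up after its own develop, so the second set
# may begin at the later of the two:
next_set_start = max(4 * T_LATCH, T_LATCH + T_DEVELOP)  # max(8.0, 12.0) = 12.0
for name, latch_start, develop_end in first_set:
    print(f"{name}: latch at {latch_start}, develop done at {develop_end}")
print(f"second set may begin at t = {next_set_start}")
```

With these assumed values the second set's first latch begins at t = 12.0 while the first set's fourth develop runs until t = 18.0, illustrating the overlap of 510-a-5 with 520-a-4 described above.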
[0197] In some examples, a subsequent write may be performed on a memory cell 105-b that is coupled with a different digit line 210-a than a preceding write operation, but is coupled with a same activated word line 205-a, which may reduce latency. For example, maintaining a selected word line 205-a may eliminate a word line deselection operation and a subsequent word line selection operation. Such examples may be accompanied by shunting a digit line 210-a associated with the earlier write operation (e.g., a digit line 210-a that was previously un-shunted), and un-shunting a digit line 210-a associated with the later write operation (e.g., a digit line 210-a that was shunted during the earlier write operation).

[0198] In another example, not shown, a set of writes may be associated with a first common word line (e.g., where logical word lines WL11, WL21, WL31, and WL41 of different domains are simultaneously activated), and a second set of writes may be associated with a second common word line (e.g., where logical word lines WL12, WL22, WL32, and WL42 of different domains are simultaneously activated). Or, more generally, the first set of writes and the second set of writes may differ based at least in part on a selected common word line 205-a of the write operations. In some examples (e.g., where word lines 205-a across domains 310-a are not independently controllable), a new word line 205-a may be selected as soon as a write signal development portion 520 is complete for all of the multiplexed signal development components 250-a (e.g., associated with the sense amplifier 290-a, or other set of domains 310-a that are not independently controllable). In other words, in some examples, a write signal development portion 520 of a first set of writes may not overlap in time with a latch signal generation portion 510 of a second set of writes for signal development components 250 that are multiplexed with the same sense amplifier 290-a.

[0199] For example, when word lines 205-a are not independently controllable across domains 310-a-1 through 310-a-4, the latch signal generation portion 510-a-5 may follow or be otherwise subsequent to the write signal development portion 520-a-4. Thus, the periodicity for writing four memory cells 105 in the example where the domains 310-a are not independently controllable may be equal to or nearly equal to the combined time of each of the latch signal generation portions 510-a-1 through 510-a-4 and one of the write signal development portions 520-a for the multiplexed signal development components 250-a-1 through 250-a-4. Accordingly, in some examples, such a periodicity where domains 310-a are not independently controllable may be longer than the periodicity illustrated by time tB2 - tB0.

[0200] Thus, in accordance with various examples as disclosed herein, the advantages provided by the described signal development multiplexing (e.g., a reduced latency when accessing multiple memory cells 105-b in parallel) may scale with the relative difference in latency (e.g., durations) of latch signal generation portions 510 and write signal development portions 520. The advantages of the described signal development multiplexing may also depend on whether domains 310-a are configured to be independently controllable, or are controlled via common access lines or common logical signals.

[0201] Although the techniques of write operation 550 are described with reference to a single sense amplifier 290-a, the techniques of write operation 550 may be repeated for each sense amplifier 290 of a sense amplifier array, including various operations being performed concurrently (e.g., in parallel, with simultaneous or offset initiation or triggering), to support further pipelining of write operations in a memory device 100.
For example, the write operation 550, or another write operation performed concurrently with the write operation 550, may include input/output operations including latch signal generation portions 510-b-1, 510-b-2, 510-b-3, and 510-b-4 (not shown) associated with a different sense amplifier (e.g., of a same sense amplifier array). In some examples, a latch signal generation portion 510-b-1 may be initiated at the same time as, or otherwise performed concurrently with or offset from, the latch signal generation portion 510-a-1 (e.g., according to a simultaneous sensing at a sense amplifier array, according to a simultaneous latching at a set of latches of a sense component or I/O component, according to concurrent signal exchange with a cacheline). Likewise, a latch signal generation portion 510-b-2 may be initiated at the same time as, or otherwise performed concurrently with or offset from, the latch signal generation portion 510-a-2, and so on.

[0202] Further, the write operation 550, or another write operation performed concurrently with or offset from the write operation 550, may include signal development operations including write signal development portions 520-b-1, 520-b-2, 520-b-3, and 520-b-4 (not shown) associated with a different sense amplifier (e.g., of a same sense amplifier array). In some examples, a write signal development portion 520-b-1 may be initiated at the same time as, or otherwise performed concurrently with or offset from, the write signal development portion 520-a-1 (e.g., according to a simultaneous accessing of multiple memory cells of a row, a domain, or a subdomain, according to concurrent signal exchange with a cacheline). Likewise, a write signal development portion 520-b-2 may be initiated at the same time as, or otherwise performed concurrently with or offset from, the write signal development portion 520-a-2, and so on. Although described in the context of two parallel writes associated with two different sense amplifiers 290, the described techniques may be applied to any quantity of parallel writes. For example, to support a 64-bit information transfer scheme, 64 parallel writes may be performed using 64 sense amplifiers 290 in accordance with examples as disclosed herein.

[0203] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using a CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or access information stored in a memory array. In one example, the memory controller may receive, from a requesting device (e.g., a control device), a read command for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.
[0204] FIG. 6 illustrates an example of a signal development component 250-b that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The signal development component 250-b may be an example of signal development components 250 described with reference to FIGs. 1 through 5. The signal development component 250-b may be coupled with or between a digit line 210-b and a signal development line 255-b. The signal development component 250-b may include a capacitor 610 (e.g., an integrator capacitor, a storage element, a cache element, a cache storage element) and a transistor 620 that may be configured in an amplifier configuration (e.g., as a charge transfer sensing amplifier, as a cascode).

[0205] The capacitor 610 may be an example of a signal storage component or a charge storage component of the signal development component 250-b. In the example of the signal development component 250-b, the capacitor 610 may be coupled with or between a line of the signal development component 250-b (e.g., the signal development line 255-b) and a voltage source 615 (e.g., a ground voltage source, a voltage source having a reference voltage for the capacitor 610). Although illustrated as including the capacitor 610, a signal development component 250 in accordance with examples as disclosed herein may, additionally or alternatively, include or otherwise employ a transistor in a particular state, a diode, or other components that may provide functionality of a signal storage component or charge storage component in the signal development component 250. In some examples, a set of signal development components 250-b may include a set of capacitors 610, which may provide a fast, local, in-memory cache (e.g., a signal development cache) in a device that includes the set of signal development components 250-b.

[0206] In some examples, a memory device that includes the signal development component 250-b may include memory cells 105 that employ a logic storage element that includes a capacitive element (e.g., a linear capacitor in a DRAM application, a ferroelectric capacitor in an FeRAM application). In various examples, the capacitor 610 may include a same capacitive element or technology as a logic storage element (e.g., capacitor 610 may be a linear capacitor in a DRAM application, capacitor 610 may be a ferroelectric capacitor in an FeRAM application), or a different capacitive element or technology than a logic storage element (e.g., capacitor 610 may be a linear capacitor in an FeRAM application, a PCM application, or a chalcogenide memory application).

[0207] The transistor 620 may be an example of an amplifier or voltage regulator of the signal development component 250-b, and may be configured to transfer charge between the signal development line 255-b (e.g., a first access line) and the digit line 210-b (e.g., a second access line) based at least in part on one or both of a voltage of the signal development line 255-b and a voltage of the digit line 210-b. For example, a gate node of the transistor 620 may be coupled with a voltage source 625, and charge may be transferred across the transistor based at least in part on a relationship between a voltage of the voltage source 625 (e.g., V2) and a voltage of the digit line 210-b. In various examples, the transistor 620 may be associated with one or more digit lines 210 (e.g., multiplexed digit lines 210), and may be located outside the illustrative boundaries of the signal development component 250-b (e.g., in examples of memory devices that include a transistor 620 for each of a set of multiplexed digit lines 210).

[0208] The transistor 620 may provide a conversion of signals between the digit line 210-b and the signal development line 255-b. For example, the transistor 620 may permit a flow of charge (e.g., electrical current) from the signal development line 255-b (e.g., from the capacitor 610) to the digit line 210-b, as fed or enabled by the voltage source 625, upon a reduction in voltage of the digit line 210-b (e.g., upon selection of a memory cell 105, upon selection of a digit line 210 via a selection component 320). A relatively small flow of charge to the digit line 210-b may be associated with a relatively small change in voltage of the signal development line 255-b, whereas a relatively large flow of charge to the digit line 210-b may be associated with a relatively large change in voltage of the signal development line 255-b. According to the net capacitance of the signal development line 255-b (e.g., including the capacitor 610), for example, the signal development line 255-b may undergo a relatively small change in voltage or a relatively large change in voltage depending on the flow of charge across the transistor 620 after selecting a memory cell 105. In some examples, the transistor 620 or the signal development component 250-b may be isolated from the digit line 210-b by a switching component or a selection component (e.g., a selection component 320). The transistor 620 may also be referred to as a “voltage regulator” or a “bias component,” relating to how the transistor 620 regulates a flow of charge in response to the voltage of the digit line 210-b.
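The proportionality between transferred charge and the voltage change on the signal development line may be illustrated numerically. The capacitance and charge values in the following Python sketch are assumed for illustration; only the relation dV = -dQ / C reflects the behavior described above for the transistor 620 and the capacitor 610.

```python
# Numeric sketch (assumed values): charge that flows to the digit line 210-b
# is drawn from the signal development line 255-b, whose voltage changes by
# dV = -dQ / C, where C is the net capacitance of line 255-b.

C_SDL = 50e-15  # assumed net capacitance of signal development line 255-b (50 fF)

def sdl_voltage_change(charge_transferred: float) -> float:
    """Voltage change on line 255-b for a given charge (in coulombs)
    transferred across transistor 620 to the digit line 210-b."""
    return -charge_transferred / C_SDL

small_q = 5e-15   # a relatively small charge flow (e.g., one logic state)
large_q = 40e-15  # a relatively large charge flow (e.g., the other state)
print(f"small transfer: {sdl_voltage_change(small_q):+.2f} V")  # -0.10 V
print(f"large transfer: {sdl_voltage_change(large_q):+.2f} V")  # -0.80 V
```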
[0209] In some examples, the signal development component 250-b may include circuitry configured to support a selective coupling (e.g., of the signal development line 255-b) with a relatively high voltage (e.g., voltage source 635). For example, the signal development component 250-b may include a switching component 630 that is operable based on a logical signal SW1. In some examples, the voltage source 635 may be coupled with a relatively high voltage rail or supply, which may support charging the capacitor 610 (e.g., for developing a cell access signal).

[0210] In some examples, the signal development component 250-b may include circuitry configured to support a selective coupling (e.g., of the digit line 210-b) with a reference voltage (e.g., voltage source 645). For example, the signal development component 250-b may include a switching component 640 that is operable based on a logical signal SW2. In some examples, the voltage source 645 may be coupled with a ground or virtual ground rail or supply. In some examples, the voltage source 645 may be coupled with a same rail or supply as the voltage source 615 (e.g., the voltages of the voltage sources 645 and 615 may be equal).

[0211] In some examples, the signal development component 250-b may include circuitry configured to support a selective coupling (e.g., of the signal development line 255-b, of the signal development component 250-b) with another component (e.g., a selection component 280, a sense amplifier 290).
For example, the signal development component 250-b may include a switching component 650, which may be referred to as an isolation switching component, and may be operable based on a logical signal ISO. Additionally or alternatively, an isolation switching component may be included in a sense amplifier 290 in accordance with examples as disclosed herein.

[0212] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using a CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or access information stored in a memory array. In one example, the memory controller may receive, from a requesting device (e.g., a control device), a read command for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.

[0213] FIG. 7 illustrates an example of a sense amplifier 290-b that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The sense amplifier 290-b may be an example of sense amplifiers 290 described with reference to FIGs. 1 through 5. The sense amplifier 290-b may be coupled with or between a signal line 285-b and a reference line 275-b. The sense amplifier 290-b may also be associated with (e.g., coupled with) I/O lines 295-b and 295-c. In some examples, the sense amplifier 290-b may be referred to as an amplifier component of a memory device.

[0214] The sense amplifier 290-b may include a pair of opposed amplifiers 710-a and 710-b. Although illustrated as amplifiers 710, the sense amplifier 290-b may alternatively or equivalently include pairs of cross-coupled transistors (e.g., a pair of cross-coupled p-type transistors and a pair of cross-coupled n-type transistors).

[0215] In some examples, the sense amplifier 290-b may include circuitry configured to support a selective coupling (e.g., of the amplifiers 710-a and 710-b) with sense amplifier low and high voltage sources (e.g., voltage sources 293-b and 294-b). For example, the sense amplifier 290-b may include switching components 730-a and 730-b that are operable based on logical signals SW3 and SW4, respectively. In some examples, activating or selecting logical signals SW3 and SW4 may be referred to as activating or latching the sense amplifier 290-b.

[0216] In some examples, the sense amplifier 290-b may include circuitry configured to support a selective coupling with or decoupling from another component (e.g., a signal development component 250, a selection component 280, a reference component 270). For example, the sense amplifier 290-b may include switching components 720-a and 720-b, which may be referred to as isolation switching components, and may be operable based on logical signals ISO1 and ISO2.
Additionally or alternatively, an isolation switching component may be included in a signal development component 250 or a selection component 280 in accordance with examples as disclosed herein.

[0217] In some examples (e.g., in support of a read operation), the sense amplifier 290-b may generate an output signal based at least in part on a cell read signal. For example, a signal development component 250 (e.g., a selected one of a set of signal development components 250) may pass a cell access signal, or otherwise share a charge with the sense amplifier 290-b that is based at least in part on a cell access signal, via the signal line 285-b. A reference component 270 may pass a reference signal, or otherwise share a charge with the sense amplifier 290-b that is based at least in part on a reference signal, via the reference line 275-b. When the signal line 285-b has a higher voltage than the reference line 275-b, the output signal may be generated with the I/O line 295-b having a relatively higher voltage (e.g., VH) and the I/O line 295-c having a relatively lower voltage (e.g., VL). When the reference line 275-b has a higher voltage than the signal line 285-b, the output signal may be generated with the I/O line 295-c having a relatively higher voltage (e.g., VH) and the I/O line 295-b having a relatively lower voltage (e.g., VL). In some examples, the switching components 720-a and 720-b may be closed to receive cell read signals or cell reference signals, and subsequently opened when activating the sense amplifier 290-b (e.g., “latching”).

[0218] In some examples, a generated sense or latch signal, or otherwise generated output signal, may be shared or otherwise associated with a write signal or rewrite signal passed to the selected signal development component 250 via the signal line 285-b (e.g., after closing the switching component 720-a). In some examples, a write command or write signal may be received at the sense amplifier 290-b (e.g., from an input/output component 160 via I/O lines 295-b and 295-c), and the received write command or write signal may be latched, shared (e.g., via the signal line 285-b), or otherwise associated with a cell write signal generated by the selected signal development component 250. In some examples, a write command or write signal associated with the sense amplifier 290-b may bypass signal development components 250 (e.g., via a bypass line 260).
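The latch decision described above may be sketched in Python. The VH and VL levels, the function name, and the dictionary return convention are illustrative assumptions; the comparison itself follows the description of the sense amplifier 290-b.

```python
# Sketch of the latch decision for sense amplifier 290-b: the higher of the
# signal line 285-b and reference line 275-b voltages drives one I/O line
# to VH and the other to VL (voltage levels assumed for illustration).

V_H, V_L = 1.5, 0.0

def latch(v_signal: float, v_reference: float) -> dict:
    """Return the latched voltages on I/O lines 295-b and 295-c."""
    if v_signal > v_reference:
        return {"295-b": V_H, "295-c": V_L}
    return {"295-b": V_L, "295-c": V_H}

print(latch(v_signal=0.9, v_reference=0.6))  # {'295-b': 1.5, '295-c': 0.0}
print(latch(v_signal=0.3, v_reference=0.6))  # {'295-b': 0.0, '295-c': 1.5}
```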
[0219] In some examples, a memory device may determine whether data associated with a received access command is stored in a signal development cache using a CAM. Accessing a signal development cache of the memory device may take less time than accessing a larger memory array. In such examples, a duration used to perform an access operation may be reduced based on identifying that the data is stored in the signal development cache using the CAM. In response to access commands, the system may refer to one or more mappings stored in the CAM to determine whether to access information stored in a signal development cache or access information stored in a memory array. In one example, the memory controller may receive, from a requesting device (e.g., a control device), a read command for a first address of the memory array. The memory controller may determine, using mapping information from the CAM, that the information stored in the first address of the memory array is also stored in the signal development component array. The memory controller may access the signal development component array to retrieve information associated with the read command based on the determining.

[0220] FIG. 8A shows a block diagram of a system 800 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The system 800 may include a memory array 805, a selection component 815, a signal development component array 825, a selection component 835, and a sense amplifier array 845. In some examples, these and other components may be included in a data path 860 of the system 800.

[0221] The memory array 805 may include a set of memory cells 105, which may be associated with access lines such as those described with reference to FIGs. 1 through 3 (e.g., word lines 205, digit lines 210, plate lines 215). In some examples, the memory array 805 may be associated with A rows (e.g., A independently accessible word lines 205) and B columns (e.g., B independently accessible digit lines 210). In one example, the memory array 805 may be associated with 1,048,576 memory cells 105, arranged according to 1,024 word lines 205 and 1,024 digit lines 210. Each of the memory cells 105 may be configured to store a respective logic state, which may alternatively be referred to as a memory state.

[0222] In some examples, the memory array 805 may be arranged in a set of domains, which may be similar to domains 310 described with reference to FIG. 3. In one example, the memory array 805 may be split among four domains, and each of the four domains may have four independent zones with plate control (e.g., each domain of the memory array 805 may have four zones, which may be an example of subdomains, having commonly or individually biased plate lines 215). In such examples, the memory array 805 may be arranged according to 16 control zones, which may be associated with selecting 64-bit data.

[0223] The signal development component array 825 may include a set of signal development components 250, which may include aspects of signal development components 250 described with reference to FIGs. 2 through 7. The signal development component array 825, or components thereof (e.g., cache elements of the signal development component array 825), may be an example of a signal development cache in accordance with examples as disclosed herein. In some examples, signal development components 250, or cache elements thereof, of the signal development component array 825 may be arranged in a grid having C columns and D rows. In some examples, each of the D rows may be associated with a cache block, and each of the C columns may be associated with a position in a respective cache block. In one example, the signal development component array 825 may be associated with 8 cache blocks, each having 64 positions. Each of the positions of each of the cache blocks may correspond to a single signal development component 250, or cache element of a signal development component 250.
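The example geometry above reduces to a few multiplications, shown in the Python sketch below using the figures given in the preceding paragraphs; the variable names are illustrative only.

```python
# Arithmetic sketch of the example geometry described for the system 800:
# a 1,024 x 1,024 memory array 805, four domains of four zones each, and a
# signal development component array 825 of 8 cache blocks x 64 positions.

rows, cols = 1024, 1024
print(rows * cols)                 # 1,048,576 memory cells 105

domains, zones_per_domain = 4, 4
print(domains * zones_per_domain)  # 16 control zones

cache_blocks, positions = 8, 64
print(cache_blocks * positions)    # 512 signal development cache elements
```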
[0224] The selection component 815 may include various components that support mapping memory cells 105 of the memory array 805 with signal development components 250 of the signal development component array 825. For example, the selection component 815 may provide for selective coupling and decoupling of individual digit lines 210 of the memory array 805 with individual signal development components 250 of the signal development component array 825 to support various examples of multiplexed signal development described herein.[0225] The selection component 815 may be coupled with the memory array 805 via a bus 810 having N signal paths, and the selection component 815 may be coupled with the signal development component array 825 via a bus 820 having M signal paths. In some examples, the selection component 815 may be coupled with each of the digit lines 210 of the memory array 805 (e.g., where N = B). In some examples, the bus 820 may have fewer signal paths than the bus 810, where M is associated with the size of cache blocks of the signal development component array (e.g., a quantity of storage elements for each cache line of a cache block). For example, the bus 810 may have N = 1,024 signal paths, and the bus 820 may have M = 64 signal paths, or some other quantity of signal paths. [0226] In various examples, each digit line 210 of the memory array 805 may be configured for selective coupling with a particular one of the signal development components 250 of the signal development component array 825, a particular set of the signal development components 250 of the signal development component array 825, or may be configured for selective coupling with any one of the signal development components 250 of the signal development component array. Additionally or alternatively, a signal development component 250 of the signal development component array 825 may be configured for selective coupling with a particular one of the digit lines 210 of the memory array 805, a particular set of the digit lines 210 of the memory array, or may be configured for selective coupling with any one of the digit lines 210 of the memory array 805. In other words, the mapping between digit lines 210 and signal development components 250 in accordance with the described techniques may include a one-to-many mapping, a many-to-one mapping, or a many-to-many mapping.[0227] The sense amplifier array 845 may include a set of sense amplifiers 290, which may include aspects of sense amplifiers 290 described with reference to FIGs. 2 through 7. In some examples, sense amplifiers of the sense amplifier array 845 may be arranged in a strip or other grouped arrangement. The selection component 835 may be coupled between the signal development component array 825 (e.g., via a bus 830) and the sense amplifier array 845 (e.g., via a bus 840) to support various mappings between signal development components 250 and sense amplifiers 290. In various examples, the sense amplifiers 290 (e.g., of the sense amplifier array 845) may be integrated between cache blocks (e.g., of the signal development component array 825) or may be external to the signal development component cache region (e.g., external to the signal development component array 825). In some examples, the sense amplifier array 845 may be coupled with a bus 850, which may support communication of information with an I/O component (not shown), which may be considered to be within or outside the illustrative boundary of the data path 860.[0228] In some examples, the signal development component array 825 may be coupled with a strip or other group of sense amplifiers 290 (e.g., of the sense amplifier array 845), each of which may also be independently accessible.
For example, each of a strip of sense amplifiers 290 may be configured for selective coupling with a particular one of the signal development components 250 of the signal development component array 825, a particular set of the signal development components 250 of the signal development component array 825, or may be configured for selective coupling with any one of the signal development components 250 of the signal development component array. Additionally or alternatively, a signal development component 250 of the signal development component array 825 may be configured for selective coupling with a particular one of the sense amplifiers 290 of the strip of sense amplifiers, a particular set of the sense amplifiers of the strip of sense amplifiers, or may be configured for selective coupling with any one of the sense amplifiers 290 of the strip of sense amplifiers. In other words, the mapping (e.g., via the selection component 835) between signal development components 250 of the signal development component array 825 and sense amplifiers 290 of the sense amplifier array 845 in accordance with the described techniques may include a one-to-many mapping, a many-to-one mapping, or a many-to-many mapping.[0229] In an illustrative example where the memory array 805 is associated with 1,024 digit lines 210, each of the 1,024 digit lines 210 may be coupled with a multiplexer (e.g., of the selection component 815), where they may be reduced to 64 x 4 = 256 digit lines. This may support signal transfer of 4 sets of 64 digit lines overlapping in time (e.g., participating in simultaneous transfer between a memory cell 105 and a signal development component 250). In some examples, each of these 4 sets can be routed to any of 8 cache blocks (e.g., of the signal development component array 825), where each cache block may include 8 lines by 64 bits. In other words, the total cache size associated with such a signal development component array 825 may be 64 x 64 bits. According to this example of array routing, any 64-bit sub-row from the memory array may be routed to any of the 64-bit signal development component cache lines.[0230] In another illustrative example, the system 800 may include several domains (e.g., of the memory array 805) each with 1,048,576 memory cells 105 arranged in 1,024 uniquely addressed rows and 1,024 columns. Each of the domains of the system 800 may be mapped (e.g., via the selection component 815) with 64 signal development components (e.g., of the signal development component array 825). In other words, 64 signal development components may be mapped to 1,024 digit lines 210 within each domain. In some examples, a particular signal development component 250 may be mapped to 16 digit lines 210 within each domain (e.g., 1,024 digit lines 210 divided by 64 signal development components 250). In some examples, such a mapping may be fixed (e.g., where groups of 16 digit lines 210 are mapped to a respective signal development component 250 within each domain) which, in some examples, may reduce multiplexing or selection circuit complexity. In various other examples, a signal development component 250 may be mapped to more than one domain, more than one set of digit lines 210 (e.g., of a domain), or other configurations. Additionally or alternatively, a domain or a set of digit lines 210 may be mapped to more than one signal development component 250. In other words, a memory device may include various configurations of signal development components 250 to support examples of the multiplexed signal development described herein.
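As an illustration of the fixed mapping in this second example, the following sketch assigns each digit line to a signal development component; the contiguous grouping rule is an assumption here (the text fixes only the 16:1 ratio), and the names are hypothetical:

```python
# Sketch of a fixed digit-line-to-signal-development-component mapping:
# within each domain of 1,024 digit lines, groups of 16 digit lines share
# one of 64 signal development components.

DIGIT_LINES_PER_DOMAIN = 1024
SDCS_PER_DOMAIN = 64
LINES_PER_SDC = DIGIT_LINES_PER_DOMAIN // SDCS_PER_DOMAIN  # 16

def sdc_for_digit_line(digit_line: int) -> int:
    """Map a digit line (0..1023) to its fixed signal development component."""
    return digit_line // LINES_PER_SDC  # contiguous groups of 16 lines

# Any one of a component's 16 digit lines may be selectively coupled with it;
# the other 15 may be isolated (e.g., shunted or masked) during the access.
assert sdc_for_digit_line(0) == 0
assert sdc_for_digit_line(15) == 0
assert sdc_for_digit_line(16) == 1
assert sdc_for_digit_line(1023) == 63
```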
[0231] In this illustrative example, a row of 1,024 memory cells 105 (e.g., spanning one domain 310) may be selected by a single word line 205 in each domain. With 64 signal development components 250 per domain, 64 of the set of 1,024 memory cells 105 may be accessed at a time in each domain (e.g., by selectively coupling a respective digit line 210 with each of the 64 signal development components 250 via the selection component 815). During such accessing, other digit lines 210 may be selectively isolated from the signal development components 250 interfacing the same domain. Further, the other digit lines 210 may be shunted or masked as described herein.[0232] In some examples, operations of one or more components of the system 800 may be controlled by a memory controller, such as memory controller 870. The memory controller 870 may be an example of, or otherwise be associated with performing operations of, a memory controller 170 as described with reference to FIG. 1. The memory controller 870 may be illustrative of a controller or other circuitry that is configured to control various components or operations of the system 800. For example, the system 800 may include various components or circuitry of a data path 860, which may include the memory array 805, the selection component 815, the signal development component array 825, the selection component 835, and the sense amplifier array 845, among other components along a path of information transfer in the system 800 (e.g., a row component 125, a column component 135, a plate component 145, an I/O component 160, and others). In various examples, the memory controller 870 may be in communication with any one or more of the components of the data path 860 for controlling the associated components or operations.[0233] The memory controller 870 may be configured (e.g., by one or more commands received from a host device) for performing one or more write operations, read operations, eviction operations, or bypass operations, among other examples of memory operations of the system 800. In various examples of such operations, the memory controller 870 may be configured for transferring data between one or more portions of the memory array 805, one or more portions of the signal development component array 825 (e.g., a cache block of the signal development component array 825), or one or more portions of the sense amplifier array 845 in accordance with the one or more memory operations.[0234] In some examples, the memory controller 870 may be configured for performing a read operation, which may include transferring data from the signal development component array 825 to the sense amplifier array 845 (e.g., when requested data is stored in the signal development component array 825). In some examples, the memory controller 870 may be configured for transferring the data from the memory array 805 to the signal development component array 825 (e.g., when requested data is not found in the signal development component array 825). Additionally or alternatively, the memory controller 870 may be configured for performing an eviction operation.
The eviction operation may include transferring data stored in the signal development component array 825 to the memory array 805 prior to transferring other data (e.g., data associated with a read operation) from the memory array 805 to the signal development component array 825. In some examples, the memory controller 870 may be configured for performing a cache bypass operation, which may include transferring data directly from the memory array 805 to the sense amplifier array 845, which may facilitate, as an example, streaming read operations (e.g., performing multiple read operations in parallel).[0235] In some examples, the memory controller 870 may be configured for performing a write-back operation, which may include transferring data from the sense amplifier array 845 to the signal development component array 825 (e.g., after performing a read operation). Additionally or alternatively, the memory controller 870 may be configured for performing a write-through operation. The write-through operation may include transferring data directly from the sense amplifier array 845 to the memory array 805 based on determining that the data is stored at the signal development component array 825 in accordance with a write command. In some examples, the memory controller 870 may be configured for performing a bypass operation. For example, the bypass operation may include transferring data directly from the sense amplifier array 845 to the memory array 805 based on determining that the data is not stored in the signal development cache in accordance with a write command. Such examples of bypass operations may facilitate streaming write operations (e.g., performing multiple write operations in parallel). In some cases, one or more of the write operations described herein may include an eviction operation. For example, the memory controller 870 may transfer data stored in the signal development component array 825 to the memory array 805 based on determining that data corresponding to a write command (e.g., a write-back command) is not currently stored in the signal development component array 825.[0236] Although the system 800 in the example of FIG. 8A is illustrated with a selection component 815 operable to selectively couple the memory array 805 with the signal development component array 825, and a selection component 835 operable to selectively couple the signal development component array 825 with the sense amplifier array 845, other configurations are possible for supporting the described techniques for memory accessing. For example, in some cases, the memory array 805 may be selectively coupled with the sense amplifier array 845 in a manner that bypasses the signal development component array 825, or components thereof. In some examples, a coupling between the memory array 805 and the sense amplifier array 845 may be supported by way of one or more bypass lines, such as the bypass line 260 described with reference to FIG. 2.
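The choice among these write paths can be sketched roughly as follows; the dictionary-based model, the helper names, and the refresh of the cached copy during a write-through are illustrative assumptions, not the disclosed implementation:

```python
# Rough sketch of the transfer choices for a write arriving at the sense
# amplifier array, paraphrased from the operations described above.

def handle_write(address, data, sdc_contents: dict, memory_array: dict):
    if address in sdc_contents:
        # Write-through: the data is known to be cached, so it is
        # transferred directly to the memory array. Refreshing the cached
        # copy here as well (so it does not go stale) is an assumption.
        memory_array[address] = data
        sdc_contents[address] = data
    else:
        # Bypass: the data is not in the signal development cache, so it is
        # transferred directly from the sense amplifier array to the memory
        # array, which can support streaming (parallel) writes.
        memory_array[address] = data

def evict(address, sdc_contents: dict, memory_array: dict):
    # Eviction: move cached data back to the memory array before the cache
    # line is reused for other data.
    memory_array[address] = sdc_contents.pop(address)

array, cache = {}, {0xA: "old"}
handle_write(0xA, "new", cache, array)   # write-through: 0xA is cached
handle_write(0xB, "data", cache, array)  # bypass: 0xB is not cached
evict(0xA, cache, array)                 # eviction returns 0xA to the array
```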
[0237] To organize information (e.g., signal states, cache states) stored at a signal development cache (e.g., of the signal development component array 825), the system 800 may include a storage component, such as CAM 880, configured to store a mapping between addresses of the memory array 805 and addresses of the signal development cache (e.g., addresses of the signal development component array 825). For example, the CAM 880 may be configured to store mappings between rows of memory cells 105, or portions thereof, and rows of signal development cache elements (e.g., a cache line of the signal development component array 825).[0238] The CAM 880 may include storage elements of various architectures that may be the same as, or different than, a storage architecture used in the memory array 805 or a storage architecture used in the signal development component array 825. In some examples, the CAM 880 may be configured with charge-locked 1T (e.g., one-transistor) storage elements, resistive storage elements (e.g., RRAM elements, ReRAM elements), ferroelectric memory elements (e.g., FeRAM elements), DRAM memory elements, static random-access memory (SRAM) memory elements, thresholding memory elements, or other types of storage elements. In some examples, storage elements of the signal development component array 825 and storage elements of the CAM 880 may both use a capacitive storage architecture, which may be the same as each other or different from each other.[0239] In response to access commands, the system 800 (e.g., memory controller 870) may refer to a mapping stored in the CAM 880 to determine whether or how to access information (e.g., signal states, cached signals) of a signal development cache of the signal development component array 825, or memory cells 105 of the memory array 805. For example, storage at the CAM 880 may be used to evaluate whether data corresponding to an address of an access command (e.g., a read command, a write command) is stored in the signal development component array 825. In some examples, an access command may include or be accompanied by metadata that may support such an evaluation. In some examples, accessing data in a signal development cache of the signal development component array 825, instead of accessing the same data in the memory array 805, may decrease latency of performing the access operation.[0240] FIG. 8B shows a block diagram of a system 800-a that supports signal development caching in accordance with examples as disclosed herein. The system 800-a may include a memory array 805-a, a bus 810-a, a bus 820-a, a signal development component array 825-a, a bus 840-a, a sense amplifier array 845-a, a bus 850-a, a memory controller 870-a, and a CAM 880-a, each of which may be an example of the respective components as described with reference to FIG. 8A. The memory array 805-a, the bus 810-a, the bus 820-a, the signal development component array 825-a, the bus 840-a, and the sense amplifier array 845-a may be part of a data path 860-a, and the memory controller 870-a may be coupled with any one or more of these and other components of the data path 860-a to support the techniques disclosed herein.[0241] In some examples, a system such as system 800-a may include a selection component 875 operable for selectively coupling the memory array 805-a with the sense amplifier array 845-a (e.g., bypassing the signal development component array 825-a, or components thereof), the memory array 805-a with the signal development component array 825-a, or the signal development component array 825-a with the sense amplifier array 845-a. In some cases, selection component 875 may be operable for selectively coupling the memory array 805-a, the sense amplifier array 845-a, and the signal development component array 825-a with each other concurrently.
The selection component 875 thus may include or otherwise support functionalities described elsewhere herein and ascribed to one or more of switching component 265 described with reference to FIG. 2, selection components 280 described with reference to FIGs. 2 and 3, selection components 320 described with reference to FIG. 3, selection component 815 described with reference to FIG. 8A, or selection component 835 described with reference to FIG. 8A, among other features or functions. [0242] The example of system 800-a may in some cases be referred to as a “T” configuration where each of a memory array 805, a signal development component array 825, and a sense amplifier array 845 may be coupled with a common selection component 875 (e.g., a central switching network). In such an example, each of the memory array 805-a, the signal development component array 825-a, and the sense amplifier array 845-a may be coupled with the selection component 875 according to the quantity of signal paths in the respective system component, and the common selection component 875 may be configured or operable to perform the described techniques for signal development caching according to various degrees of multiplexing with the respective system component, or other arrangement.[0243] More generally, the selection component 875 may include various switching components, selection components, or other circuitry operable to selectively couple any one of the memory array 805-a or components thereof (e.g., a plurality of access lines of the memory array 805-a), the signal development component array 825-a or components thereof (e.g., cache elements of a signal development cache), or the sense amplifier array 845-a or components thereof (e.g., a plurality of sense amplifiers 290 of the sense amplifier array 845-a) with any one of the others or with both of the others concurrently (e.g., may couple all three or components thereof concurrently). Selection component 875 may thereby support various access techniques in accordance with examples as disclosed herein. For example, in some cases, each of the memory array 805-a or components thereof, the signal development component array 825-a or components thereof, and the sense amplifier array 845-a or components thereof may be coupled with each other, and the sense amplifier array 845-a may reinforce signals passed in either direction between the signal development component array 825-a and the memory array 805-a (e.g., to support the writing of logic states to the memory array 805-a from the signal development component array 825-a, or to support the writing of logic states from the memory array 805-a to the signal development component array 825-a).[0244] In some examples, the bus 850-a may support communication of information with an I/O component (not shown), which may be considered to be within or outside the illustrative boundary of the data path 860. In some cases, the bus 850-a may be coupled with the selection component 875 as illustrated in the example of system 800-a. In other cases, the bus 850-a may be coupled with the sense amplifier array 845-a as illustrated in the example of system 800. In various examples, operation of the selection component 875 may be coordinated (e.g., by the memory controller 870-a) to avoid signaling conflicts in the data path 860-a, including coordination to avoid or mitigate conflicts that may inadvertently destroy or degrade information (e.g., logic states, signal states) intended to be maintained at a component of the data path 860-a.
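The coupling flexibility of such a common selection component can be abstracted as a small sketch; the class, its conflict rule (at most one driver per transfer), and the coordination model are all illustrative assumptions rather than the disclosed circuitry:

```python
# Abstract sketch of a common ("T" configuration) selection component that
# can couple any subset of the three system components concurrently, with
# transfers coordinated to avoid signaling conflicts.

class CentralSelection:
    COMPONENTS = {"memory_array", "sdc_array", "sense_amp_array"}

    def __init__(self):
        self.coupled = set()

    def couple(self, *components):
        unknown = set(components) - self.COMPONENTS
        if unknown:
            raise ValueError(f"unknown components: {unknown}")
        self.coupled |= set(components)

    def isolate(self, *components):
        self.coupled -= set(components)

    def transfer(self, source, destinations):
        # One driver at a time (an assumed coordination rule): e.g., the
        # sense amplifier array may reinforce a signal passed between the
        # cache and the memory array while all three are coupled.
        assert source in self.coupled and set(destinations) <= self.coupled

sel = CentralSelection()
sel.couple("memory_array", "sdc_array", "sense_amp_array")  # all three at once
sel.transfer("sdc_array", ["memory_array", "sense_amp_array"])
```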
[0245] In some cases, a system in accordance with the described techniques for signal development caching may be arranged in a “T” configuration in which each of a memory array 805, a signal development component array 825, and a sense amplifier array 845 may be coupled with a common central node (e.g., a common bus node, a central node for each signal path of a set of signal paths of a common bus). FIG. 8C shows a block diagram of a system 800-b that supports signal development caching in accordance with such an example. The system 800-b may include a memory array 805-b, a bus 810-b, a bus 820-b, a signal development component array 825-b, a bus 840-b, a sense amplifier array 845-b, a bus 850-b, a memory controller 870-b, and a CAM 880-b, each of which may be an example of the respective components as described with reference to FIGs. 8A and 8B. The memory array 805-b, the bus 810-b, the bus 820-b, the signal development component array 825-b, the bus 840-b, and the sense amplifier array 845-b may be part of a data path 860-b, and the memory controller 870-b may be coupled with any one or more of these and other components of the data path 860-b to support the techniques disclosed herein.[0246] Further, the system 800-b may include a central node 880. Each of the memory array 805, the signal development component array 825, and the sense amplifier array 845 may be selectively coupled with the central node 880 by way of a respective selection component 885-a, 885-b, or 885-c. Each respective selection component 885-a, 885-b, 885-c may have a first coupling with the common central node according to the quantity of signal paths of the common bus, and a second coupling with the respective system component (e.g., the memory array 805, the signal development component array 825, or the sense amplifier array 845) according to the quantity of signal paths in the respective system component, a degree of multiplexing with the respective system component, or other arrangement. Thus, although the central node 880 is illustrated as a single point, the central node 880 may illustrate a common bus connection having respective common nodes for each signal path of a set of signal paths coupled with the central node 880. In some cases, central node 880 and the respective selection component 885-a, 885-b, or 885-c may include aspects or otherwise support functions ascribed herein to a common selection component 875 as described with reference to FIG. 8B. In various examples, operation of the selection components 885-a, 885-b, and 885-c may be coordinated (e.g., by the memory controller 870-b) to avoid conflicts at the central node 880, including coordination to avoid or mitigate conflicts that may inadvertently destroy or degrade information (e.g., logic states, signal states) intended to be maintained at a component of the data path 860-b.[0247] FIG.
9 illustrates an example of a system 900 that supports content-addressable memory for signal development caching in accordance with examples as disclosed herein. The system 900 may include a memory array 805-c, a bus 810-c, a selection component 815-c, a bus 820-c, a signal development component array 825-c, a bus 830-c, a selection component 835-c, a bus 840-c, a sense amplifier array 845-c, a memory controller 870-c, and a CAM 880-c, each of which may be an example of the respective components as described with reference to FIGs. 8A, 8B, or 8C.[0248] The memory array 805-c may be arranged according to various quantities of memory cells 105, word lines 205, digit lines 210, and plate lines 215 or other plate nodes. In one example, the memory array 805-c may be arranged according to 1,024 word lines (e.g., A = 1,024) and 1,024 digit lines (e.g., B = N = 1,024), or some other organization of a 1,024 x 1,024 array of memory cells 105. In various examples, each word line 205 across the memory array 805-c, or each word line 205 across a domain 310-b, may be referred to as, or otherwise correspond to, a row of memory cells 105 of the memory array 805-c.[0249] In some examples, the memory array 805-c may be arranged according to a quantity of domains 310-b, which may each include an equal number of digit lines 210 or columns. For example, the system 900 illustrates an example of the memory array 805-c including four domains 310-b (e.g., domains 310-b-1, 310-b-2, 310-b-3, and 310-b-4). In one example, each of the domains 310-b may include 256 digit lines 210. Each of the domains 310-b may have independently controllable word lines 205, and each word line 205 may select or stripe a defined quantity of sub-rows. In some cases, one or more of the sub-rows of a given word line 205 may be activated while the remaining sub-rows of that word line 205 may not be activated. In some examples, sub-rows associated with different word lines 205 may be activated concurrently in different domains 310-b. Although four domains 310-b of the memory array 805-c are illustrated, the memory array 805-c may include any quantity of domains 310-b.[0250] In some examples, each domain 310-b may be arranged according to a defined quantity of control zones 905. In one example, each domain 310-b may include four control zones 905, such that memory array 805-c may include a total of sixteen control zones 905. However, a domain 310-b may be divided or organized according to any quantity of control zones. In an example where a domain 310-b includes 256 digit lines 210, each of the control zones 905 may include (e.g., span) 64 digit lines 210 (e.g., for supporting a 64-bit information transfer scheme). In some examples, each of the control zones 905 may support independent plate control. Independent plate control may refer to a capability for plate lines 215 or other plate node within a control zone 905 to be activated at a same time as other plate lines within the control zone 905 (e.g., with a same biasing, by a same independently controllable plate node), but to be activated independently from plate lines 215 in other control zones 905. In various examples, each of the control zones 905 may be associated with a common plate or plate node (e.g., common to all memory cells 105 of the control zone), or each of the control zones 905 may be associated with plate lines 215 that can be biased or activated separately from each other.
In some examples, word lines 205 may be further striped within plate line areas of a domain 310-b (e.g., within each control zone 905), providing additional access granularity within a domain 310-b.[0251] In some examples, a set of digit lines 210, a set of memory cells 105, or both, that are spanned by a sub-row or a control zone 905 may be referred to as a sub-domain. Sub-domains may provide functionality to compose a bit-row from multiple word lines 205 accessed simultaneously on the same domain 310-b, which may expand access patterns, or reduce row-buffer conflicts, among other benefits.[0252] In the example of system 900, each digit line 210 of the memory array 805-c may be coupled with the selection component 815-c via the bus 810-c. In some examples, the digit lines 210 of a domain 310-b may be grouped according to a respective sub-bus 910 of the bus 810-c, where each sub-bus 910 may be associated with some quantity of signal paths. In some examples, the quantity of sub-busses 910 may be equal to the quantity of domains in the memory array 805-c. The example of system 900 may include four sub-busses 910, and each sub-bus 910 may include 256 signal paths. Accordingly, the bus 810-c, in aggregate, may include or be otherwise associated with 1,024 signal paths each corresponding to a respective digit line 210.[0253] The selection component 815-c may be operable to couple a selected set of digit lines 210, or a selected portion of the signal paths of a sub-bus 910, with a corresponding set of storage elements or cache elements in a respective cache block of the signal development component array 825-c (e.g., via a respective sub-bus 920 of the bus 820-c). The example of system 900 may include four sub-busses 920, and each sub-bus 920 may include 64 signal paths (e.g., to support a 64-bit information transfer scheme). In some examples, the quantity of sub-busses 920 may be equal to the quantity of domains in the memory array 805-c, and the sub-busses 920 may include an integer fraction or ratio of the quantity of signal paths of the sub-busses 910.[0254] The signal development component array 825-c may be arranged according to cache blocks, each of which may be associated with a quantity of cache lines that are each coupled with a respective set of storage elements (e.g., cache elements). Each of the storage elements may be configured to maintain a signal state (e.g., a cache signal, a cache state) corresponding to a logic state while the respective storage element is isolated from one or both of the memory array 805-c or sense amplifier array 845-c. In the example of system 900, the signal development component array 825-c may include eight cache blocks, where each cache block includes eight cache lines, and each cache line includes 64 cache elements. Thus, the total cache size of the signal development component array 825-c may be 64 x 64 bits (e.g., 4,096 bits). In some examples, the signal development component array 825-c, or cache blocks thereof, may include another selection component, not shown, operable to select or activate a target cache line of a respective cache block (e.g., to couple a target cache line with a sub-bus 920).[0255] In some examples, the quantity of signal paths of a respective sub-bus 920 may be equal to a quantity of storage elements in a cache line or row of the signal development component array 825-c.
Thus, the quantity of storage elements coupled to a cache line may be proportional to (e.g., equal to, an integer multiple of) the quantity of digit lines 210 in a control zone 905 or subdomain. For instance, if a control zone 905 is associated with 64 digit lines 210, a cache line may be associated with 64n (where n = 1, 2, 3, …) storage elements. In some examples, a quantity of signal paths of a sub-bus 920, or a quantity of storage elements of a cache line, may be equal to a quantity of bits of data of a read command, or a quantity of bits of data of a write command (e.g., where 64 storage elements in a given cache line, or 64 signal paths of a given sub-bus 920, may correspond to a 64-bit data transfer scheme).[0256] The selection component 835-c may be operable to couple one or more selected cache lines of the signal development component array 825-c with a set of sense amplifiers 290 of the sense amplifier array 845-c. For example, the selection component 835-c may be coupled with the signal development component array 825-c via bus 830-c, which may include four sub-busses 930 each having 64 signal paths. The selection component 835-c may also be coupled with the sense amplifier array 845-c via bus 840-c. In one example, the bus 840-c may include multiple sub-busses 940 (e.g., sub-bus 940-a through 940-d), which may each be coupled with a respective subarray of sense amplifiers 290 of the sense amplifier array 845-c. In one example (e.g., an example for supporting a 64-bit information transfer scheme), each of a set of sub-busses 940 may be coupled with a respective subarray of 64 sense amplifiers 290, in which case each sub-bus 940 may be associated with 64 signal paths. In another example (e.g., another example for supporting a 64-bit information transfer scheme, with a different multiplexing ratio), the bus 840-c may be associated with a single set of 64 signal paths, which may be coupled with a single array of 64 sense amplifiers 290 of the sense amplifier array 845-c. In these and other examples, any one or more of the cache lines of the signal development component array 825-c may be coupled with various sense amplifiers 290 of the sense amplifier array to support various techniques for multiplexing and pipelining.[0257] In some examples, the system 900 may include other signal paths between components that are not specifically illustrated. For example, the system 900 may include one or more busses between the memory array 805-c and the sense amplifier array 845-c that bypass the signal development component array 825-c. In some examples, such signal paths may provide a coupling via another selection component that supports various multiplexing between the memory array 805-c and the sense amplifier array 845-c (e.g., directly). In some examples, such busses or selection components between the memory array 805-c and the sense amplifier array 845-c may support access operations, such as read operations or write operations, that bypass the signal development component array 825-c or components thereof (e.g., cache elements).[0258] The sense amplifier array 845-c may be coupled with a bus 850-c, which may support communication of information with an I/O component (not shown). In some examples, the bus 850-c may support the communication of signals latched at respective sense amplifiers 290 or other latching component of the sense amplifier array 845-c. In other examples, the bus 850-c may provide signaling to be latched at components outside the sense amplifier array 845-c.
In some examples, the quantity of signal paths of the bus 850-c may be the same as a quantity of bits in a data transfer scheme (e.g., 64 signal paths for a 64-bit data transfer scheme). In other examples, the quantity of signal paths of the bus 850-c may be some fraction or ratio of the quantity of bits in a data transfer scheme, and multiple busses 850-c (e.g., from multiple sense amplifier arrays 845-c, from multiple systems 900) may be multiplexed together to support the given data transfer scheme (e.g., eight busses 850-c, each having 64 signal paths, to support a 64-byte data transfer scheme).[0259] Although certain details of the system 900 are described in the context of an illustrative example of a 64-bit information transfer scheme, the system 900 may be illustrative of components for supporting other information transfer schemes. For example, to support a 64-byte information transfer scheme, illustrative quantities of the described components may be scaled by a factor of eight, or illustrative quantities of subcomponents of a given component may be scaled by a factor of eight, or various combinations thereof, to support various techniques for multiplexing and pipelining.[0260] One or more of the memory array 805-c, the selection component 815-c, the signal development component array 825-c, the selection component 835-c, or the sense amplifier array 845-c may be coupled with a memory controller 870-c to support various operations of the system 900.[0261] To organize information (e.g., signal states, cache states) stored at a signal development cache (e.g., of the signal development component array 825-c), the memory controller 870-c may be coupled with the CAM 880-c, configured to store a mapping between addresses of the memory array 805-c and addresses of the signal development cache (e.g., addresses of the signal development component array 825-c). For example, the CAM 880-c may be configured to store one or more mappings between a row of memory cells 105, or portion thereof, and a row of signal development cache elements (e.g., a cache line of the signal development component array 825-c).[0262] In response to access commands, the system 900 (e.g., memory controller 870-c) may refer to one or more mappings stored in the CAM 880-c to determine whether or how to access information (e.g., signal states, cached signals) of a signal development cache of the signal development component array 825-c or memory cells 105 of the memory array 805-c. In one example, the memory controller 870-c may receive a read command from a requesting device (e.g., a control device) associated with a first address, corresponding to an address of the memory array 805-c, and the memory controller 870-c may determine, using mapping information from the CAM 880-c, that the first address corresponds to an address of the signal development component array 825-c. Based at least in part on determining the correspondence, the memory controller 870-c may access the signal development component array 825-c to retrieve information associated with the read command.[0263] In another example, the memory controller 870-c may receive a write command associated with a set of logic states from a requesting device and store, at one or more addresses of the signal development component array 825-c, a set of signal states associated with the set of logic states.
In some cases, the memory controller 870-c may determine a mapping between the one or more addresses of the signal development component array 825-c and one or more addresses of the memory array 805-c, and store the determined mapping at the CAM 880-c. The memory controller 870-c may write the set of logic states to the addresses of the memory array 805-c based at least in part on the set of signal states and the determined mapping.[0264] Writing logic states to the memory array 805-c may be performed according to various timing or other temporal or sequential configurations (e.g., relative to writing corresponding signal states to the signal development component array 825-c). In one example, logic states may be written to the memory array 805-c simultaneously or concurrently with writing signal states to the signal development component array 825-c. In another example, logic states may be written to the memory array 805-c in series with writing signal states to the signal development component array 825-c (e.g., writing a logic state to the memory array 805-c after writing a corresponding signal state to the signal development component array 825-c, in a write operation sequence responsive to a write command that writes signal states to the signal development component array 825-c and then writes logic states to the memory array 805-c). In another example, logic states may be written to the memory array 805-c following some duration or delay after writing signal states to the signal development component array 825-c (e.g., in an eviction operation), which may or may not include a response to some trigger (e.g., an eviction command, an eviction timer) that is different from an initial write command. Thus, in various examples, data may be written to a signal development cache, and a mapping (e.g., to a corresponding address in the memory array 805-c) may be stored in the CAM 880-c to reduce or hide latency of writing logic states to the memory array 805-c, and such writing to the memory array 805-c may or may not be postponed to a later time during a signal development cache eviction.[0265] According to these and other examples, storage at the CAM 880-c may be used to evaluate whether data corresponding to an address of an access command (e.g., a read command, a write command) is stored in the signal development component array 825-c. In some examples, accessing data in a signal development cache of the signal development component array 825-c, instead of accessing the same data in the memory array 805-c, may decrease latency of performing the access operation.[0266] The mapping of addresses in the system 900 may be supported by various techniques for indicating addresses, and for mapping one address to another (e.g., mapping an address of the memory array 805-c to an address of the signal development component array 825-c).[0267] In one example, to support addressing at the memory array 805-c with a 1,024 x 1,024 array of memory cells 105, and a 64-bit cache line size, the quantity of addresses of the memory array 805-c may be given by 1,024 x 1,024 / 64 = 16,384 addresses. In various examples, each of these addresses of the memory array 805-c may refer to a word line 205 of the memory array 805-c, or to a word line 205 and a respective domain 310-b of the memory array (e.g., sub-rows), or to a word line 205 and a respective control zone 905 (e.g., subdomain), or to some other organizational indicator of the memory array 805-c.
A unique address from among the 16,384 total addresses of the memory array 805-c may be identified by a 14-bit number. In one example, a 14-bit address indicator may be divided into a word line portion using a 10-bit indicator to identify one of 1,024 word lines 205 of the memory array, a domain portion using a 2-bit indicator to identify one of four domains (e.g., one of domains 310-b-1 through 310-b-4), and a subdomain portion using a 2-bit indicator to identify one of four control zones 905 or subdomains of an indicated domain 310-b. However, different organizational techniques may be used to uniquely identify an address of the memory array 805-c.[0268] In another example, to support addressing at a signal development cache of the signal development component array 825-c with a 64 x 64 array of cache elements, and a 64-bit cache line size, the quantity of addresses of the signal development cache may be given by 64 x 64 / 64 = 64 addresses. In various examples, each of these addresses of the signal development cache may refer to a cache line of the signal development cache, or to a cache line of a cache block of the signal development cache, or to some other organizational indicator of the signal development cache. A unique address from among the 64 total addresses of the signal development component array 825-c may be identified by a 6-bit number. [0269] The CAM 880-c may provide a storage location for storing determined mappings between addresses of the signal development component array 825-c and respective addresses of the memory array 805-c. Following the illustrative examples above, the CAM 880-c may provide a mapping between 14-bit addresses of the memory array 805-c and 6-bit addresses of the signal development component array 825-c. In one example, the CAM 880-c may include a lookup table that includes both the 14-bit addresses of the memory array and the 6-bit addresses of the signal development component array 825-c, which may be supported by entries of the lookup table having at least a 20-bit entry. In another example, the CAM 880-c may include 64 storage positions, with each storage position corresponding to a respective one of the addresses of the signal development component array 825-c (e.g., having a fixed correspondence). Each of the 64 positions of the CAM 880-c may therefore be associated with at least a 14-bit storage capacity, such that the CAM 880-c can indicate which address of the memory array 805-c may be stored at a given address of the signal development component array 825-c.
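As a worked illustration of this addressing scheme, the following sketch packs a 14-bit memory array address from the three fields above and models the 64-position variant of the CAM 880-c; the ordering of the fields within the 14 bits is an assumption, since the text fixes only the field widths:

```python
# Worked example: a 14-bit memory array address packed as
# [10-bit word line | 2-bit domain | 2-bit zone], and a 64-position CAM in
# which position k holds the 14-bit tag cached at cache address k.

def pack_address(word_line: int, domain: int, zone: int) -> int:
    assert 0 <= word_line < 1024 and 0 <= domain < 4 and 0 <= zone < 4
    return (word_line << 4) | (domain << 2) | zone  # 14 bits total

def unpack_address(addr: int):
    return addr >> 4, (addr >> 2) & 0b11, addr & 0b11

# One storage position per cache address (fixed correspondence), each wide
# enough for a 14-bit tag; None marks an unused position.
cam = [None] * 64

def cam_lookup(memory_addr: int):
    """Return the 6-bit cache address holding memory_addr, or None. A real
    CAM would perform this match across all positions in parallel."""
    for cache_addr, tag in enumerate(cam):
        if tag == memory_addr:
            return cache_addr
    return None

cam[5] = pack_address(word_line=700, domain=2, zone=1)
assert cam_lookup(pack_address(700, 2, 1)) == 5
assert unpack_address(cam[5]) == (700, 2, 1)
```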
[0270] In some examples, in response to a read command, the memory controller 870-c may read the storage entries in the CAM 880-c to determine whether or not an address of the memory array 805-c, as indicated in the read command, is entered at one of the positions of the CAM 880-c. If the memory controller 870-c identifies a match of the address of the memory array 805-c (e.g., in an entry that is accompanied by an address of the signal development component array 825-c, or at a position of the CAM 880-c), the memory controller 870-c may read information from the corresponding address of the signal development component array 825-c. For example, the memory controller 870-c may read information at the address of the signal development component array 825-c that corresponds to the position of the CAM 880-c that included the matching address entry, which may be performed without accessing the memory array 805-c. Otherwise, the memory controller 870-c may proceed with accessing the memory array 805-c to retrieve the information, which may or may not include signaling exchange with the signal development component array 825-c, or signal storage at associated cache elements. When storing information of the read command at cache elements of the signal development component array 825-c, the memory controller 870-c may determine a mapping and store a new entry in the CAM 880-c indicating the address being read from the memory array 805-c. When evicting information from cache elements of the signal development component array 825-c, corresponding addresses of the memory array 805-c may be removed from the CAM 880-c. [0271] Using the CAM 880-c to support the described techniques for memory access may be associated with various advantages compared with other techniques. For example, the CAM 880-c may support flexibility for mapping information of the memory array 805-c to various locations (e.g., various cache lines) in the signal development component array 825-c, which may improve utilization of the associated signal development cache. In some examples, a fully associative cache array may enable full utilization of a cache array before overwriting various cache elements.[0272] In some examples, the CAM 880-c may support parallel matching of address tags in charge-locked 1T memory, ReRAM, or 3D cross-point architectures relatively quickly. Accordingly, the CAM 880-c may support various match functions where the cache storage media and tag check media are implemented with the same or similar methodology. In some examples, the described techniques for using CAM 880-c for cache tag lookup may support relatively faster mapping evaluations than if an entire cache array was searched.[0273] The CAM 880-c, the memory controller 870-c, or a combination thereof may also be configured to perform additional content verification. For example, access information or associated access commands may be associated with an application corresponding to specific data, a specific data structure, or with an operating system performing certain functional activity. Accordingly, access operations performed by the system 900 may be accompanied by various content verifications based on accessing a signal development cache, which may include verifying a data structure of the information, performing a page table walk, validating a byte-addressable portion of the information associated with the read request within a storage block, and other operations.[0274] Although the CAM 880-c is illustrated as a single component, a CAM 880 may include one or more subcomponents that are distributed at various portions of the system 900 or other system. In one example, a CAM 880, or described components or functionality thereof, may include subcomponents (e.g., individual CAM components) associated with each of the sub-busses 940 (e.g., a first CAM component corresponding to the sub-bus 940-a, a second CAM component corresponding to the sub-bus 940-b, and so on). In another example, a CAM 880, or described components or functionality thereof, may include subcomponents associated with (e.g., included in) a selection component, such as selection component 815-c or selection component 835-c. Moreover, although the CAM 880 is shown as being outside the illustrative boundary of the data path 860 of FIG. 8A, in other examples, a CAM 880, or components thereof, may be considered to be included in a data path 860.[0275] FIG.
9 illustrates an example of a process flow 900 that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. The operations of the process flow 900 may be implemented by a memory device or its components as described herein. For example, the operations of the process flow 900 may be performed by a memory device that includes a memory array, a signal development component array (e.g., a signal development cache, an SDC cache), a sense amplifier array (e.g., an SA array), and a CAM operable to store mappings between addresses of the memory array and addresses of the signal development cache.[0276] At 905, the process flow 900 may include receiving a read request for data. The read request may be received from a requesting or accessing device, such as a host device or other device (e.g., a graphics processing unit (GPU) device, an AI accelerator, a storage device, a device different than the memory device). In some examples, the read request may be associated with a read command. The read request may include, or be otherwise accompanied by or associated with, an address of requested information, which may be (e.g., correspond to) an address of a memory array of the memory device (e.g., a row, a page, a word line or portion thereof). After the operations of 905, the process flow may proceed to 910.[0277] At 910, the process flow 900 may include checking in a CAM for an address tag match. For example, in a prior access operation (e.g., a prior read operation, a prior write operation), information associated with or corresponding to one or more addresses of the memory array may have been stored at an address of the signal development cache. In one example, preceding read operations may have been issued and received by the memory device, and the memory device may have transferred information (e.g., logic states) of the memory array into the signal development cache (e.g., as cache signals, as signal states). After such a transfer, the information (e.g., logic states) may or may not also remain stored in the memory array. Additionally or alternatively, preceding write operations may have been issued and received by the memory device, and the memory device may have written information associated with the write operations to the signal development cache (e.g., as cache signals, as signal states). Additionally or alternatively, a read-modify-write command may be issued and the data for the write portion of that operation may still be stored in the signal development cache. In various examples, the information (e.g., logic states) may or may not have also been stored in the memory array, or the read request may be received while the information is being written to the memory array. In these and other cases, when information is stored in the signal development cache, an address of the signal development cache may be mapped with an address of the memory array, and such a mapping may be stored in the CAM. After the operations of 910, the process flow may proceed to 915.[0278] At 915, the process flow 900 may include a determination of whether or not an address of the read request (e.g., an address of the memory array) matches an address tag in the CAM, which may indicate whether or not information associated with the read request is available in the signal development cache.
For example, the operations of 915 may include identifying an association between a bitmap corresponding to an address of the signal development cache and a bitmap corresponding to the address of the read request, or an identification that a bitmap corresponding to the address of the read request is entered in a particular storage position in the CAM. If the address does not match an address tag in the CAM, which may indicate that information of the read command (e.g., information associated with the address of the memory array) is not available in the signal development cache, the process flow may proceed to 920. If the address matches an address tag in the CAM, which may indicate that information of the read command (e.g., information associated with the address of the memory array) is available in the signal development cache, the process flow may proceed to 925.[0279] At 920, the process flow 900 may include loading data (e.g., logic states) from the memory array into a signal development cache (e.g., as cache signals, as signal states). For example, at 920, when information associated with the read request is not available at the signal development cache, the memory device may couple one or more memory cells associated with the address of the read request (e.g., an address of the memory array) with the signal development component cache. In some examples, the operations of 920 may include operations of one or more read signal development portions 410, as described with reference to FIGs. 4A and 4B. Performing such operations may be accompanied by determining a mapping between the address of the memory array (e.g., of the read request) and an address of the signal development cache, and storing the determined mapping in the CAM (e.g., for future accessing). After the operations of 920, the process flow may proceed to 925. [0280] At 925, the process flow 900 may include transferring data from the signal development cache to a sense amplifier (SA) array for readout. The operations of 925 may be an example of accessing the signal development cache to retrieve information associated with the read request. For example, at 925, when information associated with the read request is available at the signal development cache (e.g., as cache signals, as signal states), the memory device may couple one or more cache elements of the signal development cache (e.g., according to an address of the signal development cache) with one or more sense amplifiers of the sense amplifier array, and sense logic signals based on the coupling. The sensing may include or be otherwise accompanied by a latching of the logic signals, which may be performed at latches of the sense amplifier array or some other component (e.g., latches of an I/O component). In some examples, the operations of 925 may include operations of one or more latch signal generation portions 420, as described with reference to FIGs. 4A and 4B. Performing such operations may be based at least in part on a determined mapping between the address of the memory array (e.g., of the read request) and an address of the signal development cache, which may have been determined, for example, at 920 or prior to beginning the process flow 900.[0281] In some examples, the process flow 900 may include or be accompanied by transmitting an indicator of whether or not information of the read request was retrieved from the memory array (e.g., whether or not the operations of 920 were bypassed, a positive or negative result of the determining at 915). Moreover, in some examples, the process flow 900 may include or be accompanied by various content verifications based on accessing the SDC cache (e.g., at 925), which may include verifying a data structure of the information, performing a page table walk, validating a byte-addressable portion of the information associated with the read request within a storage block, and other operations.
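The hit and miss paths of the process flow can be condensed into a short sketch; the helper functions and the dictionary standing in for the CAM are hypothetical stand-ins for the operations at 920 and 925:

```python
# Condensed sketch of the read path in process flow 900 (steps 905 to 925).
# cam_tags maps memory array addresses to signal development cache addresses.

def read(address, cam_tags: dict, load_into_cache, transfer_to_sense_amps):
    # 905/910: receive the read request and check the CAM for a tag match.
    cache_addr = cam_tags.get(address)

    # 915/920: on a miss, load data from the memory array into the signal
    # development cache and record the new mapping for future accesses.
    if cache_addr is None:
        cache_addr = load_into_cache(address)   # read signal development
        cam_tags[address] = cache_addr

    # 925: transfer (and latch) the cached signal states through the sense
    # amplifier array for readout; on a hit this is the only step performed,
    # which is where the latency saving comes from.
    return transfer_to_sense_amps(cache_addr)

# Tiny demonstration with stand-in storage:
cache = {}
def _load(addr):   # 920: array -> cache; returns the cache line used
    line = len(cache); cache[line] = f"data@{addr}"; return line
def _latch(line):  # 925: cache -> sense amplifiers -> latched output
    return cache[line]

tags = {}
assert read(7, tags, _load, _latch) == "data@7"   # miss path (920 then 925)
assert read(7, tags, _load, _latch) == "data@7"   # hit path (925 only)
```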
[0282] In accordance with the techniques for signal development caching as disclosed herein, by storing cache signals or signal states at a signal development cache, and maintaining an address mapping between a memory array and a signal development cache (e.g., in a CAM), the process flow 900 may support a direct transition from 915 to 925. Such techniques may support accessing information associated with a command (e.g., a read request) without accessing the memory array (e.g., in response to the read request). A direct transition from 915 to 925 may reduce or eliminate latency associated with the operations of 920 (e.g., reduce or eliminate latency associated with read signal development portions 410). When information of the command is not available at the signal development cache, the access operations may proceed with accessing the memory array. In some examples, accessing the memory array may include storing the information in the signal development cache as cache signals or signal states (e.g., the operations of 920, one or more read signal development portions 410). In other examples, accessing the memory array may include bypassing such storage, such as proceeding directly with a sense signal development that does not include a cache signal storage at the signal development cache elements.[0283] FIG. 10 shows a block diagram 1000 of a memory device 1005 that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. The memory device 1005 may be an example of aspects of a memory device as described with reference to FIGs. 1 through 9. The memory device 1005 may include a memory array 1010, an access command receiver 1015, an address mapping component 1020, a signal development cache 1025, a content-addressable memory 1030, a signal development cache selection component 1035, a sense amplifier array 1040, an input/output component 1045, a mapping status indication component 1050, and a content verification component 1055. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0284] The memory array 1010 may include an array of memory cells. In various examples, the memory cells of memory array 1010 may include storage elements of various architectures, such as capacitive memory elements, material memory elements, resistive memory elements, thresholding memory elements, transistor memory elements, and others.[0285] In some examples, the memory array 1010 may be accessed to retrieve information, which may be associated with various access commands (e.g., read commands, write commands).[0286] The access command receiver 1015 may receive access commands.
[0283] FIG. 10 shows a block diagram 1000 of a memory device 1005 that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. The memory device 1005 may be an example of aspects of a memory device as described with reference to FIGs. 1 through 9. The memory device 1005 may include a memory array 1010, an access command receiver 1015, an address mapping component 1020, a signal development cache 1025, a content-addressable memory 1030, a signal development cache selection component 1035, a sense amplifier array 1040, an input/output component 1045, a mapping status indication component 1050, and a content verification component 1055. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).

[0284] The memory array 1010 may include an array of memory cells. In various examples, the memory cells of memory array 1010 may include storage elements of various architectures, such as capacitive memory elements, material memory elements, resistive memory elements, thresholding memory elements, transistor memory elements, and others.

[0285] In some examples, the memory array 1010 may be accessed to retrieve information, which may be associated with various access commands (e.g., read commands, write commands).

[0286] The access command receiver 1015 may receive access commands. For example, the access command receiver 1015 may receive a read command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device) indicating a first address, the first address associated with (e.g., corresponding to) an address of the memory array 1010.

[0287] In some examples, the access command receiver 1015 may receive a second read command from the requesting device indicating a third address, the third address associated with (e.g., corresponding to) an address of the memory array 1010.

[0288] The address mapping component 1020 may determine that a first address of the memory array corresponds to a second address, the second address associated with (e.g., corresponding to) an address of the signal development cache 1025.

[0289] In some examples, the address mapping component 1020 may identify an association between a bitmap corresponding to a second address of the signal development cache 1025 and a bitmap corresponding to a first address of the memory array 1010, and determining whether the first address corresponds to the second address may include identifying the association.

[0290] In some examples, the address mapping component 1020 may determine that a third address does not correspond to an address of the signal development cache 1025.

[0291] In some examples, the address mapping component 1020 may determine a mapping between a set of memory cells of the memory array 1010 that corresponds to a third address and a set of cache elements of the signal development cache 1025.

[0292] The signal development cache 1025 may include a plurality of cache elements (e.g., storage elements). In various examples, the cache elements of the signal development cache 1025 may include storage elements of various architectures, such as capacitive cache elements, material cache elements, resistive cache elements, thresholding cache elements, transistor cache elements, and others. In various examples, the cache elements of the signal development cache 1025 may use a same storage architecture or a different storage architecture than the storage elements of the memory array 1010.

[0293] In some examples, the memory device may access the signal development cache 1025 to retrieve information associated with the read command based on determining that an address of the memory array 1010 corresponds to an address of the signal development cache 1025.

[0294] In some examples, accessing the signal development cache 1025 may be performed without accessing the memory array 1010 to retrieve the information.

[0295] The content-addressable memory 1030 may store a mapping between a set of memory cells and a set of cache elements.
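The bitmap association of [0289] and [0297] could, under one reading, be modeled with one-hot bitmaps compared bitwise. This sketch is purely illustrative; the encoding and the helper names are assumptions rather than anything specified by the disclosure.

```python
def to_bitmap(addr):
    # One-hot "bitmap" for an address: bit n set for address n.
    return 1 << addr

cam_entries = []  # CAM contents: (array-address bitmap, cache-address bitmap)

def cam_store(array_addr, cache_addr):
    cam_entries.append((to_bitmap(array_addr), to_bitmap(cache_addr)))

def cam_match(array_addr):
    probe = to_bitmap(array_addr)
    for array_bitmap, cache_bitmap in cam_entries:
        if array_bitmap & probe:                  # address-tag match
            return cache_bitmap.bit_length() - 1  # decode the cache address
    return None                                   # miss: access the memory array

cam_store(array_addr=5, cache_addr=2)
print(cam_match(5))  # 2
print(cam_match(6))  # None
```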
[0296] In some examples, the signal development cache 1025 may transfer the information of the set of memory cells to the set of cache elements based on storing a mapping.

[0297] In some examples, the content-addressable memory 1030 may store an association between a bitmap corresponding to an address of the set of cache elements and a bitmap corresponding to an address of a set of memory cells of the memory array 1010.

[0298] The signal development cache selection component 1035 may couple a set of cache elements of the signal development cache 1025 with a set of sense amplifiers.

[0299] In some examples, the signal development cache selection component 1035 may couple the set of memory cells with the set of cache elements based on determining the mapping between a set of memory cells of the memory array 1010 that correspond to the third address and the set of cache elements of the signal development cache 1025.

[0300] In some examples, the signal development cache selection component 1035 may couple the set of cache elements with a set of sense amplifiers (e.g., after coupling the set of memory cells with the set of cache elements).

[0301] The sense amplifier array 1040 may sense (e.g., detect, capture, latch), at the set of sense amplifiers, logic signals based on the coupling.

[0302] In some examples, the sense amplifier array 1040 may sense (e.g., detect, capture, latch), at the set of sense amplifiers, logic signals associated with the information and from the cache elements based on transferring the information to the set of cache elements.

[0303] The input/output component 1045 may output, to a requesting device, information retrieved from the signal development cache 1025 without accessing the memory array 1010.

[0304] The mapping status indication component 1050 may output an indicator to the requesting device, the indicator indicating whether information retrieved from the signal development cache 1025, or from the memory array 1010, is based on a mapping between the memory array 1010 and the signal development cache 1025 (e.g., based on a mapping or other indication of the content-addressable memory 1030).

[0305] The content verification component 1055 may perform content verification (e.g., verifying a data structure of information, performing a page table walk, performing a network interface driver or TCP/IP entry lookup, validating a byte-addressable portion of information associated with the read command within a storage block) for an access command, such as a read command, based on the accessing.
[0306] FIG. 11 shows a block diagram 1100 of a memory device 1105 that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. The memory device 1105 may be an example of aspects of a memory device as described with reference to FIGs. 1 through 9. The memory device 1105 may include a write command receiver 1110, a signal development cache 1115, an address mapping component 1120, a memory cell write component 1125, a mapping storage component 1130, and a read command receiver 1135. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).

[0307] The write command receiver 1110 may receive a write command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device), the write command associated with a set of logic states for storing in the memory device.

[0308] The signal development cache 1115 may store, at one or more addresses of a signal development cache of the memory device, a set of signal states associated with the set of logic states.

[0309] In some examples, the memory device may access the signal development cache 1115 to retrieve information associated with the read command based on determining that the read command is associated with the at least one of the one or more addresses of the signal development cache 1115.

[0310] In some examples, the memory device may access the signal development cache 1115 without accessing a memory array to retrieve the information associated with a read command.

[0311] The address mapping component 1120 may determine a mapping between the one or more addresses of the signal development cache 1115 and one or more addresses of a memory array of the memory device.

[0312] In some examples, the memory array may include a plurality of domains, each including a respective plurality of word lines, and, to determine the mapping, the address mapping component 1120 may determine a first mapping between a first of the one or more addresses of the signal development cache 1115 and an address of a word line of a first of the plurality of domains. In some examples, the address mapping component 1120 may determine a second mapping between a second of the one or more addresses of the signal development cache 1115 and an address of a word line of a second of the plurality of domains.

[0313] In some examples, the address mapping component 1120 may determine that the read command is associated with information stored in at least one of the one or more addresses of the signal development cache 1115.

[0314] The memory cell write component 1125 may write the set of logic states to the one or more addresses of the memory array based on the set of signal states and the mapping.

[0315] The mapping storage component 1130 may store the mapping between the one or more addresses of the signal development cache 1115 and the one or more addresses of the memory array.

[0316] In some examples, the mapping storage component 1130 may store the mapping at a content-addressable memory of the memory device.

[0317] The read command receiver 1135 may receive a read command from the requesting device after receiving the write command.
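A write path along the lines of [0307]-[0317] might look like the following sketch, which reuses the toy MemoryArray and SignalDevelopmentCache objects from the earlier read-path example; the signal-state encoding and the immediate write-back are assumptions made only to keep the example short.

```python
def handle_write(array, sdc, array_addr, logic_states):
    # Store signal states for the logic states at a cache address (cf. 1110/1115).
    cache_addr = sdc._allocate_line()
    sdc.lines[cache_addr] = tuple(0.9 if bit else 0.1 for bit in logic_states)
    # Determine and store the array <-> cache mapping (cf. 1120/1130).
    sdc.cam[array_addr] = cache_addr
    # Write the logic states to the memory array based on the signal states
    # and the mapping (cf. 1125).
    array.data[array_addr] = sdc.lines[cache_addr]

handle_write(array, sdc, array_addr=0x20, logic_states=(1, 1, 0, 1))
print(sdc.read(array, 0x20))  # a later read hits the cache: (1, 1, 0, 1)
```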
[0318] FIG. 12 shows a flowchart illustrating a method or methods 1200 that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. The operations of method 1200 may be implemented by a memory device or its components as described herein. For example, the operations of method 1200 may be performed by a memory device as described with reference to FIG. 10. In some examples, a memory device may execute a set of instructions to control the functional elements of the memory device to perform the described functions. Additionally or alternatively, a memory device may perform aspects of the described functions using special-purpose hardware.

[0319] At 1205, the memory device may receive a read command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device) indicating a first address, the first address associated with (e.g., corresponding to) an address of a memory array. The operations of 1205 may be performed according to the methods described herein. In some examples, aspects of the operations of 1205 may be performed by an access command receiver as described with reference to FIG. 10.

[0320] At 1210, the memory device may determine that the first address of the memory array corresponds to a second address, the second address associated with (e.g., corresponding to) an address of a signal development cache of the memory device. The operations of 1210 may be performed according to the methods described herein. In some examples, aspects of the operations of 1210 may be performed by an address mapping component as described with reference to FIG. 10.

[0321] At 1215, the memory device may access the signal development cache to retrieve information associated with the read command based on the determining. The operations of 1215 may be performed according to the methods described herein. In some examples, aspects of the operations of 1215 may be performed by a signal development cache as described with reference to FIG. 10.

[0322] In some examples, an apparatus as described herein may perform a method or methods, such as the method 1200. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for receiving, at a memory device including a memory array, a read command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device) indicating a first address, the first address associated with (e.g., corresponding to) an address of the memory array, determining that the first address of the memory array corresponds to a second address, the second address associated with (e.g., corresponding to) an address of a signal development cache of the memory device, and accessing the signal development cache to retrieve information associated with the read command based on the determining.

[0323] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for identifying an association between a bitmap corresponding to an address of the signal development cache and a bitmap corresponding to the first address, and determining whether the first address corresponds to the second address may include identifying the association.

[0324] In some examples of the method 1200 and the apparatus described herein, the identifying may be based on (e.g., occurs at, includes an accessing of) a content-addressable memory of the memory device.

[0325] In some examples of the method 1200 and the apparatus described herein, accessing the signal development cache to retrieve the information associated with the read command may include operations, features, means, or instructions for coupling a set of cache elements of the signal development cache with a set of sense amplifiers, and sensing (e.g., detecting, capturing, latching), at the set of sense amplifiers, logic signals based on the coupling.
[0326] In some examples of the method 1200 and the apparatus described herein, the operations, features, means, or instructions for accessing the signal development cache may be operable without accessing the memory array to retrieve the information.

[0327] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for receiving a second read command from the requesting device indicating a third address of the memory array, determining that the third address does not correspond to an address of the signal development cache of the memory device, and accessing the memory array to retrieve information associated with the second read command.

[0328] In some examples of the method 1200 and the apparatus described herein, accessing the memory array to retrieve the information associated with the second read command may include operations, features, means, or instructions for determining a mapping between a set of memory cells of the memory array that correspond to the third address and a set of cache elements of the signal development cache, storing the mapping between the set of memory cells and the set of cache elements based on the determining, transferring the information of the set of memory cells to the set of cache elements based on storing the mapping, and sensing (e.g., detecting, capturing, latching), at a set of sense amplifiers, logic signals associated with the information and from the cache elements based on transferring the information to the set of cache elements.

[0329] In some examples of the method 1200 and the apparatus described herein, accessing the memory array to retrieve the information associated with the second read command may include operations, features, means, or instructions for coupling the set of memory cells with the set of cache elements based on determining the mapping between a set of memory cells of the memory array that correspond to the third address and the set of cache elements of the signal development cache, and coupling the set of cache elements with a set of sense amplifiers (e.g., after coupling the set of memory cells with the set of cache elements).
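Continuing the earlier sketch (same hypothetical names and simplifications), the miss path of [0327]-[0329] can be shown as an explicit sequence: determine and store the mapping, couple the memory cells with the cache elements, then couple the cache elements with the sense amplifiers.

```python
def read_on_miss(array, sdc, third_address):
    # Determine and store the mapping for the missed address (cf. [0328]).
    cache_addr = sdc._allocate_line()
    sdc.cam[third_address] = cache_addr
    # Couple memory cells with cache elements: transfer the information.
    sdc.lines[cache_addr] = array.develop_signals(third_address)
    # Couple cache elements with the sense amplifiers and sense (cf. [0329]).
    return sense_and_latch(sdc.lines[cache_addr])

array.data[0x30] = (0.2, 0.8, 0.2, 0.8)
print(read_on_miss(array, sdc, 0x30))  # (0, 1, 0, 1)
```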
[0330] In some examples of the method 1200 and the apparatus described herein, storing the mapping may include operations, features, means, or instructions for storing an association between a bitmap corresponding to a fourth address of the set of cache elements and a bitmap corresponding to the third address.

[0331] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for outputting, to the requesting device, the information retrieved from the signal development cache without accessing the memory array.

[0332] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for outputting an indicator to the requesting device, the indicator indicating whether information retrieved from the signal development cache or from the memory array is based on a mapping between the memory array and the signal development cache (e.g., based on a mapping or other indication of a CAM).

[0333] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for performing content verification for the read command (e.g., verifying a data structure of the information, performing a page table walk, performing a network interface driver or TCP/IP entry lookup, validating a byte-addressable portion of the information associated with the read command within a storage block) based on the accessing.

[0334] FIG. 13 shows a flowchart illustrating a method or methods 1300 that supports content-addressable memory for signal development caching in a memory device in accordance with examples as disclosed herein. The operations of method 1300 may be implemented by a memory device or its components as described herein. For example, the operations of method 1300 may be performed by a memory device as described with reference to FIG. 11. In some examples, a memory device may execute a set of instructions to control the functional elements of the memory device to perform the described functions. Additionally or alternatively, a memory device may perform aspects of the described functions using special-purpose hardware.

[0335] At 1305, the memory device may receive a write command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device), the write command associated with a set of logic states for storing in the memory device. The operations of 1305 may be performed according to the methods described herein. In some examples, aspects of the operations of 1305 may be performed by a write command receiver as described with reference to FIG. 11.

[0336] At 1310, the memory device may store, at one or more addresses of a signal development cache of the memory device, a set of signal states associated with the set of logic states. The operations of 1310 may be performed according to the methods described herein. In some examples, aspects of the operations of 1310 may be performed by a signal development cache as described with reference to FIG. 11.
[0337] At 1315, the memory device may determine a mapping between the one or more addresses of the signal development cache and one or more addresses of a memory array of the memory device. The operations of 1315 may be performed according to the methods described herein. In some examples, aspects of the operations of 1315 may be performed by an address mapping component as described with reference to FIG. 11.

[0338] At 1320, the memory device may write the set of logic states to the one or more addresses of the memory array based on the set of signal states and the mapping. The operations of 1320 may be performed according to the methods described herein. In some examples, aspects of the operations of 1320 may be performed by a memory cell write component as described with reference to FIG. 11.

[0339] At 1325, the memory device may store the mapping between the one or more addresses of the signal development cache and the one or more addresses of the memory array. The operations of 1325 may be performed according to the methods described herein. In some examples, aspects of the operations of 1325 may be performed by a mapping storage component as described with reference to FIG. 11.

[0340] In some examples, an apparatus as described herein may perform a method or methods, such as the method 1300. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for receiving, at a memory device, a write command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device), the write command associated with a set of logic states for storing in the memory device, storing, at one or more addresses of a signal development cache of the memory device, a set of signal states associated with the set of logic states, determining a mapping between the one or more addresses of the signal development cache and one or more addresses of a memory array of the memory device, writing the set of logic states to the one or more addresses of the memory array based on the set of signal states and the mapping, and storing the mapping between the one or more addresses of the signal development cache and the one or more addresses of the memory array.

[0341] In some examples of the method 1300 and the apparatus described herein, the memory array may include a plurality of domains, each including a plurality of word lines, and determining the mapping may include operations, features, means, or instructions for determining a first mapping between a first of the one or more addresses of the signal development cache and an address of a word line of a first of the plurality of domains, and determining a second mapping between a second of the one or more addresses of the signal development cache and an address of a word line of a second of the plurality of domains.

[0342] Some examples of the method 1300 and the apparatus described herein may further include operations, features, means, or instructions for receiving a read command from the requesting device after receiving the write command, determining that the read command may be associated with information stored in at least one of the one or more addresses of the signal development cache, and accessing the signal development cache to retrieve information associated with the read command based on determining that the read command may be associated with the at least one of the one or more addresses of the signal development cache.
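The per-domain mapping of [0341] can be pictured with a small table keyed by cache address. The flat dict and the concrete domain and word-line numbers below are invented for illustration and imply nothing about how a CAM would physically store them.

```python
# Each signal-development-cache address maps to a (domain, word line) pair.
cache_to_array = {}

def map_cache_address(cache_addr, domain, word_line):
    cache_to_array[cache_addr] = (domain, word_line)

# First and second mappings into two different domains, as in [0341]:
map_cache_address(0, domain=0, word_line=17)
map_cache_address(1, domain=1, word_line=3)
print(cache_to_array)  # {0: (0, 17), 1: (1, 3)}
```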
[0343] Some examples of the method 1300 and the apparatus described herein may include operations, features, means, or instructions for accessing the signal development cache without accessing the memory array to retrieve the information associated with the read command.

[0344] In some examples of the method 1300 and the apparatus described herein, storing the mapping may include operations, features, means, or instructions for storing the mapping at a content-addressable memory of the memory device.

[0345] It should be noted that the methods described herein are possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, portions from two or more of the methods may be combined.

[0346] An apparatus is described. The apparatus may include a memory array including a set of memory cells, a signal development cache including a set of cache elements different than the set of memory cells and configured to store (e.g., temporarily) signaling (e.g., information, logic states, cache states, signal states) associated with information exchange with a sense amplifier array (e.g., between the memory array and the sense amplifier array), and a content-addressable memory configured to store a mapping between one or more addresses of the signal development cache and one or more addresses of the memory array.

[0347] In some examples of the apparatus, to store the mapping, the content-addressable memory may be configured to store an association between a bitmap corresponding to an address of the signal development cache and a bitmap corresponding to an address of the memory array.

[0348] In some examples of the apparatus, the memory array includes a plurality of word lines each associated with a respective subset of the plurality of memory cells, the signal development cache includes a plurality of cache lines each associated with a respective subset of the plurality of cache elements, and, to store the mapping, the content-addressable memory may be configured to store a mapping between a cache line of the plurality of cache lines and a respective word line of the plurality of word lines.

[0349] In some examples of the apparatus, the memory array includes a plurality of domains each associated with a respective subset of a plurality of word lines, the signal development cache comprises a plurality of cache lines each associated with a respective subset of the plurality of cache elements, and, to store the mapping, the content-addressable memory may be configured to store a mapping between a cache line of the plurality of cache lines and a respective domain of the plurality of domains and one of the respective plurality of word lines.

[0350] Some examples of the apparatus may include a selection component operable to selectively couple the memory array with the signal development cache based at least in part on the mapping.

[0351] Some examples of the apparatus may include a selection component operable to selectively couple the signal development cache with the sense amplifier array based at least in part on the mapping.

[0352] Some examples of the apparatus may include a selection component operable to selectively couple the memory array with the sense amplifier array based at least in part on the mapping.
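The three selection components just described can be thought of as switches that open or close conductive paths according to the stored mapping. The sketch below is a deliberately abstract stand-in, with a set of coupled endpoint pairs and names invented for this example.

```python
class SelectionComponent:
    """Models selective coupling: a closed path exists only while coupled."""

    def __init__(self):
        self.closed_paths = set()

    def couple(self, a, b):
        self.closed_paths.add((a, b))      # close the conductive path

    def isolate(self, a, b):
        self.closed_paths.discard((a, b))  # reopen the conductive path

sel = SelectionComponent()
sel.couple("cache_line_0", "sense_amp_bank_0")  # per the stored mapping
print(("cache_line_0", "sense_amp_bank_0") in sel.closed_paths)  # True
sel.isolate("cache_line_0", "sense_amp_bank_0")
```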
[0353] In some examples of the apparatus, the sense amplifier array may include a plurality of sense amplifiers, each sense amplifier of the plurality of sense amplifiers configured to output a logic state based at least in part on sensing (e.g., detecting, capturing, latching) an input signal from the signal development cache.

[0354] In some examples of the apparatus, the content-addressable memory may be configured to receive a read command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device), and access the signal development cache based at least in part on the read command and the mapping.

[0355] In some examples of the apparatus, the content-addressable memory may be configured to output an indicator (e.g., to the requesting device), the indicator indicating whether information retrieved from the signal development cache or from the memory array is based at least in part on a mapping between the memory array and the signal development cache (e.g., based at least in part on a mapping or other indication of a CAM).

[0356] In some examples of the apparatus, the content-addressable memory may be configured to determine the mapping based at least in part on a write command received from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device).

[0357] In some examples of the apparatus, the content-addressable memory may be configured to determine the mapping based at least in part on a read command received from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device).

[0358] In some examples of the apparatus, the content-addressable memory may be configured to determine the mapping based at least in part on transferring signaling associated with logic states of the memory array to the signal development cache.

[0359] Another apparatus is described. The apparatus may include a memory array comprising a plurality of memory cells, a signal development cache comprising a plurality of cache elements different than the plurality of memory cells and configured to temporarily store signaling associated with information exchange with a sense amplifier array, and a controller. The controller may be operable or configured to receive a read command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device) indicating a first address, the first address associated with (e.g., corresponding to) an address of the memory array, determine that the first address of the memory array corresponds to a second address, the second address associated with (e.g., corresponding to) an address of a signal development cache of the memory device, and access the signal development cache to retrieve information associated with the read command based at least in part on the determining. In some examples, the apparatus may include a content-addressable memory configured to store a mapping between one or more addresses of the signal development cache and one or more addresses of the memory array.
[0360] Another apparatus is described. The apparatus may include a memory array comprising a plurality of memory cells, a signal development cache comprising a plurality of cache elements different than the plurality of memory cells and configured to temporarily store signaling associated with information exchange with a sense amplifier array, and a controller. The controller may be configured to receive a write command from a requesting device (e.g., a host device, a GPU device, an AI accelerator, a storage device, a device different than the memory device), the write command associated with a plurality of logic states for storing in the memory array, store, at one or more addresses of a signal development cache of the memory device, a plurality of signal states associated with the plurality of logic states, determine a mapping between the one or more addresses of the signal development cache and one or more addresses of the memory array, write the plurality of logic states to the one or more addresses of the memory array based at least in part on the plurality of signal states and the mapping, and store the mapping between the one or more addresses of the signal development cache and the one or more addresses of the memory array. In some examples, the apparatus may include a content-addressable memory configured to store a mapping between one or more addresses of the signal development cache and one or more addresses of the memory array.

[0361] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths.

[0362] The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.
[0363] The term “coupling” refers to the condition of moving from an open-circuit relationship between components, in which signals are not presently capable of being communicated between the components over a conductive path, to a closed-circuit relationship between components, in which signals can be communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.

[0364] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components from one another, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.

[0365] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOS), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.

[0366] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor’s threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor’s threshold voltage is applied to the transistor gate.
[0367] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.

[0368] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

[0370] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

[0371] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”

The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
PROBLEM TO BE SOLVED: To provide a decapsulation method and apparatus for an integrated circuit.

SOLUTION: The package is irradiated with laser radiation, typically in the UV to IR spectral range with short pulse durations, to ablate the molding compound. The decapsulation process is monitored and controlled in real time, and ablation debris is removed in a fluid. Decapsulation can be performed without damaging the internal structure of the integrated circuit, and decapsulating in a liquid prevents oxidation of the molding compound by the ambient air.
1. A method of decapsulating an integrated circuit package comprising one or more layers of an electronic circuit covered by one or more layers of encapsulation material, the method comprising the step of irradiating the integrated circuit package with laser radiation to remove the encapsulation material.

2. The method according to claim 1, further comprising a step of selecting a position on the package to be irradiated before performing the laser irradiation step.

3. The method according to claim 2, further comprising the steps of: monitoring the decapsulation process; determining whether decapsulation at the selected position is complete; continuing irradiation of the package if it is determined not to be complete; determining, if it is complete, whether all decapsulation is complete; stopping the decapsulation process if all decapsulation is determined to be complete; and, if all decapsulation is determined not to be complete, selecting a new position to which the laser energy is directed and continuing to irradiate the package.

4. The method according to claim 3, wherein the determination as to whether decapsulation is complete at the position includes monitoring an acoustic signal and/or an optical signal emitted from the decapsulation site.

5. The method according to any one of claims 1 to 4, further comprising the steps of: measuring the energy of the laser radiation; comparing the measured energy of the laser radiation with an expected value; and, if the two values do not match, adjusting the laser energy accordingly.

6. The method according to any one of claims 1 to 5, further comprising a step of flowing a fluid over the package.

7. The method according to claim 6, wherein the fluid is a liquid.

8. The method according to any one of claims 1 to 7, wherein the package is immersed in a liquid during the decapsulation process.

9. The method according to claim 6, wherein the fluid is a gas.

10. An apparatus for decapsulating an integrated circuit package including one or more layers of electronic circuitry covered by one or more layers of encapsulation material, the apparatus comprising a laser energy source and means for holding the package, wherein the package is irradiated with the laser energy to remove the encapsulation material.

11. The apparatus of claim 10, further comprising means for monitoring the laser energy and comparing the monitored value with an expected value, thereby adjusting characteristics of the laser energy source.

12. The apparatus of claim 10, further comprising means for monitoring the laser decapsulation process.

13. The apparatus of claim 10, further comprising means for providing an image of the package during the laser decapsulation process.

14. The apparatus of claim 10, further comprising means for generating a fluid flow over the integrated circuit package during the decapsulation process.

15. The apparatus of claim 10, wherein the package is immersed in a liquid during the decapsulation process.
16. The apparatus according to any one of claims 10 to 15, further comprising means for effecting relative movement between the laser energy directed at the package and the package itself, thereby irradiating different locations on the package.

17. An apparatus for decapsulating an integrated circuit package substantially as described herein and in the accompanying drawings.

18. A method of decapsulating an integrated circuit package substantially as described herein and in the accompanying drawings.
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to the field of integrated circuit (IC) decapsulation, also known as encapsulation removal, package opening, or unsealing, which is used in particular to prepare devices for failure analysis. More specifically, the present invention relates to a method and an apparatus for decapsulating an IC package.

Packaging of integrated circuits (ICs) typically involves placing a small wafer die on a lead frame (typically a copper lead frame). Fine copper or gold wires are then soldered to the corresponding lead feet of the lead frame and to the bonding pads on the wafer die to provide electrical continuity. A thin transparent layer of epoxy material is then typically bonded to the top surface of the wafer die, completely covering the die to protect the device from environmental factors such as heat and moisture. The assembly is then sealed (encapsulated) with a molding compound (usually black), resulting in an IC package.

[0003] One of the important research fields in the IC industry is failure analysis of IC packages. Failure analysis helps the manufacturer understand and address the causes of device failure, thereby improving manufacturing yield and device reliability. Many device failure analysis technologies are known; one of them is electrical signal detection, whose results can be diagnosed to gain insight into the reason for a device failure. However, it is difficult to obtain sufficient information for a complete analysis using this technique alone, and the more highly integrated the circuit, the more difficult it becomes. To perform a more complete device failure analysis, it is necessary to remove the black molding compound covering the wafer die and the epoxy layer, thereby opening the IC package.

Chemical decapsulation is one widely used decapsulation technique. In chemical decapsulation, sulfuric acid and/or nitric acid are used, selected according to the material of the molding compound to be removed. Chemical decapsulation takes several hours and, in addition to damaging the lead frame itself, may also damage the bonding pads, solder balls, and copper wires on the organic die mount. Achieving complete decapsulation while avoiding such collateral damage is therefore complicated and difficult.

Accordingly, there is a need for a method of decapsulating an integrated circuit package that reduces incidental damage to the internal structure of the integrated circuit, that is simple, quick, highly accurate, and safe to operate, and for an apparatus that performs such a method.

SUMMARY OF THE INVENTION

An object of the present invention is to satisfy the above requirements.

In accordance with a first aspect of the present invention, there is provided a method of decapsulating an integrated circuit package (1) comprising one or more layers of an electronic circuit covered by one or more layers of encapsulation material (6), the method comprising irradiating the integrated circuit package (1) with laser radiation (31) to remove the encapsulation material (6).
The method of the present invention preferably includes a step of selecting (30) a position on the package to be irradiated before performing the laser irradiation step (31).

Advantageously, the decapsulation process is monitored (35) and it is determined whether decapsulation at the selected position is complete (36). If it is not complete, irradiation of the package continues (31); when it is complete, it is determined whether all decapsulation is complete (37). When all decapsulation is complete, the decapsulation process is stopped (38); when it is not, a new position to which the laser energy is directed is selected (39) and irradiation of the package continues (31). Furthermore, advantageously, the determination (36) as to whether decapsulation is complete at the position comprises monitoring the acoustic and/or optical signals emitted from the decapsulation site.

The method of the present invention preferably includes the steps of measuring the energy of the laser radiation (32) and comparing the measured energy with an expected value (33), and, if the two values do not match, adjusting the laser energy accordingly (34).

Advantageously, a fluid (19), preferably a liquid (e.g., water or an isopropyl alcohol solution), flows (29) over the package (1). Also, preferably, the package is immersed in a liquid during the decapsulation process. Alternatively, the fluid (19) may be a gas (for example, air).

In accordance with a second aspect of the present invention, there is provided an apparatus for decapsulating an integrated circuit package including one or more layers of an electronic circuit covered by one or more layers of encapsulation material, the apparatus comprising a laser energy source (7) and means (22) for holding the package, wherein the package is irradiated with the laser energy to remove the encapsulation material.

The apparatus of the present invention may include means for monitoring the laser energy (32) and comparing the monitored value with an expected value (33), thereby adjusting the characteristics of the laser energy source (7) (34). Means for monitoring the laser decapsulation process may also be provided, as may means (9, 10, 11) for providing an image of the package during the laser decapsulation process. As the means (25, 11) for monitoring the laser decapsulation process, for example, a CCD camera (23) and a monitor (24) may be used.

[0014] Means (20, 21) may be provided for generating a flow of fluid (19) over the integrated circuit package during the decapsulation process. The fluid (19) may be a gas such as air, or a liquid such as water or an isopropyl alcohol (IPA) solution. This allows the very small particles of molding compound debris ejected from the surface at high speed during laser ablation to be removed. The package may also be immersed in a liquid (19) during the decapsulation process. This prevents oxidation of the molding compound, which speeds up the process and allows a lower laser fluence.
Means (22) may be provided for effecting relative movement between the laser energy directed at the package and the package itself, thereby irradiating different positions on the package.

Further, in accordance with the present invention, there are provided a method and an apparatus for decapsulating an integrated circuit package substantially as set forth herein and in the accompanying drawings.

BEST MODE FOR CARRYING OUT THE INVENTION

So that the present invention may be better understood and put into practice, it will now be described with reference to various schematic embodiments illustrated in the drawings; the present invention is not, however, limited to these embodiments. The figures are not to scale.

FIGS. 1(a) and 1(b) are transmissive cross-sectional views of an IC package before and after decapsulation according to the present invention, respectively. In FIG. 1(a), an IC package 1 has a lead frame 2 on which a wafer die 3 is placed. Wires, typically gold wires 4, form electrical connections between the lead frame 2 and the wafer die 3. Overlying the wafer die is a transparent epoxy material layer 5. The wafer die and its associated electrical connections are sealed with a molding material layer 6 (usually black).

FIG. 1(b) is a view similar to FIG. 1(a) of the package after decapsulation. Since the molding material layer 6 covering the wafer die 3 has been removed, the upper surface of the wafer die (with its electric circuit, not shown) can be tested through the transparent epoxy layer 5. The connections between the wires 4 and the wafer die 3 can also be tested. Testing of the connections between the wires 4 and the lead frame 2 is also possible once further decapsulation has removed more of the encapsulation material.

FIG. 2 shows an apparatus suitable for implementing the present invention. The figure shows a laser light source 7, preferably one with a wavelength in the UV to IR spectral range and short pulse durations. A suitable laser is, for example, an Nd:YAG laser with a pulse duration of 7 ns and a wavelength of 532 nm. The laser light source 7 emits a beam 8, which passes through a beam sampler 9. The beam sampler 9 diverts a few percent (e.g., 5%) of the beam to an energy meter 10, which monitors the laser pulse energy in real time. The reading of the meter 10 is sent to a process controller 11, which compares the actual value with the expected value; if the two values do not match, the output of the laser light source is adjusted accordingly. The remaining beam 8 passes through a beam expander 12 to produce an expanded beam 13. The expanded beam is reflected by a mirror 14 into a beam homogenizer 15, which produces homogenized light 16 that is incident on the IC package 1 through an aperture 17. The homogenized light yields a uniform energy profile on the package, and the spot diameter of the beam on the package can be controlled by the aperture 17.

The IC package 1 is held in a decapsulation chamber 18. When laser decapsulation is performed in a gas, for example in air, the decapsulation chamber 18 is connected to a gas flow or suction system for removing ablation debris.
In the system shown, however, the decapsulation chamber contains a liquid 19 in which the package is immersed (e.g., water, an isopropyl alcohol (IPA) solution, or another suitable liquid). By preventing oxidation of the molding compound, the liquid speeds up the process and allows a low laser fluence. A liquid inlet 20 and a liquid outlet 21 are provided to maintain liquid flow. Ablation debris is entrained in the liquid stream and exits the decapsulation chamber via the liquid outlet 21, so that debris accumulating on the package surface is removed. The decapsulation chamber 18 is held on an XY stage 22, whose movement effects relative motion between the laser and the package surface so that the laser beam can scan over the package surface as required. The movement of the XY stage 22 is controlled by the process controller 11. Alternatively, the laser beam itself may be scanned over the surface.

To monitor the laser decapsulation process in real time, a CCD camera 23 and an associated monitor 24 are used to provide an image of the progress of the process. In addition, laser ablation of the molding compound generates acoustic and optical signals, and the progress of the ablation process can be gauged from the amplitude of these signals. An acoustic and/or optical monitoring device 25 may be used in conjunction with the process controller 11 to monitor the ablation process in real time and provide feedback control. When decapsulation at a specific position is complete, the XY stage can be moved to perform decapsulation at a different position. This process is repeated and, once it is determined that decapsulation is complete everywhere, the series of processes is terminated.

FIGS. 3-7 are micrographs of several locations on a package decapsulated according to the present invention. FIG. 3 shows a decapsulated portion of the lead frame, FIG. 4 a decapsulated wire, FIG. 5 a decapsulated solder joint on the lead frame, FIG. 6 a decapsulated bonding pad of the wafer die, and FIG. 7 the circuit structure on the decapsulated wafer die. These figures show that decapsulation has been achieved without damaging the package structure.

FIG. 8 is a flowchart illustrating the method of the present invention. At the start of the process (26), the package to be decapsulated is placed in the decapsulation chamber (27). The package is moved so that the decapsulation position lies in the optical path of the laser beam, and a fluid is then flowed over the package (29). An irradiation area is selected (30) and the package is irradiated (31). The laser energy is measured (32), and it is determined whether the actual value differs from the expected value (33); if so, the laser energy is adjusted (34). If the laser energy meets the expected value, the acoustic/optical signal emitted from the decapsulation site is monitored (35). If the monitoring shows that decapsulation at that position is incomplete, irradiation continues (31); if it is complete, a further determination is made as to whether all decapsulation is complete (37). If so, the series of processes is terminated (38); otherwise, a new irradiation position is selected (39) and decapsulation continues at that position (31).
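The FIG. 8 flow lends itself to a simple software control loop. The sketch below is schematic only: the Laser, Monitor, and Stage classes are hypothetical stand-ins for the hardware described above, the numeric thresholds are invented, and completion at a position is judged by the acoustic amplitude falling below a threshold, which is one plausible reading of steps (35) and (36).

```python
import random

EXPECTED_ENERGY_MJ = 5.0   # expected pulse energy (invented value)
TOLERANCE_MJ = 0.2
DONE_LEVEL = 0.05          # acoustic amplitude taken to mean "seal removed"

class Laser:
    def __init__(self):
        self.setpoint = EXPECTED_ENERGY_MJ
    def fire_pulse(self):                 # (31): irradiate the package
        pass
    def read_energy_meter(self):          # (32): beam-sampler energy reading
        return self.setpoint + random.uniform(-0.1, 0.1)
    def adjust_output(self, target):      # (34): feedback correction
        self.setpoint = target

class Monitor:
    def __init__(self):
        self.level = 1.0
    def acoustic_level(self):             # (35): amplitude decays as the
        self.level *= 0.7                 # molding compound is ablated
        return self.level

class Stage:
    def move_to(self, x, y):              # XY stage (22)
        print(f"stage -> ({x}, {y})")

def decapsulate(positions, laser, stage):
    for x, y in positions:                # (30)/(39): select a position
        stage.move_to(x, y)
        monitor = Monitor()
        while True:
            laser.fire_pulse()
            energy = laser.read_energy_meter()
            if abs(energy - EXPECTED_ENERGY_MJ) > TOLERANCE_MJ:
                laser.adjust_output(EXPECTED_ENERGY_MJ)   # (33)/(34)
                continue
            if monitor.acoustic_level() < DONE_LEVEL:     # (36): done here
                break
    # (37)/(38): all selected positions processed; process terminated

decapsulate([(0, 0), (0, 1)], Laser(), Stage())
```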
The method of the present invention is a non-contact process that exposes the wafer die of an IC package for device failure analysis. Efficient and rapid laser decapsulation, independent of the internal IC package structure, requires careful selection of laser control parameters such as pulse duration, laser fluence, laser wavelength and pulse repetition rate. Because no decapsulation chemicals are used, no chemical reaction takes place on the epoxy layer, metal or semiconductor structures.

The molding compound is readily removed using pulsed laser irradiation; its ablation threshold fluence is much lower than that of metal and semiconductor materials. During laser ablation, fine particles of molding compound debris are ejected from the surface at high speed. These particles can be removed using the fluid flow/suction system described above. The fluid may be a gas such as air, or a liquid such as water or an isopropyl alcohol (IPA) solution. Laser ablation is preferably performed with the package completely immersed in the liquid. This substantially prevents the molding compound from being oxidized by the surrounding air, so the ablation is substantially accelerated. Oxidation may also be suppressed by performing the desealing in an inert atmosphere. Fluid immersion allows a low laser fluence to be used.

Laser fluence is an important parameter in laser encapsulation removal. If the laser fluence is too high, the process speed increases, but the epoxy layer, metal and/or semiconductor structures may be damaged, especially at the interface with the molding compound. As mentioned above, the laser fluence can be chosen to be above the ablation threshold of the molding compound but below that of the other materials, so that no collateral damage is caused. This technique offers significant advantages over chemical decapsulation.

The present invention relates to a method and an apparatus for removing the encapsulation of an integrated circuit. The package is irradiated with laser radiation, typically in the UV to IR spectrum and with short pulse durations, to ablate the molding compound. The present invention also provides real-time monitoring and process control of the encapsulation removal process, and removal of ablation debris with a fluid. By using the present invention, the encapsulation can be removed without damaging the internal structure of the integrated circuit. Furthermore, performing the desealing in a liquid prevents the molding compound from being oxidized by the surrounding air.

BRIEF DESCRIPTION OF THE FIGURES

FIG. 1(a) is a schematic see-through cross-sectional view of an IC package before encapsulation removal. FIG. 1(b) is a view corresponding to FIG. 1(a) after the encapsulation has been removed.

FIG. 2 is a schematic diagram illustrating an apparatus according to one embodiment of the present invention.

FIG. 3 is a micrograph of a lead frame of an IC package desealed according to the present invention.

FIG. 4 is a photomicrograph of a gold wire of an IC package desealed according to the present invention.

FIG. 5 is a photomicrograph of solder bonding to the lead frame of an IC package desealed according to the present invention.
FIG. 6 is a photomicrograph of solder bonding to a bonding pad of the wafer die of an IC package desealed according to the present invention.

FIG. 7 is a photomicrograph of the circuit structure on the wafer die of an IC package desealed according to the present invention.

FIG. 8 is a flowchart illustrating a method according to the present invention.

DESCRIPTION OF REFERENCE NUMERALS

1 IC package; 2 lead frame; 3 wafer die; 4 wire; 5 epoxy layer; 6 molding material; 7 laser light source; 11 process controller; 12 beam expander; 15 beam homogenizer; 17 aperture; 18 desealing chamber; 23 CCD camera; 24 monitor; 25 acoustic/optical monitoring device
Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric and a last-level cache implemented with low latency, high bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory with a copy of data stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when the cache controller determines a request address of the memory access request is not within the range of addresses. The cache controller services the selected memory request by accessing data from the last-level cache when the cache controller determines the request address is within the range of addresses.
WHAT IS CLAIMED IS:

1. An apparatus comprising:
an interface configured to receive memory access requests comprising request addresses; and
logic configured to:
maintain an identification of a first region of contiguous data in a system memory that has a copy of the contiguous data stored in a second region of a cache, wherein the first region represents a range of addresses and the identification comprises:
a first start address that identifies the beginning of the first region; and
a size of the second region; and
in response to receiving a memory access request:
send the memory access request to the system memory, in response to determining the request address is not within the range of addresses; and
service the memory request by accessing data from the cache, in response to determining the request address is within the range of addresses.

2. The apparatus as recited in claim 1, wherein to service the selected memory request by accessing data from the cache, the logic is further configured to:
determine an offset based on a difference between the request address and the first start address;
determine a translated address based on the offset and a second start address pointing to a beginning of the second region of the cache; and
service the memory request by accessing data from the cache beginning at the translated address.

3. The apparatus as recited in claim 2, wherein in response to determining the translated address points to a memory location outside of the second region of the cache, the logic is further configured to:
determine a wrap-around address by subtracting a largest address of the second region from the translated address; and
service the memory request by accessing data from the cache beginning at the wrap-around address.

4. The apparatus as recited in claim 2, wherein the logic is further configured to:
maintain the second start address; and
determine the translated address as a sum of the offset and the second start address.

5. The apparatus as recited in claim 4, wherein in response to determining a size of the first region changes, the logic is further configured to:
update one or more of the first start address, the second start address and the size of the second region; and
update the range of addresses based at least in part on the updated size of the second region.

6. The apparatus as recited in claim 4, wherein in response to predicting a region of upcoming data accesses, the logic is further configured to:
initialize one or more of the first start address, the second start address and the size of the second region; and
store a copy of contiguous data from the region of upcoming data accesses in the system memory into the cache.

7. The apparatus as recited in claim 6, wherein determining that the region of data accesses is defined comprises one or more of:
the logic monitors received memory access requests and identifies a pattern used to identify the region; and
the logic receives a hint from software that identifies the region.

8. The apparatus as recited in claim 6, wherein in response to determining there are no more upcoming data accesses for the region, the logic is further configured to:
store an indication that specifies that there is no region stored in the cache by updating the size of the second region to a value of zero bytes.
9. The apparatus as recited in claim 1, wherein the logic is further configured to maintain an identification of a plurality of regions of contiguous data in the system memory, each region having a copy of respective contiguous data stored in the cache, wherein the logic is further configured to:
determine a plurality of ranges of addresses, one for each of the plurality of regions;
send the selected memory request to system memory, in response to determining a request address of the selected memory request is not within any of the plurality of ranges of addresses; and
service the memory request by accessing data from the cache, in response to determining the address of the selected memory access request is within one of the plurality of ranges of addresses.

10. A method, comprising:
receiving memory access requests comprising request addresses;
maintaining an identification of a first region of contiguous data in a system memory that has a copy of the contiguous data stored in a second region of a cache, wherein the first region represents a range of addresses and the identification comprises:
a first start address pointing to a memory location storing data at the beginning of the first region; and
a size of the second region; and
in response to receiving a memory access request:
sending the memory access request to the system memory, in response to determining the request address is not within the range of addresses; and
servicing the memory request by accessing data from the cache, in response to determining the request address is within the range of addresses.

11. The method as recited in claim 10, wherein to service the selected memory request by accessing data from the cache, the method further comprises:
determining an offset based on a difference between the request address and the first start address;
determining a translated address based on the offset and a second start address pointing to a beginning of the second region of the cache; and
servicing the memory request by accessing data from the cache beginning at the translated address.

12. The method as recited in claim 11, wherein in response to determining the translated address points to a memory location outside of the second region of the cache, the method further comprises:
determining a wrap-around address by subtracting a largest address of the second region from the translated address; and
servicing the memory request by accessing data from the cache beginning at the wrap-around address.

13. The method as recited in claim 11, further comprising:
maintaining the second start address; and
determining the translated address as a sum of the offset and the second start address.

14. The method as recited in claim 13, wherein in response to determining a size of the first region changes, the method further comprises:
updating one or more of the first start address, the second start address and the size of the second region; and
updating the range of addresses based at least in part on the updated size of the second region.

15. The method as recited in claim 13, wherein in response to predicting a region of upcoming data accesses, the method further comprises:
initializing one or more of the first start address, the second start address and the size of the second region; and
storing a copy of contiguous data from the region of upcoming data accesses in the system memory into the cache.
16. A non-transitory computer readable storage medium storing program instructions, wherein the program instructions are executable by a processor to:
receive memory access requests comprising request addresses;
maintain an identification of a first region of contiguous data in a system memory that has a copy of the contiguous data stored in a second region of a cache, wherein the first region represents a range of addresses and the identification comprises:
a first start address pointing to a memory location storing data at the beginning of the first region; and
a size of the second region; and
in response to receiving a memory access request:
send the memory access request to the system memory, in response to determining the request address is not within the range of addresses; and
service the memory request by accessing data from the cache, in response to determining the request address is within the range of addresses.

17. The non-transitory computer readable storage medium as recited in claim 16, wherein to service the selected memory request by accessing data from the cache, the program instructions are further executable by a processor to:
determine an offset based on a difference between the request address and the first start address;
determine a translated address based on the offset and a second start address pointing to a beginning of the second region of the cache; and
service the memory request by accessing data from the cache beginning at the translated address.

18. The non-transitory computer readable storage medium as recited in claim 17, wherein in response to determining the translated address points to a memory location outside of the second region of the cache, the program instructions are further executable by a processor to:
determine a wrap-around address by subtracting a largest address of the second region from the translated address; and
service the memory request by accessing data from the cache beginning at the wrap-around address.

19. The non-transitory computer readable storage medium as recited in claim 17, wherein the program instructions are further executable by a processor to:
maintain the second start address; and
determine the translated address as a sum of the offset and the second start address.

20. The non-transitory computer readable storage medium as recited in claim 19, wherein in response to determining a size of the first region changes, the program instructions are further executable by a processor to:
update one or more of the first start address, the second start address and the size of the second region; and
update the range of addresses based at least in part on the updated size of the second region.
CACHE FOR STORING REGIONS OF DATA

BACKGROUND

Description of the Related Art

[0001] As semiconductor manufacturing processes advance and on-die geometric dimensions shrink, semiconductor chips provide more functionality and performance. However, design issues still arise with modern processing and integrated circuit design techniques that limit the potential benefits. One issue is that interconnect delays continue to increase per unit length in successive generations of two-dimensional planar layout chips. Also, high electrical impedance between individual chips increases latency. In addition, signals that traverse off-chip to another die increase power consumption, due to the increased parasitic capacitance on these longer signal routes.

[0002] Another design issue is that most software applications that access a lot of data are memory bound, in that computation time is generally determined by memory bandwidth. The memory access latency of an off-chip dynamic random access memory (DRAM) is hundreds to over a thousand clock cycles, and the increased number of cores in processor designs has accentuated the memory bandwidth problem. Recently, there has been progress in memory technologies for implementing in-package memory that provides access to a large, low latency, high bandwidth memory before accessing off-package DRAM and main memory.

[0003] One example of such memory technology is three-dimensional integrated circuits (3D ICs), in which two or more layers of active electronic components are integrated both vertically and horizontally into a single circuit. 3D packaging, known as System in Package (SiP) or Chip Stack multi-chip module (MCM), saves space by stacking separate chips in a single package. Components within these layers communicate using on-chip signaling, whether vertical or horizontal. This signaling provides reduced interconnect signal delay compared with known two-dimensional planar layout circuits.

[0004] The manufacturing trends described above lead to gigabytes of integrated memory within a single package. In some cases, the computing system uses the additional on-chip storage as a last-level cache before accessing off-chip memory. A reduced miss rate achieved by the additional memory helps hide the latency gap between a processor and its off-chip memory. However, cache access mechanisms for row-based memories are inefficient for this additional integrated memory. Large tag arrays, such as a few hundred megabytes for a multi-gigabyte cache, are expensive to place on the microprocessor die and incur a high latency for lookups. The lookup and data retrieval consume too much time, as the tags and data are read out in a sequential manner.

[0005] Increasing the size of a data cache line for the additional integrated memory, such as growing from a 64-byte line to a 4-kilobyte (KB) line, reduces both the number of cache lines in the integrated memory and the size of a corresponding tag. However, dirty bits and coherency information are still maintained at the granularity of the original cache line size (64-byte line).
Therefore, the on-package DRAM provides a lot of extra data storage, but the cache and DRAM access mechanisms are inefficient.

[0006] In view of the above, efficient methods and systems for performing memory accesses in a computing system are desired.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] The advantages of the methods and mechanisms described herein may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:

[0008] FIG. 1 is a block diagram of one embodiment of data storage.
[0009] FIG. 2 is a flow diagram of one embodiment of a method for performing efficient memory accesses in a computing system.
[0010] FIG. 3 is a block diagram of one embodiment of a computing system.
[0011] FIG. 4 is a block diagram of one embodiment of a system-in-package (SiP).
[0012] FIG. 5 is a block diagram of one embodiment of data storage.
[0013] FIG. 6 is a block diagram of one embodiment of data storage.
[0014] FIG. 7 is a block diagram of one embodiment of data storage.
[0015] FIG. 8 is a block diagram of one embodiment of data storage.
[0016] FIG. 9 is a block diagram of one embodiment of data storage.
[0017] FIG. 10 is a flow diagram of one embodiment of a method for performing efficient memory accesses in a computing system.
[0018] FIG. 11 is a block diagram of one embodiment of data storage.
[0019] FIG. 12 is a block diagram of one embodiment of data storage.
[0020] FIG. 13 is a block diagram of one embodiment of data storage.
[0021] FIG. 14 is a flow diagram of one embodiment of a method for performing efficient memory accesses in a computing system.

[0022] While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION OF EMBODIMENTS

[0023] In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various embodiments may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that, for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements.

[0024] Various systems, apparatuses, methods, and computer-readable mediums for efficiently performing memory accesses in a computing system are disclosed. One or more clients in the computing system process applications. Examples of such clients include a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), an input/output (I/O) device, and so forth. The computing system also includes multiple link interfaces for transferring data between clients.
In addition, each of the one or more clients accesses data from a last-level cache via the communication fabric.

[0025] In various embodiments, a cache is implemented with low latency, high bandwidth memory separate from system memory. In some embodiments, the cache is used as a last-level cache in a cache memory subsystem. In other embodiments, the cache is another level within the cache memory subsystem. The system memory includes one of a variety of off-package dynamic random access memory (DRAM) and main memory such as hard disk drives (HDDs) and solid-state disks (SSDs). In some embodiments, the computing system implements the cache with integrated DRAM, such as three-dimensional (3D) DRAM, included in a System-in-Package (SiP) with a processing unit of one of the clients. In other embodiments, the computing system uses one of other memory technologies for implementing the cache, such as synchronous RAM (SRAM), embedded DRAM (eDRAM), flash memory such as solid-state disks, and one of a variety of non-volatile memories. Examples of the non-volatile memories are phase-change memory, memristors and spin-transfer torque (STT) magnetoresistive random-access memory (MRAM).

[0026] A cache controller for the cache includes one or more queues. Each queue stores memory access requests of a respective type. For example, in some designs, a first queue stores memory read requests and a second queue stores memory write requests. Logic within the cache controller selects a queue of the one or more queues and selects a memory access request from the selected queue. The logic determines a range of addresses corresponding to a first region of contiguous data stored in system memory with a copy of the contiguous data stored in a second region of the cache. As used herein, “contiguous data” refers to one or more bits of data located next to one another in data storage. In some embodiments, the size of the contiguous data ranges between the size of a cache line (e.g., 64 bytes) and the size of a page (e.g., 4 kilobytes) in order to provide a size granularity corresponding to a region of predicted upcoming data accesses for a software application being executed. In other embodiments, another size of the contiguous data is used.

[0027] At an earlier point in time, when the logic determined that a region of predicted upcoming data accesses was defined, the logic stored a copy of the contiguous data from this region of the system memory (the first region in this example) into the second region of the cache. The contiguous data in the first region includes data corresponding to the predicted upcoming data accesses. The logic also initialized multiple parameters used to characterize the regions. For example, the logic maintains a first start address pointing to a memory location storing data at the beginning of the first region of the system memory. In addition, the logic maintains a second start address pointing to a memory location storing data at the beginning of the second region of the cache. Further, the logic maintains a size of the second region.

[0028] In one embodiment, by monitoring received memory access requests and identifying a pattern, the logic predicts a region of the system memory that is going to be accessed by upcoming memory accesses. The logic identifies this region. In response, the logic performs the above steps, such as storing a copy of the contiguous data from this region and updating the corresponding parameters.
In another embodiment, the logic receives one or more hints from software that identify, or are used to identify, the region of predicted upcoming data accesses.

[0029] When the logic detects a change to the size of the second region, the logic determines a range of addresses beginning at the first start address and ending at an address that is the sum of the first start address and the new size of the second region. In some embodiments, the updates to one or more of the first start address and the size of the second region occur as data is updated in the second region. The updates to the second region include one or more of adding data, removing data, and overwriting existing data in the second region. When the logic selects a memory access request from one of the multiple queues, the logic of the cache controller compares the request address of the selected memory access request to the range of addresses and determines whether the request address is within this range. Therefore, to determine whether there is a cache hit or a cache miss within the last-level cache, the logic compares the request address to this maintained range of addresses rather than performing a set-associative lookup or a fully-associative lookup of the tag arrays in the cache. The comparison is a faster operation than the lookup operations of indexes and tags of the cache.

[0030] When the logic determines that the request address of the selected memory access request is not within the range of addresses, the logic sends the selected memory access request to system memory for servicing. However, when the logic determines that the request address is within the range of addresses, the logic services the memory access request by accessing data from the cache. To do so, the logic determines an offset as the difference between the request address and the first start address. Afterward, the logic determines a translated address based on the offset and the second start address. Following that, the logic services the memory access request by accessing data from the cache beginning at the translated address.
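To make the address arithmetic concrete, the following minimal Python sketch implements the range check and translation just described, in terms of a first start address, a second start address, and a region size. The class and method names (RegionParams, is_hit, translate) are illustrative assumptions, not names from the disclosure.

```python
# Minimal sketch of region-based hit detection and address translation.
# RegionParams, is_hit and translate are illustrative names only.
from dataclasses import dataclass

@dataclass
class RegionParams:
    sys_start: int     # first start address: start of the region in system memory
    cache_start: int   # second start address: start of the region in the cache
    size: int          # region size in bytes; zero means no valid region

    def is_hit(self, request_addr: int) -> bool:
        """A hit requires a valid region and a request address in the range."""
        return self.size > 0 and (
            self.sys_start <= request_addr < self.sys_start + self.size)

    def translate(self, request_addr: int) -> int:
        """Translated cache address: second start address plus the offset."""
        offset = request_addr - self.sys_start
        return self.cache_start + offset

# Example: a 0x400-byte region of system memory mirrored in the cache.
params = RegionParams(sys_start=0x2000, cache_start=0x5000, size=0x400)
assert params.is_hit(0x2100)               # in range -> serviced by the cache
assert params.translate(0x2100) == 0x5100  # second start address + offset
assert not params.is_hit(0x2400)           # out of range -> sent to system memory
```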
[0031] Referring to FIG. 1, a generalized block diagram of one embodiment of data storage 100 is shown. As shown, each of the system memory 110 and the last-level cache 130 stores data. Although the description describes the cache 130 as a last-level cache, in other embodiments the cache 130 is another level within the cache memory subsystem. Processing units, communication interfaces and so forth are not shown for ease of illustration. Data 126 is contiguous data stored in the region 120 of the system memory 110. The last-level cache 130 stores in the region 140 a copy of the contiguous data 126 of the region 120. The region parameters 150 characterize the regions 120 and 140.

[0032] In various designs, the system memory 110 includes one or more of off-package DRAM, hard disk drives (HDDs) and solid-state disks (SSDs). In some designs, the last-level cache 130 includes on-package, low latency, high bandwidth memory separate from the system memory 110. In some designs, the last-level cache 130 includes 3D DRAM. In other designs, the last-level cache 130 includes synchronous RAM (SRAM), embedded DRAM (eDRAM), flash memory such as solid-state disks, or one of a variety of non-volatile memories. Examples of the non-volatile memories are phase-change memory, memristors and spin-transfer torque (STT) magnetoresistive random-access memory (MRAM).

[0033] In the illustrated embodiment, the address 122, which is also referred to as “x”, points to a memory location storing data at the beginning of the region 120. Here, the generic value “x” is any value represented in any manner, such as integer or hexadecimal. The region 120 has a size 124, which is also referred to as “S bytes.” In a similar manner, the address 142, which is also referred to as “a”, points to a memory location storing data at the beginning of the region 140. The region 140 has a size 144, also referred to as “S bytes,” which is equal to the size 124 of the region 120. The values “x”, “S” and “a” are positive integers.

[0034] In some embodiments, sequential elements in a cache controller for the last-level cache 130 store the region parameters 150. Examples of the sequential elements are registers, flip-flop circuits, and latches. In an embodiment, the region parameters 150 include status information 152 such as a valid bit and metadata. Examples of the metadata are identifiers of the producer of the data 126, identifiers of the consumer of the data 126, cache coherency information for the data 126, clean/dirty information for the data 126, and so on. The identifiers of the producer and the consumer include one or more of a processing unit identifier, a process identifier, and a thread identifier. In other embodiments, the region parameters 150 do not include the status information 152, since this information is stored in other queues and sequential elements in the cache controller.

[0035] In an embodiment, the region parameters 150 include two addresses. The first address 154 is a copy of the address 122, which points to a memory location storing data at the beginning of the region 120. The second address 156 is a copy of the address 142, which points to a memory location storing data at the beginning of the region 140. Therefore, the region parameters 150 include a memory mapping between the beginning of the region 120 and the beginning of the region 140. For example, the region parameters 150 currently store a memory mapping between the address 122 (“x”) and the address 142 (“a”). In some embodiments, the region parameters 150 also include the size 158 of the region 140. In an embodiment, logic in the cache controller uses a size value of zero bytes in the size field 158, rather than a valid bit in the status field 152, to indicate that no valid region is stored in the last-level cache.

[0036] Using the region parameters 150, the logic in the cache controller determines a cache hit or a cache miss for the last-level cache 130 with a comparison operation that is faster than a lookup operation in a large tag array. In one example, the logic determines whether a valid region is stored in the last-level cache 130. If a status field 152 is used and the valid bit is negated, then there is no valid region stored in the last-level cache 130. If a status field 152 is not used and the size field 158 stores a value of zero bytes, then there is no valid region stored in the last-level cache 130. In such cases, the logic in the cache controller determines that there is a cache miss, and sends the memory access request with the request address to the system memory 110 for servicing.
Therefore, the logic skips the set-associative lookup operations into a set of the large tag array selected by an index of the request address, which reduces the latency of handling the memory access request.

[0037] If a status field 152 is used and the valid bit is asserted, or if a status field 152 is not used but the size field 158 stores a positive, non-zero integer, then the logic of the cache controller determines that there is a valid region stored in the last-level cache 130. In such a case, the logic in the cache controller determines a range of addresses when the logic determines a change in one or more of the address 122 (“x”) and the size 158 (“S”) of the region 140. The logic determines the range of addresses as beginning at the address 122 (“x”) and ending at an address that is the sum of the address 122 and the size 158 (“S”) of the region 140. Using the notation in the illustrated embodiment, the range of addresses is “x + S”. The logic determines whether the request address is within this range. For example, if the request address is denoted as “y,” then the logic determines whether the expression x ≤ y < (x + S) is true. Therefore, to determine whether there is a cache hit or a cache miss within the last-level cache 130, the logic compares the request address to this range of addresses. The comparison operation is faster than the lookup operations of indexes and tags of the last-level cache 130.

[0038] If the logic determines the access of the last-level cache 130 is a cache miss, then the cache controller sends the memory access request with the request address to the system memory 110 for servicing. However, if the logic determines the access of the last-level cache 130 is a cache hit, then the logic services the memory access request by retrieving data from the last-level cache 130. To do so, the logic determines an offset as the difference between the request address (“y”) and the address 122 (“x”), which is expressed as (y - x). The logic determines a translated address based on the offset (y - x) and the address 142 (“a”), which is the sum of the two values and is expressed as (a + (y - x)). The logic services the memory access request by accessing data from the last-level cache 130 beginning at the translated address, that is, at the address represented by (a + (y - x)). The logic skips the set-associative lookup operations into a set of the large tag array selected by an index of the request address. Rather, after the comparison operation used to determine the cache hit, simple arithmetic operations identify the location storing the requested data in the last-level cache 130.

[0039] Referring now to FIG. 2, one embodiment of a method 200 for efficiently performing memory accesses in a computing system is shown. For purposes of discussion, the steps in this embodiment (as well as in FIGS. 10 and 14) are shown in sequential order. However, it is noted that in various embodiments of the described methods, one or more of the elements described are performed concurrently, in a different order than shown, or are omitted entirely. Other additional elements are also performed as desired. Any of the various systems or apparatuses described herein are configured to implement methods 200, 1000 and 1400.

[0040] One or more processing units execute one or more computer programs, or software applications.
Examples of a processing unit are a processor core within a general-purpose central processing unit (CPU), a graphics processing unit (GPU), or other. In some embodiments, a System-in-Package (SiP) includes the processing unit and an on-package, low latency, high bandwidth memory. One example of such a memory is a 3D integrated memory, such as a 3D DRAM. In an embodiment, the processing unit utilizes at least a portion of the 3D DRAM as a cache. In one embodiment, the cache is a last-level cache. Although the following description describes the low latency, high bandwidth memory as being used as a last-level cache, in other embodiments the high bandwidth memory is used as a first-level (L1), second-level (L2), or other level of the cache memory hierarchy. The processing unit determines that a memory request misses within the cache memory subsystem in the levels below the last-level cache (block 202).

[0041] The processing unit sends a request address corresponding to the memory request to the last-level cache (block 204). In an embodiment, the logic in the cache controller for the last-level cache maintains an identification of a first region of contiguous data in the system memory that has a copy of the contiguous data stored in a second region of the last-level cache. In some embodiments, the identification includes a first start address that identifies the beginning of the first region. Additionally, the identification includes a size of the second region. Logic in the cache controller for the last-level cache determines a range of addresses for this first region, which is a range of addresses within the system memory address space pointing to the memory locations storing the contiguous data in the system memory (block 206). This contiguous data has a copy stored in the last-level cache. In some designs, the logic uses the expressions described earlier in the description of the data storage 100 (of FIG. 1).

[0042] If the request address is not within the range (“no” branch of the conditional block 208), then logic sends the memory request, including the request address, to system memory (block 210). The access of the last-level cache for the memory request is considered a cache miss, and accordingly the memory request is sent to a lower level of the memory subsystem, such as the system memory. If the request address is within the range (“yes” branch of the conditional block 208), then logic determines an offset based on the difference between the request address and the start address of the range in system memory (block 212). Logic determines a translated address based on the offset and the start address of the range in the last-level cache (block 214). For example, the translated address is the sum of the offset and the start address of the range in the last-level cache. Logic services the memory request by accessing data from the last-level cache beginning at the translated address (block 216).

[0043] Turning now to FIG. 3, a generalized block diagram of one embodiment of a computing system 300 utilizing a low-latency, high-bandwidth cache is shown. In various embodiments, the computing system 300 utilizes three-dimensional (3D) packaging such as the System in Package (SiP) 310. The SiP 310 is connected to a memory 362 and off-package DRAM 370 via a memory bus 350. In one embodiment, the computing system 300 is a stand-alone system within a mobile computer, a smart phone, or a tablet; a desktop; a server; or other.
The SiP 310 uses the processing unit 320 and a low-latency, high-bandwidth cache 330. The processing unit 320 and the cache 330 communicate through a low-latency interconnect 348. The in-package low-latency interconnect 348 uses horizontal and/or vertical routes that are shorter than the long off-chip interconnects required when a SiP is not used.

[0044] Although, in some embodiments, the SiP 310 utilizes DRAM memory technology, such as 3D DRAM, other memory technologies that use a low latency, high bandwidth and row-based access scheme, including one or more row buffers or other equivalent structures, are possible and contemplated. Examples of other memory technologies are phase-change memories, spin-torque-transfer resistive memories, memristors, embedded DRAM (eDRAM), and so forth. In some designs, the processing unit 320 is a general-purpose microprocessor, whereas in other designs the processing unit 320 is another type of processing unit. Other types of processing units include a graphics processing unit (GPU), a field programmable gate array (FPGA), and an accelerated processing unit (APU), which is a chip that includes additional processing capability. This additional processing capability accelerates one or more types of computations outside of a general-purpose CPU. In one embodiment, an APU includes a general-purpose CPU integrated on the same die with a GPU, an FPGA, or another processing unit, thus improving data transfer rates between these units while reducing power consumption. In other embodiments, an APU includes video processing and other application-specific accelerators.

[0045] The execution engine 322 uses one or more processor cores based on the type of the processing unit 320. Additionally, in some designs, the execution engine 322 uses a communication fabric (or “fabric”) for transferring communication messages. Examples of communication messages are coherency probes, interrupts, and read and write access commands and corresponding data. Examples of interconnections in the fabric are bus architectures, crossbar-based architectures, network-on-chip (NoC) communication subsystems, communication channels between dies, silicon interposers, and through-silicon vias (TSVs). In many designs, the processing unit 320 incorporates a system bus controller in the interface logic 326 that utilizes one of various protocols to connect the processor cores of the execution engine 322 to the memory 362, the DRAM 370, peripheral input/output (I/O) devices and other processing units.

[0046] The computing system 300 uses the off-package memory 362 as main memory, or system memory. The memory 362 is one of hard disk drives (HDDs) and solid-state disks (SSDs). The off-package DRAM 370 is one of a variety of types of DRAM. The computing system 300 fills the off-package DRAM 370 with data from the off-package memory 362 through the I/O controller and bus 360 and the memory bus 350. The interface logic 360 supports communication protocols, address formats and packet formats for each of the off-package memory 362 and the off-package DRAM 370.

[0047] Each of the processor cores within the execution engine 322 uses one or more levels of a cache memory subsystem to reduce memory latency for the processor core. In some designs, the processor cores additionally access a shared cache within the execution engine 322.
When the cache memory subsystem within the execution engine 322 does not include data requested by a processor core, the execution engine 322 sends the memory access request to the in-package cache 330. The interface logic 340 supports communication protocols, address formats and packet formats for transferring information between the in-package cache 330 and the processing unit 320.

[0048] Similar to other DRAM topologies, in some designs the in-package cache 330 uses multiple memory arrays 332 that are segmented into multiple banks. In such cases, each one of the banks includes a respective row buffer. Each one of the row buffers stores data in an accessed row of the multiple rows within the corresponding memory array bank. In some embodiments, the functionality of the queues 342, the region parameters 344, and the portion of the logic 346 that uses the region parameters 344 is located in the logic 336. For example, this functionality is included in a cache controller for the in-package cache 330. In other embodiments, this functionality is located in the interface logic 340, as shown. Each of the logic 336 and the logic 346 is implemented by software, by hardware such as circuitry used for combinatorial logic and sequential elements, or by a combination of software and hardware.

[0049] When the interface logic 340 receives a memory access request from the execution engine 322, the logic 346 stores the received memory access request in one of the multiple queues 342 based on access type. For example, a first queue of the queues 342 stores memory read requests and a second queue of the queues 342 stores memory write requests. Arbitration logic within the logic 346 selects a queue of the multiple queues 342 and selects a memory access request from the selected queue. For the selected memory access request, the logic 346 determines a range of addresses corresponding to a first region of system memory, such as the region 372, with a copy of the data stored in a second region of the in-package cache 330, such as the region 338. The system memory is implemented by the combination of the off-package memory 362 and the off-package DRAM 370.

[0050] The logic 346 sends the selected memory access request to system memory when the logic 346 determines that the request address of the memory access request is not within the range of addresses for the region 372. The cache controller services the selected memory request by accessing data from the memory arrays 332 within the in-package cache 330 when the logic 346 determines that the request address is within the range of addresses for the region 372. The logic 346 uses the region parameters 344 for these determinations. In various embodiments, the region parameters 344 are equivalent to the region parameters 150 (of FIG. 1).

[0051] The logic 346 uses one of a variety of techniques for determining when to store a copy of the region 372 of the off-package system memory as the region 338 in the in-package cache 330. In some embodiments, the logic 346 monitors memory accesses from the execution engine 322 to detect a streaming or sequential memory access pattern. The logic 346 uses one of a variety of techniques to detect streaming access patterns, such as at least the techniques used by hardware prefetchers. When the logic 346 detects a streaming pattern, the logic 346 defines a new region. In some embodiments, when a memory request accesses an address that is within L bytes of the end of the region 338, one of the logic 336 and the logic 346 extends the size of the region 338 by P bytes, where L and P are positive, non-zero integers. In an embodiment, the values of L and P are stored in programmable registers in the control and status registers (CSRs) 347. In some embodiments, an initial region size is also stored in a programmable register in the CSRs 347.
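As an illustration of this growth heuristic, the following sketch grows a region when an access lands within L bytes of its end. The function name and the default values are hypothetical; L_BYTES and P_BYTES merely stand in for the programmable CSR values L and P.

```python
# Illustrative sketch of the region-growth heuristic: when an access lands
# within L bytes of the end of the region, grow the region by P bytes.
# All names and default values here are hypothetical, not from the disclosure.

def maybe_extend(sys_start: int, size: int, request_addr: int,
                 l_bytes: int = 256, p_bytes: int = 4096,
                 cache_capacity: int = 1 << 30) -> int:
    """Return the (possibly grown) region size after an access."""
    region_end = sys_start + size
    in_region = size > 0 and sys_start <= request_addr < region_end
    if in_region and region_end - request_addr <= l_bytes:
        # CSR-programmed limits keep the region within the cache capacity.
        return min(size + p_bytes, cache_capacity)
    return size

# An access 100 bytes before the end of a 4 KB region triggers growth.
assert maybe_extend(0x2000, 4096, 0x2000 + 4096 - 100) == 8192
```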
[0052] In other embodiments, the logic 346 uses software hints to determine when to define and create the region 338 in the in-package cache 330. Software uses particular instructions to update certain registers accessed by the application or the operating system. In addition, the software is capable of updating one or more control and status registers (CSRs) 347 in the interface logic 340. When processing a deep neural network, the software application is aware of when it finishes processing one layer of a multi-layer neural network and when it moves on to the next layer. As each layer of the multi-layer network is traversed (whether forward or backward), the software application uses software hints to inform one or more of the logic 346 and the CSRs 347 of the current region in the system memory that is being processed. In some embodiments, the software application provides hints indicating when to increase or decrease the sizes of the regions 372 and 338. The hints also indicate whether to change the sizes from the left end or the right end of the regions 372 and 338.

[0053] Additionally, the software hints indicate when to change the entire content of the regions 372 and 338 by moving to another region in the system memory. Limits stored in the CSRs 347 prevent the region 338 from exceeding the size of the in-package cache 330. In some embodiments, if the logic 346 has already defined a region, the logic 346 selects between supporting the existing region and a new region based on criteria. Examples of the criteria are the size of the region, the priority level of the application accessing the data of the regions, the age of the existing region, and so forth.

[0054] During the execution of one or more software applications, the applications modify the contents of the region 338. If the logic 346 adjusts the size or the placement of the region 338 such that a modified portion of the region 338 is no longer valid, then one of the logic 336 and the logic 346 sends the modified data to the off-package DRAM 370. In some designs, one of the logic 336 and the logic 346 controls the in-package cache 330 with a write-through cache policy or a write-back cache policy. The write-through cache policy spreads the write operations to the off-package DRAM 370 over time. In contrast, the write-back cache policy delays the write operations until the size of the region 338 is reduced. At that time, the logic 336 or the logic 346 sends the write operations for the modified data to the off-package DRAM 370 in a burst of write traffic. In other designs, one of the logic 336 and the logic 346 controls the in-package cache 330 with a combination of a write-through policy and a write-back policy to trade off the benefits and the costs of the two policies.
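A minimal sketch of the write-back variant follows, under the simplifying assumption that modified locations are tracked as a set of dirty addresses; the function name shrink_region and this tracking scheme are illustrative, not details from the disclosure.

```python
# Illustrative write-back behavior when a region shrinks: modified data that
# falls outside the new extent is written back to DRAM in a burst.
# The per-address dirty tracking is a simplification for illustration.

def shrink_region(sys_start: int, size: int, new_size: int, dirty: set):
    """Shrink a region from the right; return addresses to write back to DRAM."""
    writeback = sorted(a for a in dirty
                       if sys_start + new_size <= a < sys_start + size)
    dirty.difference_update(writeback)   # those locations are no longer cached
    return writeback                     # burst of write traffic to DRAM

# Shrinking an 8-byte region to 4 bytes flushes dirty addresses 0x2004-0x2007.
assert shrink_region(0x2000, 8, 4, {0x2002, 0x2005}) == [0x2005]
```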
[0055] As described earlier, in some designs the in-package cache 330 uses low latency, high bandwidth memory technologies such as SRAM, phase-change memories, spin-torque-transfer resistive memories, memristors, embedded DRAM (eDRAM), and so forth. In other designs, the in-package cache 330 uses low latency, high bandwidth 3D DRAM. Turning now to FIG. 4, generalized block diagrams of embodiments of systems-in-package (SiP) 400 and 440 are shown. The illustrated SiPs include one or more three-dimensional integrated circuits (3D ICs). A 3D IC includes two or more layers of active electronic components integrated both vertically and/or horizontally into a single circuit. In some designs, fabrication uses interposer-based integration, whereby the 3D IC is placed next to the processing unit 420. Alternatively, a 3D IC is stacked directly on top of another IC.

[0056] Die-stacking technology is a fabrication process that enables the physical stacking of multiple separate pieces of silicon (integrated chips) together in the same package with high-bandwidth and low-latency interconnects. The dies are stacked side by side on a silicon interposer, or vertically directly on top of each other. One configuration for the SiP is to stack one or more DRAM chips next to and/or on top of a processing unit. The stacked DRAM chips provide a very large cache for the processing unit. In some designs, this large cache has a size on the order of several hundred MB (or more).

[0057] As shown, in one embodiment the SiP 400 includes a processing unit 420 and one or more three-dimensional (3D) DRAMs 430 and 432 that communicate with the processing unit 420 through a horizontal low-latency interconnect 410. Again, the processing unit 420 is one of a general-purpose CPU, a graphics processing unit (GPU), an accelerated processing unit (APU), a field programmable gate array (FPGA), or another data processing device that makes use of a row-based memory, such as a cache.

[0058] The in-package horizontal low-latency interconnect 410 provides reduced interconnect signal lengths versus the long off-chip interconnects used when a SiP is not employed. The in-package horizontal low-latency interconnect 410 uses particular signals and protocols as if the chips, such as the processing unit 420 and the 3D DRAMs 430 and 432, were mounted in separate packages on a circuit board. The SiP 400 additionally includes backside vias or through-bulk silicon vias 412 that reach to package external connections 414. The package external connections 414 are used for input/output (I/O) signals and power signals.

[0059] In another embodiment, the SiP 440 includes a 3D DRAM 450 stacked directly on top of the processing unit 420. Although not shown, for each of the SiP 400 and the SiP 440, multiple chips, or device layers, are stacked on top of one another with direct vertical interconnects 416 tunneling through them. The size and density of the vertical interconnects 416 that can tunnel between the different device layers vary based on the underlying technology used to fabricate the 3D ICs.

[0060] Referring to FIG. 5, a generalized block diagram of one embodiment of data storage 500 is shown. Circuitry and logic previously described are numbered identically. As shown, each of the system memory 110 and the last-level cache 130 stores data. Again, although the description describes the cache 130 as a last-level cache, in other embodiments the cache 130 is another level within the cache memory subsystem. Data 126 is contiguous data stored in the system memory 110. Data 526 is contiguous data added to the region in the system memory 110 to create the region 520.
The size of contiguous data in the system memory 110 copied as a region grew from the size 124 to the size 524.

[0061] The last-level cache 130 stores in the region 540 a copy of the contiguous data 126 and data 526 of the region 520. Accordingly, the size of contiguous data in the last-level cache 130 maintained as a region grew from the size 144 to the size 544. The region parameters 150 characterize the regions 520 and 540. The start addresses 122 and 142 remain the same, so the fields 154 and 156 remain unchanged in the region parameters 150. However, logic updates the size field 158 to an increased amount. In the example, the logic updates the size field 158 from S bytes to S+T bytes, where S and T are positive, non-zero integers.

[0062] Turning now to FIG. 6, a generalized block diagram of one embodiment of data storage 600 is shown. Circuitry and logic previously described are numbered identically. Data 126 is contiguous data stored in the system memory 110. Data 626 is contiguous data added to the region in the system memory 110 to create the region 620. The size of contiguous data in the system memory 110 copied as a region grew from the size 124 to the size 624.

[0063] The last-level cache 130 stores in the region 640 a copy of the contiguous data 126 and data 626 of the region 620. Accordingly, the size of contiguous data in the last-level cache 130 maintained as a region grew from the size 144 to the sum of the size 644 and the size 646. The contiguous data wrapped around the last-level cache 130. The region parameters 150 characterize the regions 620 and 640. The start addresses 122 and 142 remain the same, so the fields 154 and 156 remain unchanged in the region parameters 150. However, logic updates the size field 158 to an increased amount. In the example, the logic updates the size field 158 from S bytes to S+T+U bytes, where S, T and U are positive, non-zero integers.

[0064] The access of a wrap-around region alters the calculation of the translated address for the last-level cache 130. In one example, the region 620 uses the address space 2,000 to 2,700, where the addresses are expressed as digits. The entire last-level cache 130 uses the address space 5,000 to 6,000, where the addresses are also expressed as digits. The region 640 uses the address space from 5,800 wrapped around to 5,500. When a received memory request uses the request address 2,400, logic determines that the offset is (2,400 - 2,000), or 400. The logic adds the offset to the region start address of 5,800 to obtain (5,800 + 400), or 6,200. This value exceeds the limit of the region 640. In response, the logic determines the difference, which is (6,200 - 6,000), or 200. The logic adds the difference to the base address of the last-level cache 130 to obtain (5,000 + 200), or 5,200. The translated address is 5,200, and the logic uses the translated address 5,200 to access data from the last-level cache 130 in order to service the memory request.
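The wrap-around arithmetic of this worked example can be sketched as follows; translate_wrapped and its parameter names are illustrative, and the assertion reproduces the numbers above.

```python
# Illustrative sketch of wrap-around address translation for a region that
# wraps past the end of the cache; all names here are illustrative only.

def translate_wrapped(request_addr: int, region_sys_start: int,
                      region_cache_start: int, cache_base: int,
                      cache_limit: int) -> int:
    """Translate a request address, wrapping past the end of the cache."""
    offset = request_addr - region_sys_start
    translated = region_cache_start + offset
    if translated > cache_limit:              # past the largest cache address
        translated = cache_base + (translated - cache_limit)
    return translated

# The worked example from the description: region 620 spans system addresses
# 2,000-2,700; the cache spans 5,000-6,000; region 640 starts at 5,800.
assert translate_wrapped(2_400, 2_000, 5_800, 5_000, 6_000) == 5_200
```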
[0065] Referring to FIG. 7, a generalized block diagram of one embodiment of data storage 700 is shown. As with data storage 500 and 600 and the upcoming data storage 800-900 and 1300, circuitry and logic previously described are numbered identically. As shown, each of the system memory 110 and the last-level cache 130 stores data. Although the description describes the cache 130 as a last-level cache, in other embodiments the cache 130 is another level within the cache memory subsystem. Data 126 is contiguous data stored in the system memory 110. Data 726 is contiguous data added to the region in the system memory 110 to create the region 720. The size of contiguous data in the system memory 110 copied as a region grew from the size 124 to the size 724. The increase is in the left direction, rather than the right direction. Accordingly, the address pointing to the memory location storing data at the beginning of the region 720 is the address 722 (“x2”), rather than the address 122 (“x1”).

[0066] The last-level cache 130 stores in the region 740 a copy of the contiguous data 126 and data 726 of the region 720. Accordingly, the size of contiguous data in the last-level cache 130 maintained as a region grew from the size 144 to the size 744. The increase is in the left direction, rather than the right direction. Accordingly, the address pointing to the memory location storing data at the beginning of the region 740 is the address 742 (“a2”), rather than the address 142 (“a1”). The region parameters 150 characterize the regions 720 and 740. The start addresses 122 and 142 change, and the fields 154 and 156 reflect the changes in the region parameters 150. Logic also updates the size field 158 to an increased amount. In the example, the logic updates the size field 158 from S bytes to S+V bytes, where S and V are positive, non-zero integers.

[0067] Referring to FIG. 8, a generalized block diagram of one embodiment of data storage 800 is shown. Data 126 is contiguous data stored in the system memory 110. Data 826 is contiguous data added to the region in the system memory 110 to create the region 820. The size of contiguous data in the system memory 110 copied as a region grew from the size 124 to the size 824. The increase is in the left direction, rather than the right direction. Accordingly, the address pointing to the memory location storing data at the beginning of the region 820 is the address 822 (“x2”), rather than the address 122 (“x1”).

[0068] The last-level cache 130 stores in the region 840 a copy of the contiguous data 126 and data 826 of the region 820. Accordingly, the size of contiguous data in the last-level cache 130 maintained as a region grew from the size 144 to the sum of the size 844 and the size 846. The contiguous data wrapped around the last-level cache 130. The increase is in the left direction, rather than the right direction. Accordingly, the address pointing to the memory location storing data at the beginning of the region 840 is the address 842 (“a2”), rather than the address 142 (“a1”). The region parameters 150 characterize the regions 820 and 840. The start addresses 122 and 142 change, and the fields 154 and 156 reflect the changes in the region parameters 150. Logic also updates the size field 158 to an increased amount. In the example, the logic updates the size field 158 from S bytes to S+V+W bytes, where S, V and W are positive, non-zero integers. The access of a wrap-around region alters the calculation of the translated address for the last-level cache 130. Logic uses a computation as described earlier for data storage 600.

[0069] Referring to FIG. 9, a generalized block diagram of one embodiment of data storage 900 is shown. Data 126 is contiguous data stored in the system memory 110. Data 926 is contiguous data removed from the region in the system memory 110 to create the region 920. The size of contiguous data in the system memory 110 copied as a region was reduced from the size 124 to the size 924. The decrease is in the right direction, rather than the left direction.
[0069] Referring to FIG. 9, a generalized block diagram of one embodiment of data storage 900 is shown. Data 126 is contiguous data stored in the system memory 110. Data 926 is contiguous data removed from the region in the system memory 110 to create the region 920. The size of contiguous data in the system memory 110 copied as a region was reduced from the size 124 to the size 924. The decrease is in the right direction, rather than the left direction. Accordingly, the address pointing to the memory location storing data at the beginning of the region 920 is the address 922 (“x2”), rather than the address 122 (“x1”).

[0070] The last-level cache 130 stores in the region 940 a copy of the contiguous data 126 of the region 920. Accordingly, the size of contiguous data in the last-level cache 130 maintained as a region was reduced from the size 144 to the size 944. The decrease is in the right direction, rather than the left direction. Accordingly, the address pointing to the memory location storing data at the beginning of the region 940 is the address 942 (“a2”), rather than the address 142 (“a1”). The region parameters 150 characterize the regions 920 and 940. The start addresses 122 and 142 change, and the fields 154 and 156 indicate the changes in the region parameters 150. Logic also updates the size field 158 to a decreased amount. In the example, the logic updates the size field 158 from S bytes to S-U bytes, where S and U are positive, non-zero integers. It is noted that if the decrease in the sizes of the regions occurred at the end of the regions and in the left direction, rather than the right direction, then the addresses 122 and 142 would remain the same while the size field 158 would still be updated.

[0071] Referring now to FIG. 10, one embodiment of a method 1000 for performing memory accesses in a computing system is shown. Logic monitors memory access patterns and/or receives software hints of data accesses (block 1002). As described earlier, software techniques with particular instructions used as hints, hardware techniques such as those used by hardware prefetchers, or a combination of the two are used to determine when to begin defining a region of memory.

[0072] If logic does not predict a region of upcoming data accesses (“no” branch of the conditional block 1004), then control flow of method 1000 returns to block 1002, where logic monitors memory access patterns and/or receives software hints. If logic predicts a region of upcoming data accesses (“yes” branch of the conditional block 1004), then the logic initializes parameters characterizing the region of predicted upcoming data accesses (block 1006). For example, the logic stores a start address for the region of upcoming data accesses in the system memory, and stores a start address for this region in the last-level cache. Additionally, the logic stores a region size for this region. In some embodiments, an initial region size is provided in a programmable register of multiple control and status registers. In some designs, the initial size is between the fine granularity of a cache line size (e.g., 64 bytes) and a page size (e.g., 4 kilobytes or more).

[0073] Logic stores a copy of contiguous data from the region of upcoming data accesses in the system memory into the last-level cache. For example, the logic stores a copy of data from a first region of system memory into a second region of a last-level cache (block 1008), where each of the first region and the second region corresponds to the region of the predicted upcoming data accesses in the system memory. Logic services memory requests targeting the first region by accessing data from the second region (block 1010).
If logic determines the second region changes size (“yes” branch of the conditional block 1012), then logic updates the parameters characterizing the region to indicate the size change (block 1014).

[0074] If logic determines the second region does not change in size (“no” branch of the conditional block 1012), and accesses of the second region have not completed (“no” branch of the conditional block 1016), then control flow of method 1000 returns to block 1010. In block 1010, logic services memory requests targeting the first region by accessing data from the second region. If the accesses of the second region have completed (“yes” branch of the conditional block 1016), then logic updates the parameters characterizing the region to indicate there is no region (block 1018). Afterward, control flow of method 1000 returns to block 1002, where logic monitors memory access patterns and/or receives software hints.

[0075] Referring to FIG. 11, a generalized block diagram of one embodiment of data storage 1100 is shown. Each of system memory 1110 and a last-level cache 1130 store data. Processing units, communication interfaces and so forth are not shown for ease of illustration. Although the description describes the cache 1130 as a last-level cache, in other embodiments, the cache 1130 is another level within the cache memory subsystem. Data 1120-1128 are contiguous data stored in the system memory 1110. The last-level cache 1130 stores a copy of a portion of the contiguous data 1120-1128 at different points in time.

[0076] In one design example, the weights of a large (deep) neural network are stored in system memory 1110, such as off-package DRAM. The weights, such as data 1120-1128, are too large to fit in the last-level cache 1130, such as in-package 3D DRAM. During training of the neural network, the weights are evaluated by a processing unit executing a software application. From points in time t1 to t7 (or times t1 to t7), the size and content of the region stored in the last-level cache 1130 change. The data 1120 corresponds to a first layer of weights of a multi-layer neural network. The data 1122 corresponds to a second layer of weights of the multi-layer neural network, and so on.

[0077] Initially, the data 1120 is copied into the last-level cache 1130 at time t0 (not shown). At the later time t1, the data 1122 is added to the region stored in the last-level cache 1130. Similarly, at times t2 and t3, the data 1124 and the data 1126 are added to the region stored in the last-level cache 1130. As the evaluation of the neural network proceeds by inference or forward propagation, the region in the last-level cache 1130 expands in order to store the weights. The accessing of the neural network’s weights proceeds in a regular, predictable manner. Therefore, the region in the last-level cache 1130 is increased sufficiently ahead of the evaluation of the weights. As described earlier, programmable registers of CSRs 347 (of FIG. 3) store the parameters L and P to indicate when and by how much to change the size of the region stored in the last-level cache. Accordingly, the processing unit accesses the weights in the in-package last-level cache, rather than in the off-package DRAM.
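A minimal sketch of this lookahead policy follows, assuming the CSR parameters L and P are exposed as byte counts. The helper name, the polling structure, and the omitted copy step are illustrative assumptions rather than the disclosure's implementation; struct region is as defined in the earlier sketch.

    /* Grow the region by P bytes whenever the consumer comes within L
     * bytes of its end, so the next layer of weights is resident in the
     * last-level cache before the processing unit reads it. */
    static void maybe_grow(struct region *r, uint64_t next_access,
                           uint64_t L, uint64_t P)
    {
        uint64_t region_end = r->mem_start + r->size;
        if (next_access + L >= region_end) {
            /* copy the next P bytes of weights from off-package DRAM
             * into the cache region (copy omitted), then publish the
             * new size, S -> S+P */
            r->size += P;
        }
    }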
[0078] At time t3, the entire last-level cache 1130 is filled. At this time, logic in the cache controller or in the processing unit adjusts the size of the region by decreasing the size from the left. At time t4, the logic removes the data 1120 from the region in the last-level cache 1130. The logic updates the region parameters accordingly. Due to the nature of the software application performing the training of the weights, once the processing unit evaluates a given layer, the corresponding weights are not needed again for the current inference or forward propagation. Therefore, in some designs, the given layer of weights is removed from the region of the last-level cache 1130. At time t5, data 1126 is added to the region of the last-level cache 1130. The region wraps around the last-level cache 1130. The access of a wrap-around region alters the calculation of the translated address for the last-level cache 1130. Logic uses a computation as described earlier for data storage 600.

[0079] At time t6, the logic removes the data 1122 from the region in the last-level cache 1130. The logic updates the region parameters accordingly. At time t7, the logic adds data 1128 to the region of the last-level cache 1130. After the processing unit has processed the last layer of the neural network, the processing unit generates a final output. The processing unit typically compares this final output against an expected value to compute an error or loss. The training of the neural network then continues with a backward propagation phase. During the backward propagation, the processing unit processes the layers of the multi-layered neural network in reverse order. Logic allocates and deallocates the region of the last-level cache 1130 in a manner that supports the reverse order.

[0080] Referring to FIG. 12, a generalized block diagram of one embodiment of data storage 1200 is shown. The system memory 1210 stores data. Processing units, communication interfaces and so forth are not shown for ease of illustration. The system memory 1210 stores data of multiple regions. Examples of the regions include a first region 1220 pointed to by the address 1212 (“w0”), a second region 1222 pointed to by the address 1214 (“x0”), a third region 1224 pointed to by the address 1216 (“y0”), and a fourth region 1226 pointed to by the address 1218 (“z0”).

[0081] In this example, a software application performs a stencil-like calculation where each element in an output vector stored in the region pointed to by the address 1218 (“z0”) is a sum of elements in the other vectors pointed to by the addresses 1212-1216 (“w0”-“y0”). For example, if the output vector is represented as vector “d”, and each of the vectors in the other regions is represented as “a” to “c”, then the value of the element d[i] of the vector d is a[i-1] + a[i] + a[i+1] + b[i-1] + b[i] + b[i+1] + c[i-1] + c[i] + c[i+1]. The adder 1230 sums the values of the elements of the input vectors to generate an element in the output vector. In many cases, none of the input vectors fit within an in-package cache. However, each region within the in-package cache is capable of storing an active portion of each respective vector. As the calculation proceeds, each of the regions is updated to maintain an active portion of each vector. An example of such a scheme is shown in the following description.
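The stencil arithmetic of the example above translates directly into C; the sketch below is illustrative and simply restates the stated sum for the interior elements of the vectors.

    /* d[i] = a[i-1]+a[i]+a[i+1] + b[i-1]+b[i]+b[i+1] + c[i-1]+c[i]+c[i+1],
     * computed for the interior elements of vectors of length n. */
    static void stencil_sum(const int *a, const int *b, const int *c,
                            int *d, int n)
    {
        for (int i = 1; i + 1 < n; i++) {
            d[i] = a[i-1] + a[i] + a[i+1]
                 + b[i-1] + b[i] + b[i+1]
                 + c[i-1] + c[i] + c[i+1];
        }
    }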
[0082] Referring to FIG. 13, a generalized block diagram of one embodiment of data storage 1300 is shown. Circuitry and logic previously described are numbered identically. The last-level cache (LLC) 1330 stores a copy of data stored in the system memory 1210. Although the description describes the cache 1330 as a last-level cache, in other embodiments, the cache 1330 is another level within the cache memory subsystem. Processing units, communication interfaces and so forth are not shown for ease of illustration. The last-level cache 1330 stores data of multiple regions. Examples of the regions include a first region pointed to by the address 1332 (“a0”) with a size 1334 (“S bytes”), a second region pointed to by the address 1336 (“b0”) with a size 1338 (“T bytes”), a third region pointed to by the address 1340 (“c0”) with a size 1342 (“U bytes”), and a fourth region pointed to by the address 1344 (“d0”) with a size 1346 (“V bytes”).

[0083] The table 1350 stores region parameters for the regions stored in the last-level cache 1330. In many designs, the fields 1352-1358 are equivalent to the fields 152-158 of the region parameters 150 (of FIG. 1). Here, the table 1350 supports multiple separate regions, rather than a single region. In the illustrated embodiment, the table 1350 includes four valid rows (entries) for supporting the four regions in the last-level cache 1330. Although four regions and entries are shown, any number of entries and regions can be used in other examples. To support the multiple regions, logic maintains the information in the table 1350 to ensure that the multiple regions grow, shrink and wrap around the last-level cache 1330 without overrunning one another. For each memory access of the last-level cache 1330, logic compares the requested address against each valid, supported region in the last-level cache 1330. In various designs, the table 1350 stores information in a fully-associative manner: the requested address is checked against all N sets of region definition registers (analogous to a fully-associative cache structure).

[0084] Referring now to FIG. 14, one embodiment of a method 1400 for performing memory accesses in a computing system is shown. One or more processing units execute one or more computer programs, or software applications. Examples of a processing unit are a processor core within a CPU, a GPU, or other processing circuitry. In some embodiments, a System-in-Package (SiP) includes the processing unit and high bandwidth memory. One example of the high bandwidth memory is a 3D integrated memory, such as a 3D DRAM. In an embodiment, the processing unit utilizes at least a portion of the 3D DRAM as a cache. The processing unit determines that a memory request misses within the cache memory subsystem in levels lower than the last-level cache (block 1402). In various embodiments, the processing unit utilizes at least a portion of the high bandwidth memory as a last-level cache. The processing unit sends an address corresponding to the memory request to the last-level cache (block 1404).
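Blocks 1406 through 1418, described next, amount to a fully-associative check of the request address against every valid region entry. The sketch below reuses struct region and translate() from the earlier sketches; the table layout and the valid-bit array are assumptions made for illustration.

    #include <stdbool.h>

    #define N_REGIONS 4   /* four valid entries in the illustrated table 1350 */

    /* Compare the request address against all N region definition
     * registers. On a hit, return the translated cache address; on a
     * miss, the caller forwards the request to system memory. */
    static bool region_lookup(const struct region table[N_REGIONS],
                              const bool valid[N_REGIONS],
                              uint64_t req_addr,
                              uint64_t cache_base, uint64_t cache_limit,
                              uint64_t *cache_addr)
    {
        for (int i = 0; i < N_REGIONS; i++) {
            if (valid[i] &&
                req_addr >= table[i].mem_start &&
                req_addr <  table[i].mem_start + table[i].size) {
                *cache_addr = translate(&table[i], req_addr,
                                        cache_base, cache_limit);
                return true;    /* service from the last-level cache */
            }
        }
        return false;           /* send the request to system memory */
    }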
[0085] Logic selects a range from one or more address ranges within a system memory address space whose data is stored in the last-level cache (block 1406). If logic determines the request address is not within the selected range (“no” branch of the conditional block 1408), and the last range has not been reached (“no” branch of the conditional block 1410), then control flow of method 1400 returns to block 1406. In block 1406, logic selects another range of the one or more address ranges. If logic determines the request address is not within the selected range (“no” branch of the conditional block 1408), and the last range has been reached (“yes” branch of the conditional block 1410), then logic sends the memory request, including the request address, to system memory (block 1412).

[0086] If logic determines the request address is within the selected range (“yes” branch of the conditional block 1408), then logic determines an offset based on a difference between the address and a start address of the range in system memory (block 1414). Logic determines a translated address based on the offset and a start address of the range in the last-level cache (block 1416). Logic services the memory request by accessing data from the last-level cache beginning at the translated address (block 1418).

[0087] In various embodiments, program instructions of a software application are used to implement the methods and/or mechanisms previously described. The program instructions describe the behavior of hardware in a high-level programming language, such as C. Alternatively, a hardware design language (HDL) is used, such as Verilog. The program instructions are stored on a non-transitory computer readable storage medium. Numerous types of storage media are available. The storage medium is accessible by a computing system during use to provide the program instructions and accompanying data to the computing system for program execution. The computing system includes at least one or more memories and one or more processors that execute program instructions.

[0088] It should be emphasized that the above-described embodiments are only non-limiting examples of implementations. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
The dimensions of mask patterns, such as pitch-multiplied spacers, are controlled by growing features in the patterns after the patterns are formed. To form a pattern of pitch-multiplied spacers 175a, a pattern of mandrels is first formed overlying a semiconductor substrate 110. Spacers are then formed on sidewalls of the mandrels by depositing a blanket layer of material over the mandrels and preferentially removing spacer material from horizontal surfaces. The mandrels are then selectively removed, leaving behind a pattern of freestanding spacers. The spacers comprise a material, such as polysilicon or amorphous silicon, known to increase in size upon being oxidized. The spacers are oxidized to grow them to a desired width 95. After reaching the desired width, the spacers 175a can be used as a mask to pattern underlying layers 150 and the substrate 110. Advantageously, because the spacers 175a are grown by oxidation, thinner blanket layers can be deposited over the mandrels, thereby allowing the deposition of more conformal blanket layers and widening the process window for spacer formation.
WE CLAIM:

1. A method for fabricating an integrated circuit, comprising: providing a substrate having an overlying mask layer, the mask layer comprising mask material and openings, the mask material and openings forming a pattern; oxidizing the mask material; and transferring the pattern to the substrate after oxidizing the mask material.

2. The method of Claim 1, wherein transferring the pattern comprises etching the substrate through the openings in the mask layer.

3. The method of Claim 1, wherein the mask layer comprises polysilicon or amorphous silicon.

4. The method of Claim 3, wherein oxidizing the mask layer comprises forming silicon oxide.

5. The method of Claim 3, wherein oxidizing the mask layer comprises partially oxidizing the mask layer.

6. The method of Claim 1, wherein oxidizing the mask layer comprises enlarging the mask material to a desired width corresponding to a desired critical dimension of a feature in the integrated circuit.

7. The method of Claim 6, wherein the desired critical dimension is a width of conductive interconnects in the integrated circuit.

8. The method of Claim 1, wherein the substrate comprises a plurality of layers of different materials.

9. The method of Claim 8, wherein transferring the pattern to the substrate comprises employing a different etch chemistry for each of the plurality of layers.

10. The method of Claim 1, wherein the substrate is an insulator.

11. The method of Claim 10, wherein transferring the pattern to the substrate defines conductive lines of an array of a memory device.

12. The method of Claim 1, wherein providing a substrate comprises forming a pattern of spacers by pitch multiplication, wherein the mask material comprises the spacers.

13. A process for forming an integrated circuit, comprising: providing a pattern comprising a plurality of mask lines in a mask layer overlying a substrate, the mask lines comprising a precursor material; and growing the mask lines to a desired width by chemically reacting the precursor material to form a chemical compound occupying a larger volume than the precursor material.

14. The process of Claim 13, wherein growing the mask lines comprises performing a thermal oxidation.

15. The process of Claim 13, wherein the plurality of mask lines is formed by pitch multiplication.

16. The process of Claim 13, wherein the mask lines comprise silicon.

17. The process of Claim 13, wherein an amorphous carbon is disposed between the mask layer and the substrate.

18. The process of Claim 13, further comprising transferring the pattern to a hard mask layer between the mask layer and the substrate after growing the mask lines.

19. The process of Claim 18, wherein the hard mask layer comprises aluminum oxide.

20. The process of Claim 19, wherein transferring the pattern to a hard mask layer comprises etching the hard mask layer with a BCl3/Cl2 plasma.

21. The process of Claim 18, further comprising selectively removing the mask layer relative to the hard mask layer after transferring the pattern to a hard mask layer.

22. The process of Claim 18, further comprising transferring the pattern to an additional mask layer between the hard mask layer and the substrate after transferring the pattern to a hard mask layer.

23. The process of Claim 22, wherein the additional mask layer comprises amorphous carbon.

24. The process of Claim 13, wherein the desired width corresponds to a critical dimension of conductive lines to be formed in the substrate.
25. A process for forming an integrated circuit, comprising: providing a patterned mask layer overlying a substrate, the mask layer comprising a precursor material; chemically reacting the precursor material to form an etch stop material; and subsequently transferring the pattern in the mask layer to an underlying layer.

26. The process of Claim 25, wherein chemically reacting enlarges a volume of the precursor material.

27. The process of Claim 26, wherein chemically reacting comprises performing a thermal oxidation.

28. The process of Claim 25, wherein the patterned mask layer comprises a plurality of mask lines formed by pitch multiplication.

29. The process of Claim 25, wherein the precursor material is selected from the group consisting of silicon, titanium, tantalum and tungsten.

30. The process of Claim 29, wherein the etch stop material comprises an oxide or a nitride.

31. The process of Claim 25, wherein an amorphous carbon is disposed between the mask layer and the substrate and wherein subsequently transferring comprises transferring the pattern to the amorphous carbon layer.

32. The process of Claim 31, wherein transferring the pattern to the amorphous carbon layer comprises performing a SO2 plasma etch.

33. A method of semiconductor processing, comprising: providing a substrate, wherein a temporary layer overlies the substrate and a photodefinable layer overlies the temporary layer; forming a pattern in the photodefinable layer; transferring the pattern to the temporary layer to form a plurality of placeholders in the temporary layer; depositing a blanket layer of spacer material over the plurality of placeholders; selectively removing the spacer material from horizontal surfaces; selectively removing the placeholders relative to the spacer material; and expanding the spacer material to a desired size.

34. The method of Claim 33, wherein selectively removing the placeholders forms a pattern of free-standing spacers and wherein expanding the spacer material is performed after selectively removing the placeholders.

35. The method of Claim 33, wherein expanding the spacer material is performed before selectively removing the spacer material from horizontal surfaces.

36. The method of Claim 33, wherein expanding the spacer material is performed after selectively removing the spacer material from horizontal surfaces and before selectively removing the placeholders.

37. The method of Claim 33, wherein the temporary layer comprises amorphous carbon.

38. The method of Claim 37, wherein the photodefinable layer comprises photoresist.

39. The method of Claim 38, wherein forming a pattern in the photodefinable layer comprises performing photolithography and subsequently isotropically etching the photodefinable layer.

40. The method of Claim 38, wherein a hard mask layer separates the temporary layer and the photodefinable layer.

41. The method of Claim 40, wherein the hard mask layer comprises a dielectric anti-reflective coating.

42. The method of Claim 41, wherein the dielectric anti-reflective coating comprises silicon oxynitride.

43. The method of Claim 41, wherein selectively removing the placeholders comprises: depositing a filler material over and around the spacer material; simultaneously etching the filler material and the hard mask layer; and subsequently simultaneously etching the filler material and the temporary layer.

44. The method of Claim 43, wherein depositing a filler material comprises depositing photoresist.
45. The method of Claim 44, wherein depositing photoresist comprises performing a spin-on process.

46. The method of Claim 43, wherein simultaneously etching the filler material and the hard mask layer comprises performing a CF4/He plasma etch.

47. The method of Claim 43, wherein subsequently simultaneously etching the filler material and the temporary layer comprises performing an O2 plasma etch.

48. The method of Claim 33, wherein depositing a blanket layer of spacer material comprises depositing a layer of silicon by chemical vapor deposition.

49. The method of Claim 48, wherein expanding the spacer material comprises forming silicon oxide.

50. The method of Claim 48, wherein selectively removing the spacer material from horizontal surfaces comprises anisotropically etching the silicon layer.

51. The method of Claim 50, wherein anisotropically etching the silicon layer comprises etching the silicon layer with an HBr/Cl2 plasma.

52. A process for forming a memory device, comprising: forming a plurality of mask lines by pitch multiplication, wherein neighboring mask lines are separated from one another by an open space; and narrowing the open space between neighboring mask lines.

53. The process of Claim 52, wherein the mask lines comprise polysilicon or amorphous silicon.

54. The process of Claim 52, wherein narrowing the open space comprises reacting the mask lines to form a different chemical compound or alloy.

55. The process of Claim 54, wherein reacting the mask lines comprises expanding the mask lines by oxidation.

56. The process of Claim 55, wherein reacting the mask lines comprises completely oxidizing the mask lines.

57. The process of Claim 52, further comprising transferring a pattern formed by the mask lines to an underlying layer.

58. The process of Claim 57, wherein the underlying layer comprises amorphous carbon.

59. The process of Claim 58, wherein transferring the pattern to the amorphous carbon layer comprises transferring the pattern to a hard mask layer and then transferring the pattern from the hard mask layer to the amorphous carbon layer.

60. The process of Claim 59, wherein transferring the pattern to a hard mask layer comprises etching the hard mask layer with a BCl3/Cl2 plasma.

61. The process of Claim 59, wherein transferring the pattern from the hard mask layer to the amorphous carbon layer comprises exposing the amorphous carbon layer to a SO2-containing plasma.

62. A method for semiconductor processing, comprising: forming a plurality of mask lines by pitch multiplication; and expanding a volume of material forming the mask lines to a desired width by converting the material to an other material.

63. The method of Claim 62, wherein expanding a volume of material forming the mask lines comprises expanding a blanket layer of spacer material during formation of the plurality of mask lines by pitch multiplication.

64. The method of Claim 63, wherein forming the plurality of mask lines comprises: forming a plurality of mandrels; depositing the blanket layer of the spacer material; expanding the spacer material; and etching horizontal surfaces to form spacers from the blanket layer of spacer material after expanding the spacer material, wherein the spacers form the mask lines.
65. The method of Claim 63, wherein forming a plurality of mask lines comprises: forming a plurality of mandrels; depositing the blanket layer of the spacer material; etching horizontal surfaces to form spacers from the blanket layer of spacer material, wherein the spacers form the mask lines; expanding the spacer material after etching horizontal surfaces; and subsequently preferentially removing the mandrels relative to the spacer material after expanding.

66. The method of Claim 62, wherein expanding a volume of material forming the mask lines comprises expanding a pattern of spacers after pitch multiplication.

67. The method of Claim 62, wherein converting the material to an other material comprises oxidizing the material forming the mask lines.

68. The method of Claim 62, wherein converting the material to an other material comprises nitriding the material forming the mask lines.

69. The method of Claim 62, further comprising exposing an underlying layer to reactants through openings between the mask lines.

70. The method of Claim 69, wherein the reactants are etchants.

71. The method of Claim 70, wherein exposing an underlying layer comprises etching amorphous carbon.

72. The method of Claim 70, wherein exposing an underlying layer comprises etching a conductive substrate.

73. The method of Claim 62, further comprising trimming the mask lines after expanding a volume of material forming the mask lines.

74. The method of Claim 62, wherein the mask lines comprise polysilicon or amorphous silicon.

75. The method of Claim 62, wherein the desired width is a critical dimension of conductive interconnect lines in an integrated circuit.
MASK MATERIAL CONVERSION

Background of the Invention

Field of the Invention

This invention relates generally to integrated circuit fabrication and, more particularly, to masking techniques.

Description of the Related Art

As a consequence of many factors, including demand for increased portability, computing power, memory capacity and energy efficiency in modern electronics, integrated circuits are continuously being reduced in size. To facilitate this size reduction, the sizes of the constituent features that form the integrated circuits, such as electrical devices and interconnect line widths, are also constantly being decreased.

The trend of decreasing feature size is evident, for example, in memory circuits or devices such as dynamic random access memories (DRAMs), static random access memories (SRAMs), ferroelectric (FE) memories, etc. To take one example, DRAM typically comprises millions of identical circuit elements, known as memory cells. In its most general form, a memory cell typically consists of two electrical devices: a storage capacitor and an access field effect transistor. Each memory cell is an addressable location that can store one bit (binary digit) of data. A bit can be written to a cell through the transistor and read by sensing charge on the storage electrode from the reference electrode side. By decreasing the sizes of the constituent electrical devices and the conducting lines that access them, the sizes of the memory devices incorporating these features can be decreased. Additionally, storage capacities can be increased by fitting more memory cells into the memory devices.

The continual reduction in feature sizes places ever greater demands on the techniques used to form the features. For example, photolithography is commonly used to pattern features, such as conductive lines, on a substrate. The concept of pitch can be used to describe the size of these features. Pitch is defined as the distance between an identical point in two neighboring features. These features are typically defined by spaces between adjacent features, which are typically filled by a material, such as an insulator or conductor. As a result, pitch can be viewed as the sum of the width of a feature and of the width of the space separating that feature from a neighboring feature. Due to factors such as optics and light or radiation wavelength, however, photolithography techniques each have a minimum pitch below which a particular photolithographic technique cannot reliably form features. Thus, the minimum pitch of a photolithographic technique can limit feature size reduction.

"Pitch doubling" is one method proposed for extending the capabilities of photolithographic techniques beyond their minimum pitch. Such a method is illustrated in Figures 1A-1F and described in U.S. Patent No. 5,328,810, issued to Lowrey et al. With reference to Figure 1A, photolithography is first used to form a pattern of lines 10 in a photoresist layer overlying a layer 20 of an expendable material and a substrate 30. As shown in Figure 1B, the pattern is then transferred by an etch step (preferably anisotropic) to the layer 20, forming placeholders, or mandrels, 40.
The photoresist lines 10 can be stripped and the mandrels 40 can be isotropically etched to increase the distance between neighboring mandrels 40, as shown in Figure 1C. A layer 50 of material is subsequently deposited over the mandrels 40, as shown in Figure 1D. Spacers 60, i.e., material extending or originally formed extending from sidewalls of another material, are then formed on the sides of the mandrels 40 by preferentially etching the spacer material from the horizontal surfaces 70 and 80 in a directional spacer etch, as shown in Figure 1E. The remaining mandrels 40 are then removed, leaving behind only the spacers 60, which together act as an etch mask for patterning underlying layers, as shown in Figure 1F. Thus, where a given pitch formerly included a pattern defining one feature and one space, the same width now includes two features and two spaces defined by the spacers 60. As a result, the smallest feature size possible with a photolithographic technique is effectively decreased. It will be appreciated that while the pitch is actually halved in the example above, this reduction in pitch is conventionally referred to as pitch "doubling" or, more generally, pitch "multiplication." That is, conventionally, "multiplication" of pitch by a certain factor actually involves reducing the pitch by that factor. The conventional terminology is retained herein.

The critical dimension of a feature is the feature's minimum dimension. For features formed using the spacers 60, the critical dimension typically corresponds to the width of the spacers. The width of the spacers, in turn, is typically dependent upon a thickness 90 (see Figures 1D and 1E) of the layer 50. Thus, the layer 50 is typically formed to a thickness 90 corresponding to the desired critical dimension.

The quality and uniformity of the spacers 60 directly affect the quality of the integrated circuits partially defined in the substrate 30 using the spacers as a mask. Where the desired spacers 60 are relatively wide compared to the mandrels 40 and/or the space separating the spacers 60, however, it has been observed that the resulting spacers 60 and the etch mask resulting from the spacers 60 can have poor uniformity. This poor uniformity, in turn, can cause poorly defined and non-uniform features to be formed in the substrate. As a result, the electrical performance of integrated circuits formed in the substrate may be degraded or the integrated circuits may be unusable. Accordingly, there is a need for methods of forming etch masks having highly uniform and well-defined patterns, especially in conjunction with spacers formed by pitch multiplication.

Summary of the Invention

According to one aspect of the invention, a method is provided for fabricating an integrated circuit. The method comprises providing a substrate having an overlying mask layer. The mask layer comprises mask material and openings which form a pattern. The mask material is oxidized and the pattern is subsequently transferred to the substrate.

According to another aspect of the invention, a process is provided for forming an integrated circuit. The process comprises providing a pattern comprising a plurality of mask lines in a mask layer overlying a substrate. The mask lines comprise a precursor material.
The mask lines are grown to a desired width by chemically reacting the precursor material to form a chemical compound occupying a larger volume than the precursor material.

According to another aspect of the invention, a process is provided for forming an integrated circuit. The process comprises providing a patterned mask layer overlying a substrate. The mask layer comprises a precursor material which is chemically reacted to form an etch stop material. The pattern in the mask layer is subsequently transferred to an underlying layer.

According to yet another aspect of the invention, a method of semiconductor processing is provided. The method comprises providing a substrate. A temporary layer overlies the substrate and a photodefinable layer overlies the temporary layer. A pattern is formed in the photodefinable layer and transferred to the temporary layer to form a plurality of placeholders in the temporary layer. A blanket layer of spacer material is deposited over the plurality of placeholders. The spacer material is selectively removed from horizontal surfaces. The placeholders are selectively removed relative to the spacer material. The spacer material is expanded to a desired size.

According to another aspect of the invention, a process is provided for forming a memory device. The process comprises forming a plurality of mask lines by pitch multiplication. Neighboring mask lines are separated from one another by an open space, and the open space between neighboring mask lines is narrowed.

According to yet another aspect of the invention, a method is provided for semiconductor processing. The method comprises forming a plurality of mask lines by pitch multiplication. A volume of material forming the mask lines is expanded to a desired width by converting the material to another material.
Brief Description of the Drawings

The invention will be better understood from the Detailed Description of the Preferred Embodiments and from the appended drawings, which are meant to illustrate and not to limit the invention, and wherein:

Figures 1A-1F are schematic, cross-sectional side views of mask lines, formed in accordance with a prior art pitch multiplication method;

Figure 2 is a schematic, cross-sectional side view of a partially formed memory device, in accordance with preferred embodiments of the invention;

Figure 3 is a schematic, cross-sectional side view of the partially formed memory device of Figure 2 after forming lines in a photodefinable layer, in accordance with preferred embodiments of the invention;

Figure 4 is a schematic, cross-sectional side view of the partially formed memory device of Figure 3 after widening spaces between photoresist lines, in accordance with preferred embodiments of the invention;

Figure 5 is a schematic, cross-sectional side view of the partially formed memory device of Figure 4 after etching through a hard mask layer, in accordance with preferred embodiments of the invention;

Figure 6 is a schematic, cross-sectional side view of the partially formed memory device of Figure 5 after transferring a pattern from the photoresist and hard mask layers to a temporary layer, in accordance with preferred embodiments of the invention;

Figure 7 is a schematic, cross-sectional side view of the partially formed memory device of Figure 6 after depositing a blanket layer of a spacer material, in accordance with preferred embodiments of the invention;

Figure 8 is a schematic, cross-sectional side view of the partially formed memory device of Figure 7 after a spacer etch, in accordance with preferred embodiments of the invention;

Figure 9 is a schematic, cross-sectional side view of the partially formed memory device of Figure 8 after being coated with a removable material, in accordance with preferred embodiments of the invention;

Figure 10 is a schematic, cross-sectional side view of the partially formed memory device of Figure 9 after etching the photoresist and hard mask layers, in accordance with preferred embodiments of the invention;

Figure 11 is a schematic, cross-sectional side view of the partially formed memory device of Figure 10 after removing the photoresist and temporary layers, in accordance with preferred embodiments of the invention;

Figure 12 is a schematic, cross-sectional side view of the partially formed memory device of Figure 11 after enlarging the spacers to a desired width, in accordance with preferred embodiments of the invention;

Figure 13 is a schematic, cross-sectional side view of the partially formed memory device of Figure 12 after transferring the spacer pattern to an underlying hard mask layer, in accordance with preferred embodiments of the invention;

Figure 14 is a schematic, cross-sectional side view of the partially formed memory device of Figure 13 after removing the spacers, in accordance with preferred embodiments of the invention;

Figure 15 is a schematic, cross-sectional side view of the partially formed memory device of Figure 1 having an additional masking layer, in accordance with preferred embodiments of the invention;

Figure 16 is a schematic, cross-sectional side view of the partially formed memory device of Figure 15 after forming spacers, in accordance with preferred embodiments of the invention;

Figure 17 is a schematic, cross-sectional side view of the partially formed memory device of Figure 16 after expanding
spacers, in accordance with preferred embodiments of the invention;

Figure 18 is a schematic, cross-sectional side view of the partially formed memory device of Figure 17 after etching through a hard mask layer, in accordance with preferred embodiments of the invention;

Figure 19 is a schematic, cross-sectional side view of the partially formed memory device of Figure 18 after transferring the spacer pattern to the additional masking layer, in accordance with preferred embodiments of the invention;

Figure 20 is a schematic, cross-sectional side view of the partially formed memory device of Figure 6 after depositing a blanket layer of a spacer material, in accordance with other preferred embodiments of the invention;

Figure 21 is a schematic, cross-sectional side view of the partially formed memory device of Figure 20 after enlarging the blanket layer to a desired thickness, in accordance with other preferred embodiments of the invention; and

Figure 22 is a schematic, cross-sectional side view of the partially formed memory device of Figure 21 after removing the hard mask and temporary layers, in accordance with other preferred embodiments of the invention.

Detailed Description of the Preferred Embodiments

It has been found that the poor quality of some spacer patterns is due to difficulties in depositing conformal layers of spacer material and/or etching this material to form spacers. Because spacers are typically formed out of the vertically extending parts of blanket layers of spacer material over a complex mask topography, the conformality of the layers will affect the uniformity, e.g., the widths, the heights and the physical placement, of the spacers formed from the layers. It will be appreciated that the more conformal a layer is, the more closely it replicates the shape of the surface on which it is deposited. As critical dimensions continue to decrease, however, the aspect ratios of the spaces, or openings, between mandrels continue to increase. This is partly due to the desire to pack features more closely together by reducing the widths of the spaces between mandrels. In addition, in common methods of transferring patterns, both the spacers and an underlying layer are exposed to an etchant, which preferentially etches away the substrate material. The etchants, however, also wear away the spacers, albeit at a slower rate. Thus, even as critical dimensions decrease, the vertical heights of the spacers must remain at a level that allows a pattern transfer to be completed before the spacers are completely worn away by etchants.

Accordingly, deposition of highly conformal layers of spacer material can be increasingly difficult, due in part to increasingly limited diffusion of precursor gases into the bottom portions of the spaces between mandrels. This diffusion becomes increasingly more limited during the course of a deposition, as sidewalls fill in with spacer material, thereby further increasing the aspect ratios of the spaces between the sidewalls. For this reason, relatively thin layers are more easily and reliably deposited than relatively thick layers. As a result of the poor conformality of relatively thick deposited layers, the uniformity of spacers formed from such layers can also be poor.

In addition, just as it may be difficult for precursors to reach the bottoms of high aspect ratio spaces, the aspect ratios of some spaces can also limit the amount of etchant that penetrates to the bottoms of those spaces.
Consequently, when etching laterally extending parts of the layer of spacer material to define individual spacers, some spacer material may undesirably remain at the bottoms of these spaces, causing the formation of spacers having bottom surfaces with widths different from expected widths. Thus, difficulties in depositing and also etching layers of spacer materials make precise control over the widths of the spacers difficult.

Advantageously, preferred embodiments of the invention allow for more precise control over the widths and uniformity of features formed using a mask pattern. In the preferred embodiments, the mask pattern is formed with a material that can itself be increased to a desired size or critical dimension by a subsequent process, such as oxidation. The mask pattern is then subjected to the expansion process to increase the widths of mask features to a desired width. The now-enlarged mask features can then be used to form a pattern in an underlying layer. As used herein, it will be appreciated that a "feature" refers to any volume or opening formed in a material, e.g., in a mask layer or in the substrate, and having discrete boundaries. Preferably, the pattern subjected to the enlargement process is a pattern of spacers formed by pitch multiplication. The spacers preferably comprise silicon, e.g., polysilicon or amorphous silicon. The enlargement process can be any process that causes the spacers to expand. Where the spacers comprise silicon, the expansion process preferably comprises oxidation of the spacers to form silicon oxide. Moreover, the spacers are oxidized until they grow to a desired width. After growing to the desired width, the spacers can be used to pattern features in underlying layers. Optionally, the spacers can be trimmed to a desired critical dimension after being oxidized.

Advantageously, by growing the spacers to a desired width after they are formed, a thinner layer of spacer material can be deposited. By depositing thinner layers than would otherwise be required for a desired critical dimension, the conformality of the layers is less dependent upon the limitations of the deposition and/or etching process. As a result, the process window for forming spacers of a given critical dimension is widened.

In addition, as noted above, a spacer is typically formed to a particular height that is dictated in part by the requirements of a particular semiconductor process to be performed through the mask (e.g., etching, implantation, doping, oxidation, etc.) and the particular materials of the underlying substrate that are to be exposed to the process. For example, spacers are typically formed to a height that accounts for the removal of some material during subsequent etching of an underlying layer. Advantageously, because spacers typically grow both laterally and vertically during, e.g., oxidation, the resulting taller spacers are less likely to be etched away when transferring the spacer pattern to an underlying layer. Also, because the initial height of the spacer formed by a spacer etch is dependent on the height of a mandrel, the mandrel's height can be less than the height that would be required if the spacers were not later enlarged. Consequently, because the height of the mandrels can be reduced, the aspect ratio of the spaces between the mandrels is also reduced, thereby further easing the requirements for the spacer material deposition and further increasing the process window.
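As a rough, first-order guide to how much width the oxidation step can add, the standard consumption ratio for the thermal oxidation of silicon can be applied; the figures below are well-known textbook values, not numbers taken from this disclosure. Growing a thermal oxide of thickness $t_{ox}$ consumes a silicon thickness of roughly $0.44\,t_{ox}$, so complete conversion of a freestanding silicon spacer of initial width $w$ yields an oxide feature of width

$$w_{ox} \approx \frac{w}{0.44} \approx 2.2\,w,$$

in a one-dimensional approximation that ignores corner and stress effects; partial oxidation gives a proportionally smaller increase.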
It will be appreciated that silicon nitrides and silicon oxides are particularly suitable as spacer materials for mask formation, due in part to the availability of etch chemistries that are selective relative to a variety of other materials, including metals, oxides and silicon-containing substrates. Advantageously, the conversion of the silicon spacer into a silicon oxide allows preferred embodiments of the invention to be easily inserted into various process flows, especially for pitch multiplication, without needing to substantially alter the process flow. In addition, partial conversion of silicon spacers to silicon oxide still allows selective etch chemistries that will attack, e.g., a carbon mask material without attacking either silicon oxide or residual silicon.

Reference will now be made to the Figures, wherein like numerals refer to like parts throughout. It will be appreciated that Figures 2-22 are not necessarily drawn to scale. It will also be appreciated that, while the preferred embodiments will find application in any context in which it may be desirable to increase the size of the individual parts constituting a mask pattern after those parts are formed, in particularly advantageous embodiments the mask pattern comprises spacers formed by pitch multiplication. Thus, the pitch-multiplied features preferably have a pitch below the minimum pitch of the photolithographic technique used for patterning the mandrels used to form the spacers. In addition, while the preferred embodiments can be used to form any integrated circuit, they are particularly advantageously applied to form devices having arrays of electrical devices, including logic or gate arrays and volatile and nonvolatile memory devices such as DRAM, ROM or flash memory.

With reference to Figure 2, a partially formed integrated circuit 100 is provided. A substrate 110 is provided below various masking layers 120-150. The layers 120-150 will be etched to form a mask for patterning the substrate 110 to form various features, as discussed below.

It will be appreciated that the "substrate" can include a layer of a single material, a plurality of layers of different materials, a layer or layers having regions of different materials or structures in them, etc. These materials can include semiconductors, insulators, conductors, or combinations thereof. For example, the substrate can comprise doped polysilicon, an electrical device active area, a silicide, or a metal layer, such as a tungsten, aluminum or copper layer, or a combination thereof. Thus, the mask features discussed below can directly correspond to the desired placement of conductive features, such as interconnects, in the substrate. In other embodiments, the substrate can be an insulator and the location of mask features can correspond to the desired location of insulators.

The materials for the layers 120-150 overlying the substrate 110 are preferably chosen based upon consideration of the chemistry and process conditions for the various pattern-forming and pattern-transferring steps discussed herein.
Because the layers between a topmost photodefinable layer 120 and the substrate 110 will function to transfer a pattern derived from the photodefinable layer 120 to the substrate 110, the layers between the photodefinable layer 120 and the substrate 110 are preferably chosen so that they can be selectively etched relative to the other exposed materials. It will be appreciated that a material is considered selectively, or preferentially, etched when the etch rate for that material is at least about 5 times greater, preferably about 10 times greater and, more preferably, about 20 times greater than that for surrounding materials.

In the illustrated embodiment, the photodefinable layer 120 overlies a first hard mask, or etch stop, layer 130, which overlies a temporary layer 140, which overlies a second hard mask, or etch stop, layer 150, which overlies the substrate 110 to be patterned, e.g., by etching through the second hard mask layer 150. The photodefinable layer 120 is preferably formed of a photoresist, including any photoresist known in the art. For example, the photoresist can be any photoresist compatible with 157 nm, 193 nm or 248 nm wavelength systems, 193 nm wavelength immersion systems or electron beam systems. Examples of preferred photoresist materials include argon fluoride (ArF) sensitive photoresist, i.e., photoresist suitable for use with an ArF light source, and krypton fluoride (KrF) sensitive photoresist, i.e., photoresist suitable for use with a KrF light source. ArF photoresists are preferably used with photolithography systems utilizing relatively short wavelength light, e.g., 193 nm. KrF photoresists are preferably used with longer wavelength photolithography systems, such as 248 nm systems.

The material for the first hard mask layer 130 preferably comprises an inorganic material, and exemplary materials include silicon oxide (SiO2), silicon or a dielectric anti-reflective coating (DARC), such as a silicon-rich silicon oxynitride. In the illustrated embodiment, the first hard mask layer 130 is a dielectric anti-reflective coating (DARC). The temporary layer 140 is preferably formed of amorphous carbon, which offers very high etch selectivity relative to the preferred hard mask materials. More preferably, the amorphous carbon is a form of amorphous carbon that is highly transparent to light and which offers further improvements in alignment. Deposition techniques for forming a highly transparent carbon can be found in A. Helmbold, D. Meissner, Thin Solid Films, 283 (1996) 196-203.

Because the preferred chemistries for etching photoresist also typically etch significant amounts of amorphous carbon, and because chemistries are available for etching amorphous carbon with excellent selectivity relative to a variety of non-photoresist materials, the hard mask layer 130, selected from such materials, preferably separates the layers 120 and 140. As noted above, the first hard mask layer 130 preferably comprises silicon oxide, silicon or a DARC, which can be preferentially removed relative to amorphous carbon. In addition, using DARCs for the first hard mask layer 130 can be particularly advantageous for forming patterns having pitches near the resolution limits of a photolithographic technique. The DARCs can enhance resolution by minimizing light reflections, which can decrease the precision with which photolithography can define the edges of a pattern.
Optionally, a bottom anti-reflective coating (BARC) (not shown) can similarly be used in addition to the first hard mask layer 130 to control light reflections.

The second hard mask layer 150 preferably comprises a dielectric anti-reflective coating (DARC) (e.g., a silicon oxynitride), silicon or aluminum oxide (Al2O3). In addition, a bottom anti-reflective coating (BARC) (not shown) can optionally be used to control light reflections. In the illustrated embodiment, the second hard mask layer 150 comprises Al2O3.

In addition to selecting appropriate materials for the various layers, the thicknesses of the layers 120-150 are preferably chosen depending upon compatibility with the etch chemistries and process conditions described herein. For example, when transferring a pattern from an overlying layer to an underlying layer by selectively etching the underlying layer, materials from both layers are removed to some degree. Thus, the upper layer is preferably thick enough so that it is not worn away over the course of the pattern transfer. In the illustrated embodiment, the photodefinable layer 120 is preferably between about 100 nm and about 300 nm thick and, more preferably, between about 150 nm and about 250 nm thick. The first hard mask layer 130 is preferably between about 10 nm and about 500 nm thick and, more preferably, between about 15 nm and about 300 nm thick. The temporary layer 140 is preferably between about 100 nm and about 300 nm thick and, more preferably, between about 100 nm and about 200 nm thick. The second hard mask layer 150 is preferably between about 10 nm and about 50 nm thick and, more preferably, between about 10 nm and about 30 nm thick.

It will be appreciated that the various layers discussed herein can be formed by various methods known to those of skill in the art. For example, various vapor deposition processes, such as chemical vapor deposition, can be used to form hard mask layers. Spin-on-coating processes can be used to form photodefinable layers. In addition, amorphous carbon layers can be formed by chemical vapor deposition using a hydrocarbon compound, or mixtures of such compounds, as carbon precursors. Exemplary precursors include propylene, propyne, propane, butane, butylene, butadiene and acetylene. A suitable method for forming amorphous carbon layers is described in U.S. Patent No. 6,573,030 B1, issued to Fairbairn et al. on June 3, 2003.

In a first phase of methods in accordance with the preferred embodiments, and with reference to Figures 3-11, a pattern of spacers is formed by pitch multiplication.

With reference to Figure 3, a pattern comprising spaces or trenches 122 delimited by photodefinable material features 124 is formed in the photodefinable layer 120. The trenches 122 can be formed by, e.g., photolithography, in which the layer 120 is exposed to radiation through a reticle and then developed. After being developed, the remaining photodefinable material, photoresist in the illustrated embodiment, forms features such as the illustrated lines 124 (shown in cross-section only).

The pitch of the resulting lines 124 and spaces 122 is equal to the sum of the width of a line 124 and the width of a neighboring space 122. To minimize the critical dimensions of features formed using this pattern of lines 124 and spaces 122, the pitch is preferably at or near the limits of the photolithographic technique used to pattern the photodefinable layer 120.
Thus, the pitch may be at the minimum pitch of the photolithographic technique, and the spacer pattern discussed below can advantageously have a pitch below the minimum pitch of the photolithographic technique.

As shown in Figure 4, the spaces 122 can optionally be widened by etching the photoresist lines 124 to form modified spaces 122a and lines 124a. The photoresist lines 124 are preferably etched using an isotropic etch, such as a sulfur oxide plasma, e.g., a plasma comprising SO2, O2, N2 and Ar. The extent of the etch is preferably selected so that the widths of the spaces 122a and the lines 124a are substantially equal to the desired spacing between the later-formed spacers, as will be appreciated from the discussion of Figures 8-10 below. Advantageously, this etch allows the lines 124a to be narrower than would be possible using the photolithographic technique used to pattern the photodefinable layer 120. In addition, the etch can smooth the edges of the lines 124a, thus improving the uniformity of those lines 124a.

The pattern in the (modified) photodefinable layer 120 is preferably transferred to the temporary layer 140 to allow for deposition of a layer 170 of spacer material (Figure 7). Thus, the temporary layer 140 is preferably formed of a material that can withstand the process conditions for spacer material deposition, discussed below. In other embodiments, where the deposition of spacer material is compatible with the photodefinable layer 120, the temporary layer 140 can be omitted and the spacer material can be deposited directly on the photo-defined features 124 or the modified photo-defined features 124a of the photodefinable layer 120 itself. In the illustrated embodiment, in addition to having higher heat resistance than photoresist, the material forming the temporary layer 140 is preferably selected such that it can be selectively removed relative to the material for the spacers 175 (Figure 8) and the underlying etch stop layer 150. As noted above, the layer 140 is preferably formed of amorphous carbon.

The pattern in the photodefinable layer 120 is preferably first transferred to the hard mask layer 130, as shown in Figure 5. This transfer is preferably accomplished using an anisotropic etch, such as an etch using a fluorocarbon plasma, although a wet (isotropic) etch may also be suitable if the hard mask layer 130 is thin. Preferred fluorocarbon plasma etch chemistries include CF4, CFH3, CF2H2 and CF3H.

The pattern in the photodefinable layer 120 is then transferred to the temporary layer 140, as shown in Figure 6, preferably using a SO2-containing plasma, e.g., a plasma containing SO2, O2 and Ar. Advantageously, the SO2-containing plasma can etch carbon of the preferred temporary layer 140 at a rate greater than 20 times and, more preferably, greater than 40 times the rate at which the hard mask layer 130 is etched. A suitable SO2-containing plasma is described in U.S. Patent Application No. 10/931,772 of Abatchev et al., filed August 31, 2004, entitled Critical Dimension Control. It will be appreciated that the SO2-containing plasma can simultaneously etch the temporary layer 140 and also remove the photodefinable layer 120.
The resulting lines 124b constitute the placeholders or mandrels along which a pattern of spacers 175 (Figure 8) will be formed.

Next, as shown in Figure 7, a layer 170 of spacer material is preferably blanket deposited conformally over exposed surfaces, including the hard mask layer 130, the hard mask layer 150 and the sidewalls of the temporary layer 140. Optionally, the hard mask layer 130 can be removed before depositing the layer 170. The spacer material can be any material that can act as a mask for transferring a pattern to the underlying substrate 110, or that otherwise can allow processing of underlying structures through the mask being formed. The spacer material preferably: 1) can be deposited with good step coverage; 2) can be deposited at a temperature compatible with the temporary layer 140; 3) can be further processed to enlarge its dimensions; and 4) can be selectively etched relative to the temporary layer 140 and any layer underlying the temporary layer 140 after being enlarged. Preferred materials include polysilicon and amorphous silicon. The layer 170 is preferably deposited to a thickness of between about 20 nm and about 60 nm and, more preferably, between about 20 nm and about 50 nm. Preferably, the step coverage is about 80% or greater and, more preferably, about 90% or greater.

As shown in Figure 8, the spacer layer 170 is then subjected to an anisotropic etch to remove spacer material from horizontal surfaces 180 of the partially formed integrated circuit 100. Such an etch, also known as a spacer etch, can be performed using an HBr/Cl2 plasma. The etch can include a physical component and preferably also includes a chemical component, e.g., a reactive ion etch (RIE), such as a Cl2/HBr etch. Such an etch can be performed, for example, using a LAM TCP9400, flowing about 0-50 sccm Cl2 and about 0-200 sccm HBr at about 7-60 mTorr pressure with about 300-1000 W top power and about 50-250 W bottom power.

The hard mask layer 130 (if still present) and the temporary layer 140 are next removed to leave freestanding spacers 175 (Figure 11). Because the spacers 175 may be thin and because the hard mask layer 130 may be formed of a material similar to the spacers 175, a space-fill layer 155 may be formed over and around the spacers 175 to help maintain the structural integrity of the spacers 175 and to aid in etching the layers 130 and 140, as shown in Figure 9. Preferably, the layer 155 comprises photoresist, which can be deposited in a spin-on process. In other embodiments, e.g., where the spacers 175 are sufficiently wide and where adequate etch chemistries are available, the layers 130 and 140 may be removed without deposition of the layer 155.

With reference to Figure 10, the hard mask layer 130, along with a top portion of the space-fill layer 155, is removed, for example, by planarization. Preferred chemistries for etching the layers 130 and 155 include a two-step etch: first using a CF4/He plasma until the layer 130 (Figure 9) is removed and then using an O2 plasma to remove the temporary layer 140, along with a remaining portion of the space-fill layer 155. The resulting structure is shown in Figure 11. Alternatively, to remove the layer 130 in the first part of the etch, the layers 130 and 155 can be subjected to chemical mechanical polishing. Preferred chemistries for etching the layers 140 and 155 include a sulfur oxide plasma etch. Thus, a pattern of freestanding spacers 175 is formed.
Advantageously, silicon is more readily etched, either isotropically or anisotropically, than materials, such as silicon nitrides or silicon oxides, that are typically used for spacers. In some embodiments, the critical dimension of the spacers 175 is adjusted after the spacer etch by trimming the spacers 175.

Thus, pitch multiplication has been accomplished. In the illustrated embodiment, the pitch of the spacers 175 is roughly half that of the photoresist lines 124 (Figure 3) originally formed by photolithography. Advantageously, spacers 175 having a pitch of about 100 nm or less can be formed. It will be appreciated that because the spacers 175 are formed on the sidewalls of the features or lines 124b, the spacers 175 generally follow the outline of the pattern of features or lines 124 originally formed in the photodefinable layer 120.

Next, in a second phase of methods according to the preferred embodiments, the spacers 175 are enlarged so that their widths correspond to the desired critical dimensions of features that are to be formed in the substrate 110. Preferably, this enlargement is accomplished by reacting the spacers 175 to form a new compound or alloy occupying more space. In the illustrated embodiment having spacers formed of silicon, the enlargement process preferably comprises oxidation of the spacers. It will be appreciated that the spacers 175 grow upon being oxidized, as shown in Figure 12. The size of the spacers 175a will vary depending upon the extent to which the spacers 175 are oxidized. Thus, the duration and degree of the oxidation are preferably chosen so that the spacers 175 reach a desired width 95. The oxidation of the spacers 175 can be accomplished by any oxidation process known in the art, including thermal oxidation, oxidation using oxygen radicals or plasma, etc. In other embodiments, the spacers 175 can be enlarged by being nitrided by any nitridation process known in the art. Thus, a pattern of spacers 175a having desired widths 95 can be formed.

It will be appreciated that the spacers 175 can be formed of any material that can be expanded, can be conformally deposited and for which suitable etch chemistries are available. For example, the spacers 175 can be formed using titanium and can be enlarged by oxidation or nitridation to form TiO2 or TiN. Other examples of materials include tantalum (which can be expanded by oxidation or nitridation to form tantalum oxide or tantalum nitride) and tungsten (which can be expanded by oxidation or nitridation to form tungsten oxide or tungsten nitride).

Preferably, the extent of the enlargement is chosen such that the spacers 175 are enlarged to a width substantially equal to the desired critical dimension of the features, such as interconnects, word lines, bit lines, transistor rows, or gaps between damascene lines, which will be patterned in the substrate 110 using the pattern formed by the spacers 175a. For example, the spacers 175a can be oxidized to a greater or lesser extent, depending upon whether the desired critical dimensions are only slightly or more substantially greater than the dimensions of the non-oxidized spacers 175. Thus, process conditions, such as duration, chemical reactivity, temperature, etc., are chosen to achieve the desired degree of spacer expansion. It will be appreciated that growth of the spacers 175 will also narrow the space separating those spacers 175. Preferably, the spacers 175 are positioned to account for this narrowing.
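To put rough numbers on the pitch halving and the oxidation-driven widening described above, the sketch below assumes the commonly cited rule that grown SiO2 is roughly 2.2 times as thick as the silicon it consumes; that factor and all dimensions are illustrative assumptions, not figures from the text.

```python
# Pitch multiplication: two spacers per original photolithographic pitch.
litho_pitch = 200                # nm, e.g., 100 nm lines + 100 nm spaces
spacer_pitch = litho_pitch / 2   # ~100 nm, below the lithographic minimum

# Spacer widening by oxidation, assuming ~2.2x volume expansion of Si -> SiO2.
EXPANSION = 2.2

def oxidized_width(w0_nm, consumed_per_side_nm):
    """Width after both sidewalls are oxidized: remaining silicon core plus
    an oxide shell on each side."""
    core = w0_nm - 2 * consumed_per_side_nm
    oxide = 2 * EXPANSION * consumed_per_side_nm
    return core + oxide

# A 30 nm spacer with 10 nm of silicon consumed per side grows to 54 nm, and
# each gap between neighboring spacers narrows by the outward growth of the
# two facing sidewalls (2 * 12 nm = 24 nm), illustrating the narrowing noted
# in the text.
print(spacer_pitch, oxidized_width(30, 10))
```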
In addition, the critical dimension of the spacers 175a can be adjusted after the expansion by trimming the spacers 175a, e.g., with an isotropic etch. It will also be appreciated that the spacers 175a themselves may be used directly as a hard mask to pattern an underlying substrate 110. Preferably, however, the pattern of the spacers 175a is transferred to one or more underlying layers which offer better etch selectivity to the substrate 110 than the spacers 175a. With reference to Figure 13, the pattern defined by the spacers 175a can be transferred to the second hard mask layer 150. Preferably, the second hard mask layer 150 is etched using a BCl3/Cl2 plasma etch.

With reference to Figure 14, the spacers 175a can optionally be removed before patterning the substrate 110. The spacers 175a can be removed using a wet etch process. Advantageously, by removing the spacers 175a, the aspect ratio of the mask overlying the substrate 110 is reduced, thereby allowing etchants and other processing chemicals to more easily reach the substrate and, so, improving the formation of vertical sidewalls or otherwise clearly delineating and completing processing.

In other embodiments, as shown in Figure 15, an additional mask layer 160 can be utilized to pattern difficult-to-pattern substrates 110. Such substrates can include, for example, multiple layers, which require multiple successive etches to pattern. Due to the availability of chemistries that allow very selective removal of amorphous carbon relative to many silicon-containing substrate materials, the additional mask layer 160 is preferably formed of amorphous carbon.

It will be appreciated that the steps discussed above may be applied to form spacers 175a overlying the additional mask layer 160. With reference to Figure 16, a pattern of spacers 175 is formed. As shown in Figure 17, the spacers 175 are then expanded by, e.g., oxidation, to a desired width, as discussed above. The pattern of spacers 175a can then be transferred to the second hard mask layer 150, preferably using a BCl3/Cl2 plasma etch, as shown in Figure 18. The pattern is then transferred to the additional mask layer 160, preferably by anisotropically etching the additional mask layer 160, as shown in Figure 19. Preferably, the anisotropic etch comprises exposing the additional mask layer 160 to a SO2-containing plasma. In other embodiments, it will be appreciated that the spacers 175 may be removed before etching the layer 150 or before etching the substrate 110, as discussed above with respect to Figure 14.

The substrate 110 can then be processed through the mask layers 160 and 150 and the spacers 175a to define various features, e.g., transistors, capacitors and/or interconnects. Where the substrate 110 comprises layers of different materials, a succession of different chemistries, preferably dry-etch chemistries, can be used to successively etch through the different layers. It will be appreciated that, depending upon the chemistry or chemistries used, the spacers 175a and the hard mask layer 150 may be etched. Amorphous carbon of the additional mask layer 160, however, advantageously offers excellent resistance to conventional etch chemistries, especially those used for etching silicon-containing materials. Accordingly, the additional mask layer 160 can effectively be used as a mask for etching through a plurality of substrate layers, or for forming high aspect ratio trenches.
The additional mask layer 160 can later be removed for further processing of the substrate 110.

It will be appreciated that, in any of the steps described herein, transferring a pattern from a first level to a second level involves forming features in the second level that generally correspond to features on the first level. For example, the path of lines in the second level will generally follow the path of lines on the first level, and the location of other features on the second level will correspond to the location of similar features on the first level. The precise shapes and sizes of features can vary from the first level to the second level, however. For example, depending upon etch chemistries and conditions, the sizes of and relative spacings between the features forming the transferred pattern can be enlarged or diminished relative to the pattern on the first level, while still resembling the same initial "pattern." Thus, the transferred pattern is still considered to be the same pattern as the initial pattern. In contrast, forming spacers around mask features can change the pattern.

It will be appreciated that formation of contacts according to the preferred embodiments offers numerous advantages. For example, because thinner layers are more easily conformally deposited than thicker layers, the layers of spacer material from which spacers are formed can be deposited with improved conformality. As a result, spacers can be formed from these layers with improved uniformity. Moreover, the relative thinness of these layers reduces the aspect ratios of trenches lined with the blanket layer of spacer material, thereby allowing etchants to more easily penetrate to the bottom of the trenches and, thus, facilitating the spacer etch.

It will also be appreciated that various modifications of the illustrated embodiments are possible. For example, the pitch of the spacers 175 or 175a can be more than doubled. Further pitch multiplication can be accomplished by forming additional spacers around the spacers 175 or 175a, then removing the spacers 175 or 175a, then forming spacers around the spacers that were formerly around the spacers 175 or 175a, and so on. An exemplary method for further pitch multiplication is discussed in U.S. Patent No. 5,328,810 to Lowrey et al.

In addition, various other patterns, for patterning features of different sizes, can be overlaid or formed adjacent to the spacers 175 or 175a. For example, an additional photodefinable layer can be formed overlying the spacers 175 or 175a and then patterned to form the other patterns. Methods for forming such patterns are disclosed in U.S. Patent Application No. 10/931,771 of Tran et al., filed August 31, 2004, entitled Methods for Increasing Photo-Alignment Margins.

Moreover, while all the spacers 175 can be oxidized to have a similar width, in other embodiments, only some of the spacers 175 may be oxidized. For example, some spacers 175 can be protected from oxidation by depositing and patterning a protective layer (for which selective etch chemistries are available) and then oxidizing exposed spacers.

In addition, depending upon the material being converted and the extent of the conversion process, the oxidation or subsequent chemical conversion process may not appreciably increase the size of the spacers 175. In such a case, the processes disclosed herein can nevertheless be applied to convert the spacers 175 to a material for which highly selective etch chemistries are available.
As such, the conversion process can advantageously convert the spacers 175 to a better etch stop for subsequent etch steps. For example, a mask precursor material can be converted to a silicon or metal oxide or nitride, which can advantageously provide good etch selectivity to surrounding, i.e., underlying, materials.

With reference to Figures 20-22, where the spacers 175 are enlarged, it will be appreciated that the spacers 175 or the layer 170 can be enlarged, e.g., by oxidation, at any point after deposition of the spacer material and before forming the free-standing spacers 175. For example, after depositing a blanket layer of spacer material 170 (Figure 20), the entire blanket layer 170 can be expanded, as shown in Figure 21, to form an expanded blanket layer 170a. As noted above, the expansion process, including process conditions (e.g., duration, chemical reactivity, temperature, etc.), is preferably chosen such that the blanket layer 170 expands to a desired thickness corresponding to a desired critical dimension, taking into account any horizontal shrinkage during the subsequent spacer etch. Thus, the expansion process may leave the layer 170 only partially oxidized. As shown in Figure 22, after a spacer etch, the mandrels 124b are then removed to leave the free-standing spacers 175a. Advantageously, because the spacers 175a are thicker than the spacers 175, a protective space-fill layer 155 (Figure 9) may not be necessary, and the mandrels 124b can be etched using an anisotropic etch, e.g., using a fluorocarbon plasma.

In other embodiments, the spacers 175 can be expanded after the spacer etch and before etching the mandrels (e.g., the spacers 175 in Figure 8 can be expanded). Advantageously, because the spacers 175 are allowed to grow laterally in only one direction, this type of expansion allows the distance between individual pairs of spacers 175 to be maintained constant, while reducing the distance between the constituent spacers of a pair of spacers 175. As noted above, however, the expansion step is preferably performed after forming the spacers 175 as freestanding structures, to facilitate etching of the layer 170.

Also, while "processing" through the various mask layers preferably involves etching an underlying layer, processing through the mask layers can involve subjecting layers underlying the mask layers to any semiconductor fabrication process. For example, processing can involve doping, oxidation, nitridation or depositing materials through the mask layers and onto underlying layers.

Accordingly, it will be appreciated by those skilled in the art that various other omissions, additions and modifications may be made to the methods and structures described above without departing from the scope of the invention. All such modifications and changes are intended to fall within the scope of the invention, as defined by the appended claims.
Apparatuses, Methods and Storage Media associated with offloading aspects of processing of mobile devices are disclosed. In embodiments, a mobile computing device may comprise one or more processors; memory coupled with the one or more processors; and a shim layer to compressively replicate memory blocks of the memory to a cloud server, compressively offload invocations of object methods of objects resident in a memory block of the memory to the cloud server, and to receive execution results of the invoked object methods. Other embodiments may be described and/or claimed.
A mobile computing device, comprising: one or more processors; memory coupled with the one or more processors; and a shim layer to compressively replicate memory blocks of the memory to a cloud server, compressively offload invocations of object methods of objects resident in a memory block of the memory to the cloud server, and to receive execution results of the invoked object methods.

The mobile computing device of claim 1, wherein the shim layer includes a replication agent to compressively replicate memory blocks of the memory to the cloud server continuously every t units of time.

The mobile computing device of claim 2, wherein the replication agent is to apply a sampling matrix Φ to a memory block s to generate an encoding y of the memory block s, and to transmit the encoding y to the cloud server.

The mobile computing device of claim 3, wherein the replication agent is to apply a partial discrete cosine transform matrix Φ to the memory block s to generate the encoding y of the memory block s.

The mobile computing device of any one of claims 1-4, wherein the shim layer includes an object method offloader to redirect invocation of object methods to the cloud server, and to receive execution results of the invoked object methods.

The mobile computing device of claim 5, wherein the object method offloader is to determine whether an object of an object method being invoked is allocated from a replicated memory block.

The mobile computing device of claim 6, wherein the object method offloader is to cause the object method to be invoked and executed on the mobile computing device, on determination that the object of the object method being invoked is not allocated from a replicated memory block.

The mobile computing device of claim 6, wherein the object method offloader is to compressively encode a memory block associated with the object method being invoked, send the compressively encoded memory block to a cloud server, and redirect the object method to be invoked and executed on the cloud server, on determination that the object of the object method being invoked is allocated from a replicated memory block.

A method for mobile computing, comprising: compressively replicating, by a mobile computing device, memory blocks of memory of the mobile computing device to a cloud server; monitoring, by the mobile computing device, for object method invocations; on detection of an invocation of an object method, selectively redirecting, by the computing device, the invocation of the object method to the cloud server to cause the object method to be invoked and executed on the cloud server; and receiving, by the computing device, execution results of the object methods whose invocations are redirected to the cloud server.

The method of claim 9, wherein selectively redirecting comprises determining whether an object of an object method being invoked is allocated from a replicated memory block.

One or more computer-readable media having instructions stored thereon that cause a mobile computing device, in response to execution by the mobile computing device, to: compressively replicate memory blocks of memory of the mobile computing device to a cloud server; selectively redirect invocation of object methods to the cloud server; and receive execution results of the invoked object methods whose invocations are redirected to the cloud server.
The computer-readable media of claim 11, wherein to selectively redirect comprises to determine whether an object of an object method being invoked is allocated from a replicated memory block.

An apparatus for mobile computing, comprising: one or more processors; memory coupled with the one or more processors; means for compressively replicating memory pages of the memory to a cloud server; means for monitoring for object method invocations; means for selectively redirecting the invocation of the object method to the cloud server to cause the object method to be invoked and executed on the cloud server, on detection of an invocation of an object method; and means for receiving execution results of the object methods whose invocations are redirected to the cloud server.

The apparatus of claim 14, wherein the means for selectively redirecting comprises means for determining whether an object of an object method being invoked is allocated from a replicated memory block.

A cloud server, comprising: one or more processors; memory coupled with the one or more processors; and a cloud daemon to receive encodings of memory blocks of memory compressively replicated from one or more mobile devices, and invocations of object methods redirected from the one or more mobile devices; to decode the encodings and update corresponding memory blocks on the cloud server; and to invoke and execute the object methods on the cloud server, and return execution results of the invoked object methods to the one or more mobile devices.

The cloud server of claim 15, wherein the cloud daemon comprises a replication agent to receive compressively encoded memory blocks of memory from one or more mobile devices, encode corresponding replica memory blocks on the cloud server, determine and decode to recover updates to the replica memory blocks, and apply the recovered updates to the replica memory blocks on the cloud server, continuously every t units of time.

The cloud server of claim 16, wherein the replication agent is to apply a sampling matrix Φ to a replica memory block to generate an encoding y_(i-1) of the replica memory block.

The cloud server of claim 17, wherein the replication agent is to apply a partial discrete cosine transform matrix Φ to the replica memory block to generate the encoding y_(i-1) of the replica memory block.

The cloud server of claim 17, wherein the replication agent is to further receive a compressive encoding y_i of the replicated memory block, and calculate a compressively encoded update y' = y_(i-1) - y_i to the replica memory block.

The cloud server of claim 19, wherein the replication agent is to further decode the compressively encoded update y' to recover an update Δs to the replica memory block, and to apply the update Δs to the replica memory block.

The cloud server of any one of claims 15-20, wherein the cloud daemon includes an object method servicer to receive invocations of object methods redirected from the one or more mobile devices; and to invoke and execute the object methods, and return execution results of the invoked object methods to the one or more mobile devices.

The cloud server of claim 21, wherein the object method servicer is to translate an object pointer to a location in an address space of a mobile device to an object pointer to a location in an address space of the cloud server; and to serialize the execution results of the invoked object methods, and return the serialized execution results to the one or more mobile devices.
A method for cloud computing, comprising: receiving, by a cloud server, encodings of memory blocks of memory compressively replicated from one or more mobile devices; decoding, by the cloud server, the encodings, and updating corresponding replica memory blocks of the cloud server; receiving, by the cloud server, invocations of object methods redirected from the one or more mobile devices; invoking and executing, by the cloud server, the object methods; and returning, by the cloud server, execution results of the invoked object methods to the one or more mobile devices.

One or more computer-readable media having instructions stored thereon that cause a cloud server, in response to execution by the cloud server, to: receive encodings of memory blocks of memory compressively replicated from one or more mobile devices; decode the encodings and update corresponding replica memory blocks on the cloud server; receive invocations of object methods offloaded from the one or more mobile devices; and invoke and execute the object methods, and return execution results of the invoked object methods to the one or more mobile devices.

A cloud server, comprising: one or more processors; memory coupled with the one or more processors; means for receiving encodings of memory blocks of memory compressively replicated from one or more mobile devices; means for decoding, by the cloud server, the encodings, and updating corresponding replica memory blocks in the memory; means for receiving invocations of object methods redirected from the one or more mobile devices; means for invoking and executing, by the cloud server, the object methods; and means for returning execution results of the invoked object methods to the one or more mobile devices.
MOBILE APPLICATION ACCELERATION VIA FINE-GRAIN OFFLOADING TO CLOUD COMPUTING INFRASTRUCTURES

RELATED APPLICATION

The present application claims priority to U.S. Provisional Application No. 61/950,758, entitled "Mobile Application Acceleration Via Fine-Grain Offloading to The Cloud," filed March 10, 2014.

TECHNICAL FIELD

The present disclosure is generally related to the field of computing and, more specifically, to apparatuses, methods and storage media associated with mobile devices offloading aspects of processing to a cloud computing infrastructure.

BACKGROUND

The limitations of mobile device hardware can significantly restrict what mobile applications can do. Despite the arrival of multi-core processors and GPUs on smartphones, tablets, and other user equipment, the growing sophistication of mobile applications routinely pushes against the processor and battery limits of modern mobile devices. Some special-purpose systems, such as web services like Siri and Google Now, have started to mitigate these constraints by offloading some computations to the cloud. However, these web services generally preclude shifting arbitrary workloads to the cloud. At present, there is no principled way for application developers to have a unified application codebase that can run on both the device and the cloud. Just as in Siri, application developers may be required to statically partition their application into device-specific and cloud-specific components. Once implemented, this partitioning may not be changed easily or dynamically, rendering runtime optimization impossible.

DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, show certain aspects of the subject matter disclosed herein and, together with the description, help explain some of the principles associated with the subject matter disclosed herein. In the drawings: FIG. 1 depicts an example system incorporated with the mobile device offloading technology of the present disclosure, in accordance with some example embodiments; FIGs. 2 and 3 show example results; FIG. 4 illustrates compressive replication encoding, in accordance with example embodiments; FIG. 5 illustrates the mobile device and cloud server of FIG. 1 in further detail, in accordance with example embodiments; FIG. 6 illustrates an example client-side replication process, in accordance with example embodiments; FIG. 7 illustrates an example client-side offloading process, in accordance with example embodiments; FIG. 8 illustrates an example server-side replication process, in accordance with example embodiments; and FIG. 9 illustrates an example service process, in accordance with example embodiments. Like labels are used to refer to same or similar items in the drawings.

DETAILED DESCRIPTION

Apparatuses, methods and storage media associated with mobile devices offloading aspects of processing to a cloud are disclosed herein. A strategy for removing hardware constraints of mobile devices may be to opportunistically offload computations to one or more servers in the cloud, where more capable hardware can do the heavy lifting associated with computation and the like. The subject matter disclosed herein relates to a platform for dynamically and transparently shifting arbitrary, fine-grain workloads from a mobile device up to a cloud computing infrastructure.
The platform may accomplish this through compressive offloading (which is generally based on compressive sensing). The offloading provided by the platform disclosed herein may, in some example implementations, provide an order-of-magnitude acceleration and 60% longer battery life for the end user equipment (for example, a smartphone, tablet, or any other processor-based device) including a mobile application, such as a handwriting recognition application and the like. Offloading may not only be beneficial to end user equipment, but also to cloud providers: the former may experience a performance boost, and the latter may receive a steady stream of small computations to flexibly fill periods of under-utilization.

The subject matter disclosed herein may provide a general, reusable framework for mobile devices to dynamically shift arbitrary, fine-grain workloads up to the cloud at runtime. The fine granularity may provide on-the-go mobile users with high system responsiveness. A fine partitioning of work (e.g., at the level of an object method invocation) may incur less disruption to the user equipment in the event the device becomes disconnected from the cloud and a local restart of the task is required. Fine-grain workloads may also offer cloud service providers a way to maximize the utilization of their cloud infrastructure by providing a ready stream of small jobs that can be flexibly used to fill troughs in utilization. Both parties can derive significant benefit from these kinds of workloads due to the disparity in the hardware resources each commands: the end user equipment/user may view the work as computationally complex and is more than happy to have it accelerated by someone else, while the cloud provider perceives the work as computationally cheap, but useful for leveling out utilization.

To extract these gains, embodiments of the present disclosure require that the mobile device and the cloud behave as a single, tightly coupled system; i.e., embodiments of the present disclosure cast the mobile devices and the cloud as a distributed shared memory (DSM) system, in which memory on the local mobile device is continuously replicated to a remote cloud server. Any object resident in local memory may thus have a replica on the cloud-based server, and any method invoked on a local object may be transparently redirected to a remote copy for faster execution on more capable cloud hardware, to be described more fully below.

In the following detailed description, the mobile device offloading technology will be described with references to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

Aspects of the disclosure are disclosed in the accompanying description. Alternate embodiments of the present disclosure and their equivalents may be devised without departing from the spirit or scope of the present disclosure. It should be noted that like elements disclosed below are indicated by like reference numbers in the drawings.
Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, the term "module" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.

Referring now to Figure 1, wherein a system incorporated with the mobile device offloading technology of the present disclosure, according to various embodiments, is shown. As illustrated, in the example system 100 of FIG. 1, compressive offloading uses compressive sensing to tightly replicate a memory block 106 (of a mobile application 104) and its resident objects 108 from a mobile device 102 to a cloud server 112, resulting in object replicas 118 created in memory 116 of cloud daemon 114. Each object 108 and its replicas may include the object's variables and methods. Accordingly, local method invocations may thus be transparently redirected to the remote object replicas 118 in remote memory 116 (e.g., of cloud daemon 114 of cloud server 112) for faster execution by more capable cloud-based hardware, software, and/or a combination of both. In a sense, memory 106 and memory 116 can be considered a DSM. Thus, allocating objects from this DSM becomes a principled way for application developers to program the cloud in a lightweight way.

However, implementing such a DSM is not trivial due to the constraints on latency, network bandwidth, power, and computation overhead imposed by the mobile device. This is further complicated by the fact that memory input/output (I/O), which is direct and random-access, is typically not naturally amenable to standard, efficient transaction logging techniques. Existing replication methods that, e.g., rely on communicating and comparing hashes to identify delta offsets and to generate a delta encoding, can have high computation and network overheads and thus do not respect the resource constraints listed above. The subject matter disclosed herein may thus provide a platform that uses compressive sensing to realize the tight coupling necessary for dynamically shifting arbitrary, fine-grain workloads from a mobile device to the cloud.
Further described below are an example implementation of the core compressive offloading mechanism, a prototype implementation on an operating system (for example, iOS and the like), and an initial performance evaluation of the system.

In embodiments, compressive offloading 122 may be based generally on compressive sensing. Compressive sensing is a sub-Nyquist random sampling technique in which a signal s ∈ R^N with sparsity rate k/N (i.e., only k coefficients in s are non-zero) is sampled or encoded by an M × N linear operator Φ (called the sampling matrix) to produce samples y ∈ R^M. When Φ is a random matrix and M = O(k log(N/k)), i.e., M ≪ N, s can be exactly recovered or decoded by using convex optimization to solve the minimization problem

    min_z ||z||_1 subject to y = Φz,    (1)

or by using other methods, including so-called greedy methods. Accordingly, under the present disclosure, fast and network-efficient memory replication may be achieved via compressive sensing.

Referring also to FIG. 4, wherein compressive replication encoding is shown. As illustrated, memory block s 204 may be encoded into encoding y 206 by applying matrix Φ 202, e.g., a partial discrete cosine transform, to memory block s 204. Memory I/O (i.e., deltas to memory) typically constitutes a sparse signal that can be compressively sampled. This approach, referred to herein as compressive replication, may, in some implementations, have one or more advantages. For example, compressive replication may require no network overhead to determine the deltas and their offsets, because these are automatically recovered during decoding. Moreover, compressive replication may be resource-commensurate because the encoder on the mobile device has low computational complexity while the decoder on the cloud server has higher complexity.

On system startup, both the local and remote ends (mobile device 102 and cloud server 112) may be configured to know the sampling matrix Φ, the sparsity rate setting k/N, and that a length N memory block, whose byte values are represented by s_0, is already synchronized; both local and remote ends can thus calculate y_0 = Φs_0. At some later point in time, a process on the mobile device may modify the contents of the local memory block. When k elements have changed, we denote the state of the block as s_1 and encode it by calculating y_1 = Φs_1. This encoding is then transmitted over the network to the cloud server. On receipt, the cloud server calculates y' = y_0 - y_1, which satisfies the equation

    y' = y_0 - y_1 = Φ(s_0 - s_1),    (2)

wherein the unknown quantity s_0 - s_1 is the delta encoding sought to be recovered. The solution to this can be found using convex optimization, iterative greedy algorithms based on matching pursuit, message passing algorithms, iterative hard thresholding methods, and/or the like. Once solved, s_0 - s_1 can be subtracted from s_0 to obtain s_1. By extension, a subsequent set i of k new updates to the local block will generate a new compressive sample y_i. Upon receipt of this, the remote end calculates y_(i-1) - y_i and applies the same decoding scheme above to recover s_i.

For the disclosed system, minimizing replication latency may be a goal, since latency dictates the granularity of the work that can be offloaded. For example, if replication updates take 5 seconds (s) to complete, then all work that completes in less than 5 s on a user equipment, such as a tablet, smartphone, and/or the like, would receive no (or little) benefit from offloading.
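As a concrete illustration of this encoding, the following numpy/scipy sketch samples a 64 KB block with a partial DCT and checks the linearity that equation (2) exploits. The 64 KB block size, 1% sparsity rate, and M = 7k sampling rate are taken from the evaluation later in this description; the row selection and byte values are invented.

```python
import numpy as np
from scipy.fft import dct

def pdct_encode(s, rows):
    """Partial DCT: keep M rows of the N-point orthonormal type-II DCT of s."""
    return dct(s, norm='ortho')[rows]

rng = np.random.default_rng(0)
N = 65536                      # 64 KB memory block
k = N // 100                   # sparsity rate k/N = 0.01
M = 7 * k                      # conservative sampling rate from the evaluation
rows = rng.choice(N, M, replace=False)

s0 = np.zeros(N)                                               # synchronized state
s1 = s0.copy()
s1[rng.choice(N, k, replace=False)] = rng.integers(1, 256, k)  # k changed bytes

y0 = pdct_encode(s0, rows)     # computed by both ends at startup
y1 = pdct_encode(s1, rows)     # transmitted after the block changes
# Linearity: differencing the encodings yields the encoding of the sparse
# delta itself, so no offsets need to be communicated.
assert np.allclose(y0 - y1, pdct_encode(s0 - s1, rows))
```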
Replication latency may include three delays: encoding time, network transmission time, and decoding time. The choice of the sampling matrix Φ may impact the encoding time, especially on resource-constrained user equipment, such as mobile device hardware. In practice, encoding with random sampling matrices, such as those with coefficients drawn at random from Gaussian or Bernoulli distributions, may require matrix multiplication, which may be too slow for use on mobile device hardware. An M × N partial discrete cosine transform (pDCT) (i.e., an N × N type-II DCT matrix with N - M rows deleted) may, in some implementations, perform better than other approaches because it uses a Fast Fourier Transform (FFT) under the hood and is thus fundamentally faster than a straight matrix multiplication. In some implementations, using the FFT directly instead of the DCT may result in an even faster encoding operation.

Network transmission time may be minimized by having the mobile device further compress a pDCT encoding, provided that the time taken to compress and decompress does not add significantly to the overall replication latency. In some implementations, the fastest compression and decompression of a pDCT encoding may be achieved using the snappy algorithm.

On the cloud server side, using an accelerated iterative hard thresholding (AIHT) decoding algorithm may provide a short decoding time, mainly because AIHT eschews costly matrix inversions at each iteration, unlike basis pursuit (e.g., ℓ1-minimization) or matching pursuit algorithms. To extract even greater decoding speed, the disclosed system may implement AIHT in CUDA or OpenCL to take advantage of GPU hardware acceleration. This may provide an additional, attractive category of fine-grain computations that cloud providers could use to improve utilization of their more expensive GPU hardware. The specific combination of pDCT encoding, snappy compression/decompression and/or AIHT decoding may, in some implementations, reduce replication latency from the user equipment to the cloud to the point that makes compressive replication tractable on mobile device hardware.

In the disclosed system, the local mobile device end may manage multiple memory blocks simultaneously and replicate each to the remote end independently. These memory blocks may be of the same or different size N, and may each have a different sampling matrix Φ and sparsity rate setting k/N. Each memory block may be designated to store objects with sizes falling within a distinct range. For example, three memory blocks of size N = 64KB may be replicated independently: Block 1 may be used to allocate objects of 1KB or smaller, Block 2 for objects larger than 1KB but smaller than 4KB, and Block 3 for objects greater than 4KB but less than 64KB.
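The decoding side can be sketched with plain iterative hard thresholding. Note that this is the unaccelerated cousin of the AIHT algorithm named above (AIHT adds an adaptive step size on top of this loop), so it illustrates the idea rather than the system's actual decoder; like AIHT, it needs no matrix inversion at any iteration.

```python
import numpy as np
from scipy.fft import dct, idct

def iht_decode(y, rows, N, k, iters=200):
    """Recover a k-sparse vector from its partial-DCT samples y by plain
    iterative hard thresholding: gradient step, then keep the k
    largest-magnitude coefficients."""
    x = np.zeros(N)
    for _ in range(iters):
        r = y - dct(x, norm='ortho')[rows]   # residual in sample space
        g = np.zeros(N)
        g[rows] = r                          # adjoint of the partial DCT:
        x = x + idct(g, norm='ortho')        # scatter, then inverse transform
        keep = np.argpartition(np.abs(x), -k)[-k:]
        pruned = np.zeros(N)
        pruned[keep] = x[keep]               # hard threshold to k terms
        x = pruned
    return x

# Continuing the encoding sketch above, the server would recover the delta
# with iht_decode(y0 - y1, rows, N, k) and subtract it from its replica.
```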
Referring now to FIG. 5, wherein the mobile device and cloud server of FIG. 1 are illustrated in further detail, in accordance with example embodiments. In embodiments, compressive replication may be performed continuously and/or periodically in the background, and computation offloading may be controlled and managed by two system components. On the mobile device 502, a shim layer 506 may be introduced into the mobile application process (virtual memory address space) 504, operating on top of runtime environment 514, that (1) manages the allocation and deallocation of objects from the replicated memory blocks 508; (2) serves as the replication agent (encoder) 510; and (3) serves as the object method offloader 512 to intercept and manage the redirection of object method invocations to the cloud server 522. As earlier described, replicated objects in replicated memory blocks 508 may include variables and methods of the objects. On the cloud server 522, a daemon 524 may include (1) a replication agent (decoder) 530 to decode and apply updates to its replica memory block(s) 528 and (2) an object method servicer 532 to service the offloaded object method invocations.

In some embodiments of this system, where bidirectional replication is supported, strong consistency semantics may be used. In some other embodiments, issues of data consistency can be avoided by limiting to unidirectional replication from the mobile device to the cloud server. This may allow a simple versioning for each memory block (encoding the block increments the version), although other versioning approaches may be used as well. On the mobile device/user equipment, the shim layer 506 tags object method invocations with the version at the time of invocation. At the daemon 524, offloaded method invocations and replica updates may be queued and serviced in version order.

In addition to mobile application process 504, system runtime 514 and operating system 516, mobile device 502 may include one or more single- or multi-core processors, volatile and/or non-volatile memory, mass/persistent storage, input/output devices (such as a keyboard, a cursor control device, a display (e.g., a touch-sensitive display)), and/or wired or wireless networking/communication interfaces, known in the art. Volatile and/or non-volatile memory and mass/persistent storage may be referred to as computer-readable storage medium. Similarly, in addition to cloud daemon 524, cloud server 522 may include one or more single- or multi-core processors, volatile and/or non-volatile memory, mass/persistent storage, input/output devices (such as a keyboard, a cursor control device, a display (e.g., a touch-sensitive display)), and/or wired or wireless networking/communication interfaces, known in the art. These elements may vary and differ in size/capacity/capability, depending on whether they are employed on mobile device 502 or cloud server 522. In other words, except for the compressive offloading technology of the present disclosure, mobile device 502 and cloud server 522 may otherwise be any one of a number of mobile devices/servers known in the art. Examples of mobile devices may include, but are not limited to, wearable devices, mobile phones, e-readers, tablets, laptops, and so forth. Examples of servers may include, but are not limited to, standalone or blade servers.

Further, while for ease of understanding, shim layer 506 has been described as having replication agent (encoder) 510 and object method offloader 512, and cloud daemon 524 as having replication agent (decoder) 530 and object method servicer 532, in embodiments, replication agent (encoder) 510 and object method offloader 512 may share some of their common functions, or utilize services of operating system 516.
Similarly, replication agent (decoder) 530 and object method servicer 532 may share some of their common functions, or utilize services of the operating system (not shown) of cloud server 522.

Referring now to FIG. 6, wherein a flow diagram illustrating an example client-side replication process, in accordance with example embodiments, is shown. As illustrated, process 600 for replication of memory blocks may include operations performed at blocks 602-606. Process 600 may be performed, e.g., by the earlier described replication agent (encoder) of the shim layer of the mobile device. In alternate embodiments, process 600 may be performed with more or less operations, or with some operations combined.

As shown, process 600 may start at block 602. At block 602, a memory block may be compressively encoded, as earlier described. At block 604, the encoding may be sent to a cloud server. At block 606, process 600 may pause for t milliseconds (which may be a configuration parameter). On expiration of the pause/wait period, process 600 may return to block 602 and continue therefrom, as earlier described. Process 600 may operate continuously as described, until offloading is disabled or the host mobile device enters a sleep state or power-off state.

FIG. 7 illustrates an example offloading process, in accordance with example embodiments. As illustrated, process 700 for offloading of methods may include operations performed at blocks 702-714. Process 700 may be performed, e.g., by the earlier described offloader of the shim layer of the mobile device. In alternate embodiments, process 700 may be performed with more or less operations, or with some operations combined.

As shown, process 700 may start at block 702. At block 702, an invocation of an object method may be detected. Next, at block 704, a determination may be made on whether the object was allocated from a replicated memory block. If a result of the determination indicates the object was not allocated from a replicated memory block, process 700 may proceed to block 706. At block 706, the object method may be executed on the device. Thereafter, process 700 may end.

On the other hand, if a result of the determination indicates the object was allocated from a replicated memory block, process 700 may proceed to block 708. At block 708, the memory block may be encoded. Next, at block 710, the encoding may be sent to a cloud server. Then, at block 712, the object method invocation may be redirected to the cloud server. At block 714, a response from the cloud server containing results of the object method invocation may be received. Thereafter, process 700 may end.
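The two client-side flows just described amount to a few lines of control flow. The sketch below is schematic: the helper callables (encode, send, redirect, is_replicated, offloading_enabled) and the period value stand in for machinery the text describes elsewhere.

```python
import time

REPLICATION_PERIOD_S = 0.05   # the pause parameter t of block 606; value assumed

def replication_loop(block, encode, send, offloading_enabled):
    """Client-side replication (process 600)."""
    while offloading_enabled():           # stops when offloading is disabled
        send(encode(block))               # blocks 602-604
        time.sleep(REPLICATION_PERIOD_S)  # block 606: pause t, then repeat

def invoke(obj, method_name, args, is_replicated, encode, send, redirect):
    """Client-side offloading (process 700); invocations could also be tagged
    here with the current block version, per the versioning discussion above."""
    if not is_replicated(obj):                   # block 704
        return getattr(obj, method_name)(*args)  # block 706: execute on device
    send(encode(obj.memory_block))               # blocks 708-710: flush the block
    return redirect(obj, method_name, args)      # blocks 712-714: remote result
```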
FIG. 8 illustrates an example replication process of the replication agent (decoder) of the cloud server of FIG. 5, in accordance with example embodiments. As illustrated, process 800 for replication of memory blocks may include operations performed at blocks 802-810. Process 800 may be performed, e.g., by the earlier described replication agent (decoder) of the daemon of the cloud server. In alternate embodiments, process 800 may be performed with more or less operations, or with some operations combined.

Process 800 may start at block 802. At block 802, encoding y_i may be received. Next, at block 804, the replica memory block may be encoded to obtain encoding y_(i-1). At block 806, y' may be calculated as the difference y_(i-1) - y_i. At block 808, y' may be decoded to obtain Δs, which is equal to the difference s_(i-1) - s_i. Next, at block 810, the update Δs may be applied to the replica memory block.

FIG. 9 illustrates an example service process, in accordance with example embodiments. As illustrated, process 900 for servicing a redirected object method invocation may include operations performed at blocks 902-910. Process 900 may be performed, e.g., by the earlier described object method servicer of the daemon of the cloud server. In alternate embodiments, process 900 may be performed with more or less operations, or with some operations combined.

Process 900 may start at block 902. At block 902, an object method redirection may be received from a mobile device. Next, at block 904, the address of the object pointer may be translated from the device address space to the server address space. At block 906, the redirected object method may be executed. On execution of the redirected object method, at block 908, the results of the execution may be serialized. At block 910, the serialized result may be sent to the mobile device where the object method was initially invoked (prior to redirection). Thereafter, process 900 may end.

The offloading mechanism shares similarities with traditional RPC systems, but has a difference in that object marshaling, which is typically slow and therefore negatively impacts perceived system responsiveness, is supported but is not the primary way in which methods and method parameters are passed to the remote end. Instead, since objects in memory are already replicated in the background, and since the disclosed system components may control the entire replication and offloading process at both local and remote endpoints, the disclosed system may be able to pass pointers and perform address translation wherever appropriate. This means the disclosed system may, in some implementations, handle only heap-allocated pure objects. In some other implementations, it may handle both stack-allocated and heap-allocated objects that are either pure or composite.

The disclosed system may be configured to only perform compressive offloading if the end user equipment/user has given permission via a system preference setting on the mobile device/user equipment. But once permission is given, the system may decide when to perform the offloading. At a basic level, it only does so when proper network conditions exist. The system may include methods to determine whether the network conditions are proper. Beyond this, the decision to offload can also take into account other factors. For instance, the system might prefer to offload in order to stretch the device's battery budget, or the cloud provider might send a backpressure signal to limit offloading when its data centers are heavily loaded.
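The daemon-side counterparts are equally compact. The sign convention follows equation (2) above (decoding y_(i-1) - y_i yields s_(i-1) - s_i, which is subtracted from the replica), and the helper callables are again placeholders for machinery described elsewhere.

```python
def apply_replication_update(replica, y_i, encode, decode):
    """Server-side replication (process 800)."""
    y_prev = encode(replica)     # block 804: y_(i-1) from the current replica
    y_delta = y_prev - y_i       # block 806: y' = y_(i-1) - y_i
    delta_s = decode(y_delta)    # block 808: recover s_(i-1) - s_i
    return replica - delta_s     # block 810: apply the update

def service_redirection(request, translate, serialize, send):
    """Object method servicing (process 900)."""
    obj = translate(request.object_ptr)   # block 904: device -> server address
    result = getattr(obj, request.method_name)(*request.args)   # block 906
    send(serialize(result))               # blocks 908-910: reply to the device
```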
Moreover, although some of the examples described herein refer to mobile devices, the devices may be stationary as well. Targeting the iOS ecosystem may provide some technical advantages since it uses Objective-C, which is a superset of C. The disclosed system may thus have a level of access low enough to perform its own memory management. The shim layer may be implemented as a software library (libupshift) against which an iOS application links. The shim may implement a memory manager which makes an initial block allocation out of the app's process heap and then privately manages this block for object allocation and deallocation. Replicating this memory may be possible because (1) modern ARM processors (e.g., the iPad 3's Cortex-A9) are bi-endian and are therefore byte-order-compatible with x86 Amazon servers; and (2) the disclosed system may manage its own memory, so there is some control over byte alignment and padding. At present, the disclosed system may use pure Objective-C objects, which are allocated out of the memory by using the upshift_alloc object method instead of the Objective-C root object's alloc method, although other methods may be used. Whereas the alloc method allocates an object from the application process heap memory, the upshift_alloc object method allocates an object out of the memory that is privately managed by the shim layer. The default alloc may be overridden by using a replacement Objective-C category method. Redirecting method invocations may be handled by libupshift at runtime via method swizzling: Objective-C is late-binding, so method implementations may be replaced at runtime with a libupshift method that offloads the invocation over the network to the cloud daemon. When an iOS app is compiled, any objects allocated with upshift_alloc are also cross-compiled for the Amazon EC2 environment. In the disclosed system, app code requiring this cross-compiling is abstracted into separate modules and the cross-compiling is performed on those modules. The resulting library may be dynamically loaded by the daemon and would provide class definitions for objects that are in the disclosed system server's replica memory. Since Objective-C objects are actually just C structs, they can be made accessible on the daemon after address translation and pointer casting. The mobile device and cloud server may communicate using a custom application-layer network protocol (the UpShift protocol) that uses the Transmission Control Protocol (TCP) or another reliable transport protocol, such as a datagram protocol based on the User Datagram Protocol (UDP), as its underlying transport protocol. The transport layer may be encrypted (e.g., via TLS or SSL), and may thus provide cryptographic security for the application-layer UpShift protocol. The UpShift protocol header may include fields designating a unique protocol identifier, a protocol version number, a message type, and/or a message length. At a minimum, the UpShift protocol may support one or more of the following message types:
- Authentication request: This is sent from the shim to the cloud daemon and may transmit authentication credentials to the cloud infrastructure.
- Authentication response: This is sent from the cloud daemon in response to an authentication request from a shim. It may inform the shim whether the presented authentication credentials are valid.
- Initialization: This is sent from the shim to the cloud daemon.
It may specify the unique identifier for the shim, the number and size of each memory block to be replicated, the sampling matrix Φ and sparsity rate k/N for each memory block, and a list of object types that an application may instantiate. When received by the daemon, the daemon may (1) allocate and initialize the replica memory blocks, and (2) initialize its offloading environment by loading the dynamically linked libraries that define the listed object types.
- Shutdown: This is sent from the shim to the cloud daemon. It may specify the unique identifier for the shim layer and may cause the daemon to (1) unload any dynamically linked libraries it loaded on initialization, and (2) deallocate the memory blocks it allocated on initialization.
- Replication Update: This is sent from the shim to the cloud daemon. It may specify a memory block (via an identifier such as its start address) and its current version number, and contain the compressive sample (encoding) of the current memory block.
- Method Redirection: This is sent from the shim to the cloud daemon. It may contain the name or identifier of the object method being offloaded, and object method parameters (e.g., pointer addresses to other objects resident in a shim-managed memory block).
- Method Response: This is sent from the cloud daemon in response to a method redirection message. It may contain the return value of an offloaded method executed by the daemon or an error value.
The following provides some example performance results and/or tradeoffs, although other results may be realized as well. Replication latency limits the range of workload sizes that can be offloaded to the cloud; the lower we can drive latency, the wider the range and the more responsive the system will feel. However, minimizing replication latency is not straightforward because its constituent parts (encoding time, network transmission time, and decoding time) are not independent. For example, achieving a fast encoding time could give a worse compression ratio, which may drive up network bandwidth utilization. The following describes resource trade-offs that may provide reasonable performance. Since compressive (pDCT) encoding may incur no network overhead, a fair comparison might be against blind compression of an entire memory block by zlib or snappy. The compressed block is transmitted over the network to a cloud server, where it is decompressed and used to overwrite its replica. As another point of comparison, pDCT encoding is performed and then the resulting encoding is further compressed using snappy (pDCT+snappy). FIG. 2 shows the average encoding time on the iPad of each of the candidate encoding methods (zlib, snappy, pDCT, and pDCT+snappy) across different memory block sizes N (denoted as input size) with k/N = 0.01. For the pDCT methods, we took M = 7k samples, which is a very conservative sampling rate. Snappy encoding is fastest, and zlib is slowest, with pDCT and pDCT+snappy falling in the middle. For example, when N = 64 KB, snappy requires 4 ms, zlib 487 ms, and pDCT and pDCT+snappy roughly 53 ms. We use N = 64 KB throughout the rest of this evaluation because it may represent a reasonable memory block size and gives fair encoding and decoding times for all the methods. Next, decoding time is considered. Here, recall that compressive replication trades a low-complexity encoder for a high-complexity decoder.
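The encoder side of that trade is cheap because pDCT encoding is just a fast transform followed by subsampling, equivalent to applying the explicit Φ shown earlier but in O(N log N) time. The sketch below also shows the zlib and pDCT+snappy candidates from FIG. 2; it assumes numpy/scipy and the python-snappy binding (left commented out), and all names are illustrative.

```python
# Encoder-side candidates from FIG. 2, with illustrative names and parameters.
import zlib
import numpy as np
from scipy.fft import dct
# import snappy  # python-snappy, for the snappy and pDCT+snappy candidates

N = 64 * 1024                       # 64 KB memory block, as in the evaluation
k = int(0.01 * N)                   # sparsity rate k/N = 0.01
M = 7 * k                           # M = 7k samples, the conservative rate used
rows = np.random.default_rng(0).choice(N, size=M, replace=False)

def pdct_encode(block: bytes) -> np.ndarray:
    """O(N log N) fast transform, then keep M of the N DCT coefficients."""
    s = np.frombuffer(block, dtype=np.uint8).astype(np.float64)
    return dct(s, norm="ortho")[rows]

block = bytes(N)
y = pdct_encode(block)                      # pDCT encoding
z = zlib.compress(block)                    # blind zlib compression of the block
# y_snappy = snappy.compress(y.tobytes())   # pDCT+snappy pipeline
```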
Whereas zlib and snappy have negligible decoding times on an Amazon server, the compressive decoding takes on average 70 ms to decode N = 64 KB. Table 1 below summarizes the total latency estimates for snappy, zlib, and pDCT+snappy when we assume an 802.11g uplink transmission rate of 54 Mbps and estimate a one-way Internet routing delay of 10 ms per 1500-byte packet from the iPad to our Amazon server. Just looking at the total latencies, it is tempting to conclude that snappy has bested all the other methods. However, a different conclusion emerges when we also take into consideration the compression ratio. Here, pDCT+snappy outperforms snappy significantly, reducing bandwidth utilization by 52% while giving up only 116 ms in latency, and thus provides a better trade-off between latency and compression ratio than the other methods.

Table 1
Method        Enc   Tx   Dec   Tot   CR      Size
snappy          4   15    -     19   3.8:1   17.2
zlib          487   13    -    500   6.0:1   10.9
pDCT+snappy    53   12   70    135   7.3:1    9.0

Table 1: Breakdown of the worst-case total latency of a memory block synchronization update using either snappy, zlib, or pDCT+snappy. All latencies shown are in milliseconds. We assume an uplink transmission rate of 54 Mbps and a one-way routing delay of 10 ms. We also show the achieved compression ratio (CR) and the size in KB of a single encoded update for each scheme.

To demonstrate that our prototype system may produce practical performance gains, an example iOS application was used that performs handwriting recognition of, for example, Chinese characters, although other applications may be used as well. In this example, Chinese handwriting recognition was selected mainly because each character is written with a prescribed number of strokes; thus, stroke count provides a quantifiable measure of the computational complexity of the recognition task. The mobile application may be implemented based on the open source Zinnia and Tegaki projects, which provide a trained support vector machine model for recognizing traditional Chinese characters. The user handwrites a Chinese character on the tablet screen and the app captures the strokes as a series of stroke vectors in an in-memory object. This stroke vector object is then fed into a model evaluation object method, producing a classification and thus the Unicode character. When the stroke vector object is upshift_alloc'd, the data are replicated and the model evaluation method is offloaded to the cloud server. In a performance test, a comparison is made of the time required to recognize handwritten characters of increasing complexity locally on the iPad vs. offloaded to a cloud server. As shown in FIG. 3a, when the on-device (302) recognition times are compared to the offloaded (304) recognition times, compressive offloading actually increases the recognition time for the lowest-complexity (3-stroke) characters (on-device average: 922 ms, offloaded average: 1165 ms). This is expected, due to the offloading overhead. However, the figure also shows that on-device computation time scales poorly with complexity; as character complexity increases 10-fold (from a stroke count of 3 to 30), the average on-device recognition time increases 13.62-fold. When offloaded, the increase is just 1.65-fold.
Much of this attractive slow-growth behavior of compressive offloading can be attributed to the raw computing capacity of the Amazon EC2 server, but the point to emphasize here is that such computing power is only effectively utilized because compressive replication has a low overhead. Compressive offloading may provide significant performance acceleration. Even for moderately complex 20-stroke characters, the on-device recognition time averages 7,249 ms; compressive offloading averages just 1,687 ms, which is a substantial 4.2-fold speedup. Better still, the acceleration (306) increases as the complexity increases, as shown in FIG. 3b. For high-complexity 30-stroke characters, the speedup due to offloading is more than 6.5-fold. The difference to the app user may be striking, especially when more than one character must be recognized at a time (e.g., in a tract of handwritten text). While the acceleration achievable through compressive offloading may be considered substantial, to be practical it should not come at the cost of greater battery utilization. Thus, the battery efficiency of compressive offloading is considered, taking into account the power drawn for computing the encoding and transmitting it over Wi-Fi. FIG. 3c compares the battery utilization when this experiment is run on-device (308) and offloaded (310). With compressive offloading, the battery depletion rate is reduced substantially. In fact, as the linear regression lines show, with the same battery budget, compressive offloading allows the user to perform 60% more recognition tasks. Taken together, these results show that compressive offloading is a win-win for end users: it can provide significant advantages in both speed and battery efficiency for real-world mobile apps. For the cloud provider, computations that, for example, take the iPad an excruciatingly long 10 seconds to execute take barely a few hundred milliseconds. At scale, these small workloads can be load-balanced to fill slack anywhere in the data center. Example 1 may be a mobile computing device, comprising: one or more processors; and memory coupled with the one or more processors. The mobile computing device may further comprise a shim layer to compressively replicate memory blocks of the memory to a cloud server, compressively offload invocations of object methods of objects resident in a memory block of the memory to the cloud server, and to receive execution results of the invoked object methods. Example 2 may be example 1, wherein the shim layer may include a replication agent to compressively replicate memory blocks of the memory to the cloud server continuously every t units of time. Example 3 may be example 2, wherein the replication agent may apply a sampling matrix Φ to a memory block s to generate an encoding y of the memory block s. Example 4 may be example 3, wherein the replication agent may apply a partial discrete cosine transform matrix Φ to the memory block s to generate the encoding y of the memory block s. Example 5 may be example 3, wherein the replication agent may further transmit encoding y to the cloud server. Example 6 may be example 5, wherein the replication agent may further compress encoding y to reduce its size prior to transmitting encoding y to the cloud server.
Example 7 may be any one of examples 1-6, wherein the shim layer may include an object method offloader to redirect invocation of object methods to the cloud server, and to receive execution results of the invoked object methods. Example 8 may be example 7, wherein the object method offloader may determine whether an object of an object method being invoked is allocated from a replicated memory block. Example 9 may be example 8, wherein the object method offloader may cause the object method to be invoked and executed on the mobile computing device, on determination that the object of the object method being invoked is not allocated from a replicated memory block. Example 10 may be example 8, wherein the object method offloader may compressively encode a memory block associated with the object method being invoked, send the compressively encoded memory block to a cloud server, and redirect the object method to be invoked and executed on the cloud server, on determination that the object of the object method being invoked is allocated from a replicated memory block. Example 11 may be example 10, wherein the object method offloader may apply a sampling matrix Φ to a memory block s to generate a compressive encoding y of the memory block s. Example 12 may be example 11, wherein the object method offloader may apply a partial discrete cosine transform matrix Φ to the memory block s to generate the compressive encoding y of the memory block s. Example 13 may be example 11, wherein the object method offloader may further transmit the compressive encoding y to the cloud server. Example 14 may be example 13, wherein the object method offloader may further compress the compressive encoding y to reduce its size prior to transmitting encoding y to the cloud server. Example 15 may be a method for mobile computing, comprising: compressively replicating, by a mobile computing device, memory blocks of memory of the mobile computing device to a cloud server; and monitoring, by the mobile computing device, for object method invocations. The method may further comprise, on detection of an invocation of an object method, selectively redirecting, by the computing device, the invocation of the object method to the cloud server to cause the object method to be invoked and executed on the cloud server; and receiving, by the computing device, execution results of the object methods whose invocations are redirected to the cloud server. Example 16 may be example 15, wherein compressively replicating may comprise compressively replicating memory blocks of the memory to the cloud server continuously every t units of time. Example 17 may be example 16, wherein compressively replicating may comprise applying a sampling matrix Φ to a memory block s to generate an encoding y of the memory block s. Example 18 may be example 17, wherein compressively replicating may comprise applying a partial discrete cosine transform matrix Φ to the memory block s to generate the encoding y of the memory block s. Example 19 may be example 17, wherein compressively replicating further may comprise transmitting encoding y to the cloud server. Example 20 may be example 19, wherein compressively replicating further may comprise compressing encoding y to reduce its size prior to transmitting encoding y to the cloud server.
Example 21 may be any one of examples 15-20, wherein selectively redirecting may comprise determining whether an object of an object method being invoked is allocated from a replicated memory block. Example 22 may be example 21, wherein selectively redirecting may comprise causing the object method to be invoked and executed on the mobile computing device, on determination that the object of the object method being invoked is not allocated from a replicated memory block. Example 23 may be example 21, wherein selectively redirecting may comprise compressively encoding a memory block associated with the object method being invoked, sending the encoding of the memory block to the cloud server, and redirecting the object method to be invoked and executed on the cloud server, on determining that the object of the object method being invoked is allocated from a replicated memory block. Example 24 may be example 23, wherein compressively encoding may comprise applying a sampling matrix Φ to a memory block s to generate a compressive encoding y of the memory block s. Example 25 may be example 24, wherein compressively encoding may comprise applying a partial discrete cosine transform matrix Φ to the memory block s to generate the compressive encoding y of the memory block s. Example 26 may be example 24, wherein selectively redirecting further may comprise transmitting the compressive encoding y to the cloud server. Example 27 may be example 26, wherein selectively redirecting further may comprise compressing the compressive encoding y to reduce its size prior to transmitting encoding y to the cloud server. Example 28 may be one or more computer-readable media having instructions stored thereon that cause a mobile computing device, in response to execution by the mobile computing device, to: compressively replicate memory blocks of memory of the mobile computing device to a cloud server; selectively redirect invocation of object methods to the cloud server; and receive execution results of the invoked object methods whose invocations are redirected to the cloud server. Example 29 may be example 28, wherein to compressively replicate may comprise to compressively replicate memory blocks of the memory to the cloud server continuously every t units of time. Example 30 may be example 29, wherein to compressively replicate may comprise to apply a sampling matrix Φ to a memory block s to generate an encoding y of the memory block s. Example 31 may be example 30, wherein to apply may comprise to apply a partial discrete cosine transform matrix Φ to the memory block s to generate the encoding y of the memory block s. Example 32 may be example 30, wherein to compressively replicate further may comprise to transmit encoding y to the cloud server. Example 33 may be example 32, wherein to compressively replicate further may comprise to compress encoding y to reduce its size prior to transmitting encoding y to the cloud server. Example 34 may be any one of examples 28-33, wherein to selectively redirect may comprise to determine whether an object of an object method being invoked is allocated from a replicated memory block. Example 35 may be example 34, wherein to selectively redirect may comprise to cause the object method to be invoked and executed on the mobile computing device, on determination that the object of the object method being invoked is not allocated from a replicated memory block.
Example 36 may be example 34, wherein to selectively redirect may comprise to compressively encode a memory block associated with the object method being invoked, send the compressively encoded memory block to a cloud server, and redirect the object method to be invoked and executed on the cloud server, on determination that the object of the object method being invoked is allocated from a replicated memory block. Example 37 may be example 36, wherein to compressively encode may comprise to apply a sampling matrix Φ to a memory block s to generate a compressive encoding y of the memory block s. Example 38 may be example 37, wherein to apply may comprise to apply a partial discrete cosine transform matrix Φ to the memory block s to generate the compressive encoding y of the memory block s. Example 39 may be example 37, wherein to selectively redirect further may comprise to transmit the compressive encoding y to the cloud server. Example 40 may be example 39, wherein to selectively redirect further may comprise to compress the compressive encoding y to reduce its size prior to transmitting encoding y to the cloud server. Example 41 may be an apparatus for mobile computing, comprising: one or more processors; memory coupled with the one or more processors; means for compressively replicating memory blocks of the memory to a cloud server; means for monitoring for object method invocations; means for selectively redirecting the invocation of the object method to the cloud server to cause the object method to be invoked and executed on the cloud server, on detection of an invocation of an object method; and means for receiving execution results of the object methods whose invocations are redirected to the cloud server. Example 42 may be example 41, wherein means for compressively replicating may comprise means for compressively replicating memory blocks of the memory to the cloud server continuously every t units of time. Example 43 may be example 42, wherein means for compressively replicating may comprise means for applying a sampling matrix Φ to a memory block s to generate an encoding y of the memory block s. Example 44 may be example 43, wherein means for compressively replicating may comprise means for applying a partial discrete cosine transform matrix Φ to the memory block s to generate the encoding y of the memory block s. Example 45 may be example 43, wherein means for compressively replicating further may comprise means for transmitting encoding y to the cloud server. Example 46 may be example 45, wherein means for compressively replicating further may comprise means for compressing encoding y to reduce its size prior to transmitting encoding y to the cloud server. Example 47 may be any one of examples 41-46, wherein means for selectively redirecting may comprise means for determining whether an object of an object method being invoked is allocated from a replicated memory block. Example 48 may be example 47, wherein means for selectively redirecting may comprise means for causing the object method to be invoked and executed on the mobile computing device, on determination that the object of the object method being invoked is not allocated from a replicated memory block.
Example 49 may be example 47, wherein means for selectively redirecting may comprise means for compressively encoding a memory block associated with the object method being invoked, sending the encoding of the memory block to the cloud server, and means for redirecting the object method to be invoked and executed on the cloud server, on determining that the object of the object method being invoked is allocated from a replicated memory block. Example 50 may be example 49, wherein means for compressively encoding may comprise means for applying a sampling matrix Φ to a memory block s to generate a compressive encoding y of the memory block s. Example 51 may be example 50, wherein means for compressively encoding may comprise means for applying a partial discrete cosine transform matrix Φ to the memory block s to generate the compressive encoding y of the memory block s. Example 52 may be example 50, wherein means for selectively redirecting further may comprise means for transmitting the compressive encoding y to the cloud server. Example 53 may be example 52, wherein means for selectively redirecting further may comprise means for compressing the compressive encoding y to reduce its size prior to transmitting encoding y to the cloud server. Example 54 may be a cloud server, comprising: one or more processors; memory coupled with the one or more processors; and a cloud daemon to receive encodings of memory blocks of memory compressively replicated from one or more mobile devices, and invocations of object methods redirected from the one or more mobile devices; to decode the encodings and update corresponding memory blocks on the cloud server; and to invoke and execute the object methods on the cloud server, and return execution results of the invoked object methods to the one or more mobile devices. Example 55 may be example 54, wherein the cloud daemon may comprise a replication agent to receive compressively encoded memory blocks of memory from one or more mobile devices, encode corresponding replica memory blocks on the cloud server, determine and decode to recover updates to the replica memory blocks, and apply the recovered updates to the replica memory blocks on the cloud server, continuously every t units of time. Example 56 may be example 55, wherein the replication agent may apply a sampling matrix Φ to a replica memory block to generate an encoding y_{i-1} of the replica memory block. Example 57 may be example 56, wherein the replication agent may apply a partial discrete cosine transform matrix Φ to the replica memory block to generate the encoding y_{i-1} of the replica memory block. Example 58 may be example 56, wherein the replication agent may further receive a compressive encoding y_i of the replicated memory block, and calculate a compressively encoded update y' = y_{i-1} − y_i to the replica memory block. Example 59 may be example 58, wherein the replication agent may further decode the compressively encoded update y' to recover an update Δs to the replica memory block, and to apply the update Δs to the replica memory block. Example 60 may be example 59, wherein the compressive encoding y_i is compressed to reduce its size, and the replication agent may further decompress the compressed compressive encoding y_i before calculating the compressively encoded update y'.
Example 61 may be any one of examples 54-60, wherein the cloud daemon includes an object method servicer to receive invocations of object methods redirected from the one or more mobile devices; and to invoke and execute the object methods, and return execution results of the invoked object methods to the one or more mobile devices. Example 62 may be example 61, wherein the object method servicer may translate an object pointer to a location in an address space of a mobile device to an object pointer to a location in an address space of the cloud server. Example 63 may be example 61, wherein the object method servicer may serialize the execution results of the invoked object methods, and return the serialized execution results to the one or more mobile devices. Example 64 may be a method for cloud computing, comprising: receiving, by a cloud server, encodings of memory blocks of memory compressively replicated from one or more mobile devices; decoding, by the cloud server, the encodings, and updating corresponding replica memory blocks of the cloud server; receiving, by the cloud server, invocations of object methods redirected from the one or more mobile devices; invoking and executing, by the cloud server, the object methods; and returning, by the cloud server, execution results of the invoked object methods to the one or more mobile devices. Example 65 may be example 64, wherein receiving encodings may comprise receiving compressively encoded memory blocks of memory from one or more mobile devices; encoding corresponding replica memory blocks on the cloud server; determining and decoding to recover updates to the replica memory blocks; and applying the recovered updates to the replica memory blocks on the cloud server, continuously every t units of time. Example 66 may be example 65, wherein encoding corresponding replica memory blocks may comprise applying a sampling matrix Φ to a replica memory block to generate an encoding y_{i-1} of the replica memory block. Example 67 may be example 66, wherein applying may comprise applying a partial discrete cosine transform matrix Φ to the replica memory block to generate the encoding y_{i-1} of the replica memory block. Example 68 may be example 66, wherein determining updates may comprise receiving a compressive encoding y_i of the replicated memory block, and calculating a compressively encoded update y' = y_{i-1} − y_i to the replica memory block. Example 69 may be example 68, wherein decoding may comprise decoding the compressively encoded update y' to recover an update Δs to the replica memory block. Example 70 may be example 69, wherein the compressive encoding y_i is compressed to reduce its size, and decoding further may comprise decompressing the compressed compressive encoding y_i before calculating the compressively encoded update y'. Example 71 may be example 64, wherein invoking may comprise translating an object pointer to a location in an address space of a mobile device to an object pointer to a location in an address space of the cloud server. Example 72 may be any one of examples 64-71, wherein returning may comprise serializing the execution results of the invoked object methods, and returning the serialized execution results to the one or more mobile devices.
Example 73 may be one or more computer-readable media having instructions stored thereon that cause a cloud server, in response to execution by the cloud server, to: receive encodings of memory blocks of memory compressively replicated from one or more mobile devices; decode the encodings and update corresponding replica memory blocks on the cloud server; receive invocations of object methods offloaded from the one or more mobile devices; and invoke and execute the object methods, and return execution results of the invoked object methods to the one or more mobile devices. Example 74 may be example 73, wherein to receive encodings, to decode and to update may comprise to receive compressively encoded memory blocks of memory from one or more mobile devices, to encode corresponding replica memory blocks on the cloud server, to determine and decode to recover updates to the replica memory blocks, and to apply the recovered updates to the replica memory blocks on the cloud server, continuously every t units of time. Example 75 may be example 74, wherein to apply may comprise to apply a sampling matrix Φ to a replica memory block to generate an encoding y_{i-1} of the replica memory block. Example 76 may be example 75, wherein to apply may comprise to apply a partial discrete cosine transform matrix Φ to the replica memory block to generate the encoding y_{i-1} of the replica memory block. Example 77 may be example 75, wherein to determine may comprise to further receive a compressive encoding y_i of the replicated memory block, and calculate a compressively encoded update y' = y_{i-1} − y_i to the replica memory block. Example 78 may be example 77, wherein to decode may comprise to decode the compressively encoded update y' to recover an update Δs to the replica memory block. Example 79 may be example 78, wherein the compressive encoding y_i is compressed to reduce its size, and to decode further may comprise to decompress the compressed compressive encoding y_i before calculating the compressively encoded update y'. Example 80 may be example 73, wherein to receive invocations and to invoke and execute may comprise to translate an object pointer to a location in an address space of a mobile device to an object pointer to a location in an address space of the cloud server. Example 81 may be any one of examples 73-80, wherein to return may comprise to serialize the execution results of the invoked object methods, and transmit the serialized execution results to the one or more mobile devices. Example 82 may be a cloud server, comprising: one or more processors; memory coupled with the one or more processors; and means for receiving encodings of memory blocks of memory compressively replicated from one or more mobile devices; means for decoding, by the cloud server, the encodings, and updating corresponding replica memory blocks of the memory; means for receiving invocations of object methods redirected from the one or more mobile devices; means for invoking and executing, by the cloud server, the object methods; and means for returning execution results of the invoked object methods to the one or more mobile devices.
Example 83 may be example 82, wherein means for receiving encodings may comprise means for receiving compressively encoded memory blocks of memory from one or more mobile devices; means for encoding corresponding replica memory blocks on the cloud server; means for determining and decoding to recover updates to the replica memory blocks; and means for applying the recovered updates to the replica memory blocks on the cloud server, continuously every t units of time. Example 84 may be example 83, wherein means for encoding corresponding replica memory blocks may comprise means for applying a sampling matrix Φ to a replica memory block to generate an encoding y_{i-1} of the replica memory block. Example 85 may be example 84, wherein means for applying may comprise means for applying a partial discrete cosine transform matrix Φ to the replica memory block to generate the encoding y_{i-1} of the replica memory block. Example 86 may be example 84, wherein means for determining updates may comprise means for receiving a compressive encoding y_i of the replicated memory block, and means for calculating a compressively encoded update y' = y_{i-1} − y_i to the replica memory block. Example 87 may be example 86, wherein means for decoding may comprise means for decoding the compressively encoded update y' to recover an update Δs to the replica memory block. Example 88 may be example 87, wherein the compressive encoding y_i is compressed to reduce its size, and means for decoding further may comprise means for decompressing the compressed compressive encoding y_i before calculating the compressively encoded update y'. Example 89 may be example 82, wherein means for invoking may comprise means for translating an object pointer to a location in an address space of a mobile device to an object pointer to a location in an address space of the cloud server. Example 90 may be any one of examples 82-89, wherein means for returning may comprise means for serializing the execution results of the invoked object methods, and means for returning the serialized execution results to the one or more mobile devices. One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term "machine-readable medium" refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively, or additionally, store such machine instructions in a transient manner, such as for example, as would a processor cache or other random access memory associated with one or more physical processor cores. To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user, and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including, but not limited to, acoustic, speech, or tactile input. Other possible input devices include, but are not limited to, touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive track pads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like. The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results.
Other implementations may be within the scope of the following claims. Furthermore, the specific values provided in the foregoing are merely examples and may vary in some implementations. Although various aspects of the invention are set out in the claims, other aspects of the invention comprise other combinations of features from the described implementations with the features of the claims, and not solely the combinations explicitly set out in the claims. It is also noted herein that while the above describes example implementations of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications that may be made without departing from the scope of the present invention as defined in the appended claims.
An apparatus and method for verifying a process step in the fabrication of an integrated circuit device is implemented. A ring oscillator is fabricated on the dice constituting the integrated circuit device being manufactured. The ring oscillator structure is adapted for sensitizing the ring oscillator to variations in the process step being verified. During test of the wafer containing the dice, the frequency of the ring oscillator is scanned across the wafer for each die under test. Deviations in the ring oscillator frequency from a preselected nominal value delimit regions of the wafer for which the process step is marginal.
What is claimed is: 1. A process step verification method comprising the steps of:providing a ring oscillator on each die being verified, said ring oscillator having a structure adapted for sensitizing said ring oscillator to a predetermined process step; measuring a period of said ring oscillator for the die under test; and comparing said period with a preselected specification. 2. The method of claim 1 wherein said step of comparing said period comprises the steps of:determining a difference between said period of said ring oscillator and a preselected nominal value; and comparing said difference with a preselected deviation specification. 3. The method of claim 1 further comprising the steps of:identifying each die in which said period is outside said preselected specification; and mapping each such die to a corresponding physical location on a wafer, whereby one or more portions of said wafer having a marginal performance characteristic are identified. 4. The method of claim 1 wherein said ring oscillator structure includes a plurality of cascaded inverters, each inverter of said plurality having an output coupled to an input of a next inverter by a polysilicon interconnect having a preselected length.5. The method of claim 4 wherein said process step comprises a salicidation step.6. The method of claim 4 wherein said ring oscillator is sensitive to a variation in a distributed resistance of the polysilicon interconnects.7. The method of claim 3 further comprising the step of logging said period of said ring oscillator to a database, and wherein said step of identifying said die includes logging a position of said die to said database in association with said period.8. A data processing system for process step verification comprising:circuitry operable for measuring a period of a ring oscillator for a die under test, wherein said ring oscillator is provided on each die being verified, said ring oscillator having a structure adapted for sensitizing said ring oscillator to a predetermined process step; and circuitry operable for comparing said period with a preselected specification. 9. The system of claim 8 wherein said circuitry operable for comparing said period comprises:circuitry operable for determining a difference between said period of said ring oscillator and a preselected nominal value; and circuitry operable for comparing said difference with a preselected deviation specification. 10. The system of claim 8 further comprising:circuitry operable for identifying each die wherein said period is outside said preselected specification; and circuitry operable for mapping each such die to a corresponding physical location on a wafer, whereby one or more portions of said wafer having a marginal performance characteristic are identified. 11. The system of claim 8 wherein said ring oscillator structure includes a plurality of inverters, each inverter of said plurality having an output coupled to an input of a next inverter by a polysilicon interconnect having a preselected length.12. The system of claim 11 wherein said process step comprises a salicidation step.13. The system of claim 11 wherein said ring oscillator is sensitive to a variation in a distributed resistance of the polysilicon interconnects.14. 
The system of claim 10 further comprising circuitry operable for logging said period of said ring oscillator to a database, and wherein said circuitry operable for identifying said die comprises circuitry operable for logging a position of said die to said database in association with said period.15. A computer program product embodied in a storage medium, the program product for process step verification including a program of instructions for performing the method steps of:measuring a period of a ring oscillator for a die under test, wherein said ring oscillator is provided on each die being verified, said ring oscillator having a structure adapted for sensitizing said ring oscillator to a predetermined process step; and comparing said period with a preselected specification. 16. The program product of claim 15 wherein said step of comparing said period comprises:determining a difference between said period of said ring oscillator and a preselected nominal value; and comparing said difference with a preselected deviation specification. 17. The program product of claim 15 further including instructions for performing the steps of:identifying each die wherein said period is outside said preselected specification; and mapping each such die to a corresponding physical location on a wafer, whereby one or more portions of said wafer having a marginal performance characteristic are identified. 18. The program product of claim 15 wherein said ring oscillator structure includes a plurality of cascaded inverters, each inverter of said plurality having an output coupled to an input of a next inverter by a polysilicon interconnect having a preselected length.19. The program product of claim 18 wherein said process step comprises a salicidation step.20. The program product of claim 18 wherein said ring oscillator is sensitive to a variation in a distributed resistance of the polysilicon interconnects.21. The program product of claim 17 further comprising a program of instructions for performing the step of logging said period of said ring oscillator to a database, and wherein said instructions for performing the step of identifying said die comprise instructions for performing the step of logging a position of said die to said database in association with said period.
TECHNICAL FIELD
The present invention relates in general to the fabrication of integrated circuit devices, and in particular to verifying the fabrication of salicide layers and other structural features in an integrated circuit device.
BACKGROUND INFORMATION
Metal oxide semiconductor transistors used in modern integrated circuit devices typically employ polysilicon gate electrodes. The conductivity of the polysilicon is increased by the formation of a metal salicide layer on the polysilicon. Typically, titanium (Ti) is used, forming a TiSi2 salicide layer, although other metals, for example cobalt (Co), may also be used. (The deposition of the layer is typically done using a self-aligned salicidation process, and the resulting layer is typically referred to as a salicide layer or simply the salicide.) Poor salicide formation during fabrication leads to reduced performance of the integrated circuit device. This may be particularly acute over the shallow trench isolation (STI) step. The STI step isolates the complementary active elements in the complementary metal oxide semiconductor (CMOS) devices. Poor salicide formation over the STI step can lead to short but high-resistance paths. Furthermore, the formation of salicide can vary across the surface of the wafer, whereby the formation of marginal salicide may be restricted to only a portion of the wafer surface. Existing test processes are not sensitive to this coverage variability. These typically employ a polysilicon serpentine deposited on a portion of the wafer between dies, and measure the resistance of the polysilicon. Consequently, there is a need in the art for apparatus and methods to characterize salicide layers in CMOS integrated circuit devices, and in particular methods and apparatus which permit characterization at the die level.
SUMMARY OF THE INVENTION
The aforementioned needs are addressed by the present invention. Accordingly, there is provided, in a first form, a process step verification method. The method includes providing a ring oscillator on each die being verified, the ring oscillator having a structure adapted for sensitizing the ring oscillator to a predetermined process step. A period of the ring oscillator for the die under test is measured and compared with a preselected specification. There is also provided, in a second form, a data processing system for process step verification. The system includes circuitry operable for measuring a period of a ring oscillator for a die under test, in which the ring oscillator is provided on each die being verified. Each ring oscillator has a structure adapted for sensitizing the ring oscillator to a predetermined process step. The system also contains circuitry operable for comparing the period with a preselected specification. Additionally, there is provided, in a third form, a computer program product embodied in a storage medium. The program product for process step verification constitutes a program of instructions for performing the steps of a method which includes measuring a period of a ring oscillator for a die under test, in which the ring oscillator is provided on each die being verified. The ring oscillator has a structure adapted for sensitizing the ring oscillator to a predetermined process step. The method also includes comparing the period with a preselected specification. The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. 
Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIGS. 1A, 1B and 1C illustrate an inverter circuit that may be used with the present invention;
FIG. 2A illustrates, in partial schematic form, a ring oscillator which may be used in an embodiment of the present invention;
FIG. 2B illustrates, in partial schematic form, a ring oscillator which may be used in another embodiment of the present invention;
FIG. 2C illustrates, in partial schematic form, a ring oscillator which may be used in an additional embodiment of the present invention;
FIGS. 3A and 3B illustrate, in flow chart form, a verification methodology in accordance with an embodiment of the present invention; and
FIG. 4 illustrates, in block diagram form, a data processing system implemented in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
A ring oscillator is fabricated on each die constituting the integrated device being manufactured. As is well known in the art, the dice are formed on a wafer of semiconductor material, typically silicon, each wafer containing a plurality of integrated circuit dice. During wafer test, a scan of the frequency of the ring oscillator for each die under test is made across the wafer. Deviations in the ring oscillator frequency from a preselected nominal value delimit regions of the wafer in which the salicidation or other process step is marginal. In the following description, numerous specific details are set forth, such as specific word or byte lengths, etc., to provide a thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention may be practiced without such specific details. In other instances, well-known circuits have been shown in block diagram form in order not to obscure the present invention in unnecessary detail. For the most part, details concerning timing considerations and the like have been omitted inasmuch as such details are not necessary to obtain a complete understanding of the present invention and are within the skills of persons of ordinary skill in the relevant art. Refer now to the drawings, wherein depicted elements are not necessarily shown to scale and wherein like or similar elements are designated by the same reference numeral through the several views. Refer now to FIG. 1A, illustrating inverter 100, which may be used in the present invention, the principles of which are described further below. Inverter 100 includes p-type metal oxide semiconductor field effect transistor (PFET) 102 and n-type metal oxide semiconductor field effect transistor (NFET) 104. The common drains of PFET 102 and NFET 104 form the output of inverter 100. Between the input to the inverter and the gates, 106 and 108, of PFET 102 and NFET 104, respectively, are parasitic resistances arising from the underlying resistivity of the polysilicon forming the input electrodes and gate contacts of the devices and the interconnections therebetween. These parasitic resistances, which are distributed resistances, are schematically indicated in FIG. 1A by the lumped elements, R. Additionally, parasitic capacitances couple the gate electrodes, 106 and 108, to the supply rails. These are shown in FIG. 1A as capacitances C' and C''.
Other sources of parasitic capacitance may also occur in the fabrication of an integrated circuit device, and may also be exploited to verify process integrity at the die level, as will be discussed further below in conjunction with an alternative embodiment of the present invention. When verifying salicide formation in accordance with the principles of the present invention, the propagation delay through inverter 100 may principally be determined by the time constants of the RC network formed by the distributed resistances R and the capacitances C' and C''. Inverter 100, and the nature of the parasitic elements therein, may be further understood by referring now to FIG. 1B, illustrating in simplified form a plan view of an inverter structure corresponding to the inverter of FIG. 1A. A p-active region 110, corresponding to PFET 102, FIG. 1A, and an n-active region 112, corresponding to NFET 104, are separated by a shallow trench isolation (STI) region 114. A polysilicon gate interconnect electrode 116 forms the common gate electrode of inverter 100, and bears input pad 118. Polysilicon gate electrode 116 incorporates a salicide layer (not shown in FIG. 1B) to provide a low-resistivity gate electrode. Lumped resistors R are overlaid on the structure in FIG. 1B to serve as a mnemonic device representing the parasitic, distributed resistance associated with the polysilicon gate electrode 116, and particularly the salicide layer formed thereon. For simplicity, the capacitors C' and C'' of FIG. 1A are not shown in FIG. 1B. Refer now to FIG. 1C, illustrating, in simplified form, a cross-sectional view through the inverter structure in FIG. 1B. In FIG. 1C is depicted salicide layer 120 on polysilicon electrode 116. Field oxide 122 is deposited within STI region 114 and isolates the active regions of the inverter. Polysilicon electrode 116 spans field oxide 122. Again, for illustrative purposes, lumped resistors R1, R1', R2, R2', and R3 are overlaid on the structure in FIG. 1C as a mnemonic device representing the distributed resistances associated with the salicide layer 120 over active regions 110 and 112, STI edges 122, and STI region 114, respectively. The resistances associated with the salicide layer 120 are affected by the quality of the formation of the layer, and a marginal fabrication step may give rise to an enhanced resistance being contributed by the corresponding portion of salicide layer 120. For example, if the portion of the salicide layer 120 over one or both of the STI edges 122 is thinned, the corresponding resistances R2 and R2' may be increased over the nominal value of these resistances. Such imperfections in the salicide layer 120 can degrade performance of the integrated circuit device being fabricated. Moreover, process variations giving rise to marginal layer formation may be localized over portions of the wafer being manufactured. As previously discussed, present methods for verifying polysilicon integrity may be insensitive to these effects. Localized imperfections in the salicide layer, such as layer 120 in FIG. 1C, may be detected by providing, on each die, a ring oscillator formed from inverter elements 100. Such a ring oscillator 200 is illustrated in FIG. 2A. Ring oscillator 200 includes an odd number, n, of inverters 100. Output 202 of ring oscillator 200 constitutes an oscillatory signal having a period determined by the propagation delays through the n inverters 100. In turn, the propagation delays are determined by the RC time constant of the parasitic resistance and capacitance network of the corresponding inverter 100, as discussed in conjunction with FIG. 1A.
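As a rough illustration of this dependence, the following sketch estimates the ring period from a single-pole RC delay per stage (t_pd ≈ 0.69·R·C, so T = 2·n·t_pd); the stage count and component values are illustrative assumptions, not values taken from the disclosure.

```python
# A first-order estimate of how salicide resistance shifts the ring period,
# assuming a single-pole RC delay per stage; all values are illustrative.
LN2 = 0.6931  # an RC stage's step response reaches mid-rail after ~0.69*R*C

def ring_period(n_stages: int, r_ohms: float, c_farads: float) -> float:
    """T = 2 * n * t_pd for an n-stage ring oscillator."""
    return 2.0 * n_stages * LN2 * r_ohms * c_farads

nominal = ring_period(101, r_ohms=2.0e3, c_farads=5.0e-15)   # well-formed salicide
marginal = ring_period(101, r_ohms=6.0e3, c_farads=5.0e-15)  # thinned salicide, R tripled
print(f"nominal {nominal * 1e9:.2f} ns, marginal {marginal * 1e9:.2f} ns")
```

In this model a tripling of the distributed resistance triples the period, which is exactly the kind of deviation from the preselected nominal value that the wafer scan described below is designed to flag.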
In turn, the propagation delays are determined by the RC time constant of the parasitic resistance and capacitance network of the corresponding inverter 100, as discussed in conjunction with FIG. 1A. In particular, in implementing a ring oscillator sensitive to the salicidation step, the interconnect between inverters 100 in FIG. 2A may be fabricated from salicide-bearing polysilicon in which the interconnect has a preselected length. The length of the polysilicon interconnect is selected such that the variation in ring oscillator frequency due to variations in the RC time constants of the parasitic elements is swamped by variations in the polysilicon resistance. Thus, a ring oscillator formed from inverters having imperfections in the formation of the salicide layer, such that the salicide layer is thinned over a portion of the inverter structure, for example, will exhibit an output signal having a period that is increased over the period of a ring oscillator formed from inverters having the nominal salicide layer. Thus, by measuring the oscillation period, or, equivalently, the frequency, of the output of the ring oscillator, a device having a marginal salicidation may be detected. Additionally, in an alternative embodiment in accordance with the present invention, the printing of the polysilicon lines themselves, in vertical or horizontal configurations, may be verified by fabricating the ring oscillator with vertically or horizontally drawn inverters, respectively. Furthermore, by scanning the period measurement across the wafer for each die, localized regions of marginal salicidation may be ascertained.

Additionally, other process variations giving rise to marginal devices may be determined in similar fashion. In general, an integrated circuit element may have other sources of parasitic resistance and capacitance in addition to those discussed in conjunction with FIGS. 1A-1C. These may include, for example, source/drain diffusion resistance and junction capacitance, metal interconnect resistance, and contact resistance in vias. Process variations can also give rise to localized regions on a wafer having degraded performance because the process variation results in the enhancement of a parasitic element. For example, thinning of an oxide insulating layer may give rise to an increase in the capacitance between conducting elements separated by the oxide layer, with a concomitant degradation in the performance of the associated integrated circuit device. Other examples include gate oxide thickness variation, increased contact resistance due to poor cleans between layers or pattern deformation, and enhanced capacitance due to reduced interconnect spacing. Such parasitic elements form RC networks that load the outputs of active devices in the integrated circuit, and a plurality of inverters forming a ring oscillator will experience additional propagation delays due to these parasitic networks as well.

Such a ring oscillator is illustrated in FIG. 2B. Each inverter 212 is loaded by networks R'C' and R''C'', which are connected between the corresponding inverter output and the supply rails. In general, the aforesaid RC networks contribute propagation delays in addition to those arising from the parasitic elements within inverters 212 analogous to those discussed hereinabove in conjunction with FIGS. 1A-1C. However, the principles of the present invention may also be applied to determine process variations associated with the RC networks in FIG. 2B.
In such an embodiment, a ring oscillator 210 is provided on each die on the wafer as previously described. The design of the inverter elements constituting the ring oscillator is selected such that the delays introduced by the parasitic elements associated with the inverter structure itself are rendered sufficiently small relative to the delays arising from the networks R'C' and R''C''. For example, an inverter structure may be fabricated on the die with shorter polysilicon interconnects, whereby the propagation delays associated with the distributed resistance of the polysilicon are small compared to the delays introduced by the networks as depicted in FIG. 2B. Design features of the ring oscillator stages may then be selected to sensitize the ring oscillator to process steps of interest. Embodiments sensitive to contact resistance, fan-out, gate versus junction capacitance, and metallization capacitance may be implemented. Similar to the sensitivity to salicide formation discussed above, the inverters may be coupled together using a contact chain formed by interlinking a string of contacts, to provide a ring oscillator on each die that is sensitive to contact resistance and thereby verify the process steps used in contact fabrication. In an alternative embodiment, device timing sensitivity to fan-out may be verified by fabricating a ring oscillator as in FIG. 2A in which each of the n stages includes a plurality of inverters having inputs connected in parallel and having the parallel inputs coupled to the output of one of the inverters of the plurality of the preceding stage, as shown in FIG. 2C. Another alternative embodiment, for verifying gate versus junction capacitance effects, may be implemented via a ring oscillator formed from cascaded gate elements having a transistor stack forming the input path. By fabricating the ring oscillator with the stages cascaded through the top transistor in the stack (with the other gate inputs tied to a supply rail as appropriate to form an inverter) and again with the stages cascaded through the bottom transistor in the stack, the sensitivity of device speed to gate versus junction capacitance may be verified. Referring again to FIG. 2B, the parasitic capacitances C' and C'' may be dominated by the capacitance between metallization by tying the metal to the inverter outputs, in yet another embodiment of a ring oscillator in accordance with the principles of the present invention. Typically, in the fabrication of an integrated circuit device, there are multiple metal levels, and the intermetal capacitances C' and C'' may, alternatively, be implemented in the ring oscillator as line-to-line (i.e., same-level) capacitances or top-to-bottom (i.e., different-level) metallization capacitances, depending on the process step to be verified. By scanning across the wafer and determining the periods of the ring oscillators, regions of the wafer having marginal device performance may be identified.
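As an illustration of one of these sensitizing choices, the sketch below (hypothetical values, not taken from the disclosure) shows how fan-out multiplies the capacitive load each stage must drive, and therefore the stage delay and the ring period:

# Toy model (hypothetical values): with a fan-out of N, each stage drives N
# inverter inputs in parallel, so the load capacitance, and hence the stage
# delay and the ring period, scales roughly linearly with N.

def ring_frequency(n_stages: int, r_drive: float, c_in: float, fanout: int) -> float:
    """Return the oscillation frequency, in hertz, of the loaded ring."""
    t_pd = 0.69 * r_drive * (fanout * c_in)   # first-order RC stage delay
    return 1.0 / (2 * n_stages * t_pd)

N_STAGES = 51      # odd number of stages (hypothetical)
R_DRIVE = 1.0e3    # driver resistance, ohms (hypothetical)
C_IN = 350e-15     # input capacitance per driven inverter, farads (hypothetical)

for fo in (1, 2, 4):
    print(f"fan-out {fo}: ~{ring_frequency(N_STAGES, R_DRIVE, C_IN, fo) / 1e6:.1f} MHz")

Comparing measured frequency ratios against the expected ratios would expose timing sensitivity to fan-out.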
The principles of the present invention may be further understood by referring now to FIG. 3A, illustrating, in flow chart form, a portion of verification process 300 in accordance with the present invention. In step 302, a ring oscillator is provided on each die of the wafer being fabricated. The structure of the inverters constituting each ring oscillator is implemented in accordance with a predetermined set of parameters which are selected to sensitize the ring oscillator to variations in the fabrication process steps being verified.

In step 304, testing of the wafer begins with a first die. The period of the ring oscillator on the current die is measured in step 306, and in step 308 it is determined whether the ring oscillator measurement is within preselected screening limits. These limits are chosen to eliminate ring oscillators that have failed completely, or are otherwise unrelated to the process of interest. For example, limits of 10 kHz to 1000 MHz may be set for a ring oscillator that, at normal process conditions, oscillates at 10 MHz. If the current die is within the screening limits, in step 310 the ring oscillator measurement is logged to a database. The die may be identified and logged based on its position on the wafer. The position may be specified in terms of a Cartesian (x, y) coordinate system defined for the wafer. Additionally, each wafer is identified by a serial number which may also be associated with the log being generated.

If the die under test falls outside of the screening limits, step 310 is bypassed, and in step 312 it is determined if the current die under test is the last die to be tested. If not, in step 314 the process proceeds to the next die and loops back to step 306. The process then loops over steps 306-312 until all dice have been tested, and then step 312 proceeds by the "Yes" branch to step 316. In step 316, the ring oscillator data logged in step 310 are analyzed.

Refer now to FIG. 3B, illustrating, in flow chart form, analysis step 316 in further detail. In step 330, a statistical sample of the ring oscillator measurement data is selected. The statistical sample provides a reference datum against which the ring oscillator measurements, in particular the ring oscillator periods (or, equivalently, frequencies) determined in step 306, may be compared in order to verify the fabrication step under test, as discussed below. A statistical sample of ring oscillator measurements may be taken across a sample size that spans a single wafer, or, alternatively, a wafer lot (typically twenty-five wafers) or a collection of lots. The size of the sample depends on the effect of interest. For example, lot-sized samples, or samples spanning a collection of lots, may typically be used when examining overall yield trends, whereas wafer-sized samples may typically be used when considering process-dependent signals, as in the present invention. The statistical sample will provide a set of measured oscillator periods having a distribution of measured periods or, equivalently, frequencies. The measured frequencies will have a set of statistical properties, such as a mean frequency (or, equivalently, period) and a width of the statistical distribution of frequencies about the mean. The width of the distribution is typically expressed in terms of the standard deviation.

In step 332, the analysis begins with the first die logged. In step 334, it is determined if the measured frequency falls within a selected specification, which specification may be expressed in terms of the statistical properties of the sample selected in step 330. For example, in step 334 it may be determined if the frequency of the ring oscillator on the die being analyzed is within a standard deviation of the mean frequency of the statistical sample. Alternatively, the specification may be defined in terms of a multiple of the standard deviation, or of first and third quartile limits, or, in yet another alternative embodiment, a ten percent (10%)/ninety percent (90%) limit. (These latter two are specific measures that are generally referred to in the statistics art as quantiles.) As would be recognized by an artisan of ordinary skill, the principles of the present invention apply independently of the particular statistical measure selected.
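The following sketch shows one way the flow of FIGS. 3A and 3B could be realized in software; it is illustrative only, and the screening limits, sample data, and function names are hypothetical rather than taken from the disclosure. Measurements are screened against coarse limits, logged with their die coordinates, and then compared against a specification of one standard deviation about the mean, derived from the wafer-wide sample.

from statistics import mean, stdev

# Hypothetical screening limits (FIG. 3A, step 308) for an oscillator expected
# to run near 10 MHz at nominal process conditions.
SCREEN_LO_HZ = 10e3      # 10 kHz
SCREEN_HI_HZ = 1000e6    # 1000 MHz
SIGMA_LIMIT = 1.0        # specification: within one standard deviation of the mean

def screen_and_log(measurements):
    """Steps 304-314: keep only dice whose readings pass the coarse screen.

    `measurements` is an iterable of (x, y, frequency_hz) tuples, one per die.
    """
    return [(x, y, f) for x, y, f in measurements
            if SCREEN_LO_HZ <= f <= SCREEN_HI_HZ]   # steps 308 and 310

def analyze(log):
    """Steps 330-342: flag dice whose frequency lies outside the specification."""
    freqs = [f for _, _, f in log]
    mu, sigma = mean(freqs), stdev(freqs)            # step 330: sample statistics
    return [(x, y) for x, y, f in log
            if abs(f - mu) > SIGMA_LIMIT * sigma]    # steps 334 and 336

# Hypothetical wafer data: most dice near 10 MHz, one marginal region slow,
# and one dead oscillator that the coarse screen removes.
wafer = [(0, 0, 10.1e6), (0, 1, 9.9e6), (1, 0, 10.0e6),
         (1, 1, 7.2e6),                 # marginal salicidation: longer period
         (2, 0, 10.2e6), (2, 1, 0.0)]   # failed ring oscillator

print("defective region:", analyze(screen_and_log(wafer)))   # step 342: (x, y) map

In this hypothetical run, the die at (1, 1) is flagged as lying more than one standard deviation below the sample mean, while the completely failed oscillator at (2, 1) never enters the statistics at all.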
If, in step 334, the frequency of the ring oscillator on the die under test is outside of the selected specification, then the coordinates of the die are logged to the database in step 336, or otherwise recorded. If, however, the ring oscillator frequency on the die under test is within specification, in step 338 it is determined if the current die is the last die logged into the database. If not, the methodology of the present invention proceeds to the next die in step 340, and then loops over the remaining dice logged into the database in step 310, FIG. 3A. Note that in an alternative embodiment, all dice on a wafer may be logged, and the preliminary screening for failed ring oscillators performed in the analysis step 316. An artisan of ordinary skill would appreciate that such an alternative embodiment would be within the spirit and scope of the present invention. If, however, in step 338, all the logged dice have been screened, in step 342 a map of the defective region on the corresponding wafer is output, which map may be generated from the x-y coordinates of the dice logged in step 336. In this way, a fabrication process step being verified that is marginal over a localized region of the wafer may be identified. Additionally, those dice that are out of specification, and which would have reduced performance, may be culled without sacrificing the entire wafer.

The method of the present invention may be performed by a data processing system coupled to the wafer under test. In FIG. 4, an example is shown of a data processing system 400 which may be used for the invention. The system has a central processing unit (CPU) 410, which is coupled to various other components by system bus 412. Read only memory ("ROM") 416 is coupled to the system bus 412 and includes a basic input/output system ("BIOS") that controls certain basic functions of the data processing system 400. Random access memory ("RAM") 414, I/O adapter 418, and communications adapter 434 are also coupled to the system bus 412. I/O adapter 418 may be a small computer system interface ("SCSI") adapter that communicates with a disk storage device 420. Communications adapter 434 interconnects bus 412 with the wafer under test. Input/output devices are also connected to system bus 412 via user interface adapter 422 and display adapter 436. Keyboard 424, trackball 432, and mouse 426 are all interconnected to bus 412 via user interface adapter 422. Display monitor 438 is connected to system bus 412 by display adapter 436. In this manner, a user is capable of inputting to the system through the keyboard 424, trackball 432, or mouse 426, and receiving output from the system via display 438.

Preferred implementations of the invention include implementations as a computer system programmed to execute the method or methods described herein, and as a computer program product. According to the computer system implementation, sets of instructions for executing the method or methods are resident in the random access memory 414 of one or more computer systems configured generally as described above. Until required by the computer system, the set of instructions may be stored as a computer program product in another computer memory, for example, in disk drive 420 (which may include a removable memory such as an optical disk or floppy disk for eventual use in the disk drive 420).
Further, the computer program product can also be stored at another computer and transmitted, when desired, to the user's workstation by a network or by an external network such as the Internet. One skilled in the art would appreciate that the physical storage of the sets of instructions physically changes the medium upon which they are stored so that the medium carries computer-readable information. The change may be electrical, magnetic, chemical, biological, or some other physical change. While it is convenient to describe the invention in terms of instructions, symbols, characters, or the like, the reader should remember that all of these and similar terms should be associated with the appropriate physical elements.

Note that the invention may be described using terms such as comparing, validating, selecting, identifying, or other terms that could be associated with a human operator. However, for at least a number of the operations described herein which form part of at least one of the embodiments, no action by a human operator is desirable. The operations described are, in large part, machine operations processing electrical signals to generate other electrical signals.

Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
The invention relates to a semiconductor device having metal posts for stress relief at flatness discontinuities. Furthermore, the invention discloses a semiconductor device (100) which includes a first body (102) having a first coefficient of thermal expansion (CTE) and a first surface (102a), a third body (111) having a third CTE, a third surface (111d) facing the first surface, and a fourth surface at an angle with respect to the third surface defining an edge of the third body (111), and a second body (103) having a second CTE higher than the first and the third CTE, the second body (103) contacting the first and the third surfaces. A post having a fourth CTE lower than the second CTE transects the second body and contacts the edge.
1. A semiconductor device, comprising:
a first body, wherein the first body includes metal and has a first coefficient of thermal expansion (a first CTE) and a first surface;
a third body, wherein the third body includes metal and has a third CTE and a third surface facing and partially contacting the first surface, and a fourth surface oriented at an angle relative to the third surface to define an edge of the third body between the third surface and the fourth surface;
a second body, wherein the second body includes a polymeric material and has a second CTE that is higher than the first CTE and the third CTE, the second body contacting the first surface and the third surface; and
a post having a fourth CTE lower than the second CTE, the post transecting the second body and contacting the edge.
2. The device of claim 1, wherein the post has top and bottom surfaces and sides forming a shape selected from the group consisting of: circular, hexagonal, and rectangular.
3. The device of claim 2, wherein the post comprises metal.
4. The device of claim 1, wherein the edge comprises a corner or a peak.
5. The device of claim 1, wherein the first CTE is 16×10⁻⁶ K⁻¹, the second CTE is 33×10⁻⁶ K⁻¹, and the third CTE is 16×10⁻⁶ K⁻¹.
6. The device of claim 1, further comprising a substrate attached to the third body.
7. The device of claim 6, further comprising a polymeric encapsulating compound covering a portion of the third body, the second body, and the substrate.
8. The device of claim 1, wherein the third body includes an under-bump metallization layer of copper or copper alloy covering the first surface of the first body and touching a portion of the post.
9. A semiconductor device, comprising:
a semiconductor chip having an integrated circuit covered by a metal layer, the metal of the metal layer having a first coefficient of thermal expansion (a first CTE);
a polymer layer on the metal layer, the polymer layer having a window with sidewalls, the window exposing the surface of the metal layer, the polymer layer having a second CTE that is higher than the first CTE;
a metal bump having a neck touching the window sidewalls and contacting the surface of the metal layer, and a body extending over the surface of the metal layer to an edge, the metal of the metal bump having a third CTE lower than the second CTE; and
a post transversely intersecting the polymer layer at the edge, the post contacting the edge and the metal layer, the material of the post having a fourth CTE lower than the second CTE.
10. The device of claim 9, further comprising an attachment material connecting the metal bump to a substrate.
11. The device of claim 10, further comprising a polymeric encapsulation compound covering portions of the metal bump and at least a portion of the metal layer, the polymer layer, and the substrate.
12. The device of claim 9, wherein the post has a first surface and a second surface, with sides between the first surface and the second surface forming a shape selected from the group consisting of: circular, hexagonal, and rectangular.
13. The device of claim 9, wherein the metal layer includes copper.
14. The device of claim 9, wherein the metal bump comprises copper.
15. The device of claim 9, wherein the polymer layer is comprised of polyimide.
16. The device of claim 9, wherein the post comprises a circular shape and wherein the post comprises copper or benzocyclobutene.
17. 
A method of manufacturing a semiconductor device, the method comprising:
depositing a polyimide layer on a metal layer of a semiconductor chip;
opening a first window in the polyimide layer to expose the metal layer;
forming a post in the first window; and
forming a metal bump, with the post aligned with an edge of the metal bump;
wherein the coefficient of thermal expansion of the polyimide layer is higher than the coefficient of thermal expansion of the post.
18. The method of claim 17, further comprising, before opening the first window:
depositing a photoresist layer on the polyimide layer; and
opening a second window in the photoresist layer, the second window being aligned with the first window.
19. The method of claim 18, wherein forming the post comprises:
sputtering an adhesive material on the metal layer in the first window;
electroplating a first metal layer on the adhesive material;
electroplating a second metal layer on the first metal layer; and
removing the photoresist layer.
Semiconductor device with pillars for stress relief at surface discontinuities

CROSS REFERENCE

This application claims the benefit of Provisional Application No. 62/441,152, filed on December 30, 2016, the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

The present disclosure relates generally to the field of semiconductor devices and processes, and more particularly to the structure and fabrication of packaged semiconductor devices having solder bumps protected by a barrier that blocks moisture intrusion and relieves stress.

BACKGROUND

Among plastic-encapsulated quad flat no-lead (QFN) packages for semiconductor devices, the HotRod™ package series has recently emerged as thermally enhanced, and therefore particularly suitable for high-power devices, and has gained high popularity. HotRod packages use a metal leadframe and solder pad designed with a power bus and thick copper leads; the semiconductor chip terminals are connected to the leadframe via solder balls or bumps rather than via wire bonds. The solder bump connection of the chip is conventionally called flip-chip assembly. The leadframe construction results in a cost-effective advanced molded package with improved electrical and thermal performance relative to traditional leaded packages. Additionally, the elimination of wire bonds between chip terminals and leads improves application efficiency by minimizing electrical parasitics. Package size is determined by the size of the chip being packaged and the number of signal pins.

In addition to flip-chip mounting of the chip inside the package, the external terminals of the semiconductor package are attached to pads of the substrate or board through an array of solder bumps. For these device connections, as well as chip connections within packages, it has been known since the introduction of flip-chip assembly technology that, due to the large difference in coefficient of thermal expansion (CTE) between semiconductor device materials (such as silicon, metal leads, and metal pads) and plastic materials (such as packages and substrates), solder joints may be subjected to severe thermomechanical stress.

The reliability of solder connections can be tested by subjecting the assembled semiconductor device to temperature cycling, in which the assembled package is subjected to rapid temperature swings between -55°C and +125°C. These temperature fluctuations have been shown to subject the assembly to both compressive and tensile stresses and can lead to metal fatigue and microcracking at the joints. Microcracks may in turn be severely aggravated by the presence of moisture.

Moisture-induced failure of plastic-encapsulated semiconductor devices has been observed and studied for many years. For example, plastic packages made from epoxy-based molding compounds can be penetrated by discrete water molecules over a period of about a day. However, as long as there is good adhesion between the plastic compound within the package and the device components (semiconductor chip, metal lead frame, substrate, etc.), so that penetrating water molecules cannot accumulate to form a water film on a free surface, this penetration will not lead to problems.

In contrast, when some interfaces have delaminated and a thin film of water has been able to form, a rapid rise in temperature may evaporate the water and induce swelling internal pressure between the component and the encapsulating material.
The expansion pressure can be high enough to bulge the packaging material at thin spots and eventually cause cracks through the packaging material. As an example, when a packaged device is heated in order to reflow the device solder balls and attach the device to a board, the temperature may rise rapidly beyond the boiling point of water. In the literature, the phenomenon of localized package cracking caused by vapor pressure is known as the popcorn effect. The popcorn effect, along with the observed device failures, has been a frustrating reliability issue for many years.

Various approaches have been attempted to prevent device delamination and package cracking by enhancing adhesion between different device components (packaging compounds, semiconductor chips, substrates, lead frames, etc.). These efforts include: chemically purifying the molding compound; activating the leadframe metal surface just before the molding process, such as by plasma; enhancing the affinity of the leadframe metal for the polymeric compound by oxidizing the base metal or by depositing a specific metal layer (such as rough tin); and coining the leadframe to create dimples and other three-dimensional surface features and roughness to improve interlocking of the encapsulation material with the surface of the enclosed component. However, the results of all these efforts have been only partial and limited.

SUMMARY

One embodiment discloses a semiconductor device. The semiconductor device includes: a first body having a first coefficient of thermal expansion (CTE) and a first surface; a third body having a third CTE, a third surface facing the first surface, and a fourth surface angled relative to the third surface to define an edge of the third body; and a second body having a second CTE higher than the first CTE and the third CTE, the second body contacting the first and third surfaces. A post having a fourth CTE lower than the second CTE traverses the second body and contacts the edge.

Another embodiment discloses a semiconductor device. The semiconductor device includes a semiconductor chip having an integrated circuit topped by a metal layer, the metal of the layer having a first coefficient of thermal expansion (CTE). A polymer layer is on the metal layer and has a window with sidewalls. This window exposes the surface of the metal layer. The polymer layer has a second CTE that is higher than the first CTE. The semiconductor device includes a metal bump having a neck and a body, where the neck touches the window sidewalls and contacts the surface of the metal layer. The body extends, on a surface parallel to the metal layer, up to an edge. The metal of the bump has a third CTE that is lower than the second CTE. A pillar cuts across the polymer layer at the edge. The pillar contacts this edge and the metal layer. The metal of the pillar has a fourth CTE that is lower than the second CTE.

Yet another embodiment discloses a method for fabricating a semiconductor device. A polyimide layer is deposited on the metal layer of a semiconductor chip. Then, a first window is opened in the polyimide layer to expose the metal layer.
Pillars are formed in the window, and metal bumps are then formed such that the pillars align with the edges of the metal bumps.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1A is a cross-section of a portion of a semiconductor device with metal bumps attached to a substrate.
Figure 1B is a cross-section detailing the position of the pillar at locations where thermomechanical stress is concentrated.
Figure 2 shows a photomicrograph of a crack through a polymer layer caused by a concentration of thermomechanical stress at the location of a discontinuous surface in the device of Figure 1A.
Figure 3 shows a photomicrograph of a crack through a metal layer caused by thermomechanical stress concentrations in the device of Figure 1A.
Figure 4 depicts steps in an exemplary fabrication process flow to pattern a polymer layer on the top metal of a semiconductor device, the pattern including windows through the polymer layer to expose the metal surface.
Figure 5 illustrates the process step of spin-coating the patterned polymer layer with a photoresist layer.
Figure 6 illustrates the process step of forming openings in the photoresist layer that are nested with the windows in the polymer layer.
Figure 7 depicts the step of electroplating a metal layer in each window of the polymer layer according to an exemplary manufacturing process flow.
Figure 8 illustrates the process step of removing the photoresist layer of Figure 5, thereby exposing the top metal surface in the window of the polymer layer.
Figure 9 shows the process step of spin-coating another layer of photoresist on the device surface, followed by opening a window exposing the top metal and the surface of the pillar.
Figure 10 depicts the steps of sputtering an adhesive under-bump metal layer onto the exposed surfaces of the pillar, top metal, and adjacent polymer layer, and electroplating metal bumps on the under-bump metal layer in accordance with the manufacturing process flow.
Figure 11 illustrates the process step of stripping the photoresist layer of Figure 9.
Figure 12 illustrates a step of an alternative manufacturing process flow in which a photoresist layer is spin-coated onto the top metal of a semiconductor device.
Figure 13 depicts the step of opening a window in the photoresist layer where a pillar is to be placed, according to the alternative manufacturing process flow.
Figure 14 illustrates the step of electroplating a pillar in the photoresist window according to the alternative manufacturing process flow.
Figure 15 shows the step of stripping the photoresist layer of Figure 12, thereby exposing the plated pillar for further processing.
Figure 16 shows a 3D view of a portion of a semiconductor device according to an embodiment.

DETAILED DESCRIPTION

Analysis of semiconductor devices that failed in highly accelerated stress tests involving exposure to moisture as well as temperature cycling and electrical biasing revealed that, in the vast majority of failures, the root cause was cracks in layers made from polymeric compounds or in adjacent metal layers, causing electrical short circuits. Statistically, most cracks are found at device locations where adjacent surfaces of the component parts are discontinuous or vary substantially.
Such discontinuities include the necks of bumps (which narrow the bumps attached to other components) and protrusions and peaks in otherwise smooth two- and three-dimensional surfaces.

Modeling of thermomechanical stresses under test and operating conditions revealed that discontinuities in two and three dimensions cause thermomechanical stresses to be concentrated in narrow areas and volumes. While elastic and plastic materials are typically characterized by their ability to tolerate uniform stress distributions or small stress concentrations, persistent exposure or concentration maxima are likely to exceed the tolerance limits of the material and initiate microcracking.

The microcrack problem in the high-stress regions that occur in a body with a high coefficient of thermal expansion bounded by a body with a lower CTE is addressed by transecting the high-CTE body with a solid pillar at a location where the surface exhibits a discontinuity. The approach has been confirmed by data and modeling of thermomechanical stresses in semiconductor devices under test and operating conditions.

As an example from semiconductor technology, many types of devices have integrated circuit chips capped by a copper layer with a first CTE, which is protected by a polyimide layer of uniform thickness; the polyimide has a second CTE that is higher than the first CTE. The polyimide layer has a surface parallel to the copper layer and a window with sidewalls such that the window exposes the surface of the copper layer. To connect the semiconductor chip to the substrate, the windows of the polyimide layer are mated with copper bumps (with the same CTE as the first CTE) having a neck and a body. The neck touches the window sidewalls and contacts the surface of the copper layer in the window; in addition, the body extends to a straight edge on the surface of the polyimide layer parallel to the copper layer, wherein the body is discontinuously angled away from the polyimide layer.

Thermomechanical stresses are concentrated at the location of the bump edges during testing and operation. In order to suppress the occurrence of microcracks in the polyimide layer, a cylindrical or cube-shaped copper pillar (with the same CTE as the first CTE) crosses the polyimide layer at the edge of the bump and contacts the edge of the bump and the copper layer.

Figure 1A is a composite diagrammatic illustration showing exemplary embodiments of the present disclosure on the left and some examples of the problems solved by the disclosed techniques and structures on the right. Figure 1A depicts a cross-section of a portion 100 of an exemplary semiconductor device attached to a substrate 120. The semiconductor device includes a chip 101 that is typically made of silicon and enclosed in a package 140 that is typically made of a polymeric material with inorganic fillers, such as an epoxy-based molding compound. The substrate 120 may be, for example, a metal leadframe, a carrier laminated with several layers of metal and insulators, or an organic substrate. Regarding the CTE of the materials mentioned above, silicon is known for its small CTE (approximately 2×10⁻⁶ K⁻¹), while the CTE of substrate 120 can be almost an order of magnitude larger.

As shown in Figure 1A, on the surface of the chip 101 is a metal layer 102, which has a surface facing away from the semiconductor chip 101. Since layer 102 is preferably made of copper, it is often referred to as the COA (Copper Above Everything) layer.
It is referred to herein as the first body. The surface of the COA layer facing away from the chip 101 is referred to herein as the first surface. The first surface faces layer 103.

Layer 103 is made of a polymeric material, preferably polyimide. Layer 103 is referred to herein as the second body. Layer 103 provides passivation, protection, and stress buffering of the semiconductor chip. The layer 103 follows the first surface smoothly and has a thickness 103b preferably in the range of 5 μm to 10 μm. The thickness 103b is uniform across the first surface. Polyimide has a CTE of approximately 33×10⁻⁶ K⁻¹.

As shown in Figure 1A, polymer layer 103 has a window of width 103a allowing access to and contact with layer 102. The metal layer 110 covers the area opened by the window and extends further along the surface of the polymer layer 103, beyond the window sidewalls, to the overall width 111a. Layer 110 is made from an alloy of refractory metals such as titanium and tungsten and may have a uniform thickness of approximately 100 nm. It acts as a seed layer and provides good contact and adhesion to copper layer 102 for the metal bumps or pillars to be plated on layer 110. Therefore, layer 110 is often referred to as an under-bump metal (UBM) layer.

In Figure 1A, bumps 111 are formed across the width 111a of the UBM layer 110. The bumps are preferably made of copper or copper alloy. The bumps 111, together with the UBM layer 110, are referred to herein as the third body. As shown in Figure 1A, the third body follows the polymer layer 103 smoothly. The surface of the third body facing the polymer layer 103 is referred to herein as the third surface.

Bumps 111 have been deposited on seed layer 110 (see process flow below) so that they form sidewalls approximately perpendicular to polymer layer 103 where layer 110 stops. The third surface therefore has an edge and is discontinuous at this location, where the sidewall starts at an angle 111b of approximately 90 degrees to the third surface. In the exemplary device of Figure 1A, the sidewalls remain parallel to each other (at a distance 111a) along most of the bump height 111c. In other devices, the bump sidewall spacing may not remain constant. As mentioned, bump 111 exhibits a corner 111b formed with UBM layer 110 and therefore with polymer layer 103. In the exemplary device of Figure 1A, the corners are formed by bump surfaces that are approximately perpendicular to each other; in other devices, the surfaces may form a sharper or more obtuse angle. However, in all device examples, the surface contours are discontinuous; they change abruptly. The metal of bumps 111 is preferably copper, with a CTE of approximately 16×10⁻⁶ K⁻¹.

As further shown in Figure 1A, bumps 111 are attached to substrate 120 by reflowed solder 130. After chip 101 is attached and connected to substrate 120, device 100 is encapsulated in package 140, which may include a polymeric encapsulating compound. The polymeric encapsulating compound can be filled with inorganic fillers to bring the compound's high CTE closer to the low CTE of silicon.

After the polymeric compound is cured (polymerized), the reliability of the adhesion of the packaged device components can be tested.
Standardized test conditions stress the device to confirm that device parts made of dissimilar materials remain reliably adhered to each other under operating conditions, in order to rule out the possibility of delamination, power interruption, and moisture intrusion into the packaged device, with corrosion, splitting, and failure as deleterious consequences.

Among the standardized reliability tests is the highly accelerated stress test, in which packaged devices composed of components with different CTEs are first exposed to a humid environment and then exposed to rapid temperature cycles between -55°C and +125°C. These temperature fluctuations subject the assembly to both compressive and tensile stresses. The stresses caused by temperature cycling are called thermomechanical stresses. Thermomechanical stress is a continuous stress as long as the assembled surfaces of device components with different CTEs are continuous. It should be noted that when the surface is continuous, there are no stress singularities. On the other hand, modeling and experience have shown that when parts are made of materials with different CTEs and one of the parts contains discontinuities with surface boundary lines (for example, where there are sudden changes in surface topography), thermomechanical stresses are concentrated and reach peak values. When stress exceeds the mechanical strength of one of the materials being tested, the stress can lead to cracks in insulator components, and to fatigue and eventual cracking of metal locations under high stress.

For devices such as the one in Figure 1A, high stress concentrations have been found in a triple-point volume where bodies made of materials with different coefficients of thermal expansion meet and, in addition, at least one body undergoes sudden changes in topology (e.g., features such as edges, corners, or peaks). In Figure 1A, the three bodies are a bump 111 made of a metal such as copper, a layer 103 made of a polymeric material such as polyimide, and a package 140 made of a polymeric compound filled with inorganic fillers. The bumps 111 have sharp edges.

Figure 2 is a cross-sectional view showing a triple-point volume and a crack through a polymer layer. In Figure 2, the polymer layer is polyimide, and the crack is the result of thermomechanical stress buildup during temperature cycling testing.

As the cross-sectional micrograph in Figure 2 demonstrates, high-stress locations at the edge of the bump can cause cracks through the insulating layer. The crack will not stop at the interface with the metal (e.g., copper layer 102) but will continue along the interface of polymer layer 103 and copper layer 102 and then extend deeper into the body of layer 102. This extension is demonstrated in the photomicrograph of Figure 3.

According to the Griffith energy-balance concept of crack formation in brittle solids, a change in the length of a nascent crack or notch does not change the sum of all energies; in other words, the sum of surface energy and mechanical energy must remain constant. This means that for crack propagation, the surface energy can generally increase, but the mechanical energy must decrease. Mechanical energy itself consists of the sum of the strain potential energy stored in the material and the potential energy of the externally applied loading system. That is, as long as any of these energies can assume a lower value, the energy released can be invested in creating a larger surface and propagating the crack.
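For orientation only (the material values below are hypothetical and not taken from the disclosure), the Griffith relation can be used to estimate the failure stress at which a nascent crack of length c propagates in a brittle layer, σ_f ≈ √(2Eγ/(πc)), where E is Young's modulus and γ is the free surface energy per unit area:

from math import pi, sqrt

def griffith_failure_stress(youngs_modulus_pa: float,
                            surface_energy_j_per_m2: float,
                            crack_length_m: float) -> float:
    """Griffith estimate: sigma_f = sqrt(2 * E * gamma / (pi * c))."""
    return sqrt(2 * youngs_modulus_pa * surface_energy_j_per_m2
                / (pi * crack_length_m))

E = 2.5e9       # Young's modulus of a polyimide-like layer, Pa (hypothetical)
GAMMA = 0.05    # free surface energy, J/m^2 (hypothetical)
for c in (0.1e-6, 1e-6, 10e-6):      # nascent crack lengths of 0.1, 1, and 10 um
    sigma = griffith_failure_stress(E, GAMMA, c)
    print(f"crack {c * 1e6:4.1f} um -> failure stress ~ {sigma / 1e6:5.1f} MPa")

Consistent with the proportionality stated below, the estimated failure stress scales with the square root of E·γ and falls off as the square root of the initiating crack length, so longer nascent cracks propagate at ever lower applied stress.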
Crack propagation can be significantly accelerated by absorbed water molecules and water films. The molecules of the polymeric material of layer 103 (e.g., polyimide) have a tendency to adsorb water molecules and form a thin film of water. Applying the Griffith equilibrium requirement to the semiconductor device of Figure 1A, a nascent crack can propagate spontaneously as long as the applied uniform stress (for example, during operation or testing of the semiconductor device) is greater than the failure stress (see Figure 2), and continues until it is stopped by the end of the test (see Figure 3) or inhibited by a barrier. The failure stress at the crack front is in turn proportional to the square root of the free surface energy per unit area and to the square root of Young's modulus (a material constant), and inversely proportional to the square root of the length of the initiating crack.

The solution to the problems caused by stress cracks is to use metal barriers or pillars. An example of such a barrier, pillar 150, is illustrated in Figures 1A and 1B. The location and size of pillars 150 can be selected to simultaneously reduce initial stresses, prevent any nascent cracks, and block any moisture from infiltrating into the polymer layer or body. The most effective size and location of pillars 150 may be determined by modeling the thermomechanical stresses experienced by the assembly during temperature cycling of the packaged semiconductor device or by experimental temperature cycling (e.g., through highly accelerated stress testing). Pillars 150 are positioned at discontinuities in the surface.

The cross-sectional view of Figure 1A shows the pillar 150 on the left side of the metal bump 111, between the metal layer 102 and the metal bump 111, which includes the refractory metal layer 110. Pillars 150 generally surround the portions of metal bumps 111 that are in contact with metal layer 102. Viewed along the axis parallel to the arrow marked 111c, the pillar will appear as a ring with an approximately circular, hexagonal, or rectangular shape, depending on the shape of the metal bump 111.

The device portion of Figure 1A illustrates a pillar 150 embedded in a semiconductor device at a location identified through modeling as the peak location of thermomechanical stress. Figure 1B shows a similar device location in more detail. Metal layer 102 (e.g., the copper COA layer) has a first CTE and a first surface 102a. Facing the metal layer 102, the metal bumps 111 are attached to the metal layer 102 using a refractory metal layer 110 that has good adhesion to the metal layer 102. Metal bump 111 has a third CTE. The metal bumps 111 form a step of height 103b and length 103c to accommodate an extension 103c of the protective and stress-absorbing polymer layer 103, preferably made of polyimide. Along the extension, the polymer layer contacts the first surface 102a of the metal layer 102 and the bump surface 111d. Polymer layer 103 may be referred to herein as the second body, having a second CTE. Regarding the modeling example of Figure 1B, height 103b is approximately 5 μm and length 103c is between approximately 8 μm and 9 μm (with a gentle onset of height 103b, length 103c decreases to effective length 103d). Bump surface 111d is along length 103d and faces surface 102a.

The first and third CTEs may be different or they may be the same; since the first and third bodies are typically metal, they will be similar in most cases. However, since the second body is preferably a polymeric compound, most devices have a second CTE that is greater than the first and third CTEs.
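To give a feel for the magnitudes involved, the sketch below estimates the mismatch strain accumulated over one temperature cycle as the CTE difference times the temperature swing; the CTE values are those quoted in this disclosure, while the fully constrained model and the polyimide modulus are simplifying assumptions of this illustration:

# Illustrative mismatch-strain estimate under a simplified, fully constrained
# model. CTE values are those quoted in the text; the modulus is hypothetical.
CTE_POLYIMIDE = 33e-6    # K^-1 (second body)
CTE_COPPER = 16e-6       # K^-1 (first and third bodies)
DELTA_T = 125 - (-55)    # temperature cycle swing, 180 K

mismatch_strain = (CTE_POLYIMIDE - CTE_COPPER) * DELTA_T
print(f"mismatch strain per cycle: {mismatch_strain:.2e}")      # ~3.1e-3

E_POLYIMIDE = 2.5e9      # Pa, hypothetical Young's modulus for polyimide
stress_bound = E_POLYIMIDE * mismatch_strain    # upper bound if fully constrained
print(f"fully-constrained stress bound: {stress_bound / 1e6:.1f} MPa")   # ~7.7 MPa

Repeated cycling at stresses of this order, concentrated at a discontinuity such as corner 111b, is what drives the microcracking that the pillar is intended to suppress.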
Length 103d discontinues abruptly at corner 111b. The pillar 150 is embedded in the polymer layer 103 so that it has the same height as the layer thickness 103b and thus touches the metal layer 102. In the example of Figure 1B, pillar 150 occupies a certain length (e.g., 5 μm) over bump 111 and another length (e.g., 5 μm) over the polymeric compound of package 140. In other devices, pillar 150 may be completely above bump 111. For ease of manufacturing (see below), pillar 150 is preferably made of a metal such as copper. In other devices, pillar 150 is made from benzocyclobutene (BCB).

Pillars 150 reduce the thermomechanical stress between metal bumps 111, metal layer 102 (covering the silicon), and polymeric encapsulation material 140. In addition, the pillars 150 block the absorption of water molecules through the polymeric compound (polyimide) of the layer 103 and the penetration of a water film toward the bumps 111. Experience has confirmed that when the semiconductor device contains pillars 150 at high-stress locations, no cracks are present.

Other embodiments are processes for low-cost fabrication of pillars 150 at locations where concentrations of thermomechanical stress on the device must be rendered harmless. An exemplary process flow is illustrated in Figures 4-7 and continues in Figures 8-11. Another exemplary process flow is shown in Figures 12-15 and continues in Figures 8-11.

As illustrated in Figure 4, an insulating polymer layer 103 (preferably polyimide) is deposited (preferably by spin-coating techniques) on the surface of a top metal layer 102 (preferably copper) placed on a silicon chip 101. The layer 103 preferably has a uniform thickness 103b between about 5 μm and 10 μm, the layer 103 having a surface facing away from the metal layer 102. The layer 103 is then patterned by opening windows into the layer. The location and size of the windows are selected based on device needs. In one illustrative example, each of the plurality of windows 450 of Figure 4 has a length of 10 μm, and window 403a is configured such that it allows a length 103a of the surface of the top metal layer 102 to be exposed. In an example, length 103a may be approximately 25 μm, which is reduced from the 35 μm length often found in conventional devices. A length of 25 μm therefore leaves a polymer overlap 404 of about 18 μm to 19 μm, instead of the traditional 13 μm to 14 μm. To achieve the goals of stress reduction and moisture barrier in the finished device, it may be advantageous to open multiple windows 450 through the polymer layer 103 at strategic locations in the device.

As illustrated in Figure 5, a photoresist material is spin-coated onto the patterned polymer layer 103 to form layer 510. Using processes for masking, exposing, developing, and etching, a window is opened in photoresist layer 510 that is aligned with window 450. The resulting opening is designated 650 in Figure 6.

As illustrated in Figure 7, a metal layer 150 (preferably made of copper) is electroplated to a height equal to the thickness 103b of the polymer layer 103. The surfaces of layer 150 and layer 103 are thus coplanar. The plated layer 150 acts as a metal (copper) pillar in the finished device to relieve thermomechanical stress and block moisture intrusion.
It should be mentioned that the adhesion of the plated pillars to the underlying metal is preferably enhanced by a sputtering process before the electroplating process. For the sputtering process, the device with open window 650 is transferred to the vacuum and plasma chamber of the sputtering equipment. While cooled within the chamber, preferably below ambient temperature, the device is plasma-cleaned. In addition to cleaning the surface of adsorbed films (especially the water monolayer), the plasma also accomplishes some roughening of the surface; both effects enhance the adhesion of the sputtered metal layer. At least one layer 750 of metal is then sputtered onto the exposed surface of layer 102 at a uniform energy and rate. Preferably, the step of sputtering comprises sputtering a first layer of a metal selected from the group consisting of titanium, tungsten, tantalum, zirconium, chromium, molybdenum, and alloys thereof, and sputtering onto the first layer, without delay, at least one second layer of a metal selected from the group consisting of copper, silver, gold, and alloys thereof. The sputtered layer serves as the seed metal for the electroplated, thicker metal layer 150 that becomes operational as the pillar 150.

As illustrated in Figure 8, photoresist layer 510 is stripped away, exposing the surface of layer 102 across length 103a. Subsequently, another layer of photoresist 910 is spin-coated over the device, as illustrated in Figure 9. A window of diameter 111a is opened through photoresist layer 910. Window 111a exposes the copper pillar 150 and the coplanar surface of the polymer layer 103 adjacent the pillar, as well as the top metal layer 102 exposed across length 103a.

As illustrated in Figure 10, at least one metal layer 110 is sputtered onto the exposed surface using a technique similar to the procedure described above. Metal layer 110 preferably includes one or more refractory metal layers, followed by a copper layer. Due to the plasma technology used, the metal layer 110 has good adhesion to the underlying surface. Layer 110 is planar on the coplanar surfaces of pillar 150 and the adjacent portions of layer 103. After layer 110 is deposited, copper bumps 111 are electroplated on layer 110. The bumps 111 adhere well to the layer 110 and, due to the metal layer 110, the bumps 111 also adhere to the top metal layer 102. Finally, the photoresist layer 910 is stripped.

The result is illustrated graphically in Figure 11. The copper pillar 150 is now installed to fulfill its function of reducing the thermomechanical stress at the high-stress location of the bump edge 111b and blocking moisture from penetrating into the polyimide layer 103 in the overlap region 404.

Another embodiment is a modified manufacturing method for forming pillars that reduce thermomechanical stress and block moisture intrusion into a packaged semiconductor device. As illustrated in Figure 12, photoresist material is spin-coated onto the top metal layer 102 over the semiconductor (silicon) chip 101 of the device to form a layer 1210 of uniform height (referred to as the first photoresist layer). The thickness of layer 1210 is selected to equal the desired height of the pillar; in Figure 12, the height of the photoresist layer 1210 is designated 1210a.
Preferably, height 1210a is between 5 μm and 10 μm.

As shown in Figure 13, a window 1350 (referred to as the first window) is opened in the photoresist layer 1210 by known processes for masking, exposing, developing, and etching to expose the surface of the top metal layer 102. As an example, the length of window 1350 may be 10 μm. The location of the window is chosen to coincide with known locations of high thermomechanical stress in the finished device to be fabricated.

As illustrated in Figure 14, a metal layer 150 is electroplated in window 1350 such that the window is filled to the height 1210a of the photoresist layer 1210; the plated metal acts as a metal pillar in the finished device to relieve thermomechanical stress and prevent moisture infiltration. Pillar 150 is preferably made of copper. It should be mentioned, however, that the electroplating process is preferably preceded by a sputtering process for a seed metal layer in order to enhance the adhesion of the electroplated pillar to the underlying metal 102. As described above, at least one metal layer is sputtered onto the exposed surface of the top metal layer 102; this layer preferably includes one or more refractory metal layers, followed by a copper layer.

As depicted in Figure 15, the photoresist layer 1210 is removed, releasing the pillars 150. As illustrated in Figure 8, an insulating polymer layer 103 (preferably polyimide) is then deposited (preferably by spin-coating techniques) on the surface of the top metal layer 102 placed on the silicon chip 101. Polymer layer 103 has a thickness 103b equal to the height 1210a of pillar 150 (between approximately 5 μm and 10 μm for the example device of Figure 1A). The polymer layer 103 is coplanar with the surface of the pillar 150.

The polymer layer 103 is then patterned by opening a window 403a into the layer (referred to as the second window). The location and size of the window are selected according to device needs; in the exemplary device of Figure 8, window 403a is placed between pillars 150. Window 403a is configured such that it allows a length 103a of the surface of the top metal layer 102 to be exposed. The length 103a may be about 25 μm, and the polyimide overlap 404 between the window and the pillar 150 may be about 18 μm to 19 μm.

As illustrated in Figure 9, a photoresist layer 910 (referred to as the second photoresist layer) is spin-coated over the device. The photoresist is removed across length 111a, opening a window (referred to as the third window) of diameter 111a through photoresist layer 910. The window 111a has sidewalls and exposes the copper pillar 150 and the coplanar surface of the polyimide layer 103 adjoining the pillar, as well as the top metal layer 102 exposed by the window in the polyimide layer 103.

As shown in Figure 10, at least one metal layer 110 is sputtered onto the exposed surface using techniques similar to the procedures described above. Metal layer 110 preferably includes a layer of one or more refractory metals, followed by a layer of copper. Due to the plasma technology used, the metal layer 110 has good adhesion to the underlying surface. Layer 110 has length 111a and is planar on the coplanar surfaces of pillar 150 and layer 103. After deposition of layer 110, copper bumps 111 are electroplated on layer 110.
The bump 111 has a length 111a and adheres well to the layer 110 and, due to the metal layer 110, also adheres to the top metal layer 102. Finally, the photoresist layer 910 is stripped. The result is illustrated graphically in Figure 11. The surface of metal bump 111 is discontinuous above the coplanar surfaces of pillar 150 and the polymer layer, and at the bump edge 111b at the sidewall of the aforementioned window 111a (the third window). The copper pillar 150 is now installed to fulfill its function of reducing the thermomechanical stress at the high-stress location of the bump edge 111b and blocking moisture from penetrating into the polyimide layer 103 in the overlap region 404.

Figure 16 shows a 3D view of a portion of a semiconductor device according to an embodiment. For simplicity, only a quadrant of the semiconductor device is shown. The semiconductor device includes metal bumps 111 connecting substrate 120 to chip 101. An enlarged view of a portion 160 of the semiconductor device is shown with the top portion of package 140 removed. It can be seen from the top view of the semiconductor device that the metal bump 111 has a circular shape and that the pillar 150 has a ring shape.

While exemplary embodiments have been described, this description is not intended to be construed in a limiting sense. Those skilled in the art, upon reference to the specification, will recognize various modifications and combinations of the illustrated embodiments, as well as other embodiments. As examples, these techniques and structures are not intended to be applicable only to active semiconductor devices with low and high pin counts, such as transistors and integrated circuits, but are also applicable to combinations of active and passive components on leadframe pads. As another example, these techniques and structures are not intended to be applicable only to silicon-based semiconductor devices, but also to devices using gallium arsenide, gallium nitride, silicon germanium, and any other semiconductor material employed in the industry. In addition, lead frames with cantilevered leads, quad flat no-lead (QFN) lead frames, and small outline no-lead (SON) lead frames can all be used. As another example, these techniques and structures are applicable to leadframes, laminated substrates, and any other substrate or support structure that contains a metallurgical surface configuration suitable for solder attachment. It is therefore intended that the appended claims cover any such modifications or embodiments.
A test structure for use in determining an effective channel length of a transistor is disclosed herein. The test structure comprises a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart structures positioned above the substrate, the first resistor having a first width defined by the spacing between the first pair of structures, a second resistor comprised of a second doped region formed in the substrate between a second pair of spaced-apart structures positioned above the substrate, the second resistor having a second width defined by the spacing between the second pair of structures, the second width being greater than the first width, and a plurality of conductive contacts electrically coupled to each of the first and second doped regions. The method disclosed herein comprises determining the extent of lateral encroachment of the doped regions under the structures based upon the following formula: Δw = (R1W1 - R2W2)/(R1 - R2). The effective channel length of the transistor may be determined by subtracting the Δw value from the length of the gate electrode.
What is claimed is: 1. A test structure for use in determining an effective channel length of a transistor, comprising: a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart inactive gate electrode structures positioned above said substrate, said first resistor having a first width defined by the spacing between said first pair of structures; a second resistor comprised of a second doped region formed in said substrate between a second pair of spaced-apart inactive gate electrode structures positioned above said substrate, said second resistor having a second width defined by the spacing between said second pair of structures, said second width being greater than said first width; and a plurality of conductive contacts electrically coupled to each of said first and second doped regions. 2. The test structure of claim 1, wherein said first and second doped regions are comprised of N-type dopant material. 3. The test structure of claim 1, wherein said first and second doped regions are comprised of P-type dopant material. 4. The test structure of claim 1, wherein each of said first and second resistors has a length that is at least twenty times said first width of said first resistor. 5. The test structure of claim 1, wherein said first and second resistors are formed in a scribe line of said semiconducting substrate. 6. The test structure of claim 1, wherein said first and second resistors are formed adjacent each other. 7. The test structure of claim 1, wherein said first and second doped regions are comprised of dopant material implanted in at least one of a source/drain extension implant process and a source/drain implant process. 8. The test structure of claim 1, further comprising first and second active regions formed in said substrate, said first and second doped regions being formed in said first and second active regions, respectively, said first and second active regions being doped with a dopant material that is of a type opposite of a dopant material used to form said first and second doped regions. 9. The test structure of claim 1, wherein said first and second pairs of structures are comprised of at least one of polysilicon and a metal. 10. A test structure for use in determining an effective channel length of a transistor, comprising: a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart inactive gate electrode structures positioned above said substrate, said first resistor having a first width defined by the spacing between said first pair of structures; a second resistor comprised of a second doped region formed in said substrate between a second pair of spaced-apart inactive gate electrode structures positioned above said substrate, said second resistor having a second width defined by the spacing between said second pair of structures, said second width being at least 1.5 times said first width of said first resistor; and a plurality of conductive contacts electrically coupled to each of said first and second doped regions. 11. The test structure of claim 10, wherein said first and second doped regions are comprised of N-type dopant material. 12. The test structure of claim 10, wherein said first and second doped regions are comprised of P-type dopant material. 13. The test structure of claim 10, wherein each of said first and second resistors has a length that is at least twenty times said first width of said first resistor. 14.
The test structure of claim 10, wherein said first and second resistors are formed in a scribe line of said semiconducting substrate. 15. The test structure of claim 10, wherein said first and second resistors are formed adjacent each other. 16. The test structure of claim 10, wherein said first and second doped regions are comprised of dopant material implanted in at least one of a source/drain extension implant process and a source/drain implant process. 17. The test structure of claim 10, further comprising first and second active regions formed in said substrate, said first and second doped regions being formed in said first and second active regions, respectively, said first and second active regions being doped with a dopant material that is of a type opposite of a dopant material used to form said first and second doped regions. 18. The test structure of claim 10, wherein said first and second pairs of structures are comprised of at least one of polysilicon and a metal. 19. A test structure for use in determining an effective channel length of a transistor, comprising: a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart inactive gate electrode structures positioned above said substrate, said first resistor having a first width defined by the spacing between said first pair of structures; a second resistor comprised of a second doped region formed in said substrate between a second pair of spaced-apart inactive gate electrode structures positioned above said substrate, said second resistor having a second width defined by the spacing between said second pair of structures, said second width being at least 1.5 times said first width of said first resistor, each of said first and second resistors having a length that is at least 20 times said first width of said first resistor; and a plurality of conductive contacts electrically coupled to each of said first and second doped regions. 20. The test structure of claim 19, wherein said first and second doped regions are comprised of N-type dopant material. 21. The test structure of claim 19, wherein said first and second doped regions are comprised of P-type dopant material. 22. The test structure of claim 19, wherein said first and second resistors are formed in a scribe line of said semiconducting substrate. 23. The test structure of claim 19, wherein said first and second resistors are formed adjacent each other. 24. The test structure of claim 19, wherein said first and second doped regions are comprised of dopant material implanted in at least one of a source/drain extension implant process and a source/drain implant process. 25. The test structure of claim 19, further comprising first and second active regions formed in said substrate, said first and second doped regions being formed in said first and second active regions, respectively, said first and second active regions being doped with a dopant material that is of a type opposite of a dopant material used to form said first and second doped regions. 26. The test structure of claim 19, wherein said first and second pairs of structures are comprised of at least one of polysilicon and a metal. 27.
A test structure for use in determining an effective channel length of a transistor, comprising:a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart structures positioned above said substrate, said first resistor having a first width defined by the spacing between said first pair of structures; a second resistor comprised of a second doped region formed in said substrate between a second pair of spaced-apart structures positioned above said substrate, said second resistor having a second width defined by the spacing between said second pair of structures, said second width being greater than said first width, said first and second resistors being formed in a scribe line of said semiconducting substrate; and a plurality of conductive contacts electrically coupled to each of said first and second doped regions. 28. A test structure for use in determining an effective channel length of a transistor, comprising:a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart structures positioned above said substrate, said first resistor having a first width defined by the spacing between said first pair of structures; a second resistor comprised of a second doped region formed in said substrate between a second pair of spaced-apart structures positioned above said substrate, said second resistor having a second width defined by the spacing between said second pair of structures, said second width being greater than said first width; a plurality of conductive contacts electrically coupled to each of said first and second doped regions; and first and second active regions formed in said substrate, said first and second doped regions being formed in said first and second active regions, respectively, said first and second active regions being doped with a dopant material that is of a type opposite of a dopant material used to form said first and second doped regions. 29. A test structure for use in determining an effective channel length of a transistor, comprising:a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart structures positioned above said substrate, said first resistor having a first width defined by the spacing between said first pair of structures; a second resistor comprised of a second doped region formed in said substrate between a second pair of spaced-apart structures positioned above said substrate, said second resistor having a second width defined by the spacing between said second pair of structures, said second width being at least 1.5 times said first width of said first resistor, said first and second resistors being formed in a scribe line of said semiconducting substrate; and a plurality of conductive contacts electrically coupled to each of said first and second doped regions. 30. 
A test structure for use in determining an effective channel length of a transistor, comprising:a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart structures positioned above said substrate, said first resistor having a first width defined by the spacing between said first pair of structures; a second resistor comprised of a second doped region formed in said substrate between a second pair of spaced-apart structures positioned above said substrate, said second resistor having a second width defined by the spacing between said second pair of structures, said second width being at least 1.5 times said first width of said first resistor; a plurality of conductive contacts electrically coupled to each of said first and second doped regions; and first and second active regions formed in said substrate, said first and second doped regions being formed in said first and second active regions, respectively, said first and second active regions being doped with a dopant material that is of a type opposite of a dopant material used to form said first and second doped regions.
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention is generally directed to the field of semiconductor processing, and, more particularly, to a method of measuring the effective channel length of a transistor, and a test structure for accomplishing same.

2. Description of the Related Art

There is a constant drive within the semiconductor industry to increase the operating speed of integrated circuit devices, e.g., microprocessors, memory devices, and the like. This drive is fueled by consumer demands for computers and electronic devices that operate at increasingly greater speeds. This demand for increased speed has resulted in a continual reduction in the size of semiconductor devices, e.g., transistors. That is, many components of a typical field effect transistor (FET), e.g., channel length, junction depths, gate insulation thickness, and the like, are reduced. For example, all other things being equal, the smaller the channel length of the transistor, the faster the transistor will operate. Thus, there is a constant drive to reduce the size, or scale, of the components of a typical transistor to increase the overall speed of the transistor, as well as integrated circuit devices incorporating such transistors.

By way of background, FIG. 1 depicts an illustrative prior art transistor 10 formed above a semiconducting substrate 11. The transistor 10 is generally formed in an active region 23 of the substrate 11 as defined by trench isolation regions 24. The transistor 10 is generally comprised of a gate insulation layer 13, a gate electrode 14, sidewall spacers 15, and source/drain regions 16. The gate electrode 14 also has a length, as indicated by the dimension 21. The various components of the transistor 10 shown in FIG. 1, as well as the methods of making such components, are well-known to those skilled in the art and will not be described in greater detail herein. At the point of fabrication depicted in FIG. 1, a layer of insulating material 17 and a plurality of conductive plugs 18 that are electrically coupled to the source/drain regions 16 have been formed above the transistor 10.

In modern semiconductor devices, an important parameter of transistor devices is the effective channel length (Leff) of the device. For example, the effective channel length of a transistor has a great impact on a variety of device performance characteristics, e.g., the switching speed of the transistor, leakage currents, etc. In general, the effective channel length is defined as the distance between the source/drain regions 16, as indicated by the arrow 12 in FIG. 1. As shown in FIG. 1, the source/drain regions 16 extend somewhat under the sidewalls 20 of the gate electrode 14. The combined amount of this source/drain encroachment under the sidewalls 20 is generally referred to in the industry as "ΔL." The effective channel length for a transistor may be determined by subtracting the ΔL value from the length 21 of the gate electrode 14 (Leff = Gate Length - ΔL).

A variety of techniques are employed in the industry in attempts to calculate or determine the effective channel length of a transistor. Some of those techniques involve applying a voltage across the source/drain regions 16, via conductive plugs 18, and employing a test instrumentation device 19 to measure a resistance of the channel region of the transistor 10. During the course of forming a transistor, a variety of dopant materials are implanted into the channel region of the transistor 10.
For example, the channel region of a typical transistor may be subjected to threshold voltage implants, punch-through voltage implants, and so-called halo implants to achieve one or more desired effects on the resulting transistor. Unfortunately, it is believed that such heavy doping schemes lead to erroneous results from conventional transistor-based algorithms for calculating the effective channel length of the device, because those algorithms make many assumptions about matters such as channel doping levels and uniformity.

The present invention is directed to a method that may solve, or at least reduce, some or all of the aforementioned problems.

SUMMARY OF THE INVENTION

The present invention is directed to a method of measuring the effective channel length of a transistor, and a test structure for accomplishing same. In one illustrative embodiment, the structure is comprised of a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart structures positioned above the substrate, the first resistor having a first width defined by the spacing between the first pair of structures, and a second resistor comprised of a second doped region formed in the substrate between a second pair of spaced-apart structures positioned above the substrate, the second resistor having a second width defined by the spacing between the second pair of structures. In the test structure, the width of the second resistor is greater than the width of the first resistor. The test structure further comprises a plurality of conductive contacts electrically coupled to each of the first and second doped regions. In another illustrative embodiment, the width of the second resistor is at least 1.5 times the width of the first resistor.

In one embodiment, the method disclosed herein comprises forming a first resistor comprised of a first doped region formed in a semiconducting substrate between a first pair of spaced-apart structures positioned above the substrate, the first resistor having a first width defined by the spacing between the first pair of structures, forming a second resistor comprised of a second doped region formed in the substrate between a second pair of spaced-apart structures positioned above the substrate, the second resistor having a second width defined by the spacing between the second pair of structures, the second width being greater than the first width, and forming a plurality of conductive contacts that are electrically coupled to each of the first and second doped regions. The method further comprises determining a resistance for each of the resistors by performing a process that at least comprises applying a voltage across the doped region of each of the first and second resistors, calculating, based upon the determined resistance of the first and second resistors, a Δw value that corresponds to an amount of lateral encroachment of each of the doped regions under the spaced-apart structures, and determining an effective channel length for a transistor by subtracting the determined Δw value from the length of a gate electrode of the transistor. In one particularly illustrative embodiment, the Δw value may be calculated in accordance with the following equation: Δw = (R1W1 - R2W2)/(R1 - R2).

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:
FIG. 1 is a cross-sectional view of an illustrative prior art transistor;

FIG. 2 is a cross-sectional view of an illustrative test structure in accordance with one embodiment of the present invention; and

FIG. 3 is a plan view of the illustrative test structure shown in FIG. 2.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE INVENTION

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.

The present invention will now be described with reference to the attached figures. Although the various regions and structures of a semiconductor device are depicted in the drawings as having very precise, sharp configurations and profiles, those skilled in the art recognize that, in reality, these regions and structures are not as precise as indicated in the drawings. Additionally, the relative sizes of the various features and doped regions depicted in the drawings may be exaggerated or reduced as compared to the size of those features or regions on fabricated devices. Nevertheless, the attached drawings are included to describe and explain illustrative examples of the present invention.

In general, the present invention is directed to a method of measuring the effective channel length of a transistor, and a test structure for accomplishing same. As will be readily apparent to those skilled in the art upon a complete reading of the present application, the present method is applicable to a variety of technologies, e.g., NMOS, PMOS, CMOS, etc., and is readily applicable to a variety of devices, including, but not limited to, logic devices, memory devices, etc.

One illustrative test structure that may be employed with the present invention is depicted in FIGS. 2 and 3. As shown therein, the test structure is comprised of a first resistor 30 and a second resistor 32. The resistors 30, 32 depicted in FIGS. 2 and 3 may be formed from materials commonly used in semiconductor processing, and may be made using a variety of known manufacturing techniques. The present invention will be discussed in the context of determining the effective channel length for an illustrative NMOS transistor using the structures and methods described herein. However, as will be recognized by those skilled in the art after a complete reading of the present application, the present invention is readily applicable to other semiconductor devices, e.g., PMOS devices.
As shown in FIG. 2, a plurality of transistor-type structures 31 are formed above active regions 40 formed in a semiconducting substrate. The active regions 40 are defined by isolation regions 41 formed in the semiconducting substrate. The transistor structures are comprised of a gate electrode, a gate insulation layer 36, sidewalls 37, and sidewall spacers 38. For purposes of explanation, the gate electrodes have been numbered 34A-D, but they may be referred to generically by the number 34. An implant region 45 is formed between adjacent transistor-type structures 31 in each of the active areas 40, as shown in FIG. 2. The structures depicted in FIGS. 2 and 3 may be fabricated using a variety of known techniques. For example, active regions 40 may be defined in the substrate by forming trenches in the substrate and filling the trenches with an appropriate insulating material, e.g., silicon dioxide. The width 53 of the active regions 40 may be varied as a matter of design choice. Then, using a patterned layer of photoresist (not shown), a variety of ion implant processes may be performed on the active region 40. For example, for an illustrative NMOS transistor, a variety of implant processes, such as a threshold voltage implant, a punch-through implant, and a well implant, may be performed on the active region 40. In effect, for an illustrative NMOS device, the active region 40 may be considered to be a P-well.

The gate insulation layer 36 may be comprised of a thermally grown layer of silicon dioxide having a thickness ranging from approximately 2-30 nm. Although not important for purposes of the present invention, in the depicted embodiment, since the gate insulation layer 36 is thermally grown, it only extends to the edge of the active region 40. If desired, the gate insulation layer 36 could also be formed by a deposition process, such that the gate insulation layer 36 extends completely under the gate electrode 34. Alternatively, the width 53 of the active region 40 could be increased, such that a thermally grown layer of silicon dioxide would extend completely under the gate electrode 34. The length 44 of the gate electrodes 34A-D may be varied as a matter of design choice. In some embodiments, the length 44 of the gate electrodes 34A-D may range from approximately 0.18-0.5 μm, and they may have a thickness of approximately 0.05-0.4 μm. The sidewall spacers 38 may be comprised of a variety of materials, such as silicon dioxide or silicon oxynitride. The gate electrodes 34 may be comprised of any material suitable for such purposes, e.g., polysilicon, a metal, etc.

The gate electrodes 34 in the resistors 30, 32 are inactive, i.e., they are not coupled to any power supply. In effect, the gate electrodes 34 in the depicted embodiment are "dummy" spaced-apart structures. One or more of the test structures, comprised of resistors 30, 32, may be formed on test wafers or in the scribe lines of actual production wafers. Moreover, the resistors 30, 32 may be formed adjacent one another or they may be formed apart from one another in different regions of the substrate. Of course, more than one pair of resistors may be formed on a given wafer.

The implant regions 45 may be formed by performing a variety of implant processes typically performed on modern semiconductor devices. For an illustrative NMOS device, these implant regions would be formed using N-type dopant materials, e.g., arsenic or phosphorus.
For example, the implant regions 45 may be formed using only a source/drain extension implant process that may be self-aligned with respect to the sidewalls 37 of the adjacent gate electrodes, i.e., gate electrodes 34A-B, 34C-D. Alternatively, the implant regions 45 may be formed by performing a source/drain extension implant process and a source/drain implant process performed after the sidewall spacers 38 are formed. Additional implants may also be performed to form the implant regions 45. Ultimately, the implant regions 45 should be implanted in accordance with the implant processes performed on the actual production devices whose results are desired to be tested using the present structure and methodology. It should also be noted that the implant regions 45 depicted in FIG. 2 are representative of the implant regions 45 after one or more anneal processes have been performed on the resistors 30, 32, i.e., after the implanted dopant atoms have migrated from their implanted position.

As shown in FIG. 3, in the first resistor 30, the distance between the sidewalls 37 of the gate electrodes 34A and 34B is represented by a dimension W1, whereas the distance between the sidewalls 37 of the gate electrodes 34C and 34D of the second resistor structure 32 is represented by a dimension W2. That is, the first resistor 30 has a width W1 that is defined by the spaced-apart structures 34A-B. The second resistor 32 has a width W2 that is defined by the spaced-apart structures 34C-D.

The absolute values of W1 and W2 may be varied as a matter of design choice. For example, W1, the width of the narrowest resistor 30, may be of any desired size, e.g., as small as possible. Alternatively, the W1 dimension may be limited by certain design rules applicable to the semiconductor devices. For example, the design rules may establish a minimum spacing between adjacent polysilicon line-type structures, and W1 may be set at that minimum spacing. W2, the width of the widest resistor 32, may also vary as a matter of design choice. In general, the width W2 should be at least 1.5 times the width W1.

Moreover, as shown in FIG. 3, the active regions 40 have a length dimension "L" that is at least approximately twenty times the width of the narrower resistor 30 (20*W1). That is, the length of the active areas 40 is at least twenty times as long as the W1 dimension. Conductive contacts 46 are formed at each end of the implant region 45 for both of the resistors 30, 32. The contacts 46 may be comprised of a variety of materials and may take on a variety of shapes, e.g., circular, square, rectangular, etc. As will be recognized by those skilled in the art, the contacts 46 are the means by which electrical testing of the resistors 30, 32 will be performed.

The theory behind the methodology employed in the present invention may best be explained by the following equations:

Rs = R1(W1 - Δw)/L (Equation 1)

Rs = R2(W2 - Δw)/L (Equation 2)

Δw = (R1W1 - R2W2)/(R1 - R2) (Equation 3)

In the equations, Rs is the sheet resistance of the resistor, R1 is the resistance of the first resistor 30, W1 is the width of the first resistor 30, L is the length of the resistor, and Δw is a factor representing the lateral extension of the doped regions under the gate electrodes. For example, with reference to FIG. 2, Δw for the first resistor 30 is equal to the combined encroachment of the implant region 45 under the gate electrodes 34A-B, as represented by dimensions 51, 52. The dimensions 51, 52 may be different. The Rs and Δw values are, by definition, the same for both resistors 30, 32. Equation 1 is cast in terms of the characteristics of the first resistor 30, and Equation 2 is a similar equation cast in terms of the characteristics of the second resistor 32. Equation 3 is the result of solving Equations 1 and 2 simultaneously, with the premise that the length of both resistors is the same and with the understanding that the sheet resistance (Rs) and Δw are, by definition, equal for both resistors 30, 32.
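The step from Equations 1 and 2 to Equation 3 is a single elimination of Rs; in LaTeX form:

```latex
% Equating Equations 1 and 2 (same R_s and L for both resistors):
\begin{align*}
R_1\,(W_1 - \Delta w) &= R_2\,(W_2 - \Delta w) \\
R_1 W_1 - R_2 W_2 &= (R_1 - R_2)\,\Delta w \\
\Delta w &= \frac{R_1 W_1 - R_2 W_2}{R_1 - R_2}
\end{align*}
```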
The resistances of the first and second resistors, R1 and R2, respectively, are measured by simply forcing a voltage potential (V) between the contacts 46 for each resistor, measuring the current flowing in the resistor, and solving for the resistance in accordance with the equation R = V/I. Applying the resistance values for each resistor to Equation 3 above, Δw may be readily calculated. The calculated factor Δw is equal to the ΔL factor used in calculating the effective channel length of a transistor using traditional algorithms. Thus, the effective length of a channel of a transistor may be determined in accordance with the following equation:

Leff = Gate Length - Δw (Equation 4)

Through use of the present invention, the effective channel length of transistors may be determined irrespective of the nominal channel length of the devices.
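As a worked illustration of Equations 3 and 4, the Python sketch below runs the measurement arithmetic end to end. All numbers (forced voltage, measured currents, widths, and gate length) are hypothetical, chosen only to satisfy the stated design guidelines (W2 at least 1.5 times W1, L at least 20 times W1); they are not taken from the disclosure.

```python
# Hypothetical measurement of the two-resistor test structure.
# All numeric values are illustrative, not from the source document.

def resistance(v_forced, i_measured):
    """Ohm's law, R = V / I, as used to measure R1 and R2 via contacts 46."""
    return v_forced / i_measured

# Geometry (um), chosen to satisfy the stated design guidelines:
W1 = 0.4     # width of the narrow resistor 30
W2 = 0.8     # width of the wide resistor 32 (>= 1.5 * W1)
L = 10.0     # resistor length (>= 20 * W1)
assert W2 >= 1.5 * W1 and L >= 20 * W1

# Hypothetical forced voltage (1 V) and measured currents:
R1 = resistance(1.0, 3.0e-3)   # ~333.3 ohm for resistor 30
R2 = resistance(1.0, 7.0e-3)   # ~142.9 ohm for resistor 32

# Equation 3: combined lateral encroachment under the gate electrodes.
dw = (R1 * W1 - R2 * W2) / (R1 - R2)   # 0.1 um for these inputs

# Equation 4: effective channel length for a hypothetical 0.25 um gate.
gate_length = 0.25
Leff = gate_length - dw
print(f"dw = {dw:.4f} um, Leff = {Leff:.4f} um")
```

With these inputs the sketch yields Δw = 0.1 μm and Leff = 0.15 μm, illustrating how the two resistance measurements alone determine the encroachment.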
The present invention is directed to a method of measuring the effective channel length of a transistor, and a test structure for accomplishing same. In one illustrative embodiment, the test structure is comprised of a first resistor 30 comprised of a first doped region 45 formed in a semiconducting substrate between a first pair of spaced-apart structures 34A-B positioned above the substrate, wherein the first resistor 30 has a first width (W1) defined by the spacing between the first pair of structures 34A-B, a second resistor 32 comprised of a second doped region 45 formed in the substrate between a second pair of spaced-apart structures 34C-D positioned above the substrate, wherein the second resistor 32 has a second width (W2) defined by the spacing between the second pair of structures 34C-D, the second width (W2) being greater than the first width (W1), and a plurality of conductive contacts 46 electrically coupled to each of the first and second doped regions 45. In another illustrative embodiment, the width (W2) of the second resistor is at least 1.5 times the width (W1) of the first resistor.

A method of determining an effective channel length of a transistor is also disclosed herein. In one embodiment, the method comprises forming a first resistor 30 comprised of a first doped region 45 formed in a semiconducting substrate between a first pair of spaced-apart structures 34A-B positioned above the substrate, the first resistor 30 having a first width (W1) defined by the spacing between the first pair of structures 34A-B, forming a second resistor 32 comprised of a second doped region 45 formed in the substrate between a second pair of spaced-apart structures 34C-D positioned above the substrate, the second resistor 32 having a second width (W2) defined by the spacing between the second pair of structures 34C-D, wherein the second width (W2) is greater than the first width (W1), and forming a plurality of conductive contacts 46 that are electrically coupled to each of the first and second doped regions 45. The method further comprises determining a resistance for each of the resistors 30, 32 by performing a process that at least comprises applying a voltage across the doped region 45 of each of the first and second resistors 30, 32, calculating, based upon the determined resistances of the first and second resistors 30, 32, a Δw value that corresponds to the amount of lateral encroachment 51, 52 of each of the doped regions 45 under the spaced-apart structures, and determining an effective channel length for a transistor by subtracting the Δw value from the length of a gate electrode of the transistor. For example, the Δw value may be subtracted from the gate length 21 of the transistor 10 depicted in FIG. 1. Alternatively, assuming that the length 44 of the gate electrodes 34 corresponds to the length of gate electrodes formed for production devices, the effective channel length for the production transistors may be determined by subtracting Δw from the length dimension 44. In either case, this equates to subtracting the Δw value from the length of the gate electrode of the transistor in question.

The present invention is useful in calculating the effective channel length of a transistor. Moreover, it is believed that the present methodologies are not adversely impacted by the heavy doping of the channel regions commonly found in transistors of modern semiconductor devices. As a result, more accurate and reliable information may be obtained as to the effective channel length of a transistor. In turn, this information may be used to adjust one or more process parameters and/or to eliminate or reduce the production of semiconductor devices of an unacceptable quality.

The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. For example, the process steps set forth above may be performed in a different order. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
Embodiments of the present disclosure describe phase-change memory cell implant for dummy array leakage reduction. In an embodiment, an apparatus includes a plurality of phase-change memory (PCM) elements, wherein individual PCM elements of the plurality of PCM elements are dummy cells including a bottom electrode layer, a select device layer disposed on the bottom electrode layer, a middle electrode layer disposed on the select device layer, a phase-change material layer disposed on the middle electrode layer, and a top electrode layer disposed on the phase-change material layer, wherein the phase-change material layer is doped with an impurity to reduce cell leakage of the dummy cells. Other embodiments may be described and/or claimed.
1. An apparatus comprising: a plurality of phase-change memory (PCM) elements, wherein individual PCM elements of the plurality of PCM elements are dummy cells, the dummy cells comprising: a bottom electrode layer; a select device layer disposed on the bottom electrode layer; a middle electrode layer disposed on the select device layer; a phase-change material layer disposed on the middle electrode layer; and a top electrode layer disposed on the phase-change material layer, wherein the phase-change material layer is doped with an impurity to reduce cell leakage of the dummy cells. 2. The apparatus of claim 1, wherein the select device layer is doped with an impurity to reduce cell leakage of the dummy cells. 3. The apparatus of claim 2, wherein the select device layer and the phase-change material layer are doped with the same impurity. 4. The apparatus of claim 3, wherein the select device layer has a higher concentration of the impurity than the phase-change material layer. 5. The apparatus of any one of claims 1 to 4, wherein: the phase-change material layer and the select device layer comprise a chalcogenide material; and the impurity is selected from the group consisting of arsenic, germanium, oxygen, silicon, carbon, boron, and nitrogen. 6. The apparatus of claim 5, wherein the impurity is Si, C, or Ge. 7. The apparatus of claim 6, wherein the impurity is Si. 8. The apparatus of any one of claims 1 to 4, further comprising a tile comprising cells of a cell array, wherein the dummy cells are disposed at an edge of the tile. 9. The apparatus of claim 8, wherein active cells of the tile are electrically coupled with the dummy cells and are not doped with the impurity. 10. A method comprising: forming stacked layers of a phase-change memory (PCM) device by: depositing a bottom electrode layer on a word line metal layer; depositing a select device layer on the bottom electrode layer; depositing a middle electrode layer on the select device layer; and depositing a phase-change material layer on the middle electrode layer; and doping the phase-change material layer with an impurity in a region of the stacked layers corresponding to dummy cells to reduce cell leakage of the dummy cells. 11. The method of claim 10, further comprising: doping the select device layer with an impurity to reduce cell leakage of the dummy cells. 12. The method of claim 11, wherein the select device layer and the phase-change material layer are doped with the same impurity during the same implantation process. 13. The method of claim 12, wherein doping the select device layer provides a higher concentration of the impurity in the select device layer than in the phase-change material layer. 14. The method of any one of claims 10-13, wherein: the phase-change material layer and the select device layer comprise a chalcogenide material; and the impurity is selected from the group consisting of arsenic, germanium, oxygen, silicon, carbon, boron, and nitrogen. 15. The method of claim 14, wherein the impurity is Si. 16. The method of any one of claims 10 to 13, wherein the dummy cells are disposed at an edge of a tile comprising cells of a cell array. 17. The method of claim 16, wherein an area of active cells of the tile is protected by a patterned mask layer such that the active cells are not doped with the impurity during the doping of the phase-change material layer.
18. A system comprising: a circuit board; and a die coupled with the circuit board, the die comprising: a plurality of phase-change memory (PCM) elements, wherein individual PCM elements of the plurality of PCM elements are dummy cells, the dummy cells comprising: a bottom electrode layer; a select device layer disposed on the bottom electrode layer; a middle electrode layer disposed on the select device layer; a phase-change material layer disposed on the middle electrode layer; and a top electrode layer disposed on the phase-change material layer, wherein the phase-change material layer is doped with an impurity to reduce cell leakage of the dummy cells. 19. The system of claim 18, wherein the select device layer is doped with an impurity to reduce cell leakage of the dummy cells. 20. The system of claim 19, wherein the select device layer has a higher concentration of the impurity than the phase-change material layer. 21. The system of any one of claims 18-20, wherein the system is a mobile computing device comprising one or more of the following coupled with the circuit board: an antenna, a display, a touch screen display, a touch screen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, a Geiger counter, an accelerometer, a gyroscope, a speaker, or a camera.
Phase-Change Memory Cell Implant for Dummy Array Leakage Reduction

Cross-Reference to Related Applications

This application claims the benefit of U.S. Application No. 14/581,921, entitled PHASE-CHANGE MEMORY CELL IMPLANT FOR DUMMY ARRAY LEAKAGE REDUCTION, filed on December 23, 2014, which is incorporated herein by reference in its entirety for all purposes.

Technical Field

Embodiments of the present disclosure generally relate to the field of integrated circuits, and more specifically to phase-change memory cell implants for dummy array leakage reduction.

Background

Phase-change memory (PCM) technology (e.g., multi-stack crosspoint PCM) is a promising alternative to other non-volatile memory (NVM) technologies. Currently, non-uniform chemical mechanical polishing (CMP) or other problems such as loading effects can lead to vertical cell leakage in a cell array, for example from the dummy cells of the array.

Brief Description of the Drawings

The embodiments will be readily understood from the following detailed description in conjunction with the accompanying drawings. For convenience of explanation, like reference numerals denote like structural elements. In the drawings, the embodiments are illustrated by way of example and not by way of limitation.

FIG. 1 schematically illustrates a top view of an example die in wafer form and in singulated form, according to some embodiments.

FIG. 2 schematically illustrates a cross-sectional side view of an integrated circuit (IC) assembly, according to some embodiments.

FIG. 3 schematically illustrates a cross-sectional side view of a PCM device, according to some embodiments.

FIG. 4 schematically illustrates a cross-sectional side view of selectively doped stacked layers of a PCM device, according to some embodiments.

FIG. 5 schematically illustrates a cell array of a PCM device including active cells and dummy cells, according to some embodiments.

FIG. 6 is a flowchart of a method of fabricating a PCM device, according to some embodiments.

FIG. 7 schematically illustrates an example system including a PCM device, according to various embodiments described herein.

Detailed Description

Embodiments of the present disclosure describe phase-change memory cell implants for dummy array leakage reduction. In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, wherein like numerals designate like parts throughout, and in which embodiments that may be practiced are shown by way of illustration. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the embodiments is defined by the appended claims and their equivalents.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

The description may use the terms "in an embodiment" or "in various embodiments," which may each refer to one or more of the same or different embodiments. In addition, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.
The term "coupled" may refer to direct connection, indirect connection, or indirect communication.As used herein, the term "module" may refer to an application specific integrated circuit (ASIC) that executes one or more software or firmware programs, a combinational logic circuit, a state machine, and / or other suitable components that provide the described functionality, , Electronic circuitry, processor (shared, dedicated, or group), and / or memory (shared, dedicated, or group) are part of, or include, the same.FIG. 1 schematically illustrates a top view of an example die 102 in wafer form 10 and in a singular form 100, in accordance with some embodiments. In some embodiments, the die 102 may be one of a plurality of dies (eg, die 102, 102 a, 102 b) of a wafer 11 that includes a semiconductor material such as silicon or other suitable material. A plurality of dies may be formed on the surface of the wafer 11. Each of the dies may be a repeating unit of a semiconductor product that includes a phase-change memory (PCM) device as described herein. For example, die 102 may include circuitry 103 for a PCM device according to some embodiments. According to various embodiments, the circuit 103 may include one or more PCM elements (eg, cells) that may be arranged in an array. The PCM element may include, for example, a phase change material, for example, chalcogenide glass, capable of switching between a crystalline state and an amorphous state by applying heat generated by a current. The state of the phase change material (eg, crystalline / amorphous) may correspond to a logic value (eg, 1 or 0) of the PCM element. In some embodiments, circuit 103 may be part of a PCM and switch (PCMS) device. That is, the PCM elements may include switches such as Bidirectional Threshold Switches (OTSs) configured for select / program operations of PCM elements. In some embodiments, circuit 103 may include a dummy cell doped with impurities as described herein.Circuit 103 may also include one or more bit lines and one or more word lines coupled to the PCM element. In some embodiments, the bit lines and word lines may be configured such that each of the PCM elements is disposed at the intersection of each individual bit line and word line. Voltage or bias voltage can be applied to the target PCM element in the PCM element using word lines and bit lines to select a target cell for a read or write operation. A bit line driver may be coupled to the bit line and a word line driver may be coupled to the word line to facilitate decoding / selection of the PCM element. Capacitors and resistors can be coupled to bit lines and word lines. In some embodiments, circuit 103 may include other suitable devices and configurations. For example, circuit 103 may include one or more modules configured to perform read, program, verify, and / or analyze operations.In some embodiments, the circuit 103 may be formed using PCM fabrication techniques and / or other suitable semiconductor fabrication techniques. Note that circuit 103 is only schematically depicted in FIG. 
After the manufacturing process of the semiconductor product is completed, the wafer 11 may undergo a singulation process in which each of the dies (e.g., dies 102, 102a, 102b) is separated from the others to provide discrete "chips" of the semiconductor product. The wafer 11 may be any of a variety of sizes. In some embodiments, the diameter of the wafer 11 ranges from about 25.4 mm to about 450 mm. In other embodiments, the wafer 11 may have other dimensions and/or other shapes. According to various embodiments, the circuitry 103 may be provided on a semiconductor substrate in wafer form 10 or in singulated form 100. In some embodiments, die 102 may include logic or memory, or combinations thereof.

FIG. 2 schematically illustrates a cross-sectional side view of an integrated circuit (IC) assembly 200, in accordance with some embodiments. In some embodiments, the IC assembly 200 may include one or more dies (hereinafter "die 102") electrically and/or physically coupled with a package substrate 121. The die 102 may include circuitry (e.g., circuitry 103 of FIG. 1) of a PCM device (e.g., PCM device 300 of FIG. 3) as described herein. In some embodiments, the package substrate 121 may be coupled with a circuit board 122, as can be seen.

The die 102 may represent a discrete product made of a semiconductor material (e.g., silicon) using semiconductor fabrication techniques such as thin film deposition, lithography, etching, and the like, used in conjunction with forming PCM devices. In some embodiments, the die 102 may be, include, or be a part of a processor, memory, system-on-chip (SoC), or ASIC. In some embodiments, an electrically insulating material, such as a molding compound or underfill material (not shown), may encapsulate at least a portion of the die 102 and/or the die-level interconnect structures 106.

The die 102 can be attached to the package substrate 121 according to a wide variety of suitable configurations, including, for example, being directly coupled with the package substrate 121 in a flip-chip configuration, as depicted. In the flip-chip configuration, the active side S1 of the die 102 is attached to a surface of the package substrate 121 using die-level interconnect structures 106 (e.g., bumps, pillars, or other suitable structures) that may also electrically couple the die 102 with the package substrate 121. The active side S1 of the die 102 may include circuitry such as the PCM elements. The inactive side S2 may be disposed opposite the active side S1, as can be seen.

In other embodiments, the die 102 may be disposed on another die coupled with the package substrate 121 in any of a variety of suitable stacked-die configurations. For example, a processor die may be coupled with the package substrate 121 in a flip-chip configuration, and the die 102 may be mounted on the processor die in a flip-chip configuration and electrically coupled with the package substrate using through-silicon vias (TSVs) formed through the processor die. In still other embodiments, the die 102 may be embedded in the package substrate 121 or coupled with a die that is embedded in the package substrate 121.
In other embodiments, other dies may be coupled with the package substrate 121 in a side-by-side configuration with the die 102.

In some embodiments, the die-level interconnect structures 106 may be configured to route electrical signals between the die 102 and the package substrate 121. The electrical signals may include, for example, input/output (I/O) signals and/or power/ground signals used in conjunction with operation of the die. The die-level interconnect structures 106 may be coupled with corresponding die contacts disposed on the active side S1 of the die 102 and corresponding package contacts disposed on the package substrate 121. The die contacts and/or the package contacts may include, for example, pads, vias, trenches, traces, and/or other suitable contact structures.

In some embodiments, the package substrate 121 is an epoxy-based laminate substrate having a core and/or build-up layers, for example, an Ajinomoto build-up film (ABF) substrate. In other embodiments, the package substrate 121 may include other suitable types of substrates, including substrates formed of, for example, glass, ceramic, or semiconductor materials.

The package substrate 121 may include circuit features configured to route electrical signals to or from the die 102. The circuit features may include, for example, package contacts (e.g., pads 110) disposed on one or more surfaces of the package substrate 121, and/or internal routing features (not shown) such as trenches, vias, or other interconnect structures configured to route electrical signals through the package substrate 121.

The circuit board 122 may be a printed circuit board (PCB) composed of an electrically insulating material such as an epoxy laminate. For example, the circuit board 122 may include electrically insulating layers composed of materials such as polytetrafluoroethylene, phenolic cotton paper materials (e.g., flame retardant 4 (FR-4), FR-1, cotton paper), and epoxy materials (e.g., CEM-1 or CEM-3), or woven glass materials laminated together using an epoxy prepreg material. Interconnect structures (not shown), such as traces, trenches, and vias, may be formed through the electrically insulating layers to route the electrical signals of the die 102 through the circuit board 122. In other embodiments, the circuit board 122 may be composed of other suitable materials. In some embodiments, the circuit board 122 is a motherboard (e.g., motherboard 702 of FIG. 7).

Package-level interconnects, such as solder balls 112, may be coupled with pads 110 on the package substrate 121 and/or the circuit board 122 to form corresponding solder joints configured to further route electrical signals between the package substrate 121 and the circuit board 122. The pads 110 may include any suitable conductive material, such as metal, including, for example, nickel (Ni), palladium (Pd), gold (Au), silver (Ag), copper (Cu), and combinations thereof. The package-level interconnects may include other structures and/or configurations, including, for example, land grid array (LGA) structures and the like.

In other embodiments, the IC assembly 200 may include a wide variety of other suitable configurations, including, for example, flip-chip and/or wire-bonding configurations, interposers, and multi-chip package configurations including system-in-package (SiP) and/or package-on-package (PoP) configurations. In some embodiments, other suitable techniques for routing electrical signals between the die 102 and other components of the IC assembly 200 may be used.
FIG. 3 schematically illustrates a cross-sectional side view of a PCM device 300, in accordance with some embodiments. According to various embodiments, the PCM device 300 may include a plurality of PCM elements (e.g., individual PCM elements 316A, 316B) formed on a substrate 302. The individual PCM elements 316A, 316B may correspond to cells of a cell array of the PCM device.

In some embodiments, individual PCM elements 316A may represent dummy cells, and individual PCM elements 316B may represent active cells of the plurality of cells. The dummy cells may not be used to store information of the PCM device 300, but may be formed in support of the memory array structure or for other reasons. For example, in some embodiments, dummy cells may be used for electrical isolation or physical isolation of the active cells. The dummy cells may include, for example, cells at an edge of a tile that are not configured for storage because electrical characteristics of the dummy cells (e.g., a threshold voltage Vt) differ from those of normal active cells, e.g., by more than a predetermined amount. Such dummy cells may be particularly affected by chemical mechanical polishing (CMP) in a manner that adversely affects the electrical performance of the dummy cells relative to the active cells. In some embodiments, the dummy cells may be placed in areas other than the edges of the tiles. In some embodiments, the dummy cells may be biased along with the active cells during normal operation (e.g., when active cells that share the same bit line or word line as the dummy cells are selected), and the dummy cells may leak vertically (e.g., from bit line 324 to word line 304). In some embodiments, the dummy cells may be slightly different from the active cells, potentially resulting in greater leakage through the dummy cells than through the active cells.

According to various embodiments, the dummy cells (e.g., individual PCM elements 316A) may be part of a subset of cells of the PCM device that is doped with impurities 333 to reduce cell leakage of the dummy cells. For example, a dummy cell may be doped using an implantation process that may damage one or more layers (e.g., layers 306, 308, 310, 312, 314) or the interfaces between the layers, shifting the threshold voltage (Vt) of the damaged dummy cell high enough to keep the dummy cell turned off and/or to reduce dummy-cell leakage at the normal Vt of the active cells.
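As rough intuition for why raising Vt suppresses leakage, the toy model below assumes an exponential, subthreshold-style dependence of off-state current on the margin between the applied bias and Vt. The model form, the 100 mV/decade slope, and all numeric values are assumptions for illustration only; none are taken from the disclosure.

```python
# Toy leakage model: off-state current falls exponentially as Vt rises
# above the bias seen by an unselected (e.g., half-selected) dummy cell.
# The exponential form and all parameter values are illustrative only.

SLOPE_MV_PER_DECADE = 100.0   # assumed subthreshold-style slope
I0 = 1e-9                     # amperes at zero margin (assumed)

def off_current(v_bias_mv: float, vt_mv: float) -> float:
    margin = vt_mv - v_bias_mv
    return I0 * 10.0 ** (-margin / SLOPE_MV_PER_DECADE)

# Undamaged dummy cell vs. implant-damaged dummy cell with Vt shifted +50 mV:
i_before = off_current(v_bias_mv=1500.0, vt_mv=1800.0)
i_after = off_current(v_bias_mv=1500.0, vt_mv=1850.0)
print(f"leakage reduced to {i_after / i_before:.0%} of original")  # ~32%
```

In this toy picture, a modest Vt shift yields a multiplicative leakage reduction at a fixed bias, which is the qualitative effect the implant-induced damage is described as providing.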
The middle electrode layer 310 may be disposed on the SD layer 308. The PM layer 312 may be disposed on the middle electrode layer 310, and the top electrode layer 314 may be disposed on the PM layer 312. The individual PCM elements 316A, 316B may include other intervening materials and/or layers according to various embodiments, including, for example, a diffusion barrier layer between the chalcogenide material of the SD layer 308 and the PM layer 312 and the material of the electrodes 306, 310, and 314. In other embodiments, the stacked layers can be arranged in other configurations. For example, in one embodiment, the PM layer 312 may be disposed on the bottom electrode layer 306, the middle electrode layer 310 may be disposed on the PM layer 312, the SD layer 308 may be disposed on the middle electrode layer 310, and the top electrode layer 314 may be disposed on the SD layer 308. That is, relative to the depicted configuration, the positions of the PM layer 312 and the SD layer 308 may be switched.

According to various embodiments, one or more of the layers 306, 308, 310, 312, 314 of the individual PCM elements 316A may be doped with impurities 333 to reduce leakage. In some embodiments, the PM layer 312 may be doped with impurities 333 to reduce cell leakage of the dummy cells. An implanted species (e.g., impurities 333) in the PM layer 312 may also reduce diffusion of PM elements at the interfaces during thermal processing, suppressing PM material separation and improving bit error rate (BER) for overall cell performance. In some embodiments, one or both of beamline implantation and/or plasma implantation techniques may be used for the implantation process.

In general, the implantation of impurities 333 into one or more of the layers 306, 308, 310, 312, 314 may be in the direction indicated by arrow 440 of FIG. 4, which is substantially perpendicular to the surface S of the substrate 302, and may include any implant angle from -89° to +89° (e.g., relative to the direction indicated by arrow 440) for a beamline implantation process. The concentration profile of impurities 333 in the one or more layers 306, 308, 310, 312, 314 may depend on the species, energy, and dose of the impurities, as further described in connection with FIG. 4.

In some embodiments, the SD layer 308 may be doped with impurities 333. In experiments, it was found that as much as a 40% reduction in vertical leakage in the dummy cells can be achieved by implanting the SD layer 308. In some embodiments, the concentration of impurities 333 in the SD layer 308 may be greater than the concentration of impurities 333 in the other layers (e.g., layers 306, 310, 312, 314). In such an embodiment, the layers 310, 312, and 314 may have a concentration of impurities 333 that is greater than zero because impurities 333 may pass through, and become embedded in, the layers 310, 312, and 314 during implantation of the SD layer 308.

In some embodiments, more or fewer of the layers 306, 308, 310, 312, and 314 than depicted may be doped with impurities 333. For example, the bottom electrode layer 306 may be doped with impurities 333 in some embodiments. In other embodiments, only the layers 310, 312, and 314 may be doped with impurities 333. In other embodiments, only the layers 312 and 314 may be doped with impurities.

According to various embodiments, various suitable impurities may be used to dope the individual PCM elements 316A of the PCM device 300, and beamline implantation and plasma implantation techniques may be used for the implantation process.
In some embodiments, the impurities 333 may include one or more of arsenic (As), germanium (Ge), oxygen (O), silicon (Si), carbon (C), boron (B), argon (Ar), phosphorus (P), hydrogen (H), fluorine (F), selenium (Se), indium (In), and nitrogen (N). In some embodiments, the layers 306, 308, 310, 312, and/or 314 may be doped with the same impurities 333. In other embodiments, the layers 306, 308, 310, 312, and/or 314 may be doped with different impurities (e.g., having different chemical compositions). In other embodiments, the layers 306, 308, 310, 312, and/or 314 may be doped with other suitable impurities.

According to various embodiments, the electrode layers 306, 310, and 314 may include carbon (C). The electrode layers 306, 310, and 314 may be tuned for electrical resistivity, smoothness, and C bonding (sp2 or sp3) by an implantation process and a physical vapor deposition (PVD) process. In some embodiments, the electrode layers 306, 310, and/or 314 may include one or more conductive materials and/or semiconductive materials having a resistivity in the range of 1 milliohm-cm (mOhm-cm) to 100 mOhm-cm, for example: carbon (C) and carbon nitride (CxNy); n-doped polycrystalline silicon and p-doped polycrystalline silicon; alloys including Al, Cu, Ni, Cr, Co, Ru, Rh, Pd, Ag, Pt, Ir, Ta, and W; conductive metal nitrides including TiN, TaN, WN, and TaCN; conductive metal silicides including tantalum silicide, tungsten silicide, nickel silicide, cobalt silicide, and titanium silicide; conductive metal silicide nitrides including TiSiN and WSiN; conductive metal carbide nitrides including TiCN and WCN; and conductive metal oxides including RuO2.

According to various embodiments, the PM layer 312 may include a phase change material, such as a chalcogenide glass, that can be switched between a crystalline state and an amorphous state by application of heat generated by an electric current, for example, a material including one or more of germanium, antimony, tellurium, silicon, indium, selenium, sulfur, nitrogen, and carbon.

According to various embodiments, the SD layer 308 may include a semiconductor-based PN diode, a mixed ionic-electronic conduction (MIEC) device, or an ovonic threshold switch (OTS) based on a chalcogenide alloy system similar to that described for the memory element (e.g., the PM layer 312), which may further include an element capable of suppressing crystallization. In other embodiments, the layers 306, 308, 310, 312, and 314 may include other suitable materials having other suitable properties.

As can be seen, the PCM device 300 may further include a dielectric liner 318 conformally deposited on the surfaces of the stacked layers of the individual PCM elements 316A, 316B. A dielectric fill material 320 may be deposited on the dielectric liner 318 using any suitable technique to fill the areas between the individual PCM elements 316A, 316B. In some embodiments, the dielectric liner 318 may include silicon nitride (Si3N4, or generally SixNy, where x and y represent any suitable relative amounts), and the dielectric fill material 320 may include silicon oxide (SiO2). In other embodiments, the dielectric liner 318 and the dielectric fill material 320 may include other suitable materials.

As can be seen, the PCM device 300 may further include a bit line 324 coupled with the individual PCM elements 316A, 316B. In some embodiments, the bit line 324 may be electrically and/or directly coupled with the top electrode layer 314.
The bit line 324 may include any suitable metal including, for example, tungsten, and may be deposited using any suitable technique. In some embodiments, the PCM device 300 may represent a bit line socket having a width of about 30 microns to about 50 microns and/or may use a two-transistor (2T) decoding scheme.

FIG. 4 schematically illustrates a cross-sectional side view of stacked layers 306, 308, 310, 312, and 314 of a PCM device 400 that are selectively doped with impurities 333 (e.g., as indicated by arrow 440) in accordance with some embodiments. The PCM device 400 is depicted after deposition of the stacked layers 306, 308, 310, 312, and 314 and prior to patterning of the stacked layers 306, 308, 310, 312, and 314 and the word line 304.

In some embodiments, each of the stacked layers 306, 308, 310, 312, and 314 may be sequentially deposited to form the stacked layers. An implant film 330 may be deposited on the stacked layers to provide metal contamination control (e.g., to prevent material of the stacked layers 306, 308, 310, 312, and 314 from being sputtered into the environment of the implantation equipment). In some embodiments, the implant film 330 may include a silicon oxide (e.g., SiO2) film having a thickness ranging from 40 angstroms to 100 angstroms. In other embodiments, the implant film 330 may include other suitable materials or have other thicknesses. After implantation, the implant film 330 may be removed using any suitable technique including, for example, an etching process.

In some embodiments, a mask layer 332 may be deposited on the stacked layers 306, 308, 310, 312, and 314 and patterned such that the patterned mask layer 332 protects regions 326 where active cells (e.g., individual PCM element 316B of FIG. 3) are to be formed. Openings may be patterned in the mask layer 332 over regions 328 where dummy cells (e.g., individual PCM element 316A of FIG. 3) are to be formed. The mask layer 332 may include any suitable material, including a hard mask material such as a silicon oxide, or a photosensitive material such as a photoresist. After implantation, the mask layer 332 may be removed using any suitable technique including, for example, an etching process.

One or more of the stacked layers 306, 308, 310, 312, and 314 in the regions 328 may be doped with impurities 333 using an implantation process. For example, in some embodiments, the implantation of impurities 333 may be tuned to target the SD layer 308 (e.g., to provide a higher concentration of impurities 333 in the SD layer 308 than in the other layers, such as the PM layer 312). In some embodiments, the concentration of impurities 333 in the SD layer 308 may be greater than the concentration of impurities 333 in the PM layer 312 and/or the top electrode layer 314 and the middle electrode layer 310. Tuning the implant may include determining the dose, energy, and/or species to use for implantation by implanting stacked layers with various impurity species at various doses and energies and measuring the resulting concentration of impurities in each layer. The measurement can be performed, for example, by secondary ion mass spectrometry (SIMS) or energy dispersive X-ray spectroscopy (EDS). Both beamline implantation and plasma implantation techniques can be used for the implantation process.
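As a purely illustrative sketch of this region-selective doping (in Python; all names are hypothetical, and the tile dimensions are an assumption rather than values taken from the figures), cells at the edge of a tile can be classified as dummy cells to be implanted through openings in the mask, with interior cells protected as active cells:

    # Hypothetical sketch: classify cells of a small tile into dummy cells
    # (at the tile edge, exposed through openings in mask layer 332 and
    # implanted with impurities 333) and active cells (mask-protected).
    NUM_WORDLINES = 4  # assumed small tile; dimensions are illustrative
    NUM_BITLINES = 4

    def classify_tile(num_wl: int = NUM_WORDLINES, num_bl: int = NUM_BITLINES):
        """Map each (word line, bit line) intersection to 'dummy' or 'active'."""
        cells = {}
        for wl in range(num_wl):
            for bl in range(num_bl):
                on_edge = wl in (0, num_wl - 1) or bl in (0, num_bl - 1)
                cells[(wl, bl)] = "dummy" if on_edge else "active"
        return cells

    # Cells to be implanted (mask openings) vs. cells left masked.
    implant_sites = [xy for xy, k in classify_tile().items() if k == "dummy"]
    masked_sites = [xy for xy, k in classify_tile().items() if k == "active"]

In one reading of the embodiments above, only the implant_sites would receive the leakage-reducing implant; whether all edge cells, or only some of them, are dummy cells varies by embodiment.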
According to various embodiments, the impurities 333 may include one or more of arsenic (As), germanium (Ge), oxygen (O), silicon (Si), carbon (C), boron (B), argon (Ar), phosphorus (P), hydrogen (H), fluorine (F), selenium (Se), indium (In), and nitrogen (N). According to some embodiments, the implantation dose can be 1E14 to 1E17 atoms/cm2 and/or the implantation energy can be 500 eV to 80 keV. In some embodiments, the impurities 333 may include Si or C. In other embodiments, other suitable impurities and doses/energies may be used.

In other embodiments, the implantation of impurities 333 may be performed during other stages of manufacture of the PCM device 400. For example, in some embodiments, impurities 333 may be implanted in the regions where dummy cells are to be formed (e.g., using a mask layer 332 on the SD layer 308) after deposition of the SD layer 308 and prior to deposition of the middle electrode layer 310. In other embodiments, impurities 333 may be implanted after deposition of another of the stacked layers 306, 308, 310, 312, and 314 and prior to deposition of the top electrode layer 314. In other embodiments, the impurities 333 may be implanted after the stacked layers 306, 308, 310, 312, and 314 are patterned to form the trenches of FIG. 3 in which the dielectric materials 318 and 320 are disposed, and before the dielectric materials 318 and 320 are deposited. In other embodiments, the impurities 333 may be implanted after the cell array (e.g., individual PCM elements 316A, 316B) is formed and before the bit line 324 is deposited.

FIG. 5 schematically illustrates a cell array of a PCM device including active cells 516B and dummy cells 516A in accordance with some embodiments. In some embodiments, the array may represent a single tile 500. The tile 500 may be treated as a discrete unit during selection operations on a target cell. That is, in some embodiments, the tile 500 may be a unit of the cell array that is biased to select a target cell (e.g., a bit) in the array. In the depicted embodiment, the tile 500 includes cells (e.g., active cells 516B and dummy cells 516A) disposed at the intersections of four word lines 504 with four bit lines 524 (4 WL x 4 BL); however, other suitable tile sizes can be used in other embodiments.

According to various embodiments, as can be seen, the dummy cells 516A (e.g., within region 555) may be disposed at an edge of the tile 500. The active cells 516B may be electrically coupled with the dummy cells 516A via the word lines 504 and the bit lines 524 and may similarly be biased during selection or other operations. In some embodiments, the dummy cells 516A may be doped with impurities to reduce leakage as described herein, and the active cells 516B may not be doped with the impurities. The dummy cells 516A may be consistent with the embodiments described in connection with individual PCM element 316A of FIG. 3, and the active cells 516B may be consistent with the embodiments described in connection with individual PCM element 316B of FIG. 3.

FIG. 6 is a flowchart of a method 600 of fabricating a PCM device (e.g., the PCM device 300 of FIG. 3) according to some embodiments. The method 600 may be consistent with the embodiments described in conjunction with FIGS. 1-5 and vice versa.

At 602, the method 600 may include forming stacked layers (e.g., stacked layers 306, 308, 310, 312, and/or 314 of FIG. 4) of a phase change memory (PCM) device (e.g., the PCM device 400 of FIG. 4). According to various embodiments, the stacked layers may be formed by depositing a bottom electrode layer (e.g., bottom electrode layer 306 of FIG. 3) on a word line metal layer (e.g., word line 304 of FIG. 3)
using any suitable deposition technique, depositing a select device layer (e.g., SD layer 308 of FIG. 3) on the bottom electrode layer, depositing a middle electrode layer (e.g., middle electrode layer 310 of FIG. 3) on the select device layer, depositing a phase change material layer (e.g., PM layer 312 of FIG. 3) on the middle electrode layer, and/or depositing a top electrode layer (e.g., top electrode layer 314 of FIG. 3) on the phase change material layer.

At 604, the method 600 may include doping the stacked layers with impurities (e.g., impurities 333 of FIG. 4, as indicated by arrow 440 of FIG. 4) in a region corresponding to dummy cells (e.g., individual PCM element 316A of FIG. 3), e.g., region 328 of FIG. 4 or region 555 of FIG. 5, to reduce cell leakage of the dummy cells. In some embodiments, the regions (e.g., region 326 of FIG. 4) of active cells (e.g., active cell 516B) may be protected by a patterned mask layer (e.g., mask layer 332 of FIG. 4) such that the active cells are not doped with impurities during the doping of the stacked layers.

In some embodiments, doping the stacked layers includes doping the phase change material layer. In other embodiments, the doping may be configured to introduce impurities into other layers of the stack, including, for example, the select device layer. For example, doping may be performed on stacked layers including a select device layer and a phase change material layer, and the phase change material layer and the select device layer may be doped during the same implantation process (e.g., with the same impurity). In some embodiments, doping the select device layer may provide a concentration of impurities in the select device layer that is higher than the impurity concentration in the phase change material layer. As another example, in some embodiments, the stacked layers may include only the select device layer on the bottom electrode layer when the select device layer is doped. Other configurations of stacked layers can be doped with impurities as described herein.

Various operations are described as multiple discrete operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as implying that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiments. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

Embodiments of the present disclosure may be implemented into a system using any suitable hardware and/or software, configured as desired. FIG. 7 schematically illustrates an example system (e.g., computing device 700 of FIG. 7) including a PCM device (e.g., the PCM device 300 of FIG. 3) according to various embodiments described herein. The computing device 700 may house a board such as motherboard 702 (e.g., in housing 709). The motherboard 702 may include a number of components, including but not limited to a processor 704 and at least one communication chip 706. The processor 704 may be physically and electrically coupled to the motherboard 702. In some implementations, the at least one communication chip 706 may also be physically and electrically coupled to the motherboard 702.
In further embodiments, the communication chip 706 may be part of the processor 704.

Depending on its applications, the computing device 700 may include other components that may or may not be physically and electrically coupled to the motherboard 702. These other components may include, but are not limited to, volatile memory (e.g., dynamic random access memory (DRAM)), non-volatile memory (e.g., PCM 708 or read-only memory (ROM)), flash memory devices, digital signal processors, cryptographic processors, chipsets, antennas, displays, touchscreen displays, touchscreen controllers, batteries, audio codecs, video codecs, power amplifiers, Geiger counters, accelerometers, gyroscopes, speakers, cameras, and mass storage devices (e.g., hard disk drives, compact disks (CDs), digital versatile disks (DVDs), etc.).

According to various embodiments, the PCM 708 may be consistent with the embodiments described herein. For example, the PCM 708 may include a PCM device (e.g., the PCM device 300 of FIG. 3) as described herein.

The communication chip 706 may enable wireless communications for the transfer of data to and from the computing device 700. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 706 may implement any of a number of wireless standards or protocols, including, but not limited to, Institute of Electrical and Electronics Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), and the Long Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., the LTE Advanced project and the Ultra Mobile Broadband (UMB) project, also referred to as "3GPP2"). IEEE 802.16 compatible broadband wireless access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 706 may operate in accordance with a Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 706 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 706 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, and beyond. The communication chip 706 may operate in accordance with other wireless protocols in other embodiments.

The computing device 700 may include a plurality of communication chips 706.
For example, a first communication chip 706 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 706 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, and others.

In various implementations, the computing device 700 may be a mobile computing device, a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further embodiments, the computing device 700 may be any other electronic device that processes data.

Examples

According to various embodiments, the present disclosure describes a device. Example 1 of a device may include a plurality of phase change memory (PCM) elements, wherein individual PCM elements of the plurality of PCM elements are dummy cells, the dummy cells including a bottom electrode layer, a selection device layer disposed on the bottom electrode layer, a middle electrode layer disposed on the selection device layer, a phase change material layer disposed on the middle electrode layer, and a top electrode layer disposed on the phase change material layer, wherein the phase change material layer is doped with impurities to reduce cell leakage of the dummy cells. Example 2 may include the device of example 1, wherein the selection device layer is doped with impurities to reduce cell leakage of the dummy cells. Example 3 may include the device of example 2, wherein the selection device layer and the phase change material layer are doped with the same impurities. Example 4 may include the device of example 3, wherein the selection device layer has a higher concentration of the impurities than the phase change material layer. Example 5 may include the device of any one of examples 1-4, wherein the phase change material layer and the selection device layer comprise a chalcogenide material, and the impurities are selected from the group consisting of arsenic (As), germanium (Ge), oxygen (O), silicon (Si), carbon (C), boron (B), and nitrogen (N). Example 6 may include the device of example 5, wherein the impurity is Si, C, or Ge. Example 7 may include the device of example 6, wherein the impurity is Si. Example 8 may include the device of any one of examples 1-4, further comprising a tile comprising cells of a cell array, wherein the dummy cells are disposed at an edge of the tile. Example 9 may include the device of example 8, wherein active cells of the tile of cells are electrically coupled with the dummy cells and are not doped with the impurities. According to various embodiments, the present disclosure describes a method. Example 10 of a method may include forming stacked layers of a phase change memory (PCM) device by depositing a bottom electrode layer on a word line metal layer, depositing a selection device layer on the bottom electrode layer, depositing a middle electrode layer on the selection device layer, and depositing a phase change material layer on the middle electrode layer; and doping the phase change material layer with impurities in a region of the stacked layers corresponding to dummy cells to reduce cell leakage of the dummy cells. Example 11 may include the method of example 10, further comprising doping the selection device layer with impurities to reduce cell leakage of the dummy cells.
Example 12 may include the method of example 11, wherein the selection device layer and the phase change material layer are doped with the same impurities during the same implantation process. Example 13 may include the method of example 12, wherein doping the selection device layer provides a concentration of the impurities in the selection device layer that is higher than the concentration of the impurities in the phase change material layer. Example 14 may include the method of any one of examples 10-13, wherein the phase change material layer and the selection device layer comprise a chalcogenide material, and the impurities are selected from the group consisting of arsenic (As), germanium (Ge), oxygen (O), silicon (Si), carbon (C), boron (B), and nitrogen (N). Example 15 may include the method of example 14, wherein the impurity is Si. Example 16 may include the method of any one of examples 10-13, wherein the dummy cells are disposed at an edge of a tile comprising cells of a cell array. Example 17 may include the method of example 16, wherein regions of active cells of the tile of cells are protected by a patterned mask layer such that, during the doping of the phase change material layer, the active cells are not doped with the impurities. According to various embodiments, the present disclosure describes a system. Example 18 of a system may include a circuit board and a die coupled with the circuit board, the die including a plurality of phase change memory (PCM) elements, wherein individual PCM elements of the plurality of PCM elements are dummy cells, the dummy cells including a bottom electrode layer, a selection device layer disposed on the bottom electrode layer, a middle electrode layer disposed on the selection device layer, a phase change material layer disposed on the middle electrode layer, and a top electrode layer disposed on the phase change material layer, wherein the phase change material layer is doped with impurities to reduce cell leakage of the dummy cells. Example 19 may include the system of example 18, wherein the selection device layer is doped with impurities to reduce cell leakage of the dummy cells. Example 20 may include the system of example 19, wherein the selection device layer has a higher concentration of the impurities than the phase change material layer. Example 21 may include the system of any one of examples 18-20, wherein the system is a mobile computing device including one or more of the following coupled with the circuit board: an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, a Geiger counter, an accelerometer, a gyroscope, a speaker, or a camera.

Various embodiments may include any suitable combination of the above-described embodiments, including alternative (or) embodiments of embodiments that are described in conjunctive form (and) above (e.g., the "and" may be "and/or"). Furthermore, some embodiments may include one or more articles of manufacture (e.g., non-transitory computer-readable media) having instructions stored thereon that, when executed, result in actions of any of the above-described embodiments. Moreover, some embodiments may include apparatuses or systems having any suitable means for carrying out the various operations of the above-described embodiments.

The above description of illustrated embodiments, including what is described in the Abstract, is not intended to be exhaustive or to limit embodiments of the present disclosure to the precise forms disclosed.
Although specific embodiments and examples are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize.

These modifications can be made to embodiments of the present disclosure in light of the above detailed description. The terms used in the following claims should not be construed to limit the various embodiments of the disclosure to the specific implementations disclosed in the specification and the claims. On the contrary, the scope is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
A flash memory storage system may include several modules of flash memory storage manager circuitry, each having some associated flash memory. The modules may be interconnected via the flash memory storage manager circuitry of the modules. The system may be able to write data to and/or read data from the flash memory associated with various ones of the modules by routing the data through the flash memory storage circuitry of the modules. The system may also be able to relocate data for various reasons using such read and write operations. The flash memory storage circuitry of the modules keeps track of where data actually is in the flash memory.
1. A plurality of memory circuits, each of which is connected to a respective one of a plurality of integrated circuits ("ICs"), each of the ICs being connected to at least one of the other ICs by inter-IC connections so that an IC exchanges memory circuit data with another IC via the inter-IC connections, each of the ICs including memory manager circuitry comprising:
a logic block manager for maintaining a unique global identification ("ID") for each block of data contained in any portion of any of the memory circuits, the global ID including a node ID identifying the IC that is connected to the memory circuit containing that block and a logical block number for that block;
a translator for maintaining a mapping between (1) the logical block number of each block contained in the memory circuit connected to the IC that includes that translator, and (2) a physical portion ID of a portion of that memory circuit that contains that block; and
a driver for receiving the physical portion ID from the translator of the IC that includes that driver and accessing the portion of the memory connected to that IC that is identified by that physical portion ID.
2. The memory circuits defined in claim 1 wherein at least one of the ICs includes circuitry for receiving a request from an external source for a block identified by the global ID of that requested block, and wherein the logic block manager of that IC includes circuitry for using the global ID of the requested block to determine the node ID and the logical block number of the requested block.
3. The memory circuits defined in claim 2 wherein, if the circuitry for using determines that the node ID of the requested block is the node ID of the IC that includes that circuitry for using, then the logic block manager circuitry of that IC applies the logical block number of the requested block to the translator of that IC.
4. The memory circuits defined in claim 2 wherein, if the circuitry for using determines that the node ID of the requested block is not the node ID of the IC that includes that circuitry for using, then the logic block manager circuitry of that IC employs circuitry for causing that IC to route data for the request to at least one other of the ICs.
5. The memory circuits defined in claim 4 wherein the circuitry for causing uses at least one of the inter-IC connections to route data for the request to at least one other of the ICs.
6. The memory circuits defined in claim 4 wherein the circuitry for causing is responsive to node ID data in the request to select a route for the request to at least one other of the ICs.
7. The memory circuits defined in claim 6 wherein the circuitry for causing is additionally responsive to data, stored in the IC that includes that circuitry for causing, indicative of topology of the ICs and the inter-IC connections.
8. The memory circuits defined in claim 4 wherein, when the request reaches the other IC ("the reading IC") having the node ID of the requested block, the logic block manager of the reading IC applies the logical block number of the requested block to the translator of the reading IC.
9. The memory circuits defined in claim 8 wherein the driver of the reading IC reads the requested data from the memory circuitry connected to the reading IC so that the reading IC routes that data back to the IC that received the request from the external source.
10. The memory circuits defined in claim 1 wherein each of the ICs ("the source IC") includes circuitry for transferring a block ("the transferred block") accessed by the driver of the source IC to
another of the ICs ("the destination IC") for storage in the memory circuitry connected to the destination IC.
11. The memory circuits defined in claim 10 wherein the circuitry for transferring employs the inter-IC connections.
12. The memory circuits defined in claim 10 further comprising:
circuitry for updating the mapping of the translator of the source IC to delete the logical block number of the transferred block, and for updating the mapping of the translator of the destination IC to add the logical block number of the transferred block.
13. The memory circuits defined in claim 10 further comprising:
circuitry for changing the node ID of the transferred block in the block manager in all of the ICs from the node ID of the source IC to the node ID of the destination IC.
14. The memory circuits defined in claim 1 wherein each of the ICs ("the source IC") further includes:
circuitry for maintaining a count of how many times each of the other ICs requests a respective block contained in the memory circuit that is connected to the source IC; and
circuitry for transferring a block ("the transferred block"), for which the count for one of the other ICs ("the destination IC") exceeds a threshold value, from the memory circuit connected to the source IC to the memory circuit connected to the destination IC.
15. The memory circuits defined in claim 14 wherein the circuitry for transferring employs the inter-IC connections.
16. The memory circuits defined in claim 14 further comprising:
circuitry for updating the mapping of the translator of the source IC to delete the logical block number of the transferred block, and for updating the mapping of the translator of the destination IC to add the logical block number of the transferred block.
17. The memory circuits defined in claim 14 further comprising:
circuitry for changing the node ID of the transferred block in the block manager in at least one of the ICs from the node ID of the source IC to the node ID of the destination IC.
18. A memory manager for managing access to a plurality of memory circuits, each of which is connected to a respective one of a plurality of integrated circuits ("ICs"), one of the ICs being connected to at least one of the other ICs by inter-IC connections so that one IC exchanges blocks of memory circuit data with another IC via the inter-IC connections, each of the ICs ("the source IC") including a memory manager comprising:
circuitry for maintaining a count of how many times a given IC requests at least one block contained in the memory circuit that is connected to the source IC; and
circuitry for transferring a block ("the transferred block"), for which the count for one of the other ICs ("the destination IC") exceeds a threshold value, from the memory circuit connected to the source IC to the memory circuit connected to the destination IC.
19. The memory manager defined in claim 18 wherein the circuitry for transferring employs the inter-IC connections.
20. The memory manager defined in claim 18 wherein an IC further comprises:
circuitry for maintaining a record of where a given block is currently stored, said record including a node identification ("ID") which identifies the IC that is connected to the memory circuit in which a given block is currently stored.
Cross Reference to Related Applications

This application claims the benefit of U.S. provisional patent applications No. 61/167,450, filed April 7, 2009, and No. 61/169,032, filed April 14, 2009, both of which are hereby incorporated by reference herein in their entireties.

Background

Larger data storage has been in increased demand in recent years. Data storage based on solid state flash memory offers compelling advantages in terms of read/write throughput, stability, shock and vibration resistance, etc., compared with traditional magnetic disk based storage. Some such solid state flash memory storage may need to be larger than others, and it can therefore be desirable to be able to use various numbers of identical or substantially identical modules to construct such flash memory storage systems in any of a wide range of sizes. It is also important for such flash storage and the associated memory access circuitry to be able to automatically keep track of where all data is in the memory so that the data can be efficiently and reliably accessed. The present disclosure facilitates such aspects of electronic data memory construction and/or operation.

Summary

In accordance with certain possible aspects of the disclosure, a plurality of memory circuits may each be connected to a respective one of a plurality of integrated circuits ("ICs"). Each of the ICs may be connected to at least one other of the ICs by inter-IC connections so that an IC exchanges memory circuit data with another IC via the inter-IC connections. Each of the ICs may include memory manager circuitry that comprises a logic block manager for maintaining a unique global identification ("ID") for each block of data contained in any portion of any of the memory circuits, the global ID including a node ID identifying the IC that is connected to the memory circuit containing that block and a logical block number for that block. The memory manager circuitry for each IC may further comprise a translator for maintaining a mapping between (1) the logical block number of each block contained in the memory circuit connected to the IC that includes that translator, and (2) a physical portion ID of a portion of that memory circuit that contains that block.
The memory manager for each IC may still further comprise a driver for receiving the physical portion ID from the translator of the IC that includes that driver and accessing the portion of the memory connected to that IC that is identified by that physical portion ID.

In accordance with certain other aspects of the disclosure, in memory circuits as summarized above, each of the ICs ("the source IC") may include circuitry for transferring a block ("the transferred block") accessed by the driver of the source IC to another of the ICs ("the destination IC") for storage in the memory circuitry connected to the destination IC. In such memory circuits the circuitry for transferring may employ the inter-IC connections.

In accordance with certain still other possible aspects of the disclosure, in memory circuits as summarized above, each of the ICs ("the source IC") may further include circuitry for maintaining a count of how many times each of the other ICs requests a respective block contained in the memory circuit that is connected to the source IC, and circuitry for transferring a block ("the transferred block"), for which the count for one of the other ICs ("the destination IC") exceeds a threshold value, from the memory circuit connected to the source IC to the memory circuit connected to the destination IC.

Still other possible aspects of the disclosure relate to managing access to a plurality of memory circuits, each of which is connected to a respective one of a plurality of integrated circuits ("ICs"). One of the ICs may be connected to at least one of the other ICs by inter-IC connections so that one IC exchanges blocks of memory circuit data with another IC via the inter-IC connections, each of the ICs ("the source IC") including a memory manager. Each such memory manager may comprise circuitry for maintaining a count of how many times a given IC requests at least one block contained in the memory circuit that is connected to the source IC, and circuitry for transferring a block ("the transferred block"), for which the count for one of the other ICs ("the destination IC") exceeds a threshold value, from the memory circuit connected to the source IC to the memory circuit connected to the destination IC. In such memory managers the circuitry for transferring may employ the inter-IC connections.

Further features of the disclosure, its nature and various advantages will be more apparent from the accompanying drawings and the following detailed description.

Brief Description of the Drawings

FIG. 1 is a simplified schematic block diagram of an illustrative embodiment of circuitry in accordance with this disclosure.
FIG. 2 is a simplified logical diagram of an example of a system topology in accordance with the disclosure.
FIG. 3 is similar to FIG. 2 for another illustrative system topology in accordance with the disclosure.
FIG. 4 is similar to FIG. 2 for yet another illustrative system topology in accordance with the disclosure.
FIG. 5 is a more detailed (but still simplified) schematic block diagram of an illustrative embodiment of a portion of the FIG. 1 circuitry in accordance with the disclosure.
FIG. 6 is a simplified diagram of an illustrative embodiment of a data packet in accordance with the disclosure.
FIG. 7 is similar to FIG. 5, but shows an illustrative embodiment of more possible details in accordance with the disclosure.
FIG. 8 is again similar to FIG. 5, but shows an illustrative embodiment of still more possible details in accordance with the disclosure.
FIG. 9 is a simplified schematic block diagram of an illustrative embodiment of a system (or a representative portion of a system) in accordance with the disclosure.
FIG. 10 is a simplified schematic block diagram of an illustrative embodiment of a portion of the FIG. 1 circuitry in accordance with the disclosure.
FIG. 11 is a simplified schematic block diagram of another illustrative embodiment of a portion of the FIG. 1 circuitry in accordance with the disclosure.
FIG. 12 is a simplified block diagram of an illustrative embodiment of several elements in accordance with certain possible aspects of the disclosure.
FIGS. 13a and 13b (sometimes referred to collectively as FIG. 13) are a simplified flow chart of an illustrative embodiment of certain possible method aspects of the disclosure.
FIGS. 14a-c (sometimes referred to collectively as FIG. 14) are a simplified flow chart of an illustrative embodiment of certain other possible method aspects of the disclosure.
FIGS. 15a-c (sometimes referred to collectively as FIG. 15) are a simplified schematic block diagram of an illustrative embodiment of circuitry in accordance with certain possible aspects of the disclosure.
FIGS. 16a-c (sometimes referred to collectively as FIG. 16) are a simplified flow chart of certain still other possible method aspects of the disclosure.

Detailed Description

Illustrative embodiments of electronic data memory systems in which the present disclosure can be implemented and practiced are shown in Zhou et al. U.S. patent application No. 12/728,757, filed March 22, 2010 ("the Zhou et al. reference"), which is hereby incorporated by reference herein in its entirety. FIGS. 1-11 herein are repeated from the Zhou et al. reference and are briefly described below. More detailed description is contained in the text of the Zhou et al. reference.

FIG. 1 shows an illustrative embodiment of an integrated circuit ("IC") 10 that can be used as part of a distributed electronic data storage or memory system in accordance with this disclosure. (As used herein, terms like "circuit," "circuitry," "integrated circuit," "IC," and the like may refer to circuitry with or without software that runs on the circuitry and/or that controls various operations of the circuitry. As just one illustration of this, an "IC" (as that term is used herein) may include one or more processors with or without software that runs in and/or controls various operations of the processor(s).) The circuitry of IC 10 includes flash memory controller 20, cache memory controller 30, interface controller 40, direct memory access ("DMA") circuitry 50, central processing unit ("CPU") circuitry 60, bus circuitry 70, and controllable routing circuitry 80. IC 10 is connected to flash memory channels 120 and cache memory 130. Flash memory channels 120 are typically the relatively large, main memory for IC 10. Cache memory 130 is typically a smaller, temporary memory for IC 10. For example, cache memory 130 may be used for relatively short-term storage of data on its way to or from flash memory 120.

Interface controller 40 can be used for connection of IC 10 to other circuitry (not shown) that may be thought of as external to the memory system of which elements 10, 120, and 130 are a part. For example, the memory system may store data supplied by that external circuitry. Similarly, the memory system may supply its stored data to the external circuitry.
Connections 140 (to the external circuitry) may supply to IC 10 data write and/or data read instructions (requests or commands), as well as acting as the conduit for memory data exchange between IC 10 and the external circuitry.

Controller 20 controls writing data to and reading data from flash memory 120. Controller 30 functions similarly for cache memory 130. CPUs 60 provide overall control for IC 10. DMA elements 50 support at least many aspects of memory writing and reading, with less or no involvement of CPUs 60 in such activities. Bus circuitry 70 provides connections between other circuit elements on IC 10. Routing circuitry 80 provides controllable connections (1) between bus circuitry 70 and similar routing circuitry 80 in one or more other instances of IC 10, and (2) between such other instances of IC 10. In a memory system that includes such multiple ICs 10, each IC is preferably constructed as shown in FIG. 1, and each IC is connected to its own flash memory 120 and its own buffer memory 130. Accordingly, such a memory system may be referred to as a distributed memory system, and in general such a memory system may include any number of ICs 10, etc., to provide memories having any of a wide range of sizes.

IC 10 is just one example of how this type of system component can be constructed in accordance with this disclosure. For example, in other embodiments of the disclosure, such an IC may omit some of the elements shown for IC 10 in FIG. 1, and/or such an IC may have other elements that are not shown for IC 10 in FIG. 1.

Routing circuitry 80 may be thought of as a crossbar switch (or at least as being like a crossbar switch). In general, such routing circuitry 80 can connect any of circuitry 80's ports (labeled P1-P9) to any other of circuitry 80's ports (although there may be some inter-port connections that cannot be made). Inter-IC connections 210 are used to connect the "external ports" P4-P9 of the depicted IC 10 to similar ports of one or more other IC 10 instances in the distributed memory system.

FIGS. 2-4 show some examples of distributed memory system topologies that can be used (although many other topologies are also usable). Each small circle 10 in each of FIGS. 2-4 represents one instance of an IC 10. Each line 210 in each of FIGS. 2-4 represents an inter-IC connection. The FIG. 2 topology may be referred to as a two-dimensional ("2D") mesh; the FIG. 3 topology may be referred to as a three-dimensional ("3D") mesh; the FIG. 4 topology may be referred to as a triangle cube.

FIG. 5 shows some additional details as to how routing circuitry 80 may be constructed. In this construction, each external port P4-P9 is connected to serializer-deserializer ("SERDES") circuitry 512 in physical layer circuitry 510 of routing circuitry 80. Each SERDES circuitry 512 can convert signals between (1) serial form on inter-IC connections 210 and (2) parallel form for use inside circuitry 80 and elsewhere on IC 10. (Internal ports P1-P3 may be parallel ports, which do not require SERDES circuitry.) SYNC, align, and ACK/NAK circuitry 522 (in link layer circuitry 520 of routing circuitry 80) performs synchronization ("SYNC"), alignment ("ALIGN"), and packet acknowledgement ("ACK/NAK") functions for the signals coming from and/or going to each external port P4-P9. Packet routing circuitry 532 (in packet layer circuitry 530 of routing circuitry 80) performs the actual routing of data packets between selectable different ones of ports P1-P9.

The organization of a typical data packet is shown in FIG. 6.
For example, such a packet may include a header, which in turn includes an IC 10 identification ("ID") and memory ("MEM") address for the associated actual "data payload." The data payload follows the header, and is in turn followed by cyclic redundancy check ("CRC") or similar information for helping to make sure that the data payload has been correctly received. The IC 10 ID may also sometimes be referred to as the node ID.

FIG. 7 shows that some translation may be needed between the information in the header of a packet and the action that packet routing circuitry 532 needs to take in order to get a packet from one IC 10 to another IC 10. For example, in a system like that shown in FIG. 2, it may be necessary to send a packet from the upper right "node" (IC 10) to the lower left "node" (IC 10). This may be due to the data in the packet being stored in the flash memory 120 connected to the upper right IC 10, but being needed to satisfy a request for data received by interface 40 of the lower left IC 10. The packet being discussed can be routed from the "source" or "owner" (upper right) IC 10 to the "destination" (lower left) IC 10 via the upper-most arcuate inter-IC connection 210 to the upper left IC 10, and then via the left-most arcuate inter-IC connection 210 to the lower left IC 10. The header for the packet may include the intended destination (IC 10 ID), but the packet routing circuitry 532 in the upper right IC 10 may need to translate that information into initial information to the effect that a way to get the packet to the lower left destination IC 10 is via the routing circuitry 80 of the upper left IC 10. The routing table circuit 740 (FIG. 7) of the routing circuitry 80 of the upper right source IC 10 may therefore be programmed based on the topology of the system (e.g., FIG. 2) to tell the associated packet routing circuitry 532 that when that circuitry 532 gets a packet that is to be routed to the lower left IC 10, circuitry 532 should in fact route that packet to the upper left IC 10. (The upper left IC will forward that packet on to the lower left destination IC.)

FIG. 8 shows that packet routing circuitry 532 may include input and/or output buffer circuitry 850 if needed. Buffer circuitry 850 may be input buffer circuitry and/or output buffer circuitry for buffering data in each of the port channels of packet routing circuitry 532.

FIG. 9 shows an example of a possible physical layout of a distributed flash memory system (or a representative portion of such a system) in accordance with the disclosure. Element 900 is a printed circuit board ("PCB"). Six ICs 10a-f are mounted on PCB 900. Also mounted on PCB 900 are the flash memories 120a-f and cache memories 130a-f that are connected to each of ICs 10 (e.g., via circuit traces on PCB 900). Inter-IC connections 210 and external connections 140 (e.g., as in FIG. 1) may also be provided (at least in part) as traces on PCB 900. Multiple instances of PCB 900 may be connected to one another via a backplane on which the PCBs are mounted.

FIG. 10 shows an illustrative ("crossbar") construction of routing circuitry 80. Any two ports P1-P9 can be connected to one another via crossbar conductor(s) CB. The switches S between CB and each of the two ports that it is desired to interconnect are closed (by assertion of appropriate control signals C). All other switches S are open.

FIG. 11 shows that one crossbar network implementation of routing circuitry 80 can concurrently and independently make two or more port-to-port connections.
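As a concrete, purely illustrative rendering of the packet format of FIG. 6 and the next-hop translation performed with routing table 740, consider the following Python sketch. All names and the node labels are hypothetical stand-ins (real headers would carry numeric node IDs), and the example table corresponds to the routing scenario just described, in which the upper-right source forwards packets for the lower-left destination via the upper-left node:

    from dataclasses import dataclass
    import zlib

    @dataclass
    class Packet:
        dest_node_id: str  # ID of the destination/owner IC 10 (FIG. 6 header)
        mem_address: int   # memory address within the owner node (FIG. 6 header)
        payload: bytes     # the data payload

        def crc(self) -> int:
            # CRC carried after the payload (FIG. 6) to verify correct receipt.
            return zlib.crc32(self.payload)

    # Routing table 740 as it might be programmed into the upper-right IC 10
    # for the FIG. 2 scenario described above. For each final destination it
    # names the neighbor (next hop) to which packet routing circuitry 532
    # should actually forward the packet.
    ROUTING_TABLE_UPPER_RIGHT = {
        "upper-left": "upper-left",    # direct neighbor
        "lower-left": "upper-left",    # no direct link: go via the upper-left IC
        "lower-right": "lower-right",  # assumed direct neighbor in the mesh
    }

    def next_hop(packet: Packet, table: dict) -> str:
        """Translate the packet's destination node ID into the next hop."""
        return table[packet.dest_node_id]

A node whose own ID matches dest_node_id would deliver the packet locally (toward bus circuitry 70) rather than forwarding it to another port.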
Each such port-to-port connection is made using a respective one of CBa, CBb, etc., and the associated switches Sa, Sb, etc.

The present disclosure provides circuitry and methods (or systems) for providing storage management in a distributed flash storage environment like that illustrated by FIGS. 1-11. A storage manager of this disclosure provides memory data block service across distributed storage nodes (e.g., like ICs 10 and their associated flash 120 and cache 130 memory circuits) to still higher-level structure like a file system or database management system. (A "block" may be any convenient amount of data. For example, a block may be the amount of data that fits in the smallest amount of flash memory 120 that can be separately addressed for data writing or reading. A block will typically be a plurality of data words, but each flash memory 120 can typically hold many such blocks.)

In accordance with certain possible features of the disclosure, the storage manager may map logical data blocks to physical data blocks of the flash memories 120. In accordance with certain other possible features of the disclosure, the storage manager may provide dynamic data block migration across different storage nodes 10/120/130 to improve data access efficiency. The distributed storage manager is preferably circuitry in and/or software running on each of the ICs 10 in the distributed storage system. The storage manager system elements in each IC 10 are preferably tightly coupled to the storage manager system elements in all of the other ICs 10 in the distributed system. This tight coupling can be via the routing circuitry 80 of the ICs and the inter-IC connections 210 between the ICs.

An illustrative embodiment of a distributed flash storage manager 1200 is shown in FIG. 12. As shown in this FIG., storage manager 1200 has a layered structure. Flash device driver 1230 is the lowest layer in this structure. The next higher layer is flash translation layer 1220. The upper layer is logic block manager layer 1210. Each of these layers may be circuitry (or may include circuitry) on each instance of IC 10 in the distributed flash storage system. Alternatively, each of these layers may be or may include corresponding firmware and/or software running in circuitry on each instance of IC 10 in the distributed system. (Again, as noted earlier in this specification, terms like "circuitry" as used herein are generic to circuitry alone and to circuitry with suitable firmware and/or software.)

Considering first flash device driver layer 1230, this layer performs hardware-related functions for storage manager 1200. For example, layer 1230 may provide the actual physical device identification ("ID") for the one of several flash devices 120 (connected to the IC 10 including this particular instance of storage manager 1200) that is to be accessed in a particular memory transaction (data write or data read). Layer 1230 may additionally identify the read/write sector in that flash device 120 that is to be accessed. Layer 1230 may still further provide the DMA 50 (FIG. 1) data transfer (e.g., from flash to cache memory or vice versa).

From the foregoing, it will be seen that the outputs of layer 1230 are specific to particular physical locations in the immediately associated memory elements 120/130 that are to be used in the particular memory transaction being carried out. Layer 1230 gets at least the basics of this physical location information from the associated flash translation layer 1220.
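One way to picture the cooperation of the three layers of FIG. 12 is the following minimal Python sketch (hypothetical names throughout; a sketch under the assumptions noted in the comments, not the patent's implementation). The physical-location fields used here anticipate the description of flash translation layer 1220 that continues below:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GlobalID:
        node_id: int         # identifies the owner IC 10
        logical_block: int   # logical block number within that node

    @dataclass(frozen=True)
    class PhysicalPortionID:
        channel: int   # flash 120 channel number
        device: int    # flash 120 device number
        block: int     # flash 120 block number
        sector: int    # flash 120 sector number

    class FlashDeviceDriver:
        """Layer 1230: hardware access to the locally attached flash 120."""
        def read(self, portion: PhysicalPortionID) -> bytes:
            # Placeholder for the actual device access / DMA 50 transfer.
            raise NotImplementedError

    class FlashTranslationLayer:
        """Layer 1220: maps logical block numbers to physical portion IDs."""
        def __init__(self) -> None:
            self.mapping: dict[int, PhysicalPortionID] = {}

        def translate(self, logical_block: int) -> PhysicalPortionID:
            return self.mapping[logical_block]

    class LogicBlockManager:
        """Layer 1210: accepts global IDs; passes only locally owned blocks down."""
        def __init__(self, node_id: int, ftl: FlashTranslationLayer,
                     driver: FlashDeviceDriver) -> None:
            self.node_id = node_id
            self.ftl = ftl
            self.driver = driver

        def read_block(self, gid: GlobalID) -> bytes:
            if gid.node_id != self.node_id:
                # Not owned here: in the real system the request would be
                # forwarded to the owner node via routing circuitry 80 and
                # inter-IC connections 210 rather than raising an error.
                raise LookupError(f"block owned by node {gid.node_id}")
            portion = self.ftl.translate(gid.logical_block)  # layer 1220
            return self.driver.read(portion)                 # layer 1230

The filtering in read_block reflects the point made above: the upper layers hand layer 1230 physical locations only for blocks actually owned by the locally connected memory elements.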
Note, however, that upper layers 1210 and 1220 preferably give to the associated layer 1230 only information for blocks that are in the memory elements 120/130 that are connected to the IC 10 that includes this instance of elements 1200. Thus one of the functions of upper layers 1210 and 1220 is to effectively filter out (and not pass on to the associated layer 1230) information for any logical blocks that are not physically "owned by" the elements 120/130 connected to the IC 10 including this element 1200 instance. ("Owned by" means that the block is actually stored in the elements 120/130 that "own" that block.)

Flash translation layer 1220 typically provides mapping between each "logical" block of memory data and the physical portion (also sometimes referred to as a block) of the memory resources 120/130 that actually contains ("owns") that block of data. A physical block may be identified by a node (IC 10) identification ("ID"), a flash 120 channel number, a flash 120 device number, a flash 120 block number, and a flash 120 sector number. Each logical block may be identified by a node (IC 10) ID and a logical block number. Flash translation layer 1220 may therefore maintain a mapping table whereby each immediately above-mentioned logical block number can be converted to the appropriately corresponding flash channel number, flash device number, flash block number, and flash sector number (all forming parts of a physical portion ID). Again, if (and only if) these last-mentioned physical location numbers are for a block owned by the memory 120 connected to the IC 10 having the associated node ID, then layer 1220 passes these physical location numbers on to the associated layer 1230 for use in accessing the identified physical portion of the associated memory 120.

Each layer 1220 may also perform related services like block allocation (e.g., when new data is initially written into memory 120), garbage collection (e.g., when a portion of memory 120 no longer contains data that may be needed), and wear leveling (e.g., to avoid excessive over-use of some portions of memory 120 while other portions are not being accessed as frequently).

Logic block manager 1210 provides storage block service to the entire system (i.e., all of the nodes 10/120/130 in an entire system). Each block has a unique global identification ("ID"), which includes a node (IC 10) ID and a logical block number. Any node can request to access any block anywhere in the entire system using the global ID for that block. Based on the node ID portion of the global ID, the request is routed to the correct IC 10 (the "owner" of the requested block). This routing can be performed via the routing circuitry 80 and inter-IC connections 210 needed to get the request from the requesting node to the owner node. When the request reaches the owner node (IC 10), the logic block manager 1210 applies the logical block number part of the request to the flash translation layer 1220 of that IC 10. That layer 1220 then processes the logical block number information as described earlier in this specification, leading ultimately to accessing the requested block in the flash memory 120 that is connected to the owner node IC 10.

FIGS. 13a and 13b (sometimes referred to collectively as FIG. 13) show an illustrative embodiment of how a read command or request may be handled in distributed flash memory systems in accordance with this disclosure. At 1310, any node (IC 10) may initiate a read command.
The node initiating such a read command may be referred to as the "requester node" or "the requester." The read command may include the global ID of the requested data block.

At 1320 the read command is routed to the node (IC 10) that is the "owner" of the requested data block. This routing can take place through the interconnect networks 80/210 of the system. As noted earlier, the global ID of each data block includes the node ID of that block. The node ID identifies the node that is the owner of the block, which enables interconnect networks 80/210 to route the read command to the proper node in the system.

At 1330 the owner node checks the status of the data block identified in the read command. Two outcomes of such a check are possible. First, it may be found that the data block is "free" (meaning, e.g., that no node is currently writing to that block). Alternatively, it may be found that the data block is "locked" (meaning, e.g., that some node is currently writing to that block). If the block is free, control passes from 1330 to 1340. We will first continue with this branch from 1330, and will return to the other branch from 1330 later.

At 1340 the circuitry of the owner node reads the requested data out of the block identified in the read command. This will typically require processing the logical block number portion of the global ID of the requested block through the storage manager 1200 (FIG. 12) of the owner node as described earlier in this specification. Also at 1340 the data block thus read out is routed to the requester via the interconnect networks 80/210 of the system. The read data may thus get back to the requester via the same route established for the read request, but in the opposite direction. This satisfies the read request, and so the read protocol can end at 1350.

Returning now to the other branch from 1330, if the data block is locked, control passes from 1330 to 1360. At 1360, the owner node sends a data block non-available status packet back to the requester via the interconnect networks 80/210. At 1370 the requester receives this non-available status packet. At 1380 the requester can try again to satisfy its read request by restarting the protocol at 1310.
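Continuing the sketch begun above, the owner-side handling of a read command (FIG. 13) might look as follows in software. Again, all names are hypothetical; the write-command flow of FIG. 14, described next, differs mainly in adding a write-acknowledge step before the data moves.

```python
class Node:
    """Owner-side read handling per FIG. 13. Names here are hypothetical;
    GlobalId and FlashTranslationLayer come from the sketch above."""

    def __init__(self, node_id: int, ftl: FlashTranslationLayer):
        self.node_id = node_id
        self.ftl = ftl         # this node's flash translation layer 1220
        self.locked = set()    # logical block numbers currently being written
        self.flash = {}        # stand-in for the attached flash memory 120

    def handle_read(self, gid: GlobalId):
        # Interconnect 80/210 routes by node ID, so a request arriving here
        # has already reached the owner node (1320).
        assert gid.node_id == self.node_id
        if gid.logical_block in self.locked:   # 1330: block is locked
            return ("NON_AVAILABLE", None)     # 1360: requester will retry
        phys = self.ftl.resolve(gid)           # 1340: logical -> physical
        return ("OK", self.flash.get(phys))    # data routed back to requester
```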
FIGS. 14a-c (sometimes referred to collectively as FIG. 14) show an illustrative embodiment of how a write command or request may be handled in distributed flash memory systems in accordance with the disclosure. At 1410, any node (IC 10) can initiate a write command. The node initiating such a write command may be referred to as the "requester node" or "the requester." The write command may include the global ID of the data block into which it is desired to write data. Any node (IC 10) in the system may be the "owner" of this data block, where "owner" has the same meaning as used elsewhere in this specification.

At 1420 the write command is routed to the owner node. This routing can take place through the interconnect networks 80/210 of the system. As noted earlier, the global ID of each data block includes the node ID of that block. The node ID identifies the node that is the owner of the block, which enables interconnect networks 80/210 to route the write command to the proper node in the system.

At 1430 the owner node checks the status of the data block identified in the write command. If the data block is free (as explained earlier), control passes from 1430 to 1440. If the data block is locked (as also explained earlier), control passes from 1430 to 1460.

At 1440 the circuitry of the owner node sends a write acknowledge packet back to the requester via interconnect networks 80/210. At 1452 the requester receives the write acknowledge packet. At 1454 the requester sends the write data packet (i.e., the actual data to be written) to the owner via interconnect networks 80/210. At 1456 the owner writes the write data packet to the data block. At 1458 the write protocol ends.

Returning to the other branch from 1430, at 1460 the owner sends a data block non-available status packet to the requester via interconnect networks 80/210. At 1470 the requester receives the non-available status packet. At 1480 the requester can retry the write command by starting again at 1410.

FIGS. 15a-c (sometimes collectively referred to as FIG. 15) show an illustrative embodiment of circuitry (or equivalent structure) in accordance with another possible aspect of this disclosure. This is structure for providing dynamic data block migration (e.g., within a distributed flash memory system such as is described elsewhere in this specification). Each IC 10 in such a system may include an instance of the FIG. 15 structure. This structure may be dedicated circuitry on the IC, firmware on the IC, software running on more general-purpose circuitry on the IC, or any combination of the foregoing. To simplify the following discussion, it will be assumed that FIG. 15 shows circuitry (which "circuitry" terminology is again consistent with the generic use of that term herein to refer to circuitry alone or to circuitry with or running software).

The FIG. 15 circuitry includes one counter 1510 for each data block (e.g., in flash memory 120) connected to the node (IC 10) that includes those counters 1510. Each counter 1510 counts the number of times that this node accesses the associated data block owned by this node.

The FIG. 15 circuitry also includes M more counters 1512 for each of the N data blocks owned by this node. M is the number of other nodes (ICs 10) in the system. For each data block, each of that data block's M counters 1512 is associated with a respective one of the M other nodes in the system. Each counter 1512 counts the number of times that the associated other node accesses the associated data block.

There is one comparator 1514 associated with each of the counters 1512. (It will be understood that the number of comparators 1514 can be reduced by time-sharing the reduced number of comparators. For example, a single comparator 1514 can be time-shared by all of counters 1512. To simplify the discussion, however, it will be assumed that there is a separate comparator 1514 for each counter 1512.) Each comparator 1514 compares (1) the output 1513 of a respective one of counters 1512, and (2) the output 1511 of the counter 1510 for the same data block that the output 1513 relates to. If (and only if) output 1513 is greater than output 1511, then the comparator 1514 applies an enabling signal to a respective one of comparator circuits 1518. (Output 1511 is the count currently registered by the associated counter 1510. Output 1513 is the count currently registered by the associated counter 1512.)

There is one comparator 1518 for each comparator 1514. (Again, the number of comparators 1518 can be reduced by time-sharing as described above in connection with elements 1514.) When enabled, each comparator 1518 compares the output 1513 of a respective one of counters 1512 to a threshold value output by threshold value register 1516. For example, any desired threshold value may be programmed into register 1516. If (and only if) the output 1513 exceeds the threshold value, comparator 1518 produces an output for enabling migration request initiation circuitry 1520.
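A software analogue of this counter-and-comparator arrangement, offered only as a sketch (the class name MigrationMonitor and its methods are hypothetical):

```python
# Illustrative software analogue of counters 1510/1512, comparators 1514/1518,
# and threshold register 1516 for ONE data block; all names are hypothetical.
class MigrationMonitor:
    def __init__(self, num_other_nodes: int, threshold: int):
        self.owner_count = 0                       # counter 1510
        self.other_counts = [0] * num_other_nodes  # counters 1512
        self.threshold = threshold                 # register 1516

    def record_access(self, other_index=None):
        """Call with no argument for an owner access, or with the index of
        the accessing non-owner node."""
        if other_index is None:
            self.owner_count += 1
        else:
            self.other_counts[other_index] += 1

    def migration_candidate(self):
        """Return the index of the node that should become the new owner,
        or None. Mirrors comparators 1514 (output 1513 > output 1511) and
        1518 (output 1513 > threshold from register 1516)."""
        for i, count in enumerate(self.other_counts):
            if count > self.owner_count and count > self.threshold:
                return i
        return None
```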
The significance of the foregoing is as follows. Whenever the count of accesses of a data block by a non-owner node exceeds both (1) the number of accesses of the data block by that data block's current owner node and (2) a predetermined threshold number of accesses (from register 1516), an attempt will be made to migrate (transfer) that data block from the current owner node to the above-mentioned other node in order to make that other node the new owner of the data block. This tends to give ownership of each data block to the node that is making most frequent use of (i.e., most frequently accessing) that data block. This can greatly increase the access efficiency of the distributed memory system as a whole. The data block migrations needed to produce this result are carried out by elements 1520, 1530, etc. in FIG. 15b, as will now be described.

When circuitry 1520 is enabled as mentioned earlier, circuitry 1520 knows (by knowing which comparator 1518 enabled it) which data block ("the transferred block") needs to be migrated, and to which other node ("the destination node") that data block needs to be migrated. Circuitry 1520 therefore sends a migration request to the destination node (e.g., via interconnection networks 80/210). A migration request (like a read request or a write request) can have the characteristics of a data packet (e.g., as in FIG. 6 and described earlier in this specification). Thus, for example, a migration request may have a header including the ID of the destination IC, which enables the interconnect resources 80/210 of the system to route the migration request to the destination IC. This is similar to what is done for data packets (e.g., as in FIG. 6), read requests, and write requests.

As mentioned earlier, each node (IC 10) includes all of the elements shown in FIG. 15. Therefore the illustrated node depicted (in part) in FIG. 15 also includes the elements needed to enable that node to be a destination node. The destination node elements can accordingly also be described in connection with FIG. 15 (even though in any actual data block migration two different nodes (i.e., a "source node" originating the migration, and the destination node receiving the migration) will be involved). Thus the migration request from the source node is received by migration request acceptance circuitry 1530 in the destination node. This circuitry 1530 checks to see whether or not the memory (e.g., 120) connected to that node can receive the data block proposed for transfer (migration).
Migration request ACK/NAK (acknowledge/non-acknowledge) circuitry 1532 of the destination node sends back to the source node either an ACK signal (meaning that the destination node can receive the data block transfer) or a NAK signal (meaning that the destination node cannot receive the data block transfer).

In the source node, migration request ACK/NAK processing circuitry responds to an ACK (and only an ACK) by enabling migration execution circuitry 1542 to actually send the data block to be migrated to the destination node. (A NAK terminates the attempt to migrate the data block.) When the data block migration has been successfully accomplished, migration report broadcast circuitry 1544 is enabled to send a broadcast message or report notifying all nodes about the migration of the transferred block. For example, the broadcast migration report allows the circuitry 1200 (FIG. 12) in all nodes (ICs 10) in the system to update the records the nodes maintain as to the locations of all data blocks in the system. This is shown in more detail in FIG. 15c, which is discussed in the next paragraph. Upper layer system components (e.g., file system, database management system, etc., components (not shown)) may also be notified about the migration of the block (e.g., via an external link 140 (FIG. 1)). Although FIG. 15b shows elements 1542 and 1544 operating as part of source node operations, they may alternatively operate as part of destination node operations.

As shown in FIG. 15c, each IC 10 further includes storage management update circuitry 1550 for receiving and processing a migration report that has been broadcast as discussed in connection with element 1544 in FIG. 15b. When such a migration report is received, circuitry 1550 causes the logic block manager 1210 in the IC 10 that includes that circuitry 1550 to change, in that block manager's records (mapping information), the owner node ID of the transferred block from the source node ID to the destination node ID. Similarly, circuitry 1550 in the source node causes the associated source node flash translation layer 1220 to delete from that translation layer's records (mapping information) the logical block number of the transferred block, while the circuitry 1550 in the destination node causes that circuitry's associated destination node flash translation layer 1220 to add to its records (mapping information) the logical block number of the transferred block. (As an alternative to making these changes to the translation layer 1220 records in response to the broadcast migration report, these changes could instead be made as part of the data migration operation itself, because these changes only affect the translation layers in the source and destination nodes involved in the migration.) Flash device driver 1230 in FIG. 15c has already been fully described in connection with FIG. 12.

FIGS. 16a-c (sometimes referred to collectively as FIG. 16) show illustrative embodiments of dynamic data block migration methods that can be performed, e.g., by circuitry of the type shown in FIG. 15 in accordance with this disclosure. Each node (IC 10) in a distributed memory system may perform the FIG. 16 method.

At 1610 each access of each data block by the owner node of that data block is counted. At 1620 each access of each data block by each other node is separately counted. At 1630 each count from 1620 is compared to (1) the count (from 1610) of accesses of the same data block by the node that currently owns that data block, and (2) a threshold value.
For any data block whose count (from 1620) for some non-owner node exceeds both the owner node count (from 1610) and the threshold, control passes from 1630 to 1640. The last-mentioned data block may be referred to as the transferred block, and the last-mentioned non-owner node may be referred to as the destination node. (If there is no "yes" outcome from 1630, control passes from 1630 back to 1610.)

At 1640 the current owner node ("the source node") sends a request to transfer the transferred block to the destination node.

At 1650 the destination node determines whether or not it can accept the proposed transfer. If not, control passes back to 1610 and the proposed transfer does not take place. If the destination node can accept the proposed transfer, control passes to 1660.

At 1660 the source node transfers the transferred block to the destination node. At 1670 a message or report is broadcast to all nodes (ICs 10) notifying them about the transfer of the transferred block. At 1680 upper layer elements such as file system elements, database management system elements, etc., are notified about the migration of the transferred block.

FIG. 16c shows in more detail operations that may be performed in ICs 10 in response to a message broadcast as discussed above in connection with element 1670 in FIG. 16b. The FIG. 16c operations are performed to update the records (mapping information) in elements 1210 and 1220 (e.g., FIGS. 12 and 15c) in view of the data block migration (transfer) that has taken place. At 1672 the record of the owner node ID of the transferred block is changed (from the source node ID to the destination node ID) in all logic block manager circuits 1210 throughout the system. At 1674 the flash translation layer 1220 in the source node has that translation layer's records updated by deleting the logical block number of the transferred block. At 1676 the flash translation layer 1220 in the destination node has that translation layer's records updated by adding the logical block number of the transferred block. (Again, a possible alternative is to perform operations 1674 and 1676 in connection with the actual data migration, rather than in response to a broadcast migration report.)

Throughout this disclosure, references to "data," "information," or the like refer to physical embodiments of such data, information, or the like (e.g., as electrical signals, stored electrical charge, particular magnetic states of magnetizable media, etc.). Also throughout this disclosure (as has already been said), terms like "circuit," "circuitry," "integrated circuit," "IC," and the like can refer to combinations of hardware and software.

It will be understood that the foregoing is only illustrative of the principles of the disclosure, and that various modifications can be made by those skilled in the art without departing from the scope and spirit of the disclosure. For example, systems can be constructed with any number of nodes (ICs 10) to provide distributed flash memory systems of any desired size. As another example of modifications within the scope of this disclosure, elements and/or functions that are shown herein as separate may be combined into single elements and/or functions; and elements and/or functions that are shown herein as integral or unitary may be subdivided into two or more separate sub-elements or sub-functions.
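To tie the FIG. 15b/16 flow together, one last sketch of the source/destination migration handshake follows; the class and function names are hypothetical stand-ins for circuitry 1520-1550, not the disclosure's own implementation.

```python
class MigratableNode:
    """Hypothetical stand-in for the per-node migration circuitry 1520-1550."""

    def __init__(self, node_id, capacity=1024):
        self.node_id = node_id
        self.blocks = {}          # logical block number -> data (flash 120)
        self.capacity = capacity
        self.owner_table = {}     # records kept by logic block manager 1210

    def can_accept(self, logical_block):
        # Circuitry 1530: can the attached memory receive the block?
        return len(self.blocks) < self.capacity

def migrate_block(source, dest, logical_block, all_nodes):
    """Sketch of the FIG. 15b/16 handshake (steps 1640-1680)."""
    if not dest.can_accept(logical_block):        # 1650: NAK ends the attempt
        return False
    # 1660: transfer; source FTL deletes (1674), destination FTL adds (1676)
    dest.blocks[logical_block] = source.blocks.pop(logical_block)
    for node in all_nodes:                        # 1670: broadcast report
        node.owner_table[logical_block] = dest.node_id   # 1672: update 1210
    return True
```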
The invention relates to a processor peak current control device and method, in which a driver (e.g., firmware or software) improves the performance of a system-on-chip (SoC) in battery mode. The driver is a Peak Power Manager (PPM) that allows a substantial increase in the SoC peak power limit level in battery mode (and thereby an increase in top-mode performance). The PPM sets a threshold voltage Vth (a voltage level at which the platform will throttle the SoC) in a manner that prevents accidental shutdown (or black screen) of the system. The PPM calculates the SoC peak power limit Psoc,pk (e.g., PL4) from the threshold voltage (Vth). These are two dependent parameters: if one is set, the other can be calculated. The PPM scheme is used to optimally set one parameter (Vth) based on system parameters as well as on the history of operation.
1. A method comprising: calculating a current threshold voltage as a function of: a battery no-load voltage, a maximum threshold voltage, and a voltage gap between the battery no-load voltage and a previous threshold voltage; calculating a processor peak power limit from a system peak power limit, which in turn is a function of the current threshold voltage; sending the current threshold voltage to a threshold circuit; and sending the processor peak power limit to the processor, wherein the current threshold voltage sets a threshold voltage for triggering throttling of the processor to manage peak power of the processor.

2. The method of claim 1, wherein calculating the processor peak power limit comprises: calculating the system peak power as a function of: the current threshold voltage, the battery no-load voltage, the system power rail capacitance, the time between the system voltage falling below the current threshold voltage and the system reducing peak power, the battery resistance, and the minimum voltage level of the system power rail.

3. The method of claim 2, wherein calculating the processor peak power limit comprises: scaling the difference between the system peak power and the power of the rest of the platform.

4. The method of claim 3, wherein scaling the difference includes accounting for power conversion losses of a voltage regulator.

5. The method of claim 1, comprising: comparing the processor peak power limit to a maximum power peak power limit; and if the processor peak power limit is greater than the maximum power peak power limit, setting the processor peak power limit to the maximum power peak power limit.

6. The method of claim 5, comprising: comparing the processor peak power limit to a minimum power peak power limit; and if the processor peak power limit is less than the minimum power peak power limit, setting the processor peak power limit to the minimum power peak power limit.

7. The method of claim 1, wherein the processor peak power limit is an upper limit on the instantaneous peak power that can be supplied by the battery and the capacitors of the system power rail before the processor is throttled.

8. The method of claim 1, comprising: reading the battery no-load voltage from the battery's fuel gauge, wherein the battery no-load voltage is a runtime variable.

9. The method of any one of claims 1 to 8, comprising: lowering the current threshold voltage if it is determined that the processor is not throttling when the processor peak power limit exceeds the processor's peak power.

10. A system comprising: a system load, which includes a system-on-chip; a battery with a fuel gauge to provide a battery no-load voltage; a threshold circuit for throttling the system-on-chip according to a threshold; a memory for storing a maximum threshold voltage and a voltage gap between the battery no-load voltage and a previous threshold voltage; and a power manager to: calculate a current threshold voltage as a function of: the battery no-load voltage, the maximum threshold voltage, and the voltage gap; calculate a processor peak power limit from a system peak power limit, which in turn is a function of the current threshold voltage; send the current threshold voltage to the threshold circuit; and send the processor peak power limit to the system-on-chip, wherein the current threshold voltage sets a threshold voltage used to trigger throttling of the system load to manage peak power of the system load.
11. The system of claim 10, wherein the power manager is configured to calculate the system peak power as a function of: the current threshold voltage, the battery no-load voltage, the system power rail capacitance, the time between the system voltage falling below the current threshold voltage and the system reducing peak power, the battery resistance, and the minimum voltage level of the system power rail.

12. The system of claim 11, wherein the memory stores the system power rail capacitance and the time between the system voltage falling below the current threshold voltage and the system reducing peak power.

13. The system of claim 11, wherein the fuel gauge provides the battery resistance.

14. The system of claim 11, wherein the power manager is configured to scale the difference between the system peak power and the power of the rest of the platform, wherein the power of the rest of the platform is part of the system load.

15. The system of claim 14, wherein the power manager is configured to scale the difference to account for power conversion losses of a voltage regulator.

16. The system of claim 14, wherein the power manager is configured to: compare the processor peak power limit to a maximum power peak power limit; and if the processor peak power limit is greater than the maximum power peak power limit, set the processor peak power limit to the maximum power peak power limit.

17. The system of claim 14, wherein the power manager is configured to: compare the processor peak power limit to a minimum power peak power limit; and if the processor peak power limit is less than the minimum power peak power limit, set the processor peak power limit to the minimum power peak power limit.

18. An apparatus comprising: a processor; and a power manager coupled to the processor to dynamically adjust a threshold voltage that determines when to throttle the processor, and to determine a peak power limit for the processor to increase performance of the processor.

19. The apparatus of claim 18, wherein the power manager reads the battery no-load voltage and the battery impedance from a battery fuel gauge to determine the threshold voltage.

20. The apparatus of any one of claims 18 to 19, wherein the power manager is configured to: compare the peak power limit to a maximum power peak power limit; and if the peak power limit is greater than the maximum power peak power limit, set the peak power limit to the maximum power peak power limit.

21. A machine-readable medium having stored thereon machine-executable instructions that, when executed, cause one or more machines to perform the method of any one of claims 1 to 9.
Processor peak current control device and method

TECHNICAL FIELD

The present disclosure relates to a processor peak current control apparatus and method.

BACKGROUND

With each generation of system-on-chip (SoC), the peak power of the SoC increases. The maximum power (Pmax) of the SoC is limited by the IR drop that causes the supply voltage at the SoC to drop below the lowest allowed threshold. Pmax sets the maximum frequency of the processor cores of the SoC and directly affects the performance of the SoC. With 2s batteries (and 1s batteries), changing platform-level power states (e.g., PL4 states) based on battery state of charge and on the peak power reported by the fuel gauge for a given minimum system voltage is a challenge. Maintaining the SoC's performance in battery mode (or preventing it from drastically degrading) is a further challenge when considering battery wear, temperature changes, and state of charge.

SUMMARY OF THE INVENTION

According to an aspect of the present disclosure, there is provided a method comprising: calculating a current threshold voltage as a function of a battery no-load voltage, a maximum threshold voltage, and a voltage gap between the battery no-load voltage and a previous threshold voltage; calculating a processor peak power limit from a system peak power limit, which in turn is a function of said current threshold voltage; sending said current threshold voltage to a threshold circuit; and sending said processor peak power limit to the processor, wherein the current threshold voltage sets a threshold voltage used to trigger throttling of the processor to manage peak power of the processor.

According to an aspect of the present disclosure, there is provided a system comprising: a system load including a system-on-chip; a battery having a fuel gauge to provide a battery no-load voltage; a threshold circuit for throttling the system-on-chip according to a threshold; a memory for storing a maximum threshold voltage and a voltage gap between the battery no-load voltage and a previous threshold voltage; and a power manager for: calculating a current threshold voltage as a function of the battery no-load voltage, the maximum threshold voltage, and the voltage gap; calculating a processor peak power limit from a system peak power limit, which is in turn a function of the current threshold voltage; sending the current threshold voltage to the threshold circuit; and sending the processor peak power limit to the system-on-chip, wherein the current threshold voltage sets a threshold voltage used to trigger throttling of the system load to manage peak power of the system load.

According to an aspect of the present disclosure, there is provided an apparatus comprising: a processor; and a power manager coupled to the processor for dynamically adjusting a threshold voltage that determines when to throttle the processor, and for determining a peak power limit for the processor to increase the performance of the processor.

According to an aspect of the present disclosure, there is provided a machine-readable medium having stored thereon machine-executable instructions that, when executed, cause one or more machines to perform a method as described above.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present disclosure will be more fully understood from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure, which, however, should not be construed to limit the disclosure to
particular embodiments, but are provided for illustration and understanding only.

FIG. 1 illustrates a high-level architecture of a scheme that allows the SoC peak power limit (Psoc,pk) to be higher than the battery's sustained peak capability without violating the system voltage (Vsys) minimum level (Vsys,min), in accordance with some embodiments.

FIG. 2A illustrates a graph showing battery no-load voltage and charger threshold voltage as a function of the state of charge of the battery, according to some embodiments.

FIG. 2B illustrates a graph showing the total battery peak power allowed by the scheme, and the power level at which the charger will throttle, according to some embodiments.

FIG. 3 illustrates a flowchart of a method of allowing the SoC peak power limit (Psoc,pk) to be higher than a battery's sustained peak capability without violating the system voltage (Vsys) minimum level (Vsys,min), in accordance with some embodiments.

FIG. 4 illustrates a smart device or computer system or SoC (system-on-chip) implementing a scheme to allow the SoC peak power limit (Psoc,pk) to be higher than the battery's sustained peak capability without violating the system voltage (Vsys) minimum level (Vsys,min), in accordance with some embodiments.

DETAILED DESCRIPTION

Some embodiments describe a driver (e.g., firmware or software) that improves the performance of the SoC in battery mode. The driver is a Peak Power Manager (PPM) that, according to some embodiments, allows the SoC peak power limit level to be substantially increased in battery mode (and thus boosts top-mode performance). In some embodiments, the driver implements power throttling and is part of Intel's Dynamic Tuning Technology (DTT). In various embodiments, the peak power limit is referred to as PL4. However, the embodiments are applicable to other peak power limits.

In some embodiments, the peak power manager sets the threshold voltage Vth (the voltage level at which the platform will throttle the SoC) in a manner that prevents unexpected system shutdown (or black screen). In some embodiments, the peak power manager calculates the SoC peak power limit Psoc,pk (e.g., PL4) based on the threshold voltage (Vth). These are two dependent parameters: if one is set, the other can be calculated. The schemes of the various embodiments are used to optimally set one parameter (Vth) based on system parameters and the history of operation.

In some embodiments, there is provided a machine-readable storage medium comprising machine-executable instructions that, when executed, cause one or more machines to perform a method comprising calculating a current threshold voltage as a function of a battery no-load voltage, a maximum threshold voltage, and a voltage gap between the battery no-load voltage and a previous threshold voltage. The method also includes calculating the processor peak power limit from the system peak power limit, which in turn is a function of the current threshold voltage. The method also includes sending the current threshold voltage to the threshold circuit, and sending the processor peak power limit to the processor, wherein the current threshold voltage sets a threshold voltage for triggering throttling of the processor to manage the processor's peak power.
In some embodiments, a method of calculating the processor peak power limit includes calculating the system peak power as a function of the current threshold voltage, the battery no-load voltage, the system power rail capacitance, the time between the system voltage falling below the current threshold voltage and the system reducing peak power, the battery resistance, and the minimum voltage level of the system power rails. In some embodiments, a method of calculating the processor peak power limit includes scaling the difference between the system peak power and the power of the rest of the platform. In some embodiments, scaling the difference includes accounting for power conversion losses of the voltage regulator.

In some embodiments, the method further includes comparing the processor peak power limit to a maximum power peak power limit, and if the processor peak power limit is greater than the maximum power peak power limit, setting the processor peak power limit to the maximum power peak power limit. In some embodiments, the method further includes comparing the processor peak power limit to a minimum power peak power limit, and if the processor peak power limit is less than the minimum power peak power limit, setting the processor peak power limit to the minimum power peak power limit. In some embodiments, the processor peak power limit is the upper limit of the instantaneous peak power that the battery and the capacitors of the system power rails can provide before the processor is throttled. In some embodiments, the method further includes reading the battery no-load voltage from a fuel gauge of the battery, wherein the battery no-load voltage is a runtime variable. In some embodiments, the method further includes reducing the current threshold voltage if it is determined that the processor is not throttling when the processor peak power limit exceeds the processor's peak power.

Today, SoCs are throttled to a minimum operating frequency. Some embodiments provide a scheme to dynamically calculate the throttle level and set the SoC throttle peak power (Psoc,th) based on the available battery power (which varies slowly). In some embodiments, the power management unit firmware (FW) of the SoC determines frequency and voltage based on the Psoc,th provided by the peak power manager. In this case, a throttling event has less negative impact on SoC performance. Today, no SoC allows the total peak power of the system to exceed the battery peak power capability without the risk of a black screen. Various embodiments provide a scheme that allows the Pmax framework to operate. Other technical effects will be apparent from the various figures and embodiments.

In the following description, numerous details are discussed in order to provide a more thorough description of the embodiments of the present disclosure. However, it will be apparent to those skilled in the art that the embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the embodiments of the present disclosure.

Note that in the respective figures of the embodiments, signals are represented by lines. Some lines may be thicker, to indicate a greater number of constituent signal paths, and/or have arrows at one or more ends, to indicate the main direction of information flow. This indication is not intended to be limiting.
Rather, these lines are used in conjunction with one or more exemplary embodiments to help make a circuit or logic unit easier to understand. Any represented signal, as dictated by design needs or preferences, may actually include one or more signals that may travel in either direction and that may be implemented using any suitable type of signal scheme.

Throughout the specification, and in the claims, the term "connected" means a direct connection, such as an electrical, mechanical, or magnetic connection between the things that are connected, without any intervening devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices. The term "adjacent" herein generally refers to the position of one thing being next to (e.g., immediately next to or close to, with one or more things in between) or adjoining another thing.

The term "circuit" or "module" may refer to one or more passive and/or active components arranged to cooperate with one another to provide a desired function. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meanings of "a", "an", and "the" include plural references. The meaning of "in" includes "in" and "on".

The term "analog signal" refers to any continuous signal for which the time-varying feature (variable) of the signal is a representation of some other time-varying quantity, i.e., analogous to another time-varying signal. The term "digital signal" refers to a physical signal that is a representation of a sequence of discrete values (a quantized discrete-time signal), for example of an arbitrary bit stream or of a digitized (sampled and analog-to-digital converted) analog signal.

The term "scaling" generally refers to converting a design (schematic and layout) from one process technology to another and may subsequently reduce the layout area. In some cases, scaling also refers to increasing the size of a design from one process technology to another and may subsequently increase the layout area. The term "scaling" also generally refers to shrinking or expanding the sizes of layouts and devices within the same technology node. The term "scaling" may also refer to adjusting (e.g., slowing down or speeding up, i.e., scaling down or scaling up, respectively) the frequency of a signal relative to another parameter (e.g., the power supply level).

The terms "substantially", "close", "approximately", "near", and "about" generally mean being within +/- 10% of a target value.

Unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

For the purposes of the present disclosure, the phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

The terms "left", "right", "front", "back", "top", "bottom", "over", "under", and the like,
if any, in the specification and in the claims are used for descriptive purposes and not necessarily for describing permanent relative positions.

It is noted that those elements of the drawings that have the same reference numerals (or names) as the elements of any other drawing may operate or function in any manner similar to that described, but are not limited thereto.

For the embodiments, the transistors in the various circuits and logic blocks described herein are metal oxide semiconductor (MOS) transistors or derivatives thereof, where the MOS transistors include drain, source, gate, and bulk terminals. Transistor derivatives and/or MOS transistor derivatives also include tri-gate and FinFET transistors, gate-all-around cylindrical transistors, tunneling FETs (TFETs), square wire transistors, rectangular ribbon transistors, ferroelectric FETs (FeFETs), or other devices that implement transistor functionality, such as carbon nanotubes or spintronic devices. The symmetrical source and drain terminals of a MOSFET are identical terminals and are used interchangeably here. A TFET device, on the other hand, has asymmetric source and drain terminals. Those skilled in the art will appreciate that other transistors, for example bipolar junction transistors (BJT PNP/NPN), BiCMOS, CMOS, etc., may be used without departing from the scope of the present disclosure.

FIG. 1 illustrates a high-level architecture 100 of a scheme that allows the SoC peak power limit (Psoc,pk) to be higher than the battery's sustained peak capability without violating the system voltage (Vsys) minimum level (Vsys,min), in accordance with some embodiments. Architecture 100 includes software (SW) or firmware (FW) 101 and hardware (HW) 102 components. In various embodiments, the FW 101 includes a peak power manager (PPM) 103 that manages the peak power performance of the system load 106. In this simplified architecture, HW 102 includes battery pack 104, threshold circuit 105, system load 106, storage device 107, system resistance Rsys, and system capacitance Csys. In some embodiments, the battery pack 104 includes a battery fuel gauge 104a and one or more battery cells 104b. Here, the one or more battery cells are modeled as a Thevenin equivalent circuit with a battery resistance Rbat, a voltage source Voc, and an RC circuit comprising a resistor Rtr and a capacitor Ctr. The threshold circuit 105 includes a register 105a (or nonvolatile memory) for storing the threshold Vth, and a comparator 105b. System load 106 includes SoC 106a (e.g., the SoC of FIG. 4) and other system components 106b. The storage device 107 may be a hard disk, a nonvolatile memory, or a volatile memory.

In some embodiments, the battery fuel gauge 104a reports the battery no-load voltage (Vbat,nl) and Rbat (pack-side impedance on the battery path) to the PPM 103. In some embodiments, the battery fuel gauge 104a measures the voltage and current of the battery pack 104 to estimate runtime values of Rbat and Vbat,nl. Note that fuel gauge 104a reports the parameters of the first-order Thevenin equivalent model of battery cell 104b. Here, Vbat,nl is the instantaneous voltage of the battery with no load, and Rbat is the ohmic resistance of cell 104b and of the components (e.g., isolation MOSFETs) along the battery path.
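As a quick illustration of why this first-order model matters, the system rail sags with load roughly as Vsys = Vbat,nl - I*(Rbat + Rsys). The sketch below solves this self-consistently for a constant-power load; the numbers are made up for illustration and are not values from the disclosure.

```python
# Minimal sketch of load-line droop under the first-order battery model
# reported by fuel gauge 104a. All numbers are illustrative only.
def vsys_under_load(vbat_nl: float, rbat: float, rsys: float, p_load: float) -> float:
    """Solve Vsys = Vbat,nl - I*(Rbat + Rsys) with I = P_load / Vsys."""
    vsys = vbat_nl
    for _ in range(50):  # fixed-point iteration; converges for sane inputs
        i = p_load / vsys
        vsys = vbat_nl - i * (rbat + rsys)
    return vsys

# E.g., a 2s pack at 7.4 V no-load, 120 mOhm pack + 30 mOhm system path,
# under a 60 W transient load:
print(round(vsys_under_load(7.4, 0.12, 0.03, 60.0), 2))  # ~5.87 V
```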
In some examples, the polarization RC circuit Rtr and Ctr, which represents transient behavior, is not reported by fuel gauge 104a because it may not be used in the calculations performed by peak power manager 103.

In some embodiments, peak power manager 103 is software that calculates Psoc,pk (the SoC peak power limit) and Vth (the threshold voltage for triggering the throttle signal). If Vsys (the system voltage) falls below Vth (due to high power consumption by the system load), comparator 105b (in throttling hardware circuit 105) asserts the throttle signal to reduce the peak power of the SoC.

In some embodiments, the storage device 107 provides system-related parameters such as, but not limited to: Csys (total capacitance on the system rails); Rsys (battery path impedance on the system side, i.e., not battery pack resistance); Vsys,min (system rail minimum voltage); Vth,max (maximum level of the threshold voltage for triggering the throttle signal); Prop (peak power of the rest of the platform); Vth,gap (delta between Vbat,nl and Vth); Δt (the time between the system voltage falling below Vth and the SoC reducing peak power due to throttle signal assertion); and ηVR (voltage regulator (VR) power conversion loss factor).

In various embodiments, PPM 103 selects and provides to HW 102 both Psoc,pk (the SoC peak power limit) and Vth (the threshold voltage for triggering the throttle signal). A higher Psoc,pk level implies a higher Vth level, and vice versa. A higher Vth level means a higher threshold for triggering throttling. In some embodiments, the peak power manager 103 includes an algorithm that uses the parameter Vth,gap, which sets the delta (difference) between Vbat,nl (the battery no-load voltage) and Vth (the threshold voltage used to trigger throttling of the SoC). PPM 103 allows the system designer to tune this parameter (e.g., Vth,gap) to optimize performance for different battery states of charge and for different applications and benchmarks.

In some embodiments, Vth,gap may also be further adjusted automatically by the peak power manager 103 or another SW driver based on the number of throttling events. For example, if SoC 106a never throttles, this is a clear indication that Vth,gap is set too high, and Vth,gap can be set lower; this may occur because the application's load ratio is low, because the RoP (rest of platform, or other system components) 106b consumes little power, or because some system parameters are better than initially expected.

The SoC peak power limit Psoc,pk is bounded by the upper limit set by Vth, which is given as:

Vth = min(Vth,max, Vbat,nl - Vth,gap)   (Equation 1)

where Vth,max is the maximum Vth value that can be set by the platform throttle circuit 105, Vbat,nl is the instantaneous no-load voltage of the battery, and Vth,gap is the voltage margin between Vth and Vbat,nl.

In some embodiments, Vth,max is a static variable provided by storage device 107. In other embodiments, Vth,max is a programmable variable provided by storage device 107. In some embodiments, Vbat,nl is a runtime variable provided by the platform battery fuel gauge 104a. In some embodiments, Vth,gap is a static variable provided by storage device 107. In some embodiments, Vth,gap can be overridden at runtime for performance optimization. In some embodiments, Vth,gap is a programmable variable.

For a given upper limit set by Vth, PPM 103 determines an upper limit for the instantaneous peak power Psys,pk of the system that can be provided by the battery and the system rail capacitors before the throttle signal is asserted.
In some embodiments, PPM 103 determines Psys,pk as a function of Vth, Vbat,nl, Csys, Δt, Rbat, and Vsys,min (Equation 2; the equation body is not reproduced in this text), where Vth is the setting given in Equation 1; Vsys,min is the minimum voltage level of the system rail; Δt is the time between the system voltage falling below Vth and the SoC reducing its peak power (due to throttle signal assertion); Csys is the total capacitance on the system rail; and Rbat is the battery resistance in ohms. In some embodiments, Vsys,min is a static or programmable variable provided by storage device 107. In some embodiments, Δt is a static or programmable variable provided by storage device 107. In some embodiments, Csys is a static or programmable variable provided by storage device 107. In some embodiments, Rbat is a runtime variable provided by the platform fuel gauge 104a.

In some embodiments, the PPM 103 determines the SoC peak power limit Psoc,pk (the value written to the SoC via the HW/SW interface) by subtracting the power of the rest of the platform from the system power:

Psoc,pk = ηVR (Psys,pk - Prop)   (Equation 3)

where Psys,pk is the upper bound, given by Equation 2, on the instantaneous peak power that the battery 104 and the system rail capacitors can provide before the throttle signal is asserted; Prop is the power of the rest of the platform; and ηVR is a scaling factor (usually used to account for voltage regulator (VR) power conversion losses). In some embodiments, Prop is a static or programmable variable provided by the storage device. In some embodiments, ηVR is a static or programmable variable provided by the storage device.

In some embodiments, once the Psoc,pk value is calculated using Equation 3, the peak power manager 103 compares and clips the Psoc,pk value to an upper bound value Psoc,pk,max and a lower bound value Psoc,pk,min. If the Psoc,pk value is clipped to a limit (Psoc,pk,max or Psoc,pk,min), then the Vth value can be recalculated. In some embodiments, the Psys,pk instantaneous peak power that may be provided by battery 104 and system rail capacitance Csys before fast PROCHOT# is asserted is re-evaluated using the following equation:

Psys,pk = Psoc,pk / ηVR + Prop   (Equation 4)

where Psoc,pk is the SoC peak power limit clipped to the limit (Psoc,pk,max or Psoc,pk,min), and ηVR is the scaling factor (usually used to account for voltage regulator (VR) power conversion losses), which is a static or programmable variable provided by storage device 107.

Using the system peak power Psys,pk, the Vth setting can then be obtained by inverting the Psys,pk relation (Equation 5; likewise not reproduced in this text), where Psys,pk is the upper bound, given in Equation 4, on the instantaneous peak power that can be delivered by the battery and the system rail capacitors Csys before the throttle signal is asserted.

FIG. 2A illustrates a graph 200 showing the battery no-load voltage (curve 201) and the charger threshold voltage (curve 202) as a function of the state of charge of battery 104, according to some embodiments. In this example, the minimum difference between the battery no-load voltage and the threshold voltage is 0.5 V.

FIG. 2B illustrates a graph 220 showing the total battery peak power allowed by the scheme (curve 221), and the power level at which the charger will throttle (curve 222), according to some embodiments. Here, the CPU is at full performance until about a 40% state of charge. In various embodiments, the voltage delta between the no-load battery voltage and the threshold voltage is dynamically adjusted based on the number of throttling events. Therefore, the CPU may be throttled when at a lower state of charge.
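The bodies of Equations 2 and 5 referenced above do not survive in this text. Purely as a hedged reconstruction, consistent with the variables listed (the battery modeled as Vbat,nl behind Rbat, with the rail capacitance Csys permitted to droop from Vth down to Vsys,min over Δt), they might take a form such as:

Psys,pk = Vsys,min * [ (Vbat,nl - Vsys,min)/Rbat + Csys*(Vth - Vsys,min)/Δt ]   (cf. Equation 2)

Vth = Vsys,min + (Δt/Csys) * [ Psys,pk/Vsys,min - (Vbat,nl - Vsys,min)/Rbat ]   (cf. Equation 5)

The second expression is simply the first solved for Vth, which matches the text's statement that the two parameters are dependent and mutually computable.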
FIG. 3 illustrates a flowchart 300 of a method of allowing the SoC peak power limit (Psoc,pk) to be higher than the battery's sustained peak capability without violating the system voltage (Vsys) minimum level (Vsys,min), in accordance with some embodiments. Although the blocks are illustrated in a particular order, the order can be modified. For example, in some embodiments, some blocks may be executed before others, and some blocks may be executed in parallel or concurrently. In various embodiments, the blocks illustrated here are performed by PPM 103. For example, machine-readable instructions are provided that, when executed by a processor (e.g., a SoC), cause PPM 103 to perform the method of flowchart 300. Details of the operations are described with reference to FIG. 1.

At block 301, Equation 1 is evaluated to determine Vth as a function of Vbat,nl, Vth,gap, and Vth,max. At block 302, Equation 2 is evaluated to calculate Psys,pk as a function of Vth, Vbat,nl, Csys, Δt, Rbat, and Vsys,min. At block 303, Equation 3 is evaluated to calculate Psoc,pk as a function of Psys,pk, Prop, and ηVR. At block 304, it is determined whether Psoc,pk is greater than Psoc,pk,max. If Psoc,pk is greater than Psoc,pk,max, the process proceeds to block 305, where Psoc,pk is set to Psoc,pk,max, and the process then proceeds to block 308. If Psoc,pk is less than or equal to Psoc,pk,max, the process proceeds to block 306. At block 306, it is determined whether Psoc,pk is less than Psoc,pk,min. If Psoc,pk is less than Psoc,pk,min, the process proceeds to block 307, where Psoc,pk is set to Psoc,pk,min, and the process then proceeds to block 308. If Psoc,pk is greater than or equal to Psoc,pk,min, the process proceeds to block 310.

At block 308, Equation 4 is evaluated to recalculate Psys,pk as a function of Psoc,pk, Prop, and ηVR. At block 309, Equation 5 is evaluated to recalculate Vth as a function of Psys,pk, Vbat,nl, Csys, Δt, Rbat, and Vsys,min. At block 310, the PPM 103 sends the calculated Vth to the throttling HW circuit 105. At block 311, the PPM 103 sends the calculated Psoc,pk to the SoC. In this way, by dynamically adjusting Vth and Psoc,pk, unexpected system shutdown (or black screen) can be prevented.
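A compact software model of flowchart 300 follows, strictly as a sketch: the names are illustrative, and blocks 302 and 309 use the assumed reconstruction of Equations 2 and 5 given earlier, not equations reproduced from the disclosure.

```python
# Sketch of PPM flowchart 300. Blocks 302 and 309 use the ASSUMED
# reconstruction of Equations 2 and 5; the disclosure's own equations
# are not reproduced in this text.
def ppm_update(vbat_nl, rbat,                  # runtime, from fuel gauge 104a
               vth_max, vth_gap, csys, dt,     # static/programmable, storage 107
               vsys_min, p_rop, eta_vr,
               psoc_pk_max, psoc_pk_min):
    vth = min(vth_max, vbat_nl - vth_gap)                      # block 301 (Eq. 1)
    psys_pk = vsys_min * ((vbat_nl - vsys_min) / rbat          # block 302
                          + csys * (vth - vsys_min) / dt)      # (assumed Eq. 2)
    psoc_pk = eta_vr * (psys_pk - p_rop)                       # block 303 (Eq. 3)
    clipped = min(max(psoc_pk, psoc_pk_min), psoc_pk_max)      # blocks 304-307
    if clipped != psoc_pk:
        psoc_pk = clipped
        psys_pk = psoc_pk / eta_vr + p_rop                     # block 308 (Eq. 4)
        vth = vsys_min + (dt / csys) * (psys_pk / vsys_min     # block 309
                                        - (vbat_nl - vsys_min) / rbat)  # (assumed Eq. 5)
    # Blocks 310-311: Vth goes to throttle circuit 105, Psoc,pk to the SoC.
    return vth, psoc_pk
```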
FIG. 4 illustrates a smart device or computer system or SoC (system-on-chip) implementing a scheme that allows the SoC peak power limit (Psoc,pk) to be higher than the battery's sustained peak capability without violating the system voltage (Vsys) minimum level (Vsys,min), in accordance with some embodiments. In some embodiments, device 2400 represents a suitable computing device, such as a computing tablet, a mobile or smart phone, a laptop, a desktop, an Internet-of-Things (IoT) device, a server, a wearable device, a set-top box, an e-reader with wireless capabilities, and so on. It will be appreciated that certain components are shown in general terms, and that not all components of such a device are shown in device 2400.

In one example, the device 2400 includes a SoC (system-on-chip) 2401. Example boundaries of SoC 2401 are illustrated with dashed lines in FIG. 4, with some example components illustrated as being included within SoC 2401; however, SoC 2401 may include any suitable components of device 2400.

In some embodiments, device 2400 includes processor 2404. Processor 2404 may include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, processing cores, or other processing devices.

The processing operations performed by the processor 2404 include the execution of an operating platform or operating system on which application and/or device functions are performed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, operations related to connecting computing device 2400 to another device, and the like. Processing operations may also include operations related to audio I/O and/or display I/O.

In some embodiments, the processor 2404 includes multiple processing cores (also referred to as cores) 2408a, 2408b, 2408c. Although only three cores 2408a, 2408b, 2408c are illustrated in FIG. 4, the processor 2404 may include any other suitable number of processing cores, for example tens or even hundreds of processing cores. The processor cores 2408a, 2408b, 2408c may be implemented on a single integrated circuit (IC) chip. Additionally, the chip may include one or more shared and/or private caches, buses or interconnects, graphics and/or memory controllers, or other components.

In some embodiments, processor 2404 includes cache 2406. In an example, some sections of cache 2406 may be dedicated to individual cores 2408 (e.g., a first section of cache 2406 dedicated to core 2408a, a second section of cache 2406 dedicated to core 2408b, and so on). In an example, one or more sections of cache 2406 may be shared between two or more of cores 2408. The cache 2406 may be partitioned into different levels, such as a level 1 (L1) cache, a level 2 (L2) cache, a level 3 (L3) cache, and so on.

In some embodiments, processor core 2404 may include a fetch unit to fetch instructions (including instructions with conditional branches) for execution by core 2404. The instructions may be fetched from any storage device, such as memory 2430. Processor core 2404 may also include a decode unit to decode the fetched instructions. For example, the decode unit may decode a fetched instruction into multiple micro-operations. Processor core 2404 may include a scheduling unit to perform various operations associated with storing decoded instructions. For example, the scheduling unit may hold data from the decode unit until an instruction is ready for dispatch, e.g., until all source values of a decoded instruction become available. In one embodiment, the scheduling unit may schedule and/or issue (or dispatch) decoded instructions to an execution unit for execution.

The execution unit may execute dispatched instructions after they are decoded (e.g., by the decode unit) and dispatched (e.g., by the scheduling unit). In an embodiment, the execution unit may include more than one execution unit (e.g., an imaging computation unit, a graphics computation unit, a general-purpose computation unit, etc.). The execution unit may also perform various arithmetic operations, such as addition, subtraction, multiplication, and/or division, and may include one or more arithmetic logic units (ALUs). In an embodiment, a coprocessor (not shown) may perform various arithmetic operations in conjunction with the execution unit.

Additionally, the execution unit may execute instructions out of order. Thus, processor core 2404 may be an out-of-order processor core in one embodiment. The processor core 2404 may also include a retirement unit. The retirement unit may retire executed instructions after they are committed.
In an embodiment, retirement of an executed instruction may result in processor state being committed from the execution of the instruction, physical registers used by the instruction being deallocated, and the like. The processor core 2404 may also include a bus unit to enable communication between the components of the processor core 2404 and other components via one or more buses. The processor core 2404 may also include one or more registers to store data accessed by various components of the core 2404 (e.g., values related to assigned application priorities and/or subsystem state (mode) associations).

In some embodiments, device 2400 includes connectivity circuitry 2431. For example, connectivity circuitry 2431 includes hardware devices (e.g., wireless and/or wired connectors and communication hardware) and/or software components (e.g., drivers, protocol stacks), e.g., to enable device 2400 to communicate with external devices. External devices may include, for example, other computing devices, wireless access points or base stations, and the like.

In an example, the connectivity circuitry 2431 may include multiple different types of connectivity. In general, connectivity circuitry 2431 may include cellular connectivity circuitry, wireless connectivity circuitry, and the like. Cellular connectivity circuitry of connectivity circuitry 2431 generally refers to cellular network connectivity provided by wireless carriers, e.g., via: GSM (global system for mobile communications) or variants or derivatives; CDMA (code division multiple access) or variants or derivatives; TDM (time division multiplexing) or variants or derivatives; 3rd Generation Partnership Project (3GPP) Universal Mobile Telecommunications System (UMTS) or variants or derivatives; 3GPP Long-Term Evolution (LTE) or variants or derivatives; 3GPP LTE-Advanced (LTE-A) or variants or derivatives; fifth generation (5G) wireless systems or variants or derivatives; 5G mobile network systems or variants or derivatives; 5G New Radio (NR) systems or variants or derivatives; or other cellular service standards. Wireless connectivity circuitry (or a wireless interface) of connectivity circuitry 2431 refers to non-cellular wireless connectivity, and may include personal area networks (e.g., Bluetooth, near field, etc.), local area networks (e.g., Wi-Fi), and/or wide area networks (e.g., WiMax), and/or other wireless communication. In an example, connectivity circuitry 2431 may include a network interface, such as a wired or wireless interface, e.g., so that a system embodiment may be incorporated into a wireless device, for example a cellular telephone or a personal digital assistant.

In some embodiments, device 2400 includes a control hub 2432 that represents hardware devices and/or software components related to interaction with one or more I/O devices. For example, processor 2404 may communicate, via control hub 2432, with one or more of display 2422, one or more peripheral devices 2424, storage device 2428, one or more other external devices 2429, and the like. The control hub 2432 may be a chipset, a platform control hub (PCH), and the like.

For example, control hub 2432 illustrates one or more connection points for additional devices that connect to device 2400, e.g., through which a user may interact with the system.
For example, devices attachable to device 2400 (e.g., device 2429) include microphone devices, speakers or stereo systems, audio devices, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications, such as card readers or other devices.

As described above, the control hub 2432 can interact with audio devices, the display 2422, and the like. For example, input through a microphone or other audio device may provide input or commands for one or more applications or functions of device 2400. Furthermore, audio output may be provided instead of, or in addition to, display output. In another example, if the display 2422 includes a touch screen, the display 2422 also acts as an input device, which may be managed at least in part by the control hub 2432. There may also be additional buttons or switches on computing device 2400 to provide I/O functions managed by control hub 2432. In one embodiment, control hub 2432 manages devices such as accelerometers, cameras, light sensors, or other environmental sensors, or other hardware that may be included in device 2400. The input may be part of direct user interaction, or may provide environmental input to the system to influence its operation (e.g., filtering for noise, adjusting a display for brightness detection, applying a flash for a camera, or other features).

In some embodiments, the control hub 2432 may couple to various devices using any suitable communication protocol, e.g., PCIe (Peripheral Component Interconnect Express), USB (Universal Serial Bus), Thunderbolt, High Definition Multimedia Interface (HDMI), Firewire, etc.

In some embodiments, display 2422 represents hardware (e.g., display device) and software (e.g., driver) components that provide a visual and/or tactile display for user interaction with device 2400. Display 2422 may include a display interface, a display screen, and/or hardware devices used to provide a display to a user. In some embodiments, display 2422 includes a touch screen (or touchpad) device that provides both output and input to the user. In one example, the display 2422 may communicate directly with the processor 2404. Display 2422 may be one or more of an internal display device, as in a mobile electronic device or laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment, display 2422 may be a head-mounted display (HMD), such as a stereoscopic display device, for use in virtual reality (VR) applications or augmented reality (AR) applications.

In some embodiments, although not shown in the figures, the device 2400 may include, in addition to (or in place of) the processor 2404, a Graphics Processing Unit (GPU) comprising one or more graphics processing cores, which may control one or more aspects of displaying content on display 2422.

The control hub 2432 (or platform controller hub) may include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks), to make peripheral connections, e.g., to peripheral devices 2424.

It will be appreciated that device 2400 can be a peripheral to other computing devices, or can have peripherals connected to it. Device 2400 may have a "dock" connector to connect to other computing devices, e.g., to manage (e.g., download and/or upload, change, synchronize) content on device 2400.
Additionally, a docking connector may allow device 2400 to connect to certain peripherals that allow computing device 2400 to control content output, for example, to audiovisual or other systems.

In addition to proprietary docking connectors or other proprietary connection hardware, device 2400 may make peripheral connections via common or standards-based connectors. Common types include Universal Serial Bus (USB) connectors (which may include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), Firewire, or other types.

In some embodiments, the connectivity circuitry 2431 may be coupled to the control hub 2432, e.g., in addition to, or instead of, being coupled directly to the processor 2404. In some embodiments, display 2422 may be coupled to the control hub 2432, e.g., in addition to, or instead of, being coupled directly to processor 2404.

In some embodiments, the device 2400 includes a memory 2430 coupled to the processor 2404 via a memory interface 2434. Memory 2430 includes memory devices for storing information in device 2400.

In some embodiments, memory 2430 includes apparatus to maintain stable clocking, as described with reference to various embodiments. The memory may include non-volatile memory devices (whose state does not change if power to the memory device is interrupted) and/or volatile memory devices (whose state is indeterminate if power to the memory device is interrupted). The memory device 2430 may be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance for use as process memory. In one embodiment, memory 2430 may serve as system memory for device 2400, storing data and instructions for use when the one or more processors 2404 execute applications or processes. Memory 2430 may store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of device 2400.

Elements of the various embodiments and examples may also be provided as a machine-readable medium (e.g., memory 2430) storing computer-executable instructions (e.g., instructions implementing any of the other processes discussed herein). The machine-readable medium (e.g., memory 2430) may include, but is not limited to, flash memory, optical disks, CD-ROMs, DVD-ROMs, RAM, EPROMs, EEPROMs, magnetic or optical cards, phase-change memory (PCM), or other types of machine-readable media suitable for storing electronic or computer-executable instructions. For example, embodiments of the present disclosure may be downloaded as a computer program (e.g., a BIOS) transferred by data signals from a remote computer (e.g., a server) to a requesting computer (e.g., a client).

In some embodiments, device 2400 includes temperature measurement circuitry 2440, e.g., for measuring the temperature of various components of device 2400. In an example, the temperature measurement circuitry 2440 may be embedded in, or coupled or attached to, various components whose temperature is to be measured and monitored.
For example, temperature measurement circuitry 2440 may measure the temperature of (or the temperature within) one or more of cores 2408a, 2408b, 2408c, voltage regulator 2414, memory 2430, the motherboard of SoC 2401, and/or any other suitable component of device 2400.

In some embodiments, device 2400 includes power measurement circuitry 2442, e.g., for measuring power consumed by one or more components of device 2400. In one example, the power measurement circuitry 2442 may measure voltage and/or current in addition to, or instead of, measuring power. In an example, the power measurement circuitry 2442 may be embedded in, or coupled or attached to, various components whose power, voltage, and/or current consumption is to be measured and monitored. For example, power measurement circuitry 2442 may measure power, current, and/or voltage supplied by one or more voltage regulators 2414, power supplied to SoC 2401, power supplied to device 2400, power consumed by processor 2404 (or any other component) of device 2400, and so on.

In some embodiments, device 2400 includes one or more voltage regulator circuits, generally referred to as voltage regulators (VRs) 2414. VR 2414 generates signals at appropriate voltage levels, which may be supplied to operate any appropriate components of device 2400. For example only, VR 2414 is illustrated as supplying signals to processor 2404 of device 2400. In some embodiments, VR 2414 receives one or more Voltage Identification (VID) signals and generates a voltage signal at an appropriate level based on the VID signals. Various types of VRs may be used for the VR 2414. For example, VR 2414 may include a "buck" VR, a "boost" VR, a combination of buck and boost VRs, a low dropout (LDO) regulator, a switching DC-DC regulator, a constant-on-time-controlled DC-DC regulator, etc. Buck VRs are typically used in power delivery applications where an input voltage needs to be transformed to an output voltage at a ratio smaller than unity. Boost VRs are typically used in power delivery applications where an input voltage needs to be transformed to an output voltage at a ratio larger than unity. In some embodiments, each processor core has its own VR, which is controlled by PCU 2410a/b and/or PMIC 2412. In some embodiments, each core has a network of distributed LDOs to provide efficient control of power management. The LDOs can be digital, analog, or a combination of digital and analog LDOs. In some embodiments, VR 2414 includes a current tracking apparatus to measure the current through the power supply rail(s).

In some embodiments, VR 2414 includes a digital control scheme to manage the states of a proportional-integral-derivative (PID) filter (also known as a digital Type-III compensator). The digital control scheme controls the integrator of the PID filter to implement nonlinear control of saturating the duty cycle, during which the proportional and derivative terms of the PID are set to 0 while the integrator and its internal state (previous value or memory) are set to a duty cycle that is the sum of the current nominal duty cycle plus deltaD, where deltaD is the maximum duty-cycle increment needed to adjust the voltage regulator from ICCmin to ICCmax and is a configuration register that can be set after tape-out. The state machine transitions from a nonlinear full-on state (which brings the output voltage Vout back into the regulation window) to an open-loop duty cycle that maintains the output voltage slightly above the desired reference voltage Vref. After a period of time in this open-loop state at the commanded duty cycle, the state machine then ramps down the open-loop duty-cycle value until the output voltage approaches the commanded Vref. In this way, output dither on the output supply from VR 2414 is completely (or substantially) eliminated, and there is only a single undershoot transition, whose depth depends on the comparator delay and the dI/dt of the load, with the available output decoupling capacitance guaranteeing Vmin.
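A rough software sketch of the nonlinear duty-cycle control just described may help. Everything below is an assumption for illustration: the state names, the hold duration, the first-order toy plant, and the numeric constants are invented, and only deltaD, Vout, and Vref come from the prose; this is not the actual compensator.

    #include <stdio.h>

    /* Illustrative sketch of the VR 2414 nonlinear control described above.
     * deltaD is the configurable maximum duty-cycle increment (ICCmin to
     * ICCmax); all values and the plant model are assumptions only. */
    typedef enum { FULL_ON, OPEN_LOOP_HOLD, RAMP_DOWN, CLOSED_LOOP } vr_state_t;

    typedef struct {
        vr_state_t state;
        double     duty;        /* commanded open-loop duty cycle      */
        double     nominal;     /* current nominal duty cycle          */
        double     deltaD;      /* config register, set after tape-out */
        int        hold_ticks;  /* time spent in the open-loop state   */
    } vr_ctrl_t;

    /* One control step: vout is the measured output, vref the target. */
    static void vr_step(vr_ctrl_t *c, double vout, double vref) {
        switch (c->state) {
        case FULL_ON:                         /* restore regulation window */
            c->duty = c->nominal + c->deltaD; /* integrator preset, P/D=0  */
            if (vout > vref) c->state = OPEN_LOOP_HOLD;
            break;
        case OPEN_LOOP_HOLD:                  /* hold slightly above Vref  */
            if (++c->hold_ticks > 8) c->state = RAMP_DOWN;
            break;
        case RAMP_DOWN:                       /* ramp open-loop duty down  */
            c->duty -= 0.01 * c->deltaD;
            if (vout <= vref * 1.01) c->state = CLOSED_LOOP;
            break;
        case CLOSED_LOOP:                     /* normal PID control resumes */
            break;
        }
    }

    int main(void) {
        vr_ctrl_t c = { FULL_ON, 0.0, 0.40, 0.10, 0 };
        double vout = 0.90, vref = 1.00;
        for (int t = 0; t < 200; t++) {
            vr_step(&c, vout, vref);
            vout += 0.6 * (2.4 * c.duty - vout);  /* toy first-order plant */
        }
        printf("final state=%d duty=%.3f vout=%.3f\n", c.state, c.duty, vout);
        return 0;
    }

The sequence mirrors the prose: a full-on phase restores regulation, an open-loop hold keeps Vout slightly above Vref, and a slow ramp-down hands control back to the closed loop with at most a single undershoot.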
In some embodiments, device 2400 includes one or more clock generator circuits, generally referred to as clock generator 2416. Clock generator 2416 generates clock signals at appropriate frequency levels, which may be supplied to any appropriate components of device 2400. For example only, clock generator 2416 is illustrated as supplying a clock signal to processor 2404 of device 2400. In some embodiments, the clock generator 2416 receives one or more Frequency Identification (FID) signals and generates clock signals at an appropriate frequency based on the FID signals.

In some embodiments, device 2400 includes a battery 2418 supplying power to various components of device 2400. For example only, battery 2418 is illustrated as supplying power to processor 2404. Although not shown in the figures, the device 2400 may include a charging circuit, e.g., to recharge the battery based on an alternating current (AC) power supply received from an AC adapter. In some embodiments, battery 2418 includes a battery subsystem that includes battery control and a driver MOS (DrMOS) block.

In some embodiments, the charging circuit (e.g., of 2418) comprises a buck-boost converter. This buck-boost converter comprises DrMOS or DrGaN devices that replace the half-bridges of traditional buck-boost converters. Various embodiments here are described with reference to DrMOS; however, the embodiments are also applicable to DrGaN. The DrMOS devices allow better power conversion efficiency due to reduced parasitics and optimized MOSFET packaging. Since dead-time management is internal to the DrMOS, the dead-time management is more accurate than in traditional buck-boost converters, leading to higher conversion efficiency. The higher operating frequency allows for a smaller inductor size, which in turn reduces the z-height of a charger comprising a DrMOS-based buck-boost converter. The buck-boost converter of various embodiments comprises dual folded bootstrap for the DrMOS devices. In some embodiments, in addition to the traditional bootstrap capacitors, folded bootstrap capacitors are added that cross-couple the inductor nodes to the two sets of DrMOS switches.

In some embodiments, the device 2400 includes a Power Control Unit (PCU) 2410 (also referred to as a Power Management Unit (PMU), power controller, etc.). In an example, some portions of PCU 2410 may be implemented by one or more processing cores 2408, and these portions of PCU 2410 are symbolically illustrated using a dashed box and labeled PCU 2410a. In an example, some other portions of PCU 2410 may be implemented outside the processing cores 2408, and these portions of PCU 2410 are symbolically illustrated using a dashed box and labeled PCU 2410b. PCU 2410 may implement various power management operations for device 2400.
PCU 2410 may include hardware interfaces, hardware circuitry, connectors, registers, etc., as well as software components (e.g., drivers, protocol stacks), to implement various power management operations for device 2400.

In some embodiments, device 2400 includes a power management integrated circuit (PMIC) 2412, e.g., to implement various power management operations for device 2400. In some embodiments, the PMIC 2412 is a Reconfigurable Power Management IC (RPMIC) and/or an IMVP (mobile voltage positioning) IC. In one example, the PMIC is within an IC chip separate from the processor 2404 and may implement various power management operations for device 2400. PMIC 2412 may include hardware interfaces, hardware circuitry, connectors, registers, etc., as well as software components (e.g., drivers, protocol stacks), to implement various power management operations for device 2400.

In an example, device 2400 includes one or both of PCU 2410 and PMIC 2412. In an example, either PCU 2410 or PMIC 2412 may be absent from device 2400, and hence these components are illustrated using dashed lines.

Various power management operations of device 2400 may be performed by PCU 2410, by PMIC 2412, or by a combination of PCU 2410 and PMIC 2412. For example, PCU 2410 and/or PMIC 2412 may select power states (e.g., P-states) for various components of device 2400, e.g., in accordance with the ACPI (Advanced Configuration and Power Interface) specification. For example only, PCU 2410 and/or PMIC 2412 may cause various components of device 2400 to transition to a sleep state, to an active state, to an appropriate C-state (e.g., the C0 state, or another appropriate C-state, in accordance with the ACPI specification), and so on. In an example, PCU 2410 and/or PMIC 2412 may control the voltage output by VR 2414 and/or the frequency of the clock signal output by the clock generator, e.g., by outputting the VID signal and/or the FID signal, respectively. In one example, PCU 2410 and/or PMIC 2412 may control battery power usage, charging of battery 2418, and features related to power-saving operation.

The clock generator 2416 may comprise a phase-locked loop (PLL), a frequency-locked loop (FLL), or any suitable clock source. In some embodiments, each core of processor 2404 has its own clock source; in this way, each core can operate at a frequency independent of the operating frequencies of the other cores. In some embodiments, PCU 2410 and/or PMIC 2412 perform adaptive or dynamic frequency scaling or adjustment. For example, the clock frequency of a processor core may be increased if the core is not operating at its maximum power consumption threshold or limit. In some embodiments, PCU 2410 and/or PMIC 2412 determine the operating conditions of each core of the processor and, upon determining that a core is operating below a target performance level, opportunistically adjust the frequency and/or supply voltage of that core without the core's clocking source (e.g., the core's PLL) losing lock. For example, if a core is drawing current from a power supply rail that is less than the total current allocated for that core or for processor 2404, PCU 2410 and/or PMIC 2412 may temporarily increase the power draw of that core or of processor 2404 (e.g., by increasing the clock frequency and/or the power supply voltage level) so that the core or processor 2404 can perform at a higher performance level. In this way, voltage and/or frequency may be increased temporarily for processor 2404 without violating product reliability.
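As a hedged illustration of this opportunistic adjustment, consider the following C sketch. The telemetry fields, the 10%/5% step sizes, and the limits are hypothetical; the only behavior taken from the prose is "raise frequency and voltage together when a core has current headroom and is below its performance target."

    #include <stdio.h>

    /* Illustrative sketch of the opportunistic DVFS policy described
     * above. All numbers and the linear scaling model are assumptions
     * for illustration only; they are not the actual PCU/PMIC policy. */
    typedef struct {
        double freq_mhz, volt_mv;
        double current_a, alloc_a;   /* measured draw vs. allocation */
        double perf, target_perf;    /* e.g., normalized throughput  */
    } core_t;

    static void opportunistic_boost(core_t *c,
                                    double max_freq_mhz, double max_volt_mv) {
        int headroom = c->current_a < c->alloc_a;
        int lagging  = c->perf < c->target_perf;
        if (headroom && lagging) {
            /* Raise F and V together so the PLL keeps lock and timing
             * still closes at the higher frequency. */
            c->freq_mhz *= 1.10;
            c->volt_mv  *= 1.05;
            if (c->freq_mhz > max_freq_mhz) c->freq_mhz = max_freq_mhz;
            if (c->volt_mv  > max_volt_mv)  c->volt_mv  = max_volt_mv;
        }
    }

    int main(void) {
        core_t core = { 1600.0, 750.0, 3.2, 5.0, 0.7, 1.0 };
        opportunistic_boost(&core, 3000.0, 1100.0);
        printf("boosted to %.0f MHz @ %.0f mV\n", core.freq_mhz, core.volt_mv);
        return 0;
    }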
In an example, PCU 2410 and/or PMIC 2412 may perform power management operations, e.g., based at least in part on receiving measurements from power measurement circuitry 2442 and temperature measurement circuitry 2440, receiving the charge level of battery 2418, and/or receiving any other appropriate information usable for power management. To that end, the PMIC 2412 is communicatively coupled to one or more sensors to sense/detect various values/variations in one or more factors having an effect on the power/thermal behavior of the system/platform. Examples of the one or more factors include electrical current, voltage droop, temperature, operating frequency, operating voltage, power consumption, inter-core communication activity, and so on. One or more of these sensors may be provided in physical proximity to (and/or in thermal contact or coupling with) one or more components or logic/IP blocks of the computing system. Additionally, in at least one embodiment, the sensor(s) may be directly coupled to PCU 2410 and/or PMIC 2412 to allow PCU 2410 and/or PMIC 2412 to manage processor core power based at least in part on the value(s) detected by one or more of the sensors.

Also illustrated is an example software stack of device 2400 (although not all elements of the software stack are illustrated). For example only, processor 2404 may execute application programs 2450, an operating system 2452, one or more Power Management (PM) specific application programs (e.g., generically referred to as PM applications 2458), and the like. PM applications 2458 may also be executed by PCU 2410 and/or PMIC 2412. OS 2452 may also include one or more PM applications 2456a, 2456b, 2456c. The OS 2452 may also include various drivers 2454a, 2454b, 2454c, etc., some of which may be dedicated to power management purposes. In some embodiments, the device 2400 may further comprise a Basic Input/Output System (BIOS) 2420. BIOS 2420 may communicate with OS 2452 (e.g., via one or more drivers 2454), communicate with processor 2404, and so on.

For example, one or more of the PM applications 2458, 2456, drivers 2454, BIOS 2420, etc. may be used to implement power-management-specific tasks, e.g., to control the voltage and/or frequency of various components of device 2400, to control the wake state, sleep state, and/or any other appropriate power state of various components of device 2400, to control battery power usage and the charging of battery 2418, and to control features related to power-saving operation.

In some embodiments, the battery 2418 is a lithium-metal battery with a pressure chamber to equalize the pressure on the battery. The pressure chamber is supported by metal plates (such as a pressure equalization plate) used to give uniform pressure to the battery cell. The pressure chamber may include pressurized gas, elastic material, spring plates, and the like. The outer skin of the pressure chamber is free to bow, restrained at its edges by the (metal) skin, yet still exerts uniform pressure on the plate that compresses the cell. The pressure chamber gives uniform pressure to the battery, which is used to enable high-energy-density batteries with, for example, 20% more battery life.

In some embodiments, pCode executing on PCU 2410a/b has a capability to enable extra computation and telemetry resources for the runtime support of the pCode. Here, pCode refers to firmware executed by the PCU 2410a/b to manage the performance of the SoC 2401.
For example, pCode may set the frequencies and appropriate voltages for the processor. Parts of the pCode are accessible via OS 2452. In various embodiments, mechanisms and methods are provided that dynamically change an Energy Performance Preference (EPP) value based on workloads, user behavior, and/or system conditions. There may be a well-defined interface between OS 2452 and the pCode. The interface may allow or facilitate the software configuration of several parameters and/or may provide hints to the pCode. As an example, an EPP parameter may inform a pCode algorithm as to whether performance or battery life is more important.

This support may also be provided by the OS 2452 by including machine-learning support as part of OS 2452 and either tuning the EPP value that the OS hints to the hardware (e.g., various components of SoC 2401) based on machine-learning predictions, or by delivering the machine-learning predictions to the pCode in a manner similar to that done by a Dynamic Tuning Technology (DTT) driver. In this model, the OS 2452 has visibility to the same set of telemetry that is available to DTT. As a result of a DTT machine-learning hint setting, the pCode tunes its internal algorithms to achieve optimal power and performance results following the machine-learning prediction of activation type. As an example, the pCode may increase its responsiveness to changes in processor utilization to enable fast response to user activity, or it may increase the bias toward energy saving, either by reducing its responsiveness to processor utilization changes or by tuning energy-saving optimizations and accepting a larger performance loss. This approach may facilitate saving more battery life in cases where the enabled types of activity lose some performance level relative to what the system could otherwise enable. The pCode may include an algorithm for dynamic EPP that may take two inputs, one from the OS 2452 and the other from software such as DTT, and may selectively choose to provide higher performance and/or responsiveness. As part of this approach, the pCode may enable an option in DTT to tune its reaction to DTT for different types of activity.

In some embodiments, the pCode improves the performance of the SoC in battery mode. In some embodiments, the pCode allows a drastically higher SoC peak power limit level (and thus higher turbo-mode performance) in battery mode. In some embodiments, the pCode implements power throttling and is part of Intel's Dynamic Tuning Technology (DTT). In various embodiments, the peak power limit is referred to as PL4; however, the embodiments are applicable to other peak power limits. In some embodiments, the pCode sets the Vth threshold voltage (the voltage level at which the platform will throttle the SoC) in a way that prevents unexpected system shutdowns (or black screens). In some embodiments, the pCode calculates the Psoc,pk SoC peak power limit (e.g., PL4) based on the threshold voltage (Vth). These are two dependent parameters: if one is set, the other can be calculated. The pCode is used to optimally set one parameter (Vth) based on the system parameters and the history of operation. In some embodiments, the pCode provides a scheme to dynamically calculate the throttling level (Psoc,th) based on the available battery power (which changes slowly) and to set the SoC throttling peak power (Psoc,th) accordingly. In some embodiments, the pCode decides the frequencies and voltages based on Psoc,th; in this case, throttling events have less of a negative effect on SoC performance. Various embodiments provide a scheme that allows a maximum performance (Pmax) framework to operate.
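Because Vth and Psoc,pk are described above as two dependent parameters (fixing one determines the other), a toy battery model can make the dependency concrete. The model below, a battery reduced to a no-load voltage Voc in series with a resistance Rbat, and all numeric constants are assumptions for illustration only; the actual pCode computation also involves rail capacitance, response time, and a minimum rail voltage, as the numbered examples later in this description enumerate.

    #include <math.h>
    #include <stdio.h>

    /* Toy model: battery with no-load voltage Voc and internal resistance
     * Rbat. If the platform throttles when the rail falls to Vth, the
     * sustainable peak power at that point is P = Vth * (Voc - Vth) / Rbat.
     * This is an illustrative assumption, not the actual pCode formula. */
    static double peak_power_w(double voc, double vth, double rbat) {
        return vth * (voc - vth) / rbat;
    }

    /* Inverse direction: given a desired peak power, solve the quadratic
     * Vth^2 - Voc*Vth + P*Rbat = 0 for the higher root (safer threshold). */
    static double vth_for_power(double voc, double p_w, double rbat) {
        double disc = voc * voc - 4.0 * p_w * rbat;
        return disc < 0 ? -1.0 : 0.5 * (voc + sqrt(disc));
    }

    int main(void) {
        double voc  = 11.4, rbat = 0.12;           /* assumed battery pack */
        double vth  = 9.0;                         /* throttle trigger     */
        double pl4  = peak_power_w(voc, vth, rbat);/* dependent peak limit */
        printf("Vth=%.2f V -> Psoc,pk=%.1f W\n", vth, pl4);
        printf("P=100 W -> Vth=%.2f V\n", vth_for_power(voc, 100.0, rbat));
        return 0;
    }

Under this toy model, raising Vth (for a typical Vth above Voc/2) lowers the sustainable peak power, which is precisely the trade-off the scheme above manages at runtime.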
In some embodiments, VR 2414 includes a current sensor to sense and/or measure current through the high-side switch of VR 2414. In some embodiments, the current sensor uses an amplifier with a capacitively coupled input in feedback to sense the input offset of the amplifier, which can be compensated for during measurement. In some embodiments, the amplifier with the capacitively coupled input in feedback is used to operate the amplifier in a region where the input common-mode specifications are relaxed, so that the feedback loop gain and/or bandwidth is higher. In some embodiments, the amplifier with the capacitively coupled input in feedback is used to operate the sensor from the converter input voltage by employing a high-PSRR (power supply rejection ratio) regulator to create a local, clean supply voltage, causing less disturbance to the power grid in the switch area. In some embodiments, a variant of this design can be used to sample the difference between the input voltage and the controller supply, and to recreate it between the drain voltages of the power switch and a replica switch. This allows the sensor to not be exposed to the supply voltage. In some embodiments, the amplifier with the capacitively coupled input in feedback is used to compensate for power-delivery-network-dependent (PDN-dependent) changes in the input voltage during current sensing.

Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with those embodiments is included in at least some embodiments, but not necessarily in all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. If the specification states that a component, feature, structure, or characteristic "may," "might," or "could" be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or a claim refers to "a" or "an" element, that does not mean there is only one of the element. If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element.

Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.

While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. The embodiments of the present disclosure are intended to embrace all such alternatives, modifications, and variations as fall within the broad scope of the appended claims.

Furthermore, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown within the presented figures, for simplicity of illustration and discussion, and so as not to obscure the present disclosure.
Additionally, arrangements may be shown in block diagram form in order to avoid obscuring the disclosure, and also in view of the fact that specifics with respect to the implementation of such block diagram arrangements are highly dependent upon the platform within which the disclosure is to be implemented (i.e., such specifics should be well within the purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the present disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative rather than restrictive.

The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments. All optional features of the apparatus described herein may also be implemented with respect to a method or process. The examples can be combined in any combination; for example, Example 4 can be combined with Example 2.

Example 1: A machine-readable storage medium comprising machine-executable instructions that, when executed, cause one or more machines to perform a method comprising: calculating a current threshold voltage as a function of the battery no-load voltage, a maximum threshold voltage, and a voltage gap between the battery no-load voltage and a previous threshold voltage; calculating a processor peak power limit as a function of a system peak power limit, which in turn is a function of the current threshold voltage; sending the current threshold voltage to a threshold circuit; and sending the processor peak power limit to the processor, wherein the current threshold voltage sets a threshold voltage used to trigger throttling of the processor to manage peak power of the processor.

Example 2: The machine-readable storage medium of Example 1, wherein calculating the processor peak power limit comprises calculating the system peak power as a function of: the current threshold voltage, the battery no-load voltage, a system power rail capacitance, a time between the system voltage falling below the current threshold voltage and the system reducing peak power, a battery resistance, and a minimum voltage level of the system power rail.

Example 3: The machine-readable storage medium of Example 2, wherein calculating the processor peak power limit comprises scaling a difference between the system peak power and a power of the rest of the platform.

Example 4: The machine-readable storage medium of Example 3, wherein scaling the difference comprises accounting for power conversion losses of a voltage regulator.

Example 5: The machine-readable storage medium of Example 1, comprising machine-executable instructions that, when executed, cause the one or more machines to perform the method comprising: comparing the processor peak power limit against a maximum power peak power limit; and, if the processor peak power limit is greater than the maximum power peak power limit, setting the processor peak power limit to the maximum power peak power limit.

Example 6: The machine-readable storage medium of Example 5, comprising machine-executable instructions that, when executed, cause the one or more machines to perform the method comprising: comparing the processor peak power limit against a minimum power peak power limit; and, if the processor peak power limit is less than the minimum power peak power limit,
setting the processor peak power limit to the minimum power peak power limit.

Example 7: The machine-readable storage medium of Example 1, wherein the processor peak power limit is an upper limit on the instantaneous peak power that a battery and a capacitance of a system power rail can provide before the processor is throttled.

Example 8: The machine-readable storage medium of Example 1, comprising machine-executable instructions that, when executed, cause the one or more machines to perform the method comprising: reading the battery no-load voltage from a fuel gauge of a battery, wherein the battery no-load voltage is a runtime variable.

Example 9: The machine-readable storage medium of Example 1, comprising machine-executable instructions that, when executed, cause the one or more machines to perform the method comprising: lowering the current threshold voltage if it is determined that the processor does not throttle when the processor peak power limit exceeds a peak power of the processor.

Example 10: A system comprising: a system load including a system-on-chip; a battery having a fuel gauge to provide a battery no-load voltage; a threshold circuit to throttle the system-on-chip according to a threshold; a memory to store a maximum threshold voltage and a voltage gap between the battery no-load voltage and a previous threshold voltage; and a power manager to: calculate a current threshold voltage as a function of the battery no-load voltage, the maximum threshold voltage, and the voltage gap; calculate a processor peak power limit as a function of a system peak power limit, which in turn is a function of the current threshold voltage; send the current threshold voltage to the threshold circuit; and send the processor peak power limit to the system-on-chip, wherein the current threshold voltage sets a threshold voltage used to trigger throttling of the system load to manage peak power of the system load.

Example 11: The system of Example 10, wherein the power manager is to calculate the system peak power as a function of: the current threshold voltage, the battery no-load voltage, a system power rail capacitance, a time between the system voltage falling below the current threshold voltage and the system reducing peak power, a battery resistance, and a minimum voltage level of the system power rail.

Example 12: The system of Example 11, wherein the memory stores the system power rail capacitance and the time between the system voltage falling below the current threshold voltage and the system reducing peak power.

Example 13: The system of Example 11, wherein the fuel gauge provides the battery resistance.

Example 14: The system of Example 11, wherein the power manager is to scale a difference between the system peak power and a power of the rest of the platform, wherein the rest of the platform is part of the system load.

Example 15: The system of Example 11, wherein the power manager is to scale the difference to account for power conversion losses of a voltage regulator.

Example 16: The system of Example 14, wherein the power manager is to: compare the processor peak power limit against a maximum power peak power limit; and, if the processor peak power limit is greater than the maximum power peak power limit, set the processor peak power limit to the maximum power peak power limit.

Example 17: The system of Example 14, wherein the power manager is to: compare the processor peak power limit against a minimum power peak power limit; and, if the processor peak power limit is
less than the minimum power peak power limit, set the processor peak power limit to the minimum power peak power limit.

Example 18: An apparatus comprising: a processor; and a power manager coupled to the processor, the power manager to dynamically adjust a threshold voltage, which determines when to throttle the processor, and to determine a peak power limit for the processor to improve performance of the processor.

Example 19: The apparatus of Example 18, wherein the power manager is to read a battery no-load voltage and a battery impedance from a fuel gauge of a battery to determine the threshold voltage.

Example 20: The apparatus of Example 18, wherein the power manager is to: compare the peak power limit against a maximum power peak power limit; and, if the peak power limit is greater than the maximum power peak power limit, set the peak power limit to the maximum power peak power limit.

An Abstract is provided that will allow the reader to ascertain the nature and gist of the technical disclosure. The Abstract is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
An apparatus according to one embodiment may include an integrated circuit. The integrated circuit may include a plurality of communication channels. The integrated circuit may be capable of communicating with at least one remote node external to the integrated circuit, via at least one of the communication channels, in accordance with at least one communication protocol. Each of the plurality of communication channels may provide a communication path between a host system and the at least one remote node. The integrated circuit may be further capable of operating each communication channel independently of each other and independently of the host system. Of course, many alternatives, variations, and modifications are possible without departing from this embodiment.
1. An apparatus for operating multiple communication channels independently, comprising:

an integrated circuit including a plurality of communication channels, the integrated circuit being capable of communicating, in accordance with at least one communication protocol, with at least one remote node external to the integrated circuit via at least one of the communication channels, each of the plurality of communication channels providing a communication path between a host system and at least one of the remote nodes, the integrated circuit also being capable of operating each of the communication channels independently of each other and independently of the host system,

wherein the integrated circuit further includes:

a task management circuit capable of receiving a plurality of tasks from the host system, the task management circuit also being capable of scheduling the plurality of tasks independently of the host system and of selecting a task independently of the host system; and

a respective protocol engine circuit associated with each channel, capable of executing the selected task independently of the host system, the protocol engine circuit also being capable of reporting the result of the executed task to the task management circuit,

wherein the task management circuit and the protocol engine circuits are implemented in hardware circuitry, without use of firmware or software.

2. The apparatus of claim 1, wherein, if one of the plurality of communication channels fails and/or encounters an error condition, the integrated circuit is capable of operating the remaining channels of the plurality of communication channels.

3. The apparatus of claim 1, wherein the at least one communication protocol includes a Serial Attached Small Computer System Interface (SAS) protocol.

4. The apparatus of claim 1, wherein each of the protocol engine circuits is further capable of executing one of the plurality of tasks for one of the plurality of communication channels while another protocol engine circuit of the protocol engine circuits executes another of the plurality of tasks for another of the plurality of communication channels.

5. A system for operating multiple communication channels independently, comprising:

a circuit card including an integrated circuit, the circuit card being capable of being coupled to a bus of a host system, the integrated circuit including a plurality of communication channels, the integrated circuit being capable of communicating, in accordance with at least one communication protocol, with at least one remote node external to the integrated circuit via at least one of the communication channels, each of the plurality of communication channels providing a communication path between the host system and at least one of the remote nodes, the integrated circuit also being capable of operating each of the communication channels independently of each other and independently of the host system,

wherein the integrated circuit further includes:

a task management circuit capable of receiving a plurality of tasks from the host system, the task management circuit also being capable of scheduling the plurality of tasks independently of the host system and of selecting a task independently of the host system; and

a respective protocol engine circuit associated with each channel, capable of executing the selected task independently of the host system, the protocol engine circuit also being capable of reporting the result of the executed task to the task management circuit,

wherein the task management circuit and the protocol engine circuits are implemented in hardware circuitry,
without use of firmware or software.

6. The system of claim 5, further comprising: a circuit board including the bus and a bus interface slot, the circuit card being capable of being coupled to the bus interface slot.

7. The system of claim 5, wherein the at least one remote node includes a mass storage array.

8. The system of claim 7, wherein the mass storage array includes a redundant array of independent disks.

9. A method for operating multiple communication channels independently, comprising:

communicating, in accordance with at least one communication protocol, with at least one remote node via an integrated circuit including a plurality of communication channels, each of the plurality of communication channels providing a communication path between a host system and at least one of the remote nodes; and

operating each of the communication channels independently of each other and independently of the host system,

wherein the integrated circuit further includes a task management circuit and a respective protocol engine circuit associated with each channel, and the communicating and operating further include:

receiving a plurality of tasks from the host system via the task management circuit;

scheduling the plurality of tasks independently of the host system via the task management circuit;

selecting a task independently of the host system via the task management circuit;

executing the selected task independently of the host system via the respective protocol engine circuit of the protocol engine circuits; and

reporting the result of the executed task to the task management circuit via the respective protocol engine circuit of the protocol engine circuits,

wherein the task management circuit and the protocol engine circuits are implemented in hardware circuitry, without use of firmware or software.

10. The method of claim 9, wherein, if one of the plurality of communication channels encounters a failure, the integrated circuit is capable of operating the remaining channels of the plurality of communication channels.

11. The method of claim 9, wherein the operating further comprises: executing one of the plurality of tasks for one of the plurality of communication channels while simultaneously executing another of the plurality of tasks for another of the plurality of communication channels.

12. The method of claim 9, wherein the communication protocol includes a Serial Attached Small Computer System Interface (SAS) protocol.

13. An integrated circuit, comprising:

a first circuit to receive a plurality of tasks from a host system, to schedule the plurality of tasks independently of the host system, and to select a task independently of the host system; and

a second circuit including a plurality of communication channels, each of the communication channels including a respective protocol engine circuit capable of executing a selected task for that communication channel independently of the host system, the second circuit being capable of operating each of the communication channels independently of each other and independently of the host system,

wherein the first circuit and the second circuit are implemented in hardware circuitry, without use of firmware or software.

14. The integrated circuit of claim 13, wherein, if one of the plurality of communication channels encounters a failure, the integrated circuit is capable of operating the remaining channels of the plurality of communication channels.

15. The integrated circuit of claim 13, wherein each of the protocol engine circuits is further capable of executing one of a plurality of tasks on one of the plurality of
communication channels while another protocol engine circuit of the protocol engine circuits executes another of the plurality of tasks on another of the plurality of communication channels.
Integrated circuit capable of operating multiple communication channels independently

Technical Field

The present invention relates to an integrated circuit that enables multiple communication channels to operate independently of one another.

Background

In a traditional data storage solution, a computer node includes a host bus adapter (HBA). The HBA includes a protocol engine that communicates with a data storage system via one or more communication links according to at least one communication protocol. In conventional systems, the host system may include software and/or firmware that issues one or more tasks to the HBA. The tasks may include one or more I/O data transfer commands transferred from the host system to the data storage system via the protocol engine. In addition, in a traditional system, at least a large part of the protocol engine is implemented in software and/or firmware, so tasks are processed in firmware and/or software.

Processing tasks in software and/or firmware requires at least one embedded processor to execute the instructions generated by the software and/or firmware. When software and/or firmware are used to process tasks, the traditional protocol engine must be interrupted multiple times, which increases the total latency of task processing and also requires the software and/or firmware to monitor task progress via the protocol engine in real time. Furthermore, if the protocol engine has multiple channels for processing multiple tasks issued by the host system, a single embedded processor cannot make the multiple communication channels work independently. If the embedded processor is busy processing a task for one communication channel, the processing of the remaining tasks on the remaining communication channels is delayed; consequently, any difficulty encountered on one communication channel adversely affects communication on the remaining channels.

Additionally, if the software and/or firmware is embedded in the host system, these tasks may degrade the performance of the host processor and/or chipset. Therefore, as protocol speed and complexity increase, the processing of tasks by software and/or firmware may become too slow for effective data transmission, especially when multiple tasks involving multiple communication channels and associated ports are transferred from the host system.

Brief Description of the Drawings

Features and advantages of embodiments of the present invention will become apparent from the following detailed description and drawings, in which like reference numerals depict like parts, and in which:

FIG. 1 is a diagram illustrating an embodiment of the system;

FIG. 2 is a diagram illustrating the integrated circuit of FIG. 1 in detail;

FIG. 3 is a diagram illustrating in detail an exemplary embodiment of a task communication circuit of the task management circuit in the integrated circuit of FIG. 2;

FIG. 4 is a diagram illustrating in detail another exemplary embodiment of the task communication circuit of the task management circuit in the integrated circuit of FIG. 2;

FIG. 5 is a diagram illustrating in detail the task scheduling circuit of the task management circuit in the integrated circuit of FIG. 2;

FIG. 6 is a diagram illustrating in detail the wide-port circuit in the task scheduling circuit of the task management circuit in the integrated circuit of FIG. 2;

FIG. 7 is a diagram illustrating in detail the context cache management circuit in the integrated circuit of FIG. 2;
FIG. 8 is a diagram illustrating in detail the transport layer management circuit of the protocol engine circuit in the integrated circuit of FIG. 2;

FIG. 9 is a diagram illustrating in detail the data cache management circuit in the integrated circuit of FIG. 2;

FIG. 10 is a diagram illustrating in detail the link layer management circuit of the protocol engine circuit in the integrated circuit of FIG. 2; and

FIG. 11 is a flowchart illustrating operations that may be performed according to one embodiment.

Although the description proceeds with reference to the illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those of ordinary skill in the art. Accordingly, the claimed subject matter of the invention should be viewed broadly and is defined only as set forth in the claims.

Detailed Description

FIG. 1 illustrates a system embodiment 100 of the present invention. The system 100 may generally include a host system 107, a circuit card 120, and at least one remote node 104. The host system 107 may include a host processor 112, a bus 122, a user interface system 116, a chipset 114, a system memory 121, and a circuit card slot 130. The host processor 112 may include any variety of processor known in the art, such as a commercially available Pentium IV processor from the assignee of this application. The bus 122 may include various types of buses for transmitting data and commands. For example, the bus 122 may comply with the Peripheral Component Interconnect (PCI) Express Base Specification Revision 1.0, published July 22, 2002 (hereinafter referred to as the "PCI Express bus"), available from the PCI Special Interest Group, Portland, Oregon, U.S.A. The bus 122 may alternatively comply with the PCI-X Specification Rev. 1.0a, July 24, 2000 (hereinafter referred to as the "PCI-X bus"), also available from the aforementioned PCI Special Interest Group, Portland, Oregon, U.S.A.

The user interface 116 may include a variety of devices with which a user enters commands and/or data and monitors the system, such as a keyboard, pointing device, and video display. The chipset 114 may include a host bridge/hub system (not shown) that couples the processor 112, system memory 121, and user interface system 116 to each other and to the bus 122. The chipset 114 may include integrated circuit chips, such as those selected from commercially available integrated circuit chipsets offered by the assignee of the present invention (e.g., graphics memory and I/O controller hub chipsets), although other integrated circuit chips may also, or alternatively, be used. The processor 112, the system memory 121, the chipset 114, and the circuit card slot 130 may be integrated onto the circuit board 132 (for example, a system motherboard).

The configuration of the circuit card 120 allows it to be inserted into the slot 130. When the circuit card 120 is properly inserted into the slot 130, connectors 134 and 137 become electrically and mechanically coupled to each other. When the connectors 134 and 137 are so coupled, the circuit card 120 is electrically coupled to the bus 122, whereby data and/or commands can be exchanged with the system memory 121, the host processor 112, and/or the user interface system 116 via the bus 122 and the chipset 114.

The circuit card 120 may include a host bus adapter (HBA), which may include at least one integrated circuit 140 capable of initiating communication between the host system 107 and at least one remote node 104.
The circuit card 120 may communicate with one or more remote nodes via at least one communication link (e.g., 160a, 160b ... 160n) using a variety of communication protocols. In one embodiment, a remote node 170 may include an expander. The expander 170 may couple the one or more links 160a, 160b ... 160n to the remote node 104 via one or more additional links 162a, 162b, 162c ... 162n. Of course, the circuit card 120 may instead be coupled directly to the remote node 104 via links 160a, 160b ... 160n (i.e., without the expander 170) without departing from this embodiment. In addition, one or more of the links 160a, 160b ... 160n may be coupled to other remote nodes (not shown) without departing from this embodiment.

The remote node 104 may include, for example, a mass storage array including a plurality of mass storage devices (e.g., hard disk drives) 104a, 104b, 104c, and 104d. Alternatively or additionally, the remote node may include an expander, a bridge, another host system, and/or other intermediate devices, and/or other devices external to the circuit card 120, without departing from this embodiment. In at least one embodiment, the mass storage array may include, for example, one or more redundant arrays of independent disks (RAID). The RAID level achieved may be RAID level 0, level 1, or greater than level 1. Alternatively or additionally, one or more of the mass storage devices may include solid-state storage devices, such as flash drives, static random access memory (SRAM) drives, and so on.

The integrated circuit 140 may include circuitry capable of initiating communication between the host system 107 and the remote node 104 to exchange data and/or commands between them. The term "integrated circuit," as used in any embodiment of this application, may be defined as a semiconductor device or microelectronic device, such as a semiconductor integrated circuit chip. Likewise, the terms "circuit" and "circuitry," as used in any embodiment of this application, may include, for example, singly or in any combination: hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. Similarly, in any embodiment of this application, the circuitry may be embodied as one or more integrated circuits and/or components thereof.

The circuit card 120 may also include a memory 138. The memory 138 may include one or more of the following: semiconductor firmware memory, programmable memory, non-volatile memory, read-only memory, electrically programmable memory, random access memory, flash memory, magnetic disk memory, and/or optical disk memory. Additionally or alternatively, the memory 138 may include other and/or later-developed varieties of computer-readable memory. Machine-readable firmware program instructions may be stored in the memory 138. These instructions may be accessed and executed by the integrated circuit 140 and, when executed by the integrated circuit 140, may cause the integrated circuit 140 to perform the operations described in this application as being performed by the integrated circuit 140. In addition, the memory 138 and/or other memory (not shown) may store data associated with the operation of the integrated circuit 140, as described in detail below.

Alternatively, the operative circuitry of the circuit card 120 may be included in other structures, systems, and/or devices without departing from this embodiment.
These other structures, systems, and/or devices may, for example, be included in the motherboard 132 of the host system 107 and coupled to the bus 122. Thus, for example, the operative circuitry associated with the integrated circuit 140 described in this application may be included in the chipset 114. Alternatively, the operative circuitry associated with the integrated circuit 140 described in this application may be included in the storage array of the remote node 104. Of course, the operative circuitry associated with the integrated circuit 140 described in this application may also be distributed among more than one integrated circuit without departing from this embodiment.

The host system 107 may generate one or more tasks 150A, 150B ... 150N and transfer these tasks to the integrated circuit 140 (of the circuit card 120) for execution. A task 150A, 150B ... or 150N may include, for example, data transfer, control, and/or management instructions generated by the host system 107. For example, a task 150A, 150B ... or 150N may include one or more I/O instructions for reading data from, and/or writing data to, one or more devices of the remote node 104. Accordingly, the host system 107 may be configured with software and/or drivers (which may execute on, for example, the host processor 112) for generating one or more tasks.

A task (e.g., task 150A) may include task instructions 152A and context information 154A. The task instructions may include instructions associated with a specific task, such as instructions for I/O transactions (e.g., data transfer tasks), primitive sequence tasks (i.e., instructions directing the integrated circuit, when the communication protocol requires it, to generate one or more primitive signal sequences), manual frame tasks, and other instructions to initiate communication with one or more remote nodes.

The context information 154A may include, for example, scheduling context information (which may include, for example, a local port number, a remote node number, priority information, etc.), task context (which may include, for example, the transfer size of an I/O operation, a data buffer pointer, a protocol type, etc.), and/or remote node context (which may include, for example, the remote node port address, the communication protocols supported by the remote node, the remote node port width, the remote node queue depth of each port, connection rate information, etc.).

The integrated circuit 140 may include multiple communication channels. Each channel may be defined by a corresponding protocol engine circuit 144a, 144b ... 144n (abbreviated PEC in FIG. 1). The integrated circuit 140 may further include a task management circuit 142 (abbreviated TMC in FIG. 1). Each channel defined by the corresponding protocol engine circuit 144a, 144b ... 144n may communicate with the at least one remote node 104 according to at least one of multiple communication protocols. For example, if the Fibre Channel (FC) protocol is used by a protocol engine circuit 144 to exchange data and/or commands with the remote node 104, it may comply or be compatible with the interface/protocol described in the ANSI Standard Fibre Channel Physical and Signaling Interface-3, X3.303:1998 Specification. Alternatively or additionally, if the Serial ATA (SATA) protocol is used by a protocol engine circuit 144 to exchange data and/or commands with the remote node 104, it may comply or be compatible with the protocol described in "Serial ATA: High Speed Serialized AT Attachment," Revision 1.0a, published by the Serial ATA Working Group on July 7, 2003, and/or the protocol described in "Serial ATA II: Extensions to Serial ATA," Revision 1.2, published by the Serial ATA Working Group on August 27, 2004, and/or earlier and/or subsequently published SATA standards.
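The task layout just described (task instructions 152A plus scheduling, task, and remote-node context in context information 154A) can be pictured as a C structure. The field names and types below are hypothetical; the patent does not prescribe any particular encoding or memory layout.

    #include <stdint.h>

    /* Hypothetical layout of a task such as 150A, following the prose
     * above: task instructions 152A plus context information 154A. */
    typedef struct {
        uint32_t local_port;        /* scheduling context               */
        uint32_t remote_node;
        uint8_t  priority;
    } sched_ctx_t;

    typedef struct {
        uint32_t xfer_size;         /* task context: I/O transfer size  */
        void    *data_buf;          /* data buffer pointer              */
        uint8_t  protocol;          /* protocol type, e.g., SSP or STP  */
    } task_ctx_t;

    typedef struct {
        uint64_t port_addr;         /* remote node context              */
        uint8_t  supported_protos;
        uint8_t  port_width;        /* wide vs. narrow port             */
        uint16_t queue_depth;       /* per-port queue depth             */
        uint32_t connect_rate;
    } node_ctx_t;

    typedef struct {
        uint8_t     opcode;         /* task instructions: I/O transfer, */
        uint8_t     flags;          /* primitive sequence, manual frame */
        sched_ctx_t sched;
        task_ctx_t  task;
        node_ctx_t  node;
    } task_t;

    int main(void) { task_t t = {0}; (void)t; return 0; }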
The integrated circuit 140 may include multiple communication channels. Each channel may be defined by a corresponding protocol engine circuit 144a, 144b ... 144n (abbreviated PEC in FIG. 1). The integrated circuit 140 may further include a task management circuit 142 (abbreviated TMC in FIG. 1). Each channel defined by a corresponding protocol engine circuit 144a, 144b ... 144n can communicate with at least one remote node 104 according to at least one of multiple communication protocols. For example, if the Fibre Channel (FC) protocol is used by the protocol engine circuit 144 to exchange data and/or commands with the remote node 104, it may comply and/or be compatible with the interface/protocol described in the ANSI Standard Fibre Channel Physical and Signaling Interface-3 X3.303:1998 Specification. Alternatively or additionally, if the Serial ATA (SATA) protocol is used by the protocol engine circuit 144 to exchange data and/or commands with the remote node 104, it may comply and/or be compatible with the protocol described in "Serial ATA: High Speed Serialized AT Attachment," Revision 1.0a, published by the Serial ATA Working Group on July 7, 2003, and/or "Serial ATA II: Extensions to Serial ATA 1.0a," Revision 1.2, published by the Serial ATA Working Group on August 27, 2004, and/or earlier and/or subsequently published versions of the SATA standard.

Further alternatively or additionally, if the Serial Attached Small Computer System Interface (SAS) protocol is used by the protocol engine circuit 144 to exchange data and/or commands with the remote node 104, it may comply and/or be compatible with the protocol described in the Working Draft American National Standard of the International Committee for Information Technology Standards (INCITS) T10 Technical Committee, Project T10/1562-D, "Information Technology - Serial Attached SCSI - 1.1," Revision 1, published September 18, 2003 (hereinafter the "SAS standard"), and/or earlier and/or subsequently published versions of the SAS standard. The SAS communication protocol may include one or more communication transport protocols, such as the Serial Advanced Technology Attachment (ATA) Tunneled Protocol (STP) and the Serial Small Computer System Interface (SCSI) Protocol (SSP). Of course, the protocol engine circuit 144 may also use other communication protocols without departing from this embodiment.

In this embodiment, each communication channel 144a, 144b ... 144n may be a virtual and/or physical link between two points. Thus, for example, each communication channel 144a, 144b ... 144n may provide a communication path between the host system 107 and one or more remote nodes (e.g., remote nodes 170 and/or 104). As will be described in detail below, each communication channel may include a port (e.g., one or more of the links 160a, 160b ... 160n may be coupled to the port). Depending on the communication protocol, a port may include multiple links (a wide port) or a single link (a narrow port). For example, in the SAS communication protocol, multiple links can be assigned to a port, thereby defining a wide port. In at least one embodiment described in this application, each communication channel 144a, 144b ... 144n can operate independently of the other channels and of the host system 107. Therefore, a fault and/or error condition in one or more of the communication channels does not degrade the performance of the other channels. In addition, because it operates independently of the host system 107, each communication channel can enhance data transfer capability.

The task management circuit 142 may receive one or more tasks 150A, 150B ... 150N from the host system 107 and may process multiple tasks independently of the host system 107. For example, the task management circuit 142 may queue multiple tasks, identify the appropriate protocol engine circuit 144a, 144b ... 144n to process a specific task, and transfer one or more tasks to one or more of the protocol engine circuits 144a, 144b ... 144n. The task management circuit 142 can schedule multiple tasks for execution, select one of the scheduled tasks for execution, and, after the task has been executed by the protocol engine circuit 144, report the task status to the software/driver executing on the host system 107.

The protocol engine circuit 144 may execute one or more tasks scheduled by the task management circuit 142 and transmit the task status to the task management circuit 142.
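The division of labor between the task management circuit 142 and the protocol engine circuits 144a, 144b ... 144n can be pictured with a short software analogy. The following C sketch is a rough model only; the function names, the idle-engine selection policy, and the host-reporting hook are assumptions, not part of the disclosure.

```c
/* Software analogy of the task management circuit 142: queue tasks,
 * identify an appropriate protocol engine, dispatch, and report the
 * resulting status back to the host. The external hooks are assumed. */
#define NUM_ENGINES 8

struct task;                               /* as sketched earlier */

enum task_status { TASK_DONE, TASK_IN_PROGRESS, TASK_FAULT };

extern int  engine_is_idle(int engine);
extern enum task_status engine_execute(int engine, struct task *t);
extern void report_status_to_host(struct task *t, enum task_status s);

void task_management_dispatch(struct task *queue[], int n_tasks)
{
    for (int i = 0; i < n_tasks; i++) {
        /* Identify an appropriate (here: any idle) protocol engine. */
        int engine = -1;
        for (int e = 0; e < NUM_ENGINES; e++) {
            if (engine_is_idle(e)) { engine = e; break; }
        }
        if (engine < 0)
            continue;                      /* leave the task queued */

        /* Execute through the protocol engine, then report status
         * to the software/driver on the host system. */
        enum task_status s = engine_execute(engine, queue[i]);
        report_status_to_host(queue[i], s);
    }
}
```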
Therefore, in at least one embodiment described in this application, the integrated circuit 140 can schedule multiple tasks, select at least one task for execution, execute the task, and independently report the status of the selected task to the software/driver on the host system 107. In at least one embodiment, the task management circuit 142 and the protocol engine circuit 144 may be implemented by one or more dedicated hardware circuits and/or state machine circuits, so long as the operations described in this application can be performed.

When the integrated circuit 140 receives a task from the host system 107 to transmit data to, or receive data from, the remote node 104, the task management circuit 142 and the protocol engine circuit 144 reside in the multiple communication channels along the path between the host system 107 and the remote node 104. Therefore, compared to software and/or firmware implementations, a task management circuit 142 and protocol engine circuit 144 implemented with dedicated hardware circuits and/or state machine circuits provide enhanced data transfer capability and enhanced performance, because neither the host processor 112 nor an embedded processor is involved in executing instructions. Of course, it is also contemplated that the task management circuit 142 and/or the protocol engine circuit 144, and/or portions thereof, may be implemented in software and/or firmware without departing from this embodiment. The operations of the task management circuit 142 and the protocol engine circuit 144 are described in detail below.

FIG. 2 is a diagram 200 showing the integrated circuit 140 of the embodiment of FIG. 1 in detail. In FIG. 2, for clarity, some parts of the system 100 described in FIG. 1 (such as the circuit board 132, the circuit card 120, and the remote node 104) are omitted, but it should be understood that the parts of FIG. 2 that are the same as in FIG. 1 may be implemented in a manner consistent with the embodiment described in FIG. 1 or, alternatively, with other system implementations, without departing from this embodiment. For example, the integrated circuit 140 described in FIG. 2 may include a system-on-chip (SoC) and/or a RAID-on-chip (ROC) and/or a protocol bridge and/or an external storage controller, each of which may include the elements mentioned in FIG. 1 and/or other and/or additional elements, for example, elements used in other system embodiments.

In this embodiment, the task management circuit 142 may include a task communication circuit 202 and a task scheduling circuit 204. In this embodiment, the protocol engine circuit, referred to generally by reference numeral 144, may include a transport layer management circuit 206 and a link layer management circuit 208. In one embodiment, each communication channel may be defined as a transport layer/link layer pair. The integrated circuit 140 may also include an analog front end (AFE) circuit 210, a context cache management circuit 212, and a data cache management circuit 220. In one embodiment, the integrated circuit 140 may also include a scheduler context memory 214, a task context cache 216, and a remote node context cache 218.

The task communication circuit 202 may be coupled to the back-end interface bus 226.
In general, the task communication circuit 202 may serve as the communication interface between the software/driver of the host system 107 and the rest of the integrated circuit 140. The task communication circuit 202 may receive tasks from the host system 107 and pass task status to the software/driver of the host system 107. The task communication circuit 202 can communicate with the context cache management circuit 212, which can store the context information of the various tasks in different memory locations, such as the scheduler context memory 214, the task context cache 216, and the remote node context cache 218.

The task communication circuit 202 may use local and/or remote task work queues and local and/or remote status queues. A task work queue may store the task instructions of multiple tasks issued by the host system 107. In essence, the task work queue provides a storage location for one or more tasks waiting to be processed by the protocol engine circuit 144. A status queue can store data associated with the status of a particular task. Thus, for example, the status of a task (e.g., completed, in progress, and/or faulted) can be stored in the status queue, and that status can then be reported to the host system 107. The task communication circuit 202 can operate in a master mode or a slave mode. The main difference between the master mode and the slave mode is the location of the task work queue and the status queue.

FIG. 3 shows a master mode embodiment of the task communication circuit 202 of FIG. 2. In master mode, the task work queue and the status queue can be stored outside the task communication circuit 202a and the protocol engine circuit 144. The task communication circuit 202a of FIG. 3 may include a task and status queue management circuit 302 and a task dispatch circuit 304. The task and status queue management circuit 302 can manage the task work queue and the status queue so as to obtain task information from the task work queue and report status to the status queue. The master mode task communication circuit 202a of each communication channel may be identical. For example, in one embodiment, there may be eight task communication circuits 202a for eight communication channels associated with eight external ports.

The task dispatch circuit 304 may send a task to the appropriate local port of the task scheduling circuit 204. Therefore, for a given function, the firmware/driver of the host system 107 only needs to post a task to the task work queue or retrieve status information from the status queue, regardless of how many or which local ports are allocated or virtually mapped to that function. Based on the local port number given by the firmware when posting the task, the task dispatch circuit 304 can send the task to the appropriate local port. The task dispatch circuit 304 may also parse the context information of a task provided to the task communication circuit 202 into the appropriate context memory based on the task context index or the remote node index; for example, scheduler context is parsed into the scheduler context memory 214, task context into the task context cache 216, and remote node context into the remote node context cache 218.
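In a software analogy, the context-parsing step performed by the task dispatch circuit 304 might look like the following C sketch. The store sizes, the context blob size, and the function names are hypothetical assumptions.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical backing stores for the three context memories. */
#define CTX_SIZE 64
static uint8_t scheduler_ctx_mem[256][CTX_SIZE];     /* memory 214 */
static uint8_t task_ctx_cache[1024][CTX_SIZE];       /* cache 216 */
static uint8_t remote_node_ctx_cache[128][CTX_SIZE]; /* cache 218 */

enum ctx_kind { CTX_SCHEDULER, CTX_TASK, CTX_REMOTE_NODE };

/* Route a blob of context information into the store selected by its
 * kind, addressed by the task context index or remote node index. */
void parse_context(enum ctx_kind kind, unsigned index,
                   const uint8_t blob[CTX_SIZE])
{
    switch (kind) {
    case CTX_SCHEDULER:
        memcpy(scheduler_ctx_mem[index], blob, CTX_SIZE);
        break;
    case CTX_TASK:
        memcpy(task_ctx_cache[index], blob, CTX_SIZE);
        break;
    case CTX_REMOTE_NODE:
        memcpy(remote_node_ctx_cache[index], blob, CTX_SIZE);
        break;
    }
}
```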
The three context memories 214, 216, and 218 can be managed by the context cache management circuit 212.

Because the task work queue and the status queue can be placed outside the protocol engine circuit 144 in the master mode embodiment, the protocol engine circuit 144 can monitor the status of the two queues; it can be notified by the software/driver of the host system 107 to obtain, from the task work queue, tasks assigned by the host system 107, and it can send task completion status information or protocol engine status information to the status queue. The locations of the task work queue and the status queue can be initialized by the firmware/driver of the host system 107. When the firmware/driver of the host system 107 posts one or more tasks to the external task work queue, it can provide a "doorbell" signal to the task and status queue management circuit 302 to notify it of this situation, whereupon the task and status queue management circuit 302 starts scheduling and processing the task.

FIG. 4 shows a slave mode embodiment of the task communication circuit 202 of FIG. 2. In slave mode, the task work queue/control circuit 402 and the status queue/control circuit 404 may be local to the task communication circuit 202b. The slave mode embodiment of the task communication circuit 202b may further include a task dispatch circuit 304, which is the same as the task dispatch circuit of the master mode embodiment of FIG. 3 and, for the sake of clarity, is not described again here.

In the slave mode embodiment, the firmware/driver of the host system 107 can assign tasks to the local task work queue 402 and retrieve status information from the local status queue 404. In contrast to the master mode, in slave mode the firmware/driver of the host system 107 is responsible for monitoring the status queue. The master mode or slave mode embodiment can be selected according to the specific implementation and usage model requirements.
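A minimal C sketch of the master-mode arrangement described above follows: the task work queue lives outside the protocol engine, and a doorbell register announces newly posted work. The ring-buffer layout and the register name are assumptions made for illustration.

```c
#include <stdint.h>

/* Hypothetical master-mode layout: the task work queue resides in
 * host memory, outside the protocol engine; the driver rings a
 * doorbell so the task and status queue management circuit 302
 * knows new work has been posted. */
#define QUEUE_DEPTH 64

struct task;                         /* as sketched earlier */

struct task_work_queue {
    struct task *entries[QUEUE_DEPTH];
    volatile uint32_t producer;      /* advanced by the host driver */
    volatile uint32_t consumer;      /* advanced by the hardware */
};

/* Assumed memory-mapped doorbell register. */
extern volatile uint32_t *doorbell_reg;

/* Host-side posting, per the "doorbell" description above. */
void post_task(struct task_work_queue *q, struct task *t)
{
    uint32_t p = q->producer;
    q->entries[p % QUEUE_DEPTH] = t;
    q->producer = p + 1;
    *doorbell_reg = 1;               /* notify circuit 302 */
}
```

On the hardware side, the task and status queue management circuit 302 would advance `consumer` as it drains entries; the driver can compare the two indices to detect a full queue.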
FIG. 5 shows an embodiment of the task scheduling circuit 204 of FIG. 2 in detail. Generally speaking, the task scheduling circuit 204 finds a task that needs to be executed and allocates it to the available task execution resources of the protocol engine circuit 144. The task scheduling circuit 204 may include multiple scheduler wide port groups 502A ... 502N. Each scheduler wide port group circuit (e.g., the scheduler wide port group 502A circuit) may include a port task scheduling circuit 504 and a wide port management and control circuit 506. The port task scheduling circuit 504 can perform all scheduling and allocate tasks to the available resources of the protocol engine circuit 144. The wide port management and control circuit 506 can connect all available links to local ports. The task and event timeout management circuit 508 may monitor all tasks active on one or more port task scheduling circuits and all tasks with an inbound status. The task and event timeout management circuit 508 may also monitor all timeout events generated by the transport layer management circuit 206 or the link layer management circuit 208.

FIG. 6 shows the scheduler wide port group circuit 502A of FIG. 5 in detail. The port task scheduling circuit 504 may include a port task scheduling circuit for each channel or link associated with the local port. For example, port task scheduling circuit 0 may be associated with local port 0 via an associated channel, port task scheduling circuit 1 with local port 1, and so on. If the communication protocol in use supports wide-port functionality, as the SAS communication protocol does, multiple channels may form wide ports, such as wide ports 604 and 606. A wide port may require only one of the multiple port task scheduling circuits, and the unused port task scheduling circuits may be disabled. For example, the wide port 604 may use port task scheduling circuit 0 and may disable port task scheduling circuits 1, 2, and 3. During port configuration, unused port task scheduling circuits may be disabled by the firmware of the host system 107.

Each port task scheduling circuit can schedule all tasks of each remote node for subsequent transmission to the remote node. A port task scheduling circuit does not necessarily need to schedule a task that is in a "frame receive state", awaiting frames from a remote node. Accordingly, if a remote node has no active tasks or is offline, the port task scheduling circuit may temporarily remove that remote node from scheduling to improve scheduling performance.

Each port task scheduling circuit can act as a horizontal scheduler, a vertical scheduler, and a local port manager. The horizontal scheduler can select which remote node is to be served next and can track which remote nodes have a currently active connection; a round-robin selection sketch is given after this description. The horizontal scheduler can also maintain the connection retry status of a remote node and manage connection timeout failures, in which a connection attempt to a remote node fails. The horizontal scheduler can also support more than one remote node per connection, if the relevant communication protocol supports such a configuration, such as a SATA port multiplier, an FC fabric FL_Port, and so on.

As a vertical scheduler, each port task scheduling circuit can manage the tasks that are active for all remote nodes reachable from the associated local wide port. For example, the vertical scheduler can insert new tasks into the task list of the associated remote node. The vertical scheduler can also keep a count of active tasks and manage the queue depth of each remote node. The vertical scheduler can also manage the execution order of tasks in any remote task list. The vertical scheduler can also maintain multiple task lists per remote node, such as operation mode task lists, protocol-specific task lists, and priority (high and low) task lists. The vertical scheduler can also reschedule any outstanding task; depending on the type of outstanding task, the vertical scheduler may place it at the beginning or the end of a specific task list.

As a local port manager, each port task scheduling circuit can manage port configuration and status, such as link-to-port allocation, allowable connections, and connection scheduling fairness. The local port manager can also perform queue depth management and interact with the link layer management circuit 208 for connection management.
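The round-robin sketch promised above models the horizontal scheduler's selection of the next remote node to serve. The skip rule for offline or idle nodes follows the description; the data structure and names are otherwise assumed.

```c
/* Software analogy of the horizontal scheduler: pick the next remote
 * node to serve, round-robin, skipping nodes that have no active
 * tasks or are offline (which, per the description above, may be
 * removed from scheduling entirely). */
#define MAX_REMOTE_NODES 128

struct remote_node_state {
    int active_task_count;
    int online;
};

static struct remote_node_state nodes[MAX_REMOTE_NODES];
static unsigned last_served;

int horizontal_schedule_next(void)
{
    for (unsigned i = 1; i <= MAX_REMOTE_NODES; i++) {
        unsigned n = (last_served + i) % MAX_REMOTE_NODES;
        if (nodes[n].online && nodes[n].active_task_count > 0) {
            last_served = n;
            return (int)n;           /* remote node to serve next */
        }
    }
    return -1;                       /* nothing to schedule */
}
```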
The wide port management and control circuit 506 may include an X-bar router 602, which includes X-bar routing logic. The X-bar routing logic can be configured by the firmware/driver of the host system 107 when the protocol engine circuit 144 is initialized and any hardware port mapping is configured. After the wide port configuration protocol has completed (for example, the exchange of identify frames in the SAS communication protocol), the firmware can also map/route the transport layer management circuit 206 and link layer management circuit 208 to the associated port task scheduling circuit. Likewise, any unused port task scheduling circuit can be disabled.

FIG. 7 shows the context cache management circuit 212 of the integrated circuit 140 of FIG. 2 in detail. In general, the context cache management circuit 212 can store contexts in, and retrieve contexts from, the scheduler context memory 214, the task context cache 216, and the remote node context cache 218. When necessary, the context cache management circuit 212 may provide contexts to the task scheduling circuit 204, the transport layer management circuit 206, and the link layer management circuit 208. The context cache management circuit 212 may cache contexts and pre-fetch contexts from external memory in preparation for use by the task scheduling circuit 204, the transport layer management circuit 206, and the link layer management circuit 208. The context cache management circuit 212 can also perform context locking, context unlocking, pre-fetching, and scheduling of the contexts to be used, and can perform the mapping/translation of task context indexes to cached context addresses. The size of each context memory 214, 216, and 218 may vary depending on the implementation.

The context cache management circuit 212 may include an internal bus 708 coupled to a task context cache management circuit 704, a remote node context cache management circuit 706, and a scheduling context management circuit 702. The task context cache management circuit 704 may manage the task context cache and provide the requested task context to the transport layer management circuit 206. The remote node context cache management circuit 706 may manage the remote node context cache and provide the requested remote node context to the link layer management circuit 208. The scheduling context management circuit 702 is shown as a dashed box because it can alternatively be located in the task scheduling circuit 204. The scheduling context management circuit 702 may provide the next task context to the transport layer management circuit 206 for the active connection and the remote node selected to be served next.
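In a software analogy, the locking and index-to-address mapping performed by the context cache management circuit 212 might resemble the following C sketch. The direct-mapped organization and the retry-on-locked policy are assumptions; a real implementation would also handle write-back and the scheduling of prefetches.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of the index-to-cache-address mapping and
 * context locking performed by the context cache management circuit. */
#define CACHE_LINES 64

struct ctx_cache_line {
    uint32_t index;    /* task context index */
    int      valid;
    int      locked;   /* locked while a transport layer engine uses it */
    uint8_t  data[64];
};

static struct ctx_cache_line cache[CACHE_LINES];

extern void fetch_from_external_memory(uint32_t index, uint8_t *dst);

/* Translate a task context index into a cached context address,
 * fetching from external memory on a miss, and lock the entry. */
uint8_t *ctx_lock(uint32_t index)
{
    struct ctx_cache_line *line = &cache[index % CACHE_LINES];
    if (!line->valid || line->index != index) {
        if (line->locked)
            return NULL;                 /* caller must retry */
        fetch_from_external_memory(index, line->data);
        line->index = index;
        line->valid = 1;
    }
    line->locked = 1;
    return line->data;
}

void ctx_unlock(uint32_t index)
{
    struct ctx_cache_line *line = &cache[index % CACHE_LINES];
    if (line->valid && line->index == index)
        line->locked = 0;
}
```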
FIG. 8 shows in detail the transport layer management circuit 206 of the protocol engine circuit in the integrated circuit of FIG. 2. In general, the transport layer management circuit 206 can perform the tasks assigned by the task scheduling circuit 204. According to the upper-layer mapping communication protocol, the transport layer management circuit 206 may divide or decompose a task into multiple outbound control and/or data frames or packets. The transport layer management circuit 206 can also process and reassemble inbound frames or packets as specified by the upper-layer mapping communication protocol. If supported by the protocol, the transport layer management circuit 206 may also communicate with other transport layer circuits for wide port management. In addition, the transport layer management circuit 206 can perform data transfer command processing.

The transport layer management circuit 206 may include a wide port interface management circuit 802A and an associated wide port transport layer task controller group 804A. The wide port transport layer task controller group may include multiple transport layer (TL) task controllers 806A ... 806N. The wide port interface management circuit 802A may provide a communication control path for routing control, status, and/or data path information between the transport layer (TL) task controllers 806A ... 806N and the associated port task scheduler.

The wide port transport layer task controller group 804A may include the maximum number of protocol engines that can be supported by the associated wide port, and can support one or more ports in the group. The transport layer (TL) task controllers 806A ... 806N may be transport layer engines that perform the tasks assigned by the port task scheduler, as defined by the upper-layer mapping protocol. The wide port transport layer task controller group 804A may also include transport layer retry circuits (not shown) where supported by the specific communication protocol. A transport layer retry circuit can perform the retry function defined by a specific communication protocol, such as SAS, and can retain a snapshot of the context to be used during the retry. The wide port transport layer task controller group 804A may also include credit (transmit and receive) management circuits (not shown). A credit management circuit can manage the credits of the inbound and outbound channels at each transport layer task controller.

The back-end direct memory access (DMA) controller 808 can move data from the IC to memory and from memory to the IC. In a highly pipelined architecture, the tasks being processed on the link may differ from the tasks being processed on the back end. Therefore, the back-end DMA controller 808 can handle data movement between the transmit and receive frame buffers and the back-end interface. The DMA controller 808 can manage its portion of the context and communicate with the front-end transport layer task controllers.

The data domain conversion manager 810 can automatically convert logical block addressing (LBA) information between multiple domains without the intervention of the firmware/driver of the host system 107. The data domain conversion manager 810 enables the protocol engine circuit 144 to support different RAID levels and volume virtualization, such as logical unit number (LUN) virtualization or LBA block-level virtualization.

FIG. 9 shows the data cache management unit 220 of FIG. 2 in detail. The data cache management unit 220 may include a data cache (LBA/domain) conversion control circuit 902 and an address and logical block addressing (LBA) conversion table 906. In general, the data cache management unit 220 supports data caching to improve data transfer performance. The data cache management unit 220 may convert an LBA value into the cache address of the associated LBA data. The data cache conversion control circuit 902 may control the conversion of LBAs to data cache buffer addresses, and the table 906 may be a memory area that stores LBA and address mapping information.
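The LBA-to-cache-address conversion carried out with table 906 can be sketched as a simple lookup. The direct-mapped table below is an assumption made for illustration; the disclosure does not specify the organization of table 906.

```c
#include <stdint.h>

/* Hypothetical sketch of the LBA-to-cache-buffer-address
 * translation performed with table 906. */
#define LBA_TABLE_ENTRIES 256

struct lba_map_entry {
    uint64_t lba;
    uint64_t cache_addr;   /* address of the associated LBA data */
    int      valid;
};

static struct lba_map_entry lba_table[LBA_TABLE_ENTRIES];

/* Returns the cache address for an LBA, or 0 when the data is not
 * cached and must be fetched from the mass storage device. */
uint64_t lba_to_cache_addr(uint64_t lba)
{
    struct lba_map_entry *e = &lba_table[lba % LBA_TABLE_ENTRIES];
    return (e->valid && e->lba == lba) ? e->cache_addr : 0;
}
```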
FIG. 10 shows the link layer management circuit 208 of the protocol engine circuit 144 of the integrated circuit 140 of FIG. 2 in detail. The link layer management circuit 208 may include a remote node context management circuit 1002, a remote initiator index mapping table 1004, a plurality of PHY layer wide port groups 1006A ... 1006N, and associated analog front end (AFE) circuits 210A ... 210N. The remote node context management circuit 1002 may manage access to the remote node context during connection requests and connection arbitration. The remote node context management circuit 1002 can also manage updates to the remote node context.

The remote initiator index mapping table 1004 may be used to map initiator addresses to local context indexes for addressing remote node contexts. Other embodiments, such as those using FCAL addresses, may not use the remote initiator index mapping table 1004 because they do not require initiator index translation.

The PHY layer wide port group 1006A may include as many PHY layer controllers as the wide port transport layer task controller group requires. The PHY layer wide port group 1006A may include a connection management circuit 1010 and a PHY layer data path 1012. At the request of the transport layer management circuit 206, the connection management circuit 1010 may establish a connection with the appropriate remote node. The connection management circuit 1010 may also automatically terminate the connection in response to communication protocol requirements, such as the link idle timeout specified in a SCSI mode page. The connection management circuit 1010 can also arbitrate between inbound and outbound connection requests as defined by the applicable communication protocol (e.g., SAS). If a connection request fails in a communication protocol such as SAS, the connection management circuit 1010 can also manage connection request retries.

The PHY layer data path 1012 can provide the basic functions that perform the low-level link layer operations required by most serial protocol interfaces. The PHY layer data path 1012 may also perform automatic link initialization, such as loop initialization and speed negotiation in FCAL, and so on. The analog front end circuits 210A ... 210N can provide the physical link interface for the communication links. An analog front end circuit may also include detection logic to automatically identify and select a supported communication protocol, such as SAS, SATA, or FC.

FIG. 11 is a flowchart 1100 of operations according to one embodiment. Operation 1102 may include communicating with at least one remote node external to an integrated circuit according to a communication protocol, the integrated circuit including a plurality of communication channels. Operation 1104 may include operating the plurality of communication channels independently of each other and of a host system.

In summary, in one embodiment, an apparatus including an integrated circuit is provided. The integrated circuit may include multiple communication channels. The integrated circuit may communicate with at least one remote node outside the integrated circuit via at least one communication channel according to at least one communication protocol. Each of the plurality of communication channels may provide a communication path between a host system and at least one remote node. The integrated circuit also enables each communication channel to operate independently of the other channels and of the host system.

One system embodiment may include a circuit card that includes an integrated circuit.
The circuit card can be coupled to a bus of the host system. The integrated circuit may include multiple communication channels. The integrated circuit may communicate with at least one remote node outside the integrated circuit via at least one communication channel according to at least one communication protocol. Each of the plurality of communication channels may provide a communication path between the host system and at least one remote node. The integrated circuit also enables each communication channel to operate independently of the other channels and of the host system.

Beneficially, in these embodiments, the integrated circuit may provide enhanced communication capability. Any degradation in one communication channel, such as difficulty performing that channel's tasks, does not adversely affect the tasks of the remaining communication channels. In addition, the integrated circuit can operate independently of the host system, which can further increase communication speed. The integrated circuit may also perform one of multiple tasks for one of the multiple communication channels while simultaneously performing another of the multiple tasks for another of the multiple communication channels, further increasing communication speed.

The terms and expressions used in this application are for explanatory purposes and are not limiting, and the use of these terms and expressions does not exclude any equivalents (or portions thereof) of the features shown and described, it being understood that various modifications remain within the scope of the present invention. Other modifications, changes, and substitutions are also possible. Accordingly, the claims are intended to cover all such equivalents.
A strained semiconductor device suitable for use in an integrated circuit and a method for manufacturing the strained semiconductor device. A mesa isolation structure is formed from a semiconductor-on-insulator substrate. A gate structure is formed on the mesa isolation structure. The gate structure includes a gate disposed on a gate dielectric material and has two sets of opposing sidewalls. Semiconductor material is selectively grown on portions of the mesa isolation structure adjacent a first set of opposing sidewalls of the gate structure and then doped. The doped semiconductor material is silicided and protected by a dielectric material. The gate is silicided, wherein the silicide wraps around a second set of opposing sidewalls and stresses a channel region of the semiconductor device.
WHAT IS CLAIMED IS:

1. A method for manufacturing a semiconductor device, comprising: providing a semiconductor substrate; forming a mesa structure from the semiconductor substrate, wherein the mesa structure has a first surface and first and second sidewalls; forming a gate structure over the mesa structure, wherein the gate structure has a gate surface and first and second sides, and wherein first and second portions of the gate structure are disposed on the first and second sidewalls, respectively; and doping portions of the semiconductor substrate adjacent the first and second sides of the gate structure.

2. The method of claim 1, wherein forming the gate structure includes: forming a first layer of dielectric material over the mesa structure; and forming a second layer of dielectric material over the first layer of dielectric material.

3. The method of claim 1, wherein forming the gate structure includes oxidizing the first and second sidewalls to form portions of a first layer of dielectric material over the first and second sidewalls, wherein a portion of the first layer of dielectric material serves as the first portion of the gate structure and another portion of the first layer of dielectric material serves as the second portion of the gate structure.

4. The method of claim 1, wherein forming the gate structure includes forming a first layer of dielectric material over the mesa structure; and further including forming a layer of semiconductor material over the semiconductor substrate adjacent the first and second sides of the gate structure by selectively growing the layer of semiconductor material.

5. A method for manufacturing a strained semiconductor device suitable for use in an integrated circuit, comprising: providing a semiconductor-on-insulator mesa isolation structure, the semiconductor-on-insulator mesa isolation structure having a top surface and first and second sidewalls; forming a gate dielectric material on the top surface and the first and second sidewalls; forming a gate on the gate dielectric material, wherein the gate and the gate dielectric material cooperate to form a gate structure having a top surface and gate sidewalls; forming a semiconductor material on portions of the top surface of the mesa isolation structure adjacent to the first and second sidewalls; forming silicide from the semiconductor material; and forming silicide from the gate, wherein the silicide from the gate strains the semiconductor device.

6. A method for straining a semiconductor device, comprising: providing a semiconductor substrate comprising a first layer of semiconductor material disposed over a layer of dielectric material, the semiconductor substrate having a top surface and isolation sidewalls; forming a gate structure on the semiconductor substrate, the gate structure having a gate surface, first and second opposing gate sidewalls, and third and fourth opposing gate sidewalls; and forming silicide from the gate surface and the first and second opposing gate sidewalls of the gate structure, wherein the silicide strains the semiconductor material of the semiconductor substrate.

7. The method of claim 6, further including forming a second layer of semiconductor material on the portions of the first layer of semiconductor material adjacent the third and fourth opposing gate sidewalls, and protecting the gate structure before forming the second layer of semiconductor material.
8. The method of claim 6, further including: forming a second layer of semiconductor material on the portions of the first layer of semiconductor material adjacent the third and fourth opposing gate sidewalls; doping the second layer of semiconductor material; forming silicide from the second layer of semiconductor material; and protecting, with a dielectric material, the silicide formed from the second layer of semiconductor material before forming the silicide from the gate surface.

9. A strained semiconductor device suitable for use in an integrated circuit, comprising: a semiconductor-on-insulator substrate in a mesa isolation configuration; a gate structure disposed on the semiconductor-on-insulator substrate, the gate structure having a gate surface, first and second opposing sidewalls, and third and fourth opposing sidewalls; first and second doped regions adjacent the third and fourth opposing sidewalls, respectively, of the gate structure; first and second silicide regions on the first and second doped regions, respectively; and a gate silicide on the gate, wherein the gate silicide strains the semiconductor device.

10. The strained semiconductor device of claim 9, wherein the gate structure comprises a first dielectric material disposed on a second dielectric material and a semiconductor material disposed on the first dielectric material, the first layer of dielectric material being oxide, the second layer of dielectric material being silicon nitride, and the semiconductor material being polysilicon, and wherein the first dielectric material further includes sidewall oxide disposed on the first and second opposing sidewalls.
SEMICONDUCTOR DEVICE AND METHOD OF MANUFACTURE

FIELD OF THE INVENTION

The present invention relates, in general, to a semiconductor device and, more particularly, to carrier mobility in the semiconductor device and to a method for manufacturing the semiconductor device.

BACKGROUND OF THE INVENTION

Integrated circuits such as microprocessors, digital signal processors, microcontrollers, memory devices, and the like typically contain millions of Insulated Gate Field Effect Transistors (IGFETs). Because of the desire to increase the speed of the transistors or devices making up the integrated circuits, integrated circuit manufacturers have decreased the device sizes. Although the smaller devices are capable of operating at increased speeds, secondary performance factors such as decreased source-drain breakdown voltage, increased junction capacitance, and instability of the threshold voltage negatively affect transistor performance. Collectively, these adverse performance effects are referred to as short channel effects.

Techniques for increasing device speed have shifted from shrinking device sizes to improving carrier mobility and to mitigating short channel effects. For example, short channel effects can be mitigated by adjusting the electric field in the channel region to minimize the peak lateral electric field of the drain depletion region. One technique for lowering the lateral electric field is to include source and drain extension regions. Another technique suitable for increasing carrier mobility and mitigating short channel effects is to manufacture the devices on a Silicon-On-Insulator (SOI) substrate. Mobility can be further increased by straining the semiconductor devices. A drawback in manufacturing strained semiconductor devices has been the inability to develop large-scale manufacturing processes capable of producing semiconductor devices that are under substantially the same amount of strain.

Accordingly, what is needed is a semiconductor device having a predetermined amount of strain and a method for manufacturing the semiconductor device.

SUMMARY OF THE INVENTION

The present invention satisfies the foregoing need by providing a semiconductor device having a strained channel region and a method for manufacturing the semiconductor device. In accordance with one aspect, the present invention includes forming a mesa structure from a semiconductor substrate, wherein the mesa structure has a first surface and first and second sidewalls. A gate structure having a gate surface and first and second sides is formed over the mesa structure, wherein first and second portions of the gate structure are disposed on the first and second sidewalls, respectively. Portions of the semiconductor substrate adjacent the first and second sides of the gate structure are doped.

In accordance with another aspect, the present invention includes a method for manufacturing a strained semiconductor device suitable for use in an integrated circuit. A semiconductor-on-insulator mesa isolation structure having a top surface and first and second sidewalls is provided. A gate dielectric material is formed on the top surface and the first and second sidewalls and a gate is formed on the gate dielectric material, wherein the gate and the gate dielectric material cooperate to form a gate structure having a top surface and gate sidewalls.
A semiconductor material is formed on portions of the top surface of the mesa isolation structure adjacent to the first and second sidewalls. Silicide is formed from the semiconductor material and from the gate, wherein the silicide from the gate strains the semiconductor-on-insulator mesa isolation structure.

In accordance with yet another aspect, the present invention comprises a method for straining a semiconductor device. A semiconductor substrate comprising a first layer of semiconductor material disposed over a layer of dielectric material is provided, wherein the semiconductor substrate has a top surface and isolation sidewalls. A gate structure having a gate surface, first and second opposing gate sidewalls, and third and fourth opposing gate sidewalls is formed on the semiconductor substrate. Silicide is formed from the gate surface and the first and second opposing gate sidewalls of the gate structure, wherein the silicide strains the semiconductor material of the semiconductor substrate.

In accordance with yet another embodiment, the present invention includes a strained semiconductor device suitable for use in an integrated circuit. The strained semiconductor device comprises a semiconductor-on-insulator substrate in a mesa isolation configuration. A gate structure having a gate surface, first and second opposing sidewalls, and third and fourth opposing sidewalls is disposed on the semiconductor-on-insulator substrate. First and second doped regions are adjacent the third and fourth sidewalls, respectively, of the gate structure. First and second silicide regions are disposed on the first and second doped regions, respectively. A gate silicide is disposed on the gate, wherein the gate silicide strains a channel region of the semiconductor device.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be better understood from a reading of the following detailed description, taken in conjunction with the accompanying drawing figures, in which like reference numbers designate like elements and in which:

FIG. 1 is a perspective view of a portion of a semiconductor device at a beginning stage of manufacture in accordance with an embodiment of the present invention;
FIG. 2 is a cross-sectional side view of the device of FIG. 1 taken along section line 2-2;
FIG. 3 is a cross-sectional side view of the semiconductor device of FIG. 2 further along in processing;
FIG. 4 is a cross-sectional side view of the semiconductor device of FIG. 3 further along in processing;
FIG. 5 is a cross-sectional side view of the semiconductor device of FIG. 4 taken along section line 5-5;
FIG. 6 is a cross-sectional side view of the semiconductor device of FIGS. 4 and 5 further along in processing;
FIG. 7 is a cross-sectional side view of the semiconductor device of FIG. 6 further along in processing;
FIG. 8 is a cross-sectional side view of the semiconductor device of FIG. 7 further along in processing;
FIG. 9 is a cross-sectional side view of the semiconductor device of FIG. 8 further along in processing;
FIG. 10 is a cross-sectional side view of the semiconductor device of FIG. 9 further along in processing; and
FIG. 11 is a cross-sectional side view of the semiconductor device of FIG. 10 taken along section line 11-11.

DETAILED DESCRIPTION

Generally, the present invention provides an integrated circuit that includes a strained semiconductor device or transistors and a method for manufacturing the strained semiconductor device.
The semiconductor device is strained to increase the mobility of the electrons and holes in its channel region. In accordance with one embodiment, the combination of a mesa isolation structure and a silicided gate structure increases the hole mobility by placing the channel region under compressive stress. In accordance with another embodiment, the combination of underetching the buried oxide of the mesa structure and wrapping a gate dielectric and a gate material around the underetched mesa structure increases the electron and hole mobilities by placing the channel region under tensile stress. In these embodiments, the silicide is preferably nickel silicide. The stress can be further increased by annealing the silicide at an elevated temperature. For example, the tensile stress of a nickel silicide gate is approximately 800 megapascals (MPa) when annealed at a temperature of 360°C and approximately 1.25 gigapascals (GPa) when annealed at a temperature of 400°C. In accordance with yet another embodiment, the channel region is maintained under tensile stress by manufacturing the gate to have a width of less than approximately 250 nm.

FIG. 1 is a perspective view of a portion of a semiconductor device 10 during manufacture in accordance with an embodiment of the present invention. Shown in FIG. 1 is a Semiconductor-On-Insulator (SOI) substrate 12 patterned to include a mesa isolation structure 14 having a substrate surface 20 and sidewalls 16 and 18. SOI substrate 12 comprises a layer of semiconductor material 22 disposed on a layer of dielectric material 24, which is disposed on a body of semiconductor material 26. Preferably, the layer of semiconductor material 22 is undoped silicon having a thickness ranging from that of a monolayer of silicon to approximately 25 nanometers (nm), and dielectric layer 24 has a thickness ranging between approximately 50 nm and approximately 500 nm. More preferably, silicon layer 22 has a thickness of less than 10 nm and dielectric layer 24 has a thickness of about 200 nm. Substrate surface 20 is also referred to as a top surface of the substrate or an active surface. Techniques for forming mesa isolation structures are known to those skilled in the art.

Referring now to FIG. 2, patterned SOI substrate 12 taken along section line 2-2 of FIG. 1 is shown. More particularly, FIG. 2 is a cross-sectional side view showing substrate surface 20, silicon layer 22, silicon dioxide layer 24, and silicon layer 26.

Referring now to FIG. 3, a layer of dielectric material 28 is formed on substrate surface 20 and a dielectric material 30 is formed on dielectric material 28. By way of example, dielectric material 28 is a silicon dioxide layer and dielectric material 30 is silicon nitride. Silicon dioxide layer 28 cooperates with silicon nitride layer 30 to form a gate dielectric material 32. Silicon dioxide layer 28 and silicon nitride layer 30 may be formed by techniques known to those skilled in the art, including thermal oxidation, chemical vapor deposition, and the like. Preferably, gate dielectric material 32 has a thickness ranging from approximately 0.8 nm to approximately 2.0 nm. Even more preferably, gate dielectric material 32 has a thickness of approximately 1.3 nm. It should be understood that gate dielectric material 32 is not limited to being two layers of dielectric material or a layer of silicon nitride disposed on a layer of silicon dioxide.
For example, gate dielectric material 32 may be comprised of a material having a high dielectric constant (K), e.g., greater than 3.9, a single layer of oxide, or a combination thereof. A layer of polysilicon 34 is formed on gate dielectric material 32 using, for example, a chemical vapor deposition technique. A suitable range of thicknesses for polysilicon layer 34 is between approximately 1 nm and approximately 2 nm. A layer of photoresist is deposited on polysilicon layer 34 and patterned to form etch mask 36.

Referring now to FIG. 4, polysilicon layer 34 is etched using an etch chemistry that preferentially etches polysilicon, i.e., an etch chemistry selective to photoresist etch mask 36. By way of example, polysilicon layer 34 is etched using an anisotropic Reactive Ion Etch (RIE) with an etchant species that is selective to photoresist. Optionally, gate dielectric material 32, i.e., silicon dioxide layer 28 and silicon nitride layer 30, may be anisotropically etched after etching polysilicon layer 34. Methods for etching polysilicon and gate dielectric material are well known to those skilled in the art. Etch mask 36 is removed. The remaining portion 38 of polysilicon layer 34 serves as the gate for semiconductor device 10. The portion 40 of gate dielectric material 32 between gate 38 and substrate 22 serves as a gate dielectric. Gate 38 and gate dielectric 40 cooperate to form a gate structure 42. Gate structure 42 has a gate surface 44 and opposing sidewalls 46 and 47.

Briefly referring to FIG. 5, a cross-sectional view taken along section line 5-5 of FIG. 4 is shown. Shown in FIG. 5 are silicon layer 22, silicon dioxide layer 24, and silicon layer 26 of mesa isolation structure 14. It should be noted that sidewalls 16 and 18 extend under silicon layer 22 because portions of silicon dioxide layer 24 have been etched during the manufacture of semiconductor device 10. In particular, silicon dioxide layer 24 may be etched during the cleaning steps performed in preparation for forming polysilicon layer 34. This etching, also referred to as underetching, can be controlled such that a predetermined amount of silicon dioxide layer 24 is underetched. Preferably, the amount of silicon dioxide layer 24 that is etched from each side, i.e., from sidewalls 16 and 18, ranges between approximately 10 nm and approximately 30 nm. Even more preferably, the amount of silicon dioxide layer 24 that is etched from each side is approximately 20 nm. Because of the underetching, gate dielectric material 32 wraps around opposing sides 48 and 49 of silicon layer 22. Likewise, polysilicon layer 34 wraps around the portions of gate dielectric 40 that are adjacent opposing sides 48 and 49.

Referring now to FIG. 6, a layer of silicon dioxide 50 having a thickness ranging between approximately 2.5 nm and approximately 10 nm is formed on gate 38 and on silicon nitride layer 30. A layer of silicon nitride 52 having a thickness ranging between approximately 5 nm and approximately 50 nm is formed on silicon dioxide layer 50. Preferably, silicon dioxide layer 50 has a thickness of 5 nm and silicon nitride layer 52 has a thickness of 30 nm.

Referring now to FIG. 7, silicon nitride layer 52 and silicon dioxide layer 50 are etched using anisotropic reactive ion etching. After the anisotropic etching, a portion 54 of silicon dioxide layer 50 and a portion 56 of silicon nitride layer 52 remain over gate structure 42 and the portions of silicon layer 22 adjacent gate structure 42.
It should be noted that if gate dielectric material 32 was not anisotropically etched after the formation of gate 38, as described with reference to FIG. 4, gate dielectric material 32 may be anisotropically etched after anisotropically etching silicon nitride layer 52 and silicon dioxide layer 50.

A layer of silicon 58 having a surface 60 and a thickness ranging between approximately 15 nm and approximately 45 nm is grown on the exposed portions of silicon layer 22. Preferably, silicon layer 58 is grown using a technique of selective epitaxial growth. It should be understood that silicon layer 58 is not limited to being silicon, but can be any suitable semiconductor material such as, for example, silicon germanium or germanium.

An impurity material of N-type conductivity such as, for example, arsenic or phosphorus, is implanted into silicon layer 58 to form doped regions 62 and 64 that serve as source and drain extension regions, respectively. Preferably, source extension region 62 extends under gate structure 42 from gate side 46 and drain extension region 64 extends under gate structure 42 from gate side 47. Extension regions 62 and 64 may extend into dielectric layer 24. By way of example, extension regions 62 and 64 have a dopant concentration ranging from approximately 1x10^18 atoms per cubic centimeter (atoms/cm^3) to approximately 5x10^20 atoms/cm^3. Preferably, extension regions 62 and 64 are formed using a tilt angle implant having a tilt angle that ranges between approximately 7 degrees and approximately 45 degrees, where the angle is measured between surface 60 and an imaginary line extending perpendicularly from surface 60. Suitable implant parameters for forming source and drain extension regions 62 and 64, respectively, include an implant dose ranging between approximately 10^12 ions per square centimeter (ions/cm^2) and approximately 10^15 ions/cm^2 and an implant energy ranging between approximately 1 kiloelectronvolt (keV) and approximately 20 keV. After the implant, semiconductor device 10 is annealed. Although source and drain extension regions 62 and 64, respectively, are formed using an angled or tilt angle implant, it should be understood that the implant may dope portions of silicon layers 58 and 72 other than those under gate structure 42.

A source/drain implant is performed to form a source region 72 and a drain region 74. The source/drain implant may also dope gate structure 42. A suitable set of parameters for the source/drain implant includes implanting an N-type impurity material such as, for example, arsenic at a dose ranging between approximately 1x10^14 ions/cm^2 and approximately 1x10^16 ions/cm^2 and using an implant energy ranging between approximately 20 keV and approximately 50 keV. The doped semiconductor material is annealed by heating to a temperature between approximately 800 degrees Celsius (°C) and 1,100°C.

A layer of refractory metal 76 is conformally deposited on silicon surface 60 and portion 56 of silicon nitride layer 52. By way of example, the metal of refractory metal layer 76 is nickel having a thickness ranging between approximately 50 Å and approximately 150 Å. The refractory metal is heated to a temperature ranging between 350°C and 500°C.

Referring now to FIG. 8, the heat treatment causes the nickel to react with the silicon to form nickel silicide (NiSi) in all regions in which the nickel is in contact with silicon.
Thus, a nickel silicide region 82 is formed in source region 72 and a nickel silicide region 84 is formed in drain region 74. The portions of the nickel adjacent portion 56 of nitride layer 52 remain unreacted. After formation of nickel silicide regions 82 and 84, any unreacted nickel is removed. It should be understood that the type of silicide is not a limitation of the present invention. For example, other suitable silicides include titanium silicide (TiSi), platinum silicide (PtSi), cobalt silicide (CoSi2), and the like. As those skilled in the art are aware, silicon is consumed during the formation of silicide, and the amount of silicon consumed is a function of the type of silicide being formed.

A layer of dielectric material 86 having a thickness ranging between approximately 250 angstroms (Å) and approximately 750 Å is formed on silicide regions 82 and 84 and on portion 56 of silicon nitride layer 52. A layer of dielectric material 88 having a thickness ranging between approximately 500 Å and approximately 2,500 Å is formed on dielectric layer 86. By way of example, dielectric material 86 is silicon oxynitride having a thickness of approximately 500 Å and dielectric layer 88 is oxide formed by decomposition of tetraethylorthosilicate (TEOS) having a thickness of approximately 1,500 Å.

Referring now to FIG. 9, TEOS layer 88 is planarized using, for example, a Chemical Mechanical Polishing (CMP) technique having a high selectivity to polysilicon. Thus, the planarization stops on gate 38. A layer of refractory metal 90 is conformally deposited on silicon surface 44, TEOS layer 88, the exposed portions of silicon oxynitride layer 86, and the exposed portions of silicon dioxide layer 54 and silicon nitride layer 56. By way of example, the metal of refractory metal layer 90 is nickel having a thickness of approximately 700 Å. The refractory metal is heated to a temperature ranging between approximately 350°C and 500°C.

Referring now to FIG. 10, the heat treatment causes the nickel to react with the silicon to form nickel silicide (NiSi) in all regions in which the nickel is in contact with silicon. Thus, a nickel silicide region 92 is formed from gate 38. The portions of the nickel disposed on non-silicon regions, i.e., TEOS layer 88, the exposed portions of SiON layer 86, and the exposed portions of silicon dioxide layer 54 and silicon nitride layer 56, remain unreacted. After formation of nickel silicide region 92, any unreacted nickel is removed. Again, the type of silicide is not a limitation of the present invention; other suitable silicides include titanium silicide (TiSi), platinum silicide (PtSi), cobalt silicide (CoSi2), and the like, and the amount of silicon consumed during silicidation is a function of the type of silicide being formed.

Briefly referring to FIG. 11, a cross-sectional side view of semiconductor device 10 along section line 11-11 of FIG. 10 is illustrated. Shown in FIG. 11 is silicon layer 22 disposed on dielectric layer 24, which is disposed on body of semiconductor material 26. Gate dielectric 40, which is comprised of silicon dioxide layer 28 and silicon nitride layer 30, wraps around opposing sides 48 and 49 of silicon layer 22. Likewise, nickel silicide region 92 of gate 38 wraps around the portions of gate dielectric 40 that are adjacent opposing sides 48 and 49.
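The remark above that silicon consumption depends on the silicide type can be made roughly quantitative. The following figures use commonly cited literature thickness ratios for nickel monosilicide; they are general reference values, not numbers taken from this disclosure:

```latex
t_{\mathrm{Si\ consumed}} \approx 1.8\, t_{\mathrm{Ni}},
\qquad
t_{\mathrm{NiSi\ formed}} \approx 2.3\, t_{\mathrm{Ni}}
```

On those figures, the 50 Å to 150 Å nickel layer 76 would consume roughly 90 Å to 270 Å of silicon, which is consistent with growing the additional silicon layer 58 before siliciding the thin SOI layer 22.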
By now it should be appreciated that a strained semiconductor device suitable for use in an integrated circuit has been provided. An advantage of the present invention is that the semiconductor device can be manufactured to be under compressive or tensile stress by adjusting the width of the gate, selecting the annealing temperature, and underetching the mesa structure. The semiconductor device can employ one of these techniques or a combination of more than one of them to provide stress. Thus, the electron mobility, the hole mobility, or the mobility of both the electrons and the holes can be optimized. The increased mobility results in increased device performance. For example, NMOS and PMOS transistors manufactured in accordance with an embodiment of the present invention have CV/I delays as small as 0.2 picoseconds (ps) and 0.3 ps, respectively.

Another advantage of the present invention is that the strain is set at the last higher-temperature processing step, which helps prevent subsequent relaxation. Yet another advantage is that the high mobility increases the drive current of the device, while the quantization effects in such an ultra-thin semiconductor-on-insulator device increase its threshold voltage, thereby improving the off-state current.

Although certain preferred embodiments and methods have been disclosed herein, it will be apparent from the foregoing disclosure to those skilled in the art that variations and modifications of such embodiments and methods may be made without departing from the spirit and scope of the invention. It is intended that the invention shall be limited only to the extent required by the appended claims and the rules and principles of applicable law.
A method and apparatus for reducing the disk drive data transfer interrupt service latency penalty is described. The method comprises beginning a data transfer between a disk drive and a host system, issuing an interrupt before the transfer is complete, and then completing the data transfer. This method may be implemented on a computer assembly that includes a processor, an input/output controller, and a scatter/gather list, which is stored in memory, that includes an entry that will cause the input/output controller to generate the interrupt.
What is claimed is:

1. A method for reducing the disk drive data transfer interrupt service latency penalty comprising: beginning a data transfer between a disk drive and a host system; issuing an interrupt before the transfer is complete; and completing the data transfer, wherein the time required for the interrupt to reach a device driver is approximately equal to the time required to complete the data transfer after the interrupt is issued.

2. A computer program stored on a computer readable medium for reducing the disk drive data transfer interrupt service latency penalty, the computer program comprising instructions that cause a computer to: begin a data transfer between a disk drive and a host system; issue an interrupt before the transfer is complete; and complete the data transfer, wherein the instructions ensure that the time required for the interrupt to reach a device driver is approximately equal to the time required to complete the data transfer after the interrupt is issued.
FIELD OF THE INVENTION

The present invention relates to I/O controllers and device drivers that may program them. More specifically, the invention relates to a method and apparatus for reducing the disk drive data transfer interrupt service latency penalty.

BACKGROUND OF THE INVENTION

Before data may be read from, or written to, a computer's hard drive, a host system must issue a read or write request to the hard drive. In response to such a request, the hard drive, in concert with a DMA engine, transfers the data to, or from, the host system. The hard drive then issues an interrupt to inform a device driver that the transfer is complete. Because that interrupt is not issued until the end of the data transfer, there is a delay between data transfer completion and device driver notification of that event. That delay, which results from the command overhead inherent in routing the interrupt from the disk drive to the operating system and then to the driver, can be significant, e.g., 10 microseconds for each disk access. Eliminating, or reducing, such an interrupt service latency penalty could significantly enhance a computer's performance.

Accordingly, there is a need for an apparatus and method that reduces the command overhead associated with the transfer of data between a disk drive and a host system. The present invention provides such an apparatus and method.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram representing a disk drive and a host system.

FIG. 2 is a flow chart representing an embodiment of the method of the present invention.

FIG. 3 is a block diagram representing a computer assembly that may be used to carry out the method of the present invention.

DETAILED DESCRIPTION OF THE PRESENT INVENTION

A method and apparatus for reducing the disk drive data transfer interrupt service latency penalty is described. The method comprises beginning a data transfer between a disk drive and a host system, issuing an interrupt before the transfer is complete, and then completing the data transfer. This method may be implemented on a computer assembly that includes a processor, an input/output controller, and a scatter/gather list, which is stored in memory, that includes an entry that will cause the input/output controller to generate the interrupt.

Before describing the method and apparatus of the present invention in detail, a brief overview of how a typical system operates to read data from, and write data to, a hard disk is provided with reference to FIG. 1. As shown in FIG. 1, a host system 100 reads data from, or writes data to, a hard disk 101 located in disk drive 102. Disk drive 102 may be part of host system 100, or instead be an external drive. Host system 100 executes applications or other computer programs. Those programs may deliver commands, via the operating system ("OS"), to device driver 103 that instruct it to read data from, or write data to, hard disk 101. In response, device driver 103 creates scatter/gather list 105 that includes entries corresponding to locations in memory 106 that will receive data from hard disk 101 or that contain data that will be transferred from the host system to the hard disk.

After creating scatter/gather list 105, device driver 103 causes data to be transferred between the hard disk and the host system. Device driver 103 starts this data transfer by writing to registers in DMA engine 104, which may be integrated into input/output controller 107, and to registers in disk drive 102, as is well known to those skilled in the art.
Device driver 103 may be an ATA ("Advanced Technology Attachment") driver.

As data is transferred, DMA engine 104 consults scatter/gather list 105, which driver 103 has had stored in memory 106, and delivers data from the hard disk to the identified memory locations (for reads), or from the identified memory locations to the hard disk (for writes). Scatter/gather list 105 may have multiple entries for each data transfer. Each entry consists of a memory location and a length. See Table 1 below. The sum of the lengths equals the total transfer.

TABLE 1: scatter/gather list entry

  Address    Length
  0x144B0    4096

When the data transfer is complete, the disk drive triggers an interrupt. The host OS routes this interrupt to the driver. There is a latency between the time the disk drive triggers the interrupt and the time the driver receives it.

FIG. 2 is a flow chart that represents an embodiment of the method of the present invention for eliminating, or at least reducing, that interrupt service latency penalty. Initially, an application or other computer program instructs device driver 103, via the OS, to read (or write) a specified amount of data from (or to) hard disk 101 (block 201). In response, device driver 103 sets up scatter/gather list 105, which will direct DMA engine 104 where to store data read from the hard disk or where to retrieve data to be written to the hard disk (block 202).

The driver used in the method of the present invention anticipates the end of a hard drive request and programs an intermediate interrupt to take place prior to its completion. In doing so, device driver 103 specifies an amount of data to be transferred to a first address, indicates that an intermediate interrupt should issue after that transfer, then specifies an amount of data to be transferred to a second address after the interrupt has issued. See Table 2 below.

TABLE 2: scatter/gather list entries with intermediate interrupt

  Address    Length    Intermediate Interrupt
  0x144B0    3896      X
  0x153E8     200

The amount of data that the second entry specifies reflects a quantity that can be transferred during the interrupt service latency. This ensures that the intermediate interrupt will overlap with the hard drive completion interrupt. The amount of data that may be transferred, while retaining that overlap, will depend upon the speed of the disk drive and the length of the interrupt service latency. Optimally, an amount of data should be selected such that the time required for the interrupt to reach the device driver is approximately equal to the time required to complete the data transfer. If, for example, the interrupt service latency is 10 microseconds and the delivery rate of the disk drive is 20 MB/second, then the driver will program the scatter/gather list to trigger an intermediate interrupt 200 bytes before the end of the transfer, as indicated in Table 2. By the time that intermediate interrupt (which will experience an interrupt service latency like that of the disk drive completion interrupt) reaches the driver, the data transfer will be complete.

After programming scatter/gather list 105, device driver 103 writes the command (which can be either a read or write) to disk drive 102 (block 203) and sets up DMA engine 104, which includes designating the location of scatter/gather list 105 in memory 106 (block 204).
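A minimal C sketch of how a driver might build the Table 2 list is shown below. It is illustrative only: the structure layout and helper names are hypothetical, and a real driver would use the actual physical region descriptor format of its DMA engine rather than this simplified form.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical scatter/gather entry; real descriptor layouts differ. */
struct sg_entry {
    uint32_t addr;    /* physical memory address */
    uint32_t length;  /* bytes to transfer */
    int      intr;    /* nonzero: raise an intermediate interrupt after this entry */
};

/* Bytes the drive can deliver during one interrupt-service latency window.
   1 MB/s equals 1 byte per microsecond, so us * MB/s = bytes. */
static uint32_t overlap_bytes(uint32_t latency_us, uint32_t rate_mb_per_s)
{
    return latency_us * rate_mb_per_s;  /* e.g., 10 us * 20 MB/s = 200 bytes */
}

/* Build a two-entry list: transfer most of the data, interrupt, then finish. */
static size_t build_sg_list(struct sg_entry *sg, uint32_t addr, uint32_t total,
                            uint32_t latency_us, uint32_t rate_mb_per_s)
{
    uint32_t tail = overlap_bytes(latency_us, rate_mb_per_s);
    if (tail >= total)
        tail = 0;                      /* transfer too small to split */

    sg[0].addr   = addr;
    sg[0].length = total - tail;       /* 4096 - 200 = 3896, as in Table 2 */
    sg[0].intr   = (tail != 0);        /* interrupt before the transfer ends */

    if (tail == 0)
        return 1;

    sg[1].addr   = addr + sg[0].length; /* 0x144B0 + 3896 = 0x153E8 */
    sg[1].length = tail;                /* completes during the latency window */
    sg[1].intr   = 0;
    return 2;
}
```

Note that 0x144B0 + 3896 = 0x153E8, which is how the second address in Table 2 follows from the first entry's length.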
At this point, data transfer between hard disk 101 and the host system begins (block 205). For each scatter/gather entry, DMA engine 104 delivers (in the case of reads) or retrieves (in the case of writes) data into, or from, the appropriate location in memory 106 (block 206). For each entry, DMA engine 104 then checks whether the intermediate interrupt bit has been set (block 207). If the bit has been set, the interrupt is issued. In the Table 2 example, this occurs after the transfer of 3896 bytes to the specified address (block 208).

After the interrupt is issued, DMA engine 104 checks whether it has processed the last scatter/gather list entry (block 209). If true, then the data transfer is complete (block 210). If false, the DMA engine consults the next scatter/gather list entry (block 211) and continues to deliver data in accordance with the instructions provided. To reduce interrupt service latency, the method of the present invention includes another scatter/gather list entry after the intermediate interrupt is issued, as described above.

In a preferred embodiment of the present invention, the DMA engine causes an input/output controller to issue the intermediate interrupt. After a short delay (e.g., 10 microseconds) caused by the interrupt and the software latency in responding to it, device driver 103 receives notification of the interrupt. In response, the driver informs other processes executing on the host system that the data transfer has been completed. During the time it takes for the interrupt to reach the driver, another scatter/gather list entry, i.e., the 200 byte entry shown in Table 2, may be processed to complete the hard disk request. As shown in FIG. 2, when the intermediate interrupt bit is not set for a particular scatter/gather list entry, DMA engine 104 proceeds to check, after the data transfer, whether it has processed the last scatter/gather list entry, without first generating an intermediate interrupt.

By using an intermediate interrupt in this fashion, it is possible to overlap completion of a data transfer with the interrupt service latency. By the time the device driver assumes control, enabling it to report that the command has completed, the command will have been satisfied, as its completion occurred during the interrupt latency. Although the driver would still check the hard disk's status information before reporting that the command completed successfully, such a check would simply confirm command completion. By continuing to transfer data while the driver waits to be notified of the interrupt, interrupt service latency does not stall the data transfer. It follows that removing such an interrupt service latency penalty will enhance system performance.

FIG. 3 is a block diagram representing a computer assembly that may be used to carry out the method of the present invention. Computer assembly 300 includes processor 301, input/output controller 302, device driver 303, DMA engine 304 and memory 305. Processor 301 executes instructions included in device driver 303 in the conventional manner. Processor 301 preferably is a Pentium(R) III, Pentium(R) IV or Itanium(TM) processor manufactured by Intel Corporation, but may be a later generation Intel processor or other Intel Architecture compatible processor, a RISC processor, or other device capable of processing data and instructions.
Input/output controller 302 includes intermediate interrupt logic 306, which DMA engine 304 may invoke in response to an intermediate interrupt entry included in scatter/gather list 307 (which is stored in memory 305), as described above. Input/output controller 302 delivers that interrupt to the OS, which forwards it to device driver 303. Input/output controller 302 preferably comprises the Intel(R) 82801BA I/O Controller Hub 2 ("ICH2"), but may be another device having intermediate interrupt capability.

The method and apparatus of the present invention improve system performance by eliminating, or at least reducing, disk drive data transfer interrupt service latency. Although the foregoing description has specified a method and apparatus for accomplishing that beneficial result, those skilled in the art will appreciate that many modifications and substitutions may be made. Accordingly, it is intended that all such modifications, alterations, substitutions and additions be considered to fall within the spirit and scope of the invention as defined by the appended claims.
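To make the flow of FIG. 2 concrete, here is a minimal C sketch of the per-entry loop described above (blocks 205 through 211). As with the earlier sketch, the structure layout and the two hardware stand-in functions are hypothetical, not the programming interface of any real DMA engine or I/O controller.

```c
#include <stddef.h>
#include <stdint.h>

struct sg_entry {     /* same hypothetical layout as the earlier sketch */
    uint32_t addr;
    uint32_t length;
    int      intr;
};

/* Stand-ins for hardware actions (hypothetical helpers, not a real API). */
static void dma_transfer(uint32_t addr, uint32_t length) { (void)addr; (void)length; }
static void raise_interrupt(void) { }

/* Walk the scatter/gather list in order, mirroring blocks 205-211 of FIG. 2. */
static void dma_run(const struct sg_entry *sg, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        dma_transfer(sg[i].addr, sg[i].length);  /* block 206: move the data */
        if (sg[i].intr)                          /* block 207: interrupt bit set? */
            raise_interrupt();                   /* block 208: intermediate interrupt */
        /* blocks 209/211: advance to the next entry; the short tail entry is
           processed while the driver is still waiting out the interrupt latency */
    }
    /* block 210: last entry processed; the transfer is complete */
}
```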
This disclosure provides p-type metal oxide semiconductor thin films that display good thin film transistor (TFT) characteristics. The p-type metal oxide thin films include ternary or higher order tin-based (Sn-based) p-type oxides such as Sn(II)-M-O oxides, where M is a metal. In some implementations, M is a metal selected from the d block or the p block of the periodic table. The oxides disclosed herein exhibit p-type conduction and wide bandgaps. Also provided are TFTs including channels that include p-type oxide semiconductors, and methods of fabrication. In some implementations, the p-channel TFTs have low off-currents.
1. An apparatus comprising a thin film transistor (TFT), the TFT comprising: a source electrode; a drain electrode; and a semiconductor channel connecting the source electrode and the drain electrode, the semiconductor channel comprising a ternary or higher order tin-based (Sn-based) p-type oxide, wherein the ternary or higher order Sn-based p-type oxide comprises Sn(II).

2. The apparatus of claim 1, wherein the ternary or higher order Sn-based p-type oxide comprises a metal selected from the d block or the p block of the periodic table.

3. The apparatus of claim 1, wherein the ternary or higher order Sn-based p-type oxide comprises one or more metals selected from the group consisting of: Group 3 metals, Group 4 metals, tungsten (W), and niobium (Nb).

4. The apparatus of claim 3, wherein the Group 3 metal comprises boron (B), aluminum (Al), and gallium (Ga), and the Group 4 metal comprises lead (Pb) and silicon (Si).

5. The apparatus of claim 1, wherein the Sn-based p-type oxide is a Sn-M-O ternary oxide, wherein Sn is Sn(II) and M is a metal selected from the d block or the p block of the periodic table.

6. The apparatus of claim 5, wherein the Sn-M-O ternary oxide has the chemical formula SnxM1-xOz, wherein x is at least 0.2 and z is greater than zero.

7. The apparatus of claim 6, wherein x is between 0.2 and 0.8.

8. The apparatus of claim 1, wherein the Sn-based p-type oxide is Sn(II)xB1-xOz, wherein x is between 0.7 and 0.9 and z is greater than zero.

9. The apparatus of claim 1, wherein the Sn-based p-type oxide is selected from the group consisting of Sn(II)xW1-xOz, Sn(II)xTi1-xOz, and Sn(II)xNb1-xOz, wherein x is between 0.3 and 0.8 and z is greater than zero.

10. The apparatus of claim 1, wherein the Sn-based p-type oxide is amorphous.

11. The apparatus of claim 1, wherein the Sn-based p-type oxide has a contribution from the Sn 5s orbital in its valence band maximum (VBM).

12. The apparatus of claim 1, wherein the TFT is part of a complementary metal oxide semiconductor (CMOS) TFT device.

13. The apparatus of claim 1, wherein the TFT is a bottom gate TFT.

14. The apparatus of claim 1, wherein the TFT is a top gate TFT.

15. The apparatus of claim 1, further comprising: a display; a processor configured to communicate with the display, the processor being configured to process image data; and a memory device configured to communicate with the processor.

16. The apparatus of claim 15, further comprising: a driver circuit configured to send at least one signal to the display; and a controller configured to send at least a portion of the image data to the driver circuit.

17. The apparatus of claim 16, wherein the driver circuit comprises the TFT.

18. The apparatus of claim 15, further comprising: an image source module configured to send the image data to the processor, wherein the image source module comprises at least one of a receiver, a transceiver, and a transmitter.

19. The apparatus of claim 15, further comprising: an input device configured to receive input data and to communicate the input data to the processor.

20. The apparatus of claim 1, wherein the ternary or higher order Sn-based p-type oxide comprises one or more metals selected from the group consisting of tungsten (W), titanium (Ti), niobium (Nb), boron (B), lead (Pb), and silicon (Si).

21. An apparatus comprising a thin film transistor (TFT), the TFT comprising: a source electrode; a drain electrode; and a semiconductor channel connecting the source electrode and the drain electrode, the semiconductor channel comprising a ternary or higher order tin-based (Sn-based) p-type oxide, wherein the Sn-based p-type oxide is a Sn-M1-M2-O quaternary oxide, wherein Sn is Sn(II) and M1 and M2 are metals selected from the d block or the p block of the periodic table.

22. A method comprising: providing a substrate; forming a ternary or higher order tin-based (Sn-based) p-type oxide semiconductor layer on the substrate, wherein the ternary or higher order Sn-based p-type oxide comprises Sn(II); and annealing the Sn-based p-type oxide semiconductor layer.

23. The method of claim 22, wherein the ternary or higher order Sn-based p-type oxide comprises a metal selected from the d block or the p block of the periodic table.

24. The method of claim 22, wherein the ternary or higher order Sn-based p-type oxide comprises one or more metals selected from the group consisting of: Group 3 metals, Group 4 metals, tungsten (W), and niobium (Nb).

25. The method of claim 24, wherein the Group 3 metal comprises boron (B), aluminum (Al), and gallium (Ga), and the Group 4 metal comprises lead (Pb) and silicon (Si).

26. The method of claim 22, wherein forming the Sn-based p-type oxide semiconductor layer comprises an atomic layer deposition (ALD) process.

27. The method of claim 22, further comprising forming a gate electrode and a gate dielectric, wherein the gate dielectric is between the Sn-based p-type oxide semiconductor layer and the gate electrode.

28. An apparatus comprising a thin film transistor (TFT), the TFT comprising: a source electrode; a drain electrode; and a semiconductor channel connecting the source electrode and the drain electrode, the semiconductor channel comprising a ternary or higher order tin-based (Sn-based) p-type oxide, wherein the Sn-based p-type oxide is a Sn-M-O ternary oxide, wherein Sn is Sn(II) and M is a metal selected from the d block or the p block of the periodic table, and wherein the Sn-M-O ternary oxide has the chemical formula SnxM1-xOz, wherein x is at least 0.2 and z is greater than zero.
Thin film transistor containing tin-based p-type oxide semiconductor and method of manufacturing the same

PRIORITY CLAIM

This application claims priority to U.S. Patent Application Serial No. 14/603,186, filed on Jan. 22, 2015. The disclosure of that U.S. Patent Application is hereby incorporated by reference herein in its entirety.

TECHNICAL FIELD

This disclosure relates to thin film transistors and, more particularly, to tin-based p-channel metal oxide thin film transistors.

BACKGROUND

Electromechanical systems (EMS) include devices having electrical and mechanical elements, actuators, transducers, sensors, optical components such as mirrors and optical films, and electronics. EMS devices or elements can be fabricated at a variety of scales including, but not limited to, microscales and nanoscales. For example, a microelectromechanical systems (MEMS) device can include structures ranging in size from about one micron to hundreds of microns or more. Nanoelectromechanical systems (NEMS) devices can include structures that are smaller than one micron in size, including, for example, sizes smaller than several hundred nanometers. Electromechanical elements can be formed using deposition, etching, photolithography, and/or other micromachining processes that etch away portions of a substrate and/or deposited material layers, or that add layers, to form electrical and electromechanical devices.

One type of EMS device is known as an interferometric modulator (IMOD). The term "IMOD" or "interferometric light modulator" refers to a device that selectively absorbs and/or reflects light using the principles of optical interference. In some implementations, an IMOD display element can include a pair of conductive plates, one or both of which may be wholly or partially transparent and/or reflective, and capable of relative motion upon application of an appropriate electrical signal. For example, one plate may comprise a stationary layer deposited over, on, or supported by a substrate, and the other plate may comprise a reflective membrane separated from the stationary layer by an air gap. The position of one plate relative to the other can change the optical interference of light incident on the IMOD display element. IMOD-based display devices have a wide range of applications and are expected to be used to improve existing products and create new products, especially those with display capabilities.

Hardware and data processing apparatus can be associated with electromechanical systems. Such hardware and data processing apparatus may include thin film transistors (TFTs). A TFT is a field effect transistor including thin films of a metal layer and a semiconductor layer.

SUMMARY

The systems, methods and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.

One novel aspect of the subject matter described in this disclosure can be implemented in an apparatus having a thin film transistor (TFT) including a source electrode, a drain electrode, and a semiconductor channel connecting the source electrode and the drain electrode, the semiconductor channel comprising a ternary or higher order tin-based (Sn-based) p-type oxide.
In some embodiments, the ternary or higher order Sn-based p-type oxide may comprise Sn(II) and a metal selected from the d block or the p block of the periodic table. The ternary or higher order Sn-based p-type oxide may comprise Sn(II) and one or more metals selected from the group consisting of Group 3 metals, Group 4 metals, tungsten (W), niobium (Nb), boron (B), aluminum (Al), gallium (Ga), lead (Pb), and silicon (Si). In some embodiments, the Sn-based p-type oxide is a Sn-M-O ternary oxide, wherein Sn is Sn(II) and M is a metal selected from the d block or the p block of the periodic table. For example, the Sn-M-O ternary oxide can have the chemical formula SnxM1-xOz, where x is at least 0.2 and z is greater than zero. In some embodiments, x is between 0.2 and 0.8.

In some embodiments, the Sn-based p-type oxide is Sn(II)xB1-xOz, where x is between 0.7 and 0.9 and z is greater than zero. In some embodiments, the Sn-based p-type oxide is one of Sn(II)xW1-xOz, Sn(II)xTi1-xOz, and Sn(II)xNb1-xOz, where x is between 0.3 and 0.8 and z is greater than zero. In some embodiments, the Sn-based p-type oxide can be a Sn-M1-M2-O quaternary oxide, wherein Sn is Sn(II) and M1 and M2 are metals selected from the d block or the p block of the periodic table. In some embodiments, the Sn-based p-type oxide has a contribution from the Sn 5s orbital in its valence band maximum (VBM).

According to various embodiments, the Sn-based p-type oxide may be amorphous or crystalline. In some embodiments, the TFT is part of a complementary metal oxide semiconductor (CMOS) TFT device. The TFT can have a bottom gate or a top gate configuration.

In some implementations, the apparatus can further include: a display; a processor configured to communicate with the display, the processor being configured to process image data; and a memory device configured to communicate with the processor. The apparatus can further include: a driver circuit configured to send at least one signal to the display; and a controller configured to send at least a portion of the image data to the driver circuit. The driver circuit can include the TFT. In some implementations, the apparatus can further include an image source module configured to send the image data to the processor, wherein the image source module includes at least one of a receiver, a transceiver, and a transmitter. The apparatus can include an input device configured to receive input data and to communicate the input data to the processor.

Another novel aspect of the subject matter described in this disclosure can be implemented in an apparatus having a TFT including a drain electrode, a source electrode, and a p-type semiconductor channel electrically connecting the drain electrode and the source electrode. The TFT can further include a gate electrode and a gate dielectric.

Another novel aspect of the subject matter described in this disclosure can be implemented in a method including: providing a substrate; forming a ternary or higher order tin-based (Sn-based) p-type oxide semiconductor layer on the substrate; and annealing the Sn-based p-type oxide semiconductor layer.

In some embodiments, the ternary or higher order Sn-based p-type oxide may comprise Sn(II) and a metal selected from the d block or the p block of the periodic table.
The ternary or higher order Sn-based p-type oxide may comprise Sn(II) and one or more metals selected from the group consisting of Group 3 metals, Group 4 metals, tungsten (W), niobium (Nb), boron (B), aluminum (Al), gallium (Ga), lead (Pb), and silicon (Si). In some embodiments, forming the Sn-based p-type oxide semiconductor layer involves an atomic layer deposition (ALD) process. The method can further include forming a gate electrode and a gate dielectric, wherein the gate dielectric is between the Sn-based p-type oxide semiconductor layer and the gate electrode.

The details of one or more embodiments of the subject matter described in this disclosure are set forth in the accompanying drawings and the description below. Although the examples provided herein are primarily described in terms of EMS and MEMS-based displays, the concepts provided herein are applicable to other types of displays, such as liquid crystal displays, organic light-emitting diode ("OLED") displays, and field emission displays. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. It should be noted that the relative sizes of the following figures may not be drawn to scale.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an isometric view illustration depicting two adjacent display elements in a series or array of display elements of an interferometric modulator (IMOD) display device.

FIG. 2 is a system block diagram illustrating an electronic device incorporating an IMOD-based display including a three element by three element array of IMOD display elements.

FIGS. 3A and 3B are schematic exploded partial perspective views of a portion of an electromechanical systems (EMS) package including an array of EMS elements and a backing plate.

FIG. 4A is an example of a cross-sectional view of a bottom gate thin film transistor (TFT), according to some embodiments.

FIG. 4B is an example of a cross-sectional view of a top gate TFT, according to some embodiments.

FIG. 5 shows the partial density of states (DOS) of SnO and SnO2, and of the mixed-valence (Sn(II) and Sn(IV)) oxide Sn3O4.

FIG. 6 provides band structure plots determined from density functional theory (DFT) calculations for various Sn(II)-based ternary oxides.

FIG. 7 shows the partial DOS of a Sn-B-O oxide.

FIG. 8 shows the drain-source current (IDS) as a function of gate-source voltage (VGS) for p-channel TFTs having the ternary Sn-based p-type oxides a-Sn0.8B0.2O and a-Sn0.9B0.1O, compared to a p-channel TFT having the binary oxide nc-SnO:H.

FIG. 9 is a flow chart illustrating an example of a method of fabricating a Sn-based p-type oxide semiconductor layer, according to some embodiments.

FIGS. 10 and 11 are flow diagrams illustrating examples of atomic layer deposition (ALD) methods of fabricating a ternary Sn-based p-type oxide semiconductor layer, according to some embodiments.

FIG. 12 is an example of a cross-sectional view of a complementary metal oxide semiconductor (CMOS) TFT device, according to some embodiments.
FIG. 13A is a schematic diagram illustrating an all-oxide CMOS inverter on a flexible substrate, according to some embodiments.

FIGS. 13B and 13C show experimental data for an all-oxide CMOS inverter circuit including a p-channel SnO:H TFT and an n-channel a-IGZO TFT.

FIGS. 14A and 14B are system block diagrams illustrating a display device that includes a plurality of IMOD display elements.

Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

The following description is directed to certain embodiments for the purposes of describing the innovative aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein can be applied in a multitude of different ways. The described embodiments may be implemented in any device, apparatus, or system that can be configured to display an image, whether in motion (e.g., video) or stationary (e.g., still images), and whether textual, graphical or pictorial. More particularly, it is contemplated that the described embodiments may be included in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, multimedia Internet-enabled cellular telephones, mobile television receivers, wireless devices, smartphones, Bluetooth devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, tablets, printers, copiers, scanners, facsimile devices, global positioning system (GPS) receivers/navigators, cameras, digital media players (such as MP3 players), camcorders, game consoles, watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (e.g., e-readers), computer monitors, auto displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (such as the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, parking meters, packaging (such as in electromechanical systems (EMS) applications including microelectromechanical systems (MEMS) applications, as well as in non-EMS applications), aesthetic structures (such as the display of images on a piece of jewelry or clothing), and a variety of EMS devices. The teachings herein also can be used in non-display applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes, and electronic test equipment. Thus, the teachings are not intended to be limited to the embodiments depicted solely in the drawings, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.

Embodiments described herein relate to tin (Sn) based p-type oxide semiconductor materials. The Sn-based oxide semiconductors disclosed herein include p-type ternary and higher order oxides that include a Sn(II) cation and one or more additional metals. In some embodiments, the Sn-based p-type oxide semiconductor includes a metal from the d block or the p block of the periodic table.
In some embodiments, the Sn-based p-type oxide semiconductor includes one or more metals selected from the group consisting of Group 3 metals, Group 4 metals, tungsten (W), niobium (Nb), boron (B), aluminum (Al), gallium (Ga), and silicon (Si). A ternary or higher order Sn-based p-type oxide can have an indirect band gap greater than 0.8 eV.

Embodiments described herein also relate to p-type thin film transistors (TFTs) having a p-type channel that includes a ternary or higher order Sn-based p-type oxide semiconductor layer. In some embodiments, the p-type TFTs described herein can be used in a complementary metal oxide semiconductor (CMOS) TFT device that includes an n-type TFT and a p-type TFT.

Particular embodiments of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages. A ternary or higher order Sn-based p-type oxide semiconductor layer can be implemented in a p-type TFT to provide good TFT characteristics, including high mobility and low off-current. P-type TFTs including ternary or higher order Sn-based p-type oxide semiconductors can be implemented in CMOS TFT circuits. These TFT circuits can be integrated on the display backplane, for example as driver circuits, or in other electronic devices. This reduces manufacturing costs and failures associated with separately packaged integrated circuit (IC) drivers. A ternary or higher order Sn-based p-type oxide semiconductor can be implemented in a flexible TFT disposed on a flexible substrate. Such TFTs can have significantly higher mobility than flexible p-type TFTs with organic channels. A CMOS structure including a Sn-based oxide p-channel may have a higher cutoff frequency than a CMOS structure having an organic p-channel.

An example of a suitable EMS or MEMS device or apparatus, to which some or all of the described embodiments of TFTs may apply, is a reflective display device. Reflective display devices can incorporate interferometric modulator (IMOD) display elements that can be implemented to selectively absorb and/or reflect light incident thereon using principles of optical interference. IMOD display elements can include a partial optical absorber, a reflector that is movable with respect to the absorber, and an optical resonant cavity defined between the absorber and the reflector. In some embodiments, the reflector can be moved to two or more different positions, which can change the size of the optical resonant cavity and thereby affect the reflectance of the IMOD. The reflectance spectra of IMOD display elements can create fairly broad spectral bands that can be shifted across the visible wavelengths to generate different colors. The position of the spectral band can be adjusted by changing the thickness of the optical resonant cavity. One way of changing the optical resonant cavity is by changing the position of the reflector relative to the absorber.

FIG. 1 is an isometric view illustration depicting two adjacent display elements in a series or array of display elements of an interferometric modulator (IMOD) display device. The IMOD display device includes one or more interferometric EMS (e.g., MEMS) display elements. In these devices, the interferometric MEMS display elements can be configured in either a bright or dark state. In the bright ("relaxed," "open" or "on," etc.) state, the display element reflects a large portion of incident visible light. Conversely, in the dark ("actuated," "closed" or "off," etc.)
state, the display element reflects little incident visible light. MEMS display elements can be configured to reflect predominantly at particular wavelengths of light, allowing for a color display in addition to black and white. In some embodiments, different intensities of color primaries and shades of gray can be achieved by using multiple display elements.

The IMOD display device can include an array of IMOD display elements that may be arranged in rows and columns. Each display element in the array can include at least a pair of reflective and semi-reflective layers, such as a movable reflective layer (i.e., a movable layer, also referred to as a mechanical layer) and a fixed partially reflective layer (i.e., a stationary layer), positioned at a variable and controllable distance from each other to form an air gap (also referred to as an optical gap, cavity or optical resonant cavity). The movable reflective layer may be moved between at least two positions. For example, in a first position, i.e., a relaxed position, the movable reflective layer can be positioned at a distance from the fixed partially reflective layer. In a second position, i.e., an actuated position, the movable reflective layer can be positioned more closely to the partially reflective layer. Incident light that reflects from the two layers can interfere constructively and/or destructively, depending on the position of the movable reflective layer and the wavelength(s) of the incident light, producing either an overall reflective or non-reflective state for each display element. In some embodiments, the display element may be in a reflective state when unactuated, reflecting light within the visible spectrum, and may be in a dark state when actuated, absorbing and/or destructively interfering with light within the visible range. In some other implementations, however, an IMOD display element may be in a dark state when unactuated, and in a reflective state when actuated. In some embodiments, the introduction of an applied voltage can drive the display elements to change states. In some other implementations, an applied charge can drive the display elements to change states.

The depicted portion of the array in FIG. 1 includes two adjacent interferometric MEMS display elements in the form of IMOD display elements 12. In the display element 12 on the right (as illustrated), the movable reflective layer 14 is illustrated in an actuated position near, adjacent to, or touching the optical stack 16. The voltage Vbias applied across the display element 12 on the right is sufficient to move and also maintain the movable reflective layer 14 in the actuated position. In the display element 12 on the left (as illustrated), the movable reflective layer 14 is illustrated in a relaxed position at a distance (which may be predetermined based on design parameters) from an optical stack 16, which includes a partially reflective layer. The voltage V0 applied across the display element 12 on the left is insufficient to cause actuation of the movable reflective layer 14 to an actuated position such as that of the display element 12 on the right.

In FIG. 1, the reflective properties of IMOD display elements 12 are generally illustrated with arrows indicating light 13 incident upon the IMOD display elements 12, and light 15 reflecting from the display element 12 on the left.
Most of the light 13 incident upon the display elements 12 may be transmitted through the transparent substrate 20, toward the optical stack 16. A portion of the light incident upon the optical stack 16 may be transmitted through the partially reflective layer of the optical stack 16, and a portion will be reflected back through the transparent substrate 20. The portion of light 13 that is transmitted through the optical stack 16 may be reflected from the movable reflective layer 14, back toward (and through) the transparent substrate 20. Interference (constructive and/or destructive) between the light reflected from the partially reflective layer of the optical stack 16 and the light reflected from the movable reflective layer 14 will determine, in part, the intensity of the wavelength(s) of light 15 reflected from the display element 12 on the viewing or substrate side of the device.

In some embodiments, the transparent substrate 20 can be a glass substrate (sometimes referred to as a glass plate or panel). The glass substrate may be or include, for example, a borosilicate glass, a soda lime glass, quartz, or other suitable glass material. In some embodiments, the glass substrate may have a thickness of 0.3, 0.5 or 0.7 millimeters, although in some embodiments the glass substrate can be thicker (e.g., tens of millimeters) or thinner (e.g., less than 0.3 millimeters). In some embodiments, a non-glass substrate can be used, such as a polycarbonate, acrylic, polyethylene terephthalate (PET) or polyetheretherketone (PEEK) substrate. In such embodiments, the non-glass substrate will likely have a thickness of less than 0.7 millimeters, although the substrate may be thicker depending on design considerations. In some embodiments, a non-transparent substrate can be used, such as a metal foil or stainless steel-based substrate. For example, a reverse-IMOD-based display, which includes a fixed reflective layer and a movable layer that is partially transmissive and partially reflective, may be configured to be viewed from the side of the substrate opposite the display element 12 of FIG. 1, and can be supported by a non-transparent substrate.

The optical stack 16 can include a single layer or several layers. The layer(s) can include one or more of an electrode layer, a partially reflective and partially transmissive layer, and a transparent dielectric layer. In some implementations, the optical stack 16 is electrically conductive, partially transparent and partially reflective, and may be fabricated, for example, by depositing one or more of the above layers onto the transparent substrate 20. The electrode layer can be formed from a variety of materials, such as various metals, for example indium tin oxide (ITO). The partially reflective layer can be formed of a variety of materials that are partially reflective, such as various metals (e.g., chromium and/or molybdenum), semiconductors, and dielectrics. The partially reflective layer can be formed of one or more layers of materials, and each of the layers can be formed of a single material or a combination of materials.
In some embodiments, certain portions of the optical stack 16 can include a single semi-transparent thickness of metal or semiconductor that serves as both a partial optical absorber and an electrical conductor, while different, electrically more conductive layers or portions (e.g., of the optical stack 16 or of other structures of the display element) can serve to bus signals between IMOD display elements. The optical stack 16 also can include one or more insulating or dielectric layers covering one or more conductive layers or an electrically conductive/partially absorptive layer.

In some implementations, at least some of the layer(s) of the optical stack 16 can be patterned into parallel strips, and may form row electrodes in a display device, as described further below. As will be understood by one having ordinary skill in the art, the term "patterned" is used herein to refer to masking as well as etching processes. In some embodiments, a highly conductive and reflective material, such as aluminum (Al), may be used for the movable reflective layer 14, and these strips may form column electrodes in a display device. The movable reflective layer 14 may be formed as a series of parallel strips of a deposited metal layer or layers (orthogonal to the row electrodes of the optical stack 16) to form columns deposited on top of supports, such as the illustrated posts 18, and an intervening sacrificial material located between the posts 18. When the sacrificial material is etched away, a defined gap 19, or optical cavity, can be formed between the movable reflective layer 14 and the optical stack 16. In some embodiments, the spacing between posts 18 can be approximately 1 μm to 1000 μm, while the gap 19 can be less than approximately 10,000 Angstroms.

In some embodiments, each IMOD display element, whether in the actuated or relaxed state, can be considered as a capacitor formed by the fixed and moving reflective layers. When no voltage is applied, the movable reflective layer 14 remains in a mechanically relaxed state, as illustrated by the display element 12 on the left in FIG. 1, with the gap 19 between the movable reflective layer 14 and the optical stack 16. However, when a potential difference, i.e., a voltage, is applied to at least one of a selected row and column, the capacitor formed at the intersection of the row and column electrodes at the corresponding display element becomes charged, and electrostatic forces pull the electrodes together. If the applied voltage exceeds a threshold, the movable reflective layer 14 can deform and move near or against the optical stack 16. A dielectric layer (not shown) within the optical stack 16 may prevent shorting and control the separation distance between layers 14 and 16, as illustrated by the actuated display element 12 on the right in FIG. 1. The behavior can be the same regardless of the polarity of the applied potential difference. Although a series of display elements in an array may be referred to in some instances as "rows" or "columns," a person having ordinary skill in the art will readily understand that referring to one direction as a "row" and the other as a "column" is arbitrary. Restated, in some orientations, the rows can be considered columns, and the columns considered rows. In some embodiments, the rows may be referred to as "common" lines and the columns may be referred to as "segment" lines, or vice versa.
Moreover, the display elements can be evenly arranged in orthogonal rows and columns (an "array"), or arranged in non-linear configurations, for example, having certain positional offsets with respect to one another (a "mosaic"). The terms "array" and "mosaic" may refer to either configuration. Thus, although the display is referred to as including an "array" or "mosaic," the elements themselves need not be arranged orthogonally to one another, or disposed in an even distribution, in any instance, but may include arrangements having asymmetric shapes and unevenly distributed elements.

FIG. 2 is a system block diagram illustrating an electronic device incorporating an IMOD-based display including a three element by three element array of IMOD display elements. The electronic device can include embodiments of the TFTs disclosed herein. For example, a complementary metal oxide semiconductor (CMOS) TFT device can be employed as part of a driver circuit of an electronic device such as that illustrated in FIG. 2. The electronic device includes a processor 21 that can be configured to execute one or more software modules. In addition to executing an operating system, the processor 21 can be configured to execute one or more software applications, including a web browser, a telephone application, an email program, or any other software application.

The processor 21 can be configured to communicate with an array driver 22. In one embodiment, the array driver 22 can include a row driver circuit 24 and a column driver circuit 26 that provide signals to, for example, a display array or panel 30. The cross section of the IMOD display device illustrated in FIG. 1 is shown by the line 1-1 in FIG. 2. Although FIG. 2 illustrates a 3x3 array of IMOD display elements for the sake of clarity, the display array 30 may contain a very large number of IMOD display elements, and may have a different number of IMOD display elements in rows than in columns, and vice versa.

FIGS. 3A and 3B are schematic exploded partial perspective views of a portion of an EMS package 91 including an array of EMS elements 36 and a backing plate 92. TFTs as disclosed herein may be implemented in the EMS package 91 shown in FIGS. 3A and 3B. For example, a TFT including a p-type metal oxide semiconductor channel can be implemented in a driver circuit on the backing plate 92. FIG. 3A is shown with two corners of the backing plate 92 cut away to better illustrate certain portions of the backing plate 92, while FIG. 3B is shown without the corners cut away. The EMS array 36 can include a substrate 20, support posts 18, and a movable layer 14. In some embodiments, the EMS array 36 can include an array of IMOD display elements with one or more optical stack portions 16 on a transparent substrate, and the movable layer 14 can be implemented as a movable reflective layer.

The backing plate 92 can be essentially planar or can have at least one contoured surface (e.g., the backing plate 92 can be formed with recesses and/or protrusions). The backing plate 92 may be made of any suitable material, whether transparent or opaque, conductive or insulating. Suitable materials for the backing plate 92 include, but are not limited to, glass, plastic, ceramics, polymers, laminates, metals, metal foils, Kovar, and electroplated Kovar.

As shown in FIGS. 3A and 3B, the backing plate 92 can include one or more backing plate components 94a and 94b, which can be partially or wholly embedded in the backing plate 92.
As can be seen in FIG. 3A, the backing plate component 94a is embedded in the backing plate 92. As can be seen in FIGS. 3A and 3B, the backing plate component 94b is disposed within a recess 93 formed in a surface of the backing plate 92. In some embodiments, the backing plate components 94a and/or 94b can protrude from a surface of the backing plate 92. Although the backing plate component 94b is disposed on the side of the backing plate 92 facing the substrate 20, in other embodiments the backing plate components can be disposed on the opposite side of the backing plate 92.

The backing plate components 94a and/or 94b can include one or more active or passive electrical components, such as transistors, capacitors, inductors, resistors, diodes, switches, and/or integrated circuits (ICs) such as a packaged, standard or discrete IC. Other examples of backing plate components that can be used in various embodiments include antennas, batteries, and sensors such as electrical, touch, optical or chemical sensors, or thin-film deposited devices.

In some embodiments, the backing plate components 94a and/or 94b can be in electrical communication with portions of the EMS array 36. Conductive structures such as traces, bumps, posts or vias may be formed on one or both of the backing plate 92 or the substrate 20, and may contact one another or other conductive components to form electrical connections between the EMS array 36 and the backing plate components 94a and/or 94b. For example, FIG. 3B includes one or more conductive vias 96 on the backing plate 92 that can be aligned with electrical contacts 98 extending upward from the movable layer 14 within the EMS array 36. In some embodiments, the backing plate 92 also can include one or more insulating layers that electrically insulate the backing plate components 94a and/or 94b from other components of the EMS array 36. In some embodiments in which the backing plate 92 is formed from a gas permeable material, an interior surface of the backing plate 92 can be coated with a moisture barrier (not shown).

The backing plate components 94a and 94b can include one or more desiccants that act to absorb any moisture that may enter the EMS package 91. In some embodiments, a desiccant (or other moisture-absorbing material, such as a getter) may be provided separately from any other backing plate components, for example as a sheet adhered to the backing plate 92 (or in a recess formed in the backing plate). Alternatively, the desiccant may be integrated into the backing plate 92. In some other embodiments, the desiccant may be applied directly or indirectly over other backing plate components, for example by spray-coating, screen printing, or any other suitable method.

In some embodiments, the EMS array 36 and/or the backing plate 92 can include mechanical mounts 97 to maintain a distance between the backing plate components and the display elements, and thereby prevent mechanical interference between those components. In the embodiment illustrated in FIGS. 3A and 3B, the mechanical mounts 97 are formed as posts protruding from the backing plate 92 in alignment with the support posts 18 of the EMS array 36. Alternatively or in addition, mechanical mounts, such as rails or posts, can be provided along the edges of the EMS package 91.

Although not illustrated in FIGS. 3A and 3B, a seal can be provided that partially or completely encircles the EMS array 36. Together with the backing plate 92 and the substrate 20, the seal can form a protective cavity enclosing the EMS array 36.
The seal may be a semi-hermetic seal, such as a conventional epoxy-based adhesive. In some other embodiments, the seal can be a hermetic seal, such as a thin film metal weld or a glass frit. In some other embodiments, the seal can include polyisobutylene (PIB), polyurethane, liquid spin-on glass, solder, polymers, plastics, or other materials. In some embodiments, a reinforced sealant can be used to form mechanical mounts.

In alternate embodiments, a seal ring can include an extension of either one or both of the backing plate 92 or the substrate 20. For example, the seal ring can include a mechanical extension (not shown) of the backing plate 92. In some embodiments, the seal ring can include a separate member, such as an O-ring or other annular member.

In some embodiments, the EMS array 36 and the backing plate 92 are separately formed before being attached or coupled together. For example, the edge of the substrate 20 can be attached and sealed to the edge of the backing plate 92, as discussed above. Alternatively, the EMS array 36 and the backing plate 92 can be formed and joined together as the EMS package 91. In some other implementations, the EMS package 91 can be fabricated in any other suitable manner, such as by forming components of the backing plate 92 over the EMS array 36 by deposition.

Hardware and data processing devices can be associated with EMS structures. Such hardware and data processing devices may include transistor switches, such as thin film transistors (TFTs). The EMS display elements in a display device can be arranged in an array, such as a two-dimensional grid, and addressed by circuits associated with the rows and columns of the array. Row driver circuits can drive the gates of the transistor switches that select a particular row to be addressed, and common driver circuits can provide bias voltages to a given row of display elements, which can be updated synchronously with the row refresh.

A display device can include a large number of display elements, which may be referred to as pixels. Some displays can include hundreds, thousands or millions of pixels arranged in hundreds or thousands of rows and hundreds or thousands of columns. Each pixel may be driven by one or more TFTs. A TFT is a type of field effect transistor made by depositing thin films of a semiconductor layer, one or more dielectric layers, and conductive layers over a substrate. With the continuing growth of flat panel displays, systems on glass, display devices, mobile devices, wearable devices, and the like, there is an increasing demand for high-performance TFTs.

Integrating the switching matrix and driver circuitry on the display backplane, and similar integration in other electronic devices, reduces the manufacturing costs and failures associated with separately packaged IC drivers. Complementary metal oxide semiconductor (CMOS) circuits use n-type and p-type channels. Disclosed herein are p-type metal oxide semiconductor materials exhibiting good TFT performance, and TFTs including p-type metal oxide semiconductor channels. Circuits including n-type and p-type TFTs, and electronic devices (such as display devices) including such circuits, are also disclosed.
While the following description focuses on p-type metal oxide semiconductors in the context of TFTs in display applications, p-type metal oxide semiconductors can also be used in other contexts, such as solar applications.

In general, a TFT can include a semiconductor layer having a source region, a drain region, and a channel region in the semiconductor layer. Thus, a TFT can be a three-terminal device including a source terminal, a drain terminal, and a gate terminal for modulating the conductivity of the channel within the TFT. Certain types of TFTs may be defined by the position of the gate terminal. For example, TFT geometries can include bottom gate geometries and top gate geometries.

FIG. 4A is an example of a cross-sectional view of a bottom gate TFT, according to some embodiments. In FIG. 4A, the bottom gate TFT 400a includes a substrate 410a, a gate electrode 420a over the substrate 410a, a gate dielectric 430a over the gate electrode 420a, a semiconductor layer 440a over the gate dielectric 430a, a source electrode 450a over a source region of the semiconductor layer 440a, and a drain electrode 460a over a drain region of the semiconductor layer 440a, with a channel region in the semiconductor layer 440a between the source region and the drain region. The semiconductor layer 440a is electrically connected to the source electrode 450a and the drain electrode 460a, and the conductivity in the channel region can be modulated by a potential applied across the gate electrode 420a and the source electrode 450a.

FIG. 4B is an example of a cross-sectional view of a top gate TFT, according to some embodiments. In FIG. 4B, the top gate TFT 400b includes a substrate 410b, a semiconductor layer 440b over the substrate 410b, a source electrode 450b over a source region of the semiconductor layer 440b, a drain electrode 460b over a drain region of the semiconductor layer 440b, a gate dielectric 430b over the source electrode 450b, and a gate electrode 420b over the gate dielectric 430b, with the channel region between the source and drain regions of the semiconductor layer 440b. The semiconductor layer 440b is electrically connected to the source electrode 450b and the drain electrode 460b, and the conductivity in the channel region can be modulated as the potential applied across the gate electrode 420b and the source electrode 450b changes.

The gate electrodes 420a and 420b can include one or more metals or other conductive materials. Examples of metals include aluminum (Al), copper (Cu), molybdenum (Mo), tantalum (Ta), chromium (Cr), neodymium (Nd), tungsten (W), titanium (Ti), gold (Au), nickel (Ni), and alloys containing any of these elements. In some implementations, each of the gate electrodes 420a and 420b can include two or more layers of different metals arranged in a stacked configuration. In some implementations, each of the gate electrodes 420a and 420b can have a thickness between about 50 nm and about 500 nm, or between about 100 nm and about 250 nm.

The source electrodes 450a and 450b and the drain electrodes 460a and 460b can include any number of different metals or other conductive materials. Examples of metals include Mo, W, Au, Pt, Ag, Mg, Mn, Ti, Al, Cu, Ta, Cr, Nd, Ni, and alloys containing any of these elements. For example, the source electrodes 450a and 450b and the drain electrodes 460a and 460b can include stable contact metals such as Mo, W, Au, Pt, and Ag.
In some implementations, each of source electrodes 450a and 450b and drain electrodes 460a and 460b comprises two or more sub-layers of different metals arranged in a stacked configuration. In some implementations, each of source electrodes 450a and 450b and drain electrodes 460a and 460b can have a thickness between about 50 nm and about 500 nm, or between about 100 nm and about 250 nm.

Gate dielectrics 430a and 430b may also be referred to as gate insulators. Each of the gate dielectrics 430a and 430b can comprise any number of different dielectric materials, including silicon dioxide (SiO2), aluminum oxide (Al2O3), hafnium oxide (HfO2), yttrium oxide (Y2O3), titanium oxide (TiO2), silicon oxynitride (SiON), silicon nitride (SiN), or an organic dielectric material. In some implementations, each of gate dielectrics 430a and 430b can comprise two or more layers of dielectric material arranged in a stacked configuration. In some embodiments, the thickness of the gate dielectric layer can be between about 50 nm and about 500 nm, or between about 100 nm and about 250 nm.

In FIGS. 4A and 4B, the bottom gate TFT 400a and the top gate TFT 400b may be metal oxide TFTs, wherein the semiconductor layers 440a and 440b include a metal oxide. In a metal oxide TFT, a metal oxide semiconductor is deposited as the active channel layer of the TFT. Metal oxide TFTs can have high mobility. According to various embodiments, the metal oxide TFT is a p-type metal oxide TFT, wherein the semiconductor layers 440a and 440b include a p-type metal oxide.

Most oxide semiconductors are n-type; few materials exhibit p-type conductivity. Due to their high defect densities, known p-type oxide semiconductors are generally not suitable for TFTs. However, the ability to form both p-type and n-type oxide semiconductor TFTs would allow fabrication of CMOS TFT circuits.

Many p-type semiconducting oxides are of interest as transparent conductive oxides (TCOs). However, a p-type oxide semiconductor that can be used as a TCO does not necessarily have good TFT performance. For optical properties, the direct band gap of a metal oxide semiconductor is important, while for electronic properties, the indirect band gap is important. Further, although various metal oxide materials can be used as transparent conductive oxides, they generally do not have sufficiently high quality for TFTs. This is due to the presence of defects in the band gap. Although such defects do not affect a TCO exhibiting metal-like conductivity in the conduction band, they greatly affect TFT performance.

Some embodiments described herein relate to Sn-based p-type oxide semiconductor materials, TFTs including channels having a Sn-based p-type oxide semiconductor, and fabrication methods. The Sn-based oxide semiconductors disclosed herein comprise p-type ternary and higher order oxides comprising a Sn(II) cation and one or more additional metals.

Tin(II) oxide (also known as stannous oxide; SnO) is a promising p-type metal oxide semiconductor due to its relatively high carrier mobility. Tin(IV) oxide (also known as stannic oxide or tin dioxide; SnO2) is, by comparison, an n-type material. (It should be noted that some references tend, in naming metal oxides, to omit the ratio of ions or atoms in the composition. For example, an indium gallium zinc oxide (IGZO) film is generally called InGaZnO, even though the ratio of ions may not be 1:1:1:1.
Similarly, tin(IV) oxide (SnO2) may be referred to simply as SnO in this loose naming convention. As used herein, however, SnO refers to tin(II) oxide, while SnO2 refers to tin(IV) oxide.)

Although the description below refers mainly to ternary oxides, quaternary and higher order oxides are also provided. The ternary oxides described herein may be referred to as Sn-M-O, where Sn refers to tin(II) and M is a different (i.e., non-tin) metal. In some embodiments, M is selected from the d block or the p block of the periodic table. As used herein, the term metal includes metalloids such as silicon (Si).

In some embodiments, M is selected from Groups 3 and 4 of the periodic table, or is one of tungsten (W), boron (B), niobium (Nb), aluminum (Al), gallium (Ga), or silicon (Si). The Group 3 metals include scandium (Sc) and yttrium (Y). The Group 4 metals include titanium (Ti), zirconium (Zr), and hafnium (Hf).

The quaternary oxides described herein may be referred to as Sn-M1-M2-O, where Sn refers to tin(II) and M1 and M2 are different metals (i.e., M1 is a non-tin metal that is not M2). In some embodiments, one or both of M1 and M2 are selected from Groups 3 and 4 of the periodic table, or from among W, B, Nb, Al, Ga, and Si. The Group 3 metals include scandium (Sc) and yttrium (Y). The Group 4 metals include titanium (Ti), zirconium (Zr), and hafnium (Hf). Similarly, a quinary (five-component) Sn-based p-type oxide semiconductor may comprise three or more metals, one of which is Sn(II).

Examples of Sn-M-O p-type semiconductors include Sn-W-O, Sn-Ti-O, Sn-B-O, Sn-Nb-O, Sn-Al-O, Sn-Ga-O, Sn-Sc-O, Sn-Y-O, Sn-Zr-O, and Sn-Hf-O. While the examples presented above are characterized as ternary, quaternary, or higher order compounds, p-type semiconducting oxides can also be characterized as combinations of binary oxides. For example, a Sn-M-O oxide can also be characterized as a combination of a Sn-O binary oxide and an M-O binary oxide. Likewise, a Sn-based p-type semiconductor can be a combination of SnO and two or more different metal oxides. According to various embodiments, the Sn-based p-type oxide semiconductor may or may not be stoichiometric.

The metals in the Sn-based p-type metal oxide semiconductor can have various oxidation states, or a combination of oxidation states. The oxidation states may depend on the state of the material, with amorphous materials having a wide range of permissible oxidation states that exhibit p-type conductivity. The Sn-based p-type oxide semiconductors described herein can be ionic or have mixed ionic and covalent character, depending in part on their constituent elements.

The relative ratio of metals in the Sn-based p-type oxide semiconductors disclosed herein can vary, with Sn(II) being at least about 10% of the total molar amount of metal in the p-type oxide semiconductor. For example, Sn-M-O can be characterized as Sn(x)M(1-x)Oz, where x is at least 0.3 and z is a non-zero number that depends on the particular metal employed. Similarly, Sn-M1-M2-O can be characterized as Sn(x)M1(y1)M2(y2)Oz, where x + y1 + y2 = 1, x is at least 0.3, and z is a non-zero number that depends on the particular metals employed.

The compositional ranges disclosed herein are for ternary or higher order compounds, in which the amount of a secondary component (e.g., M, M1, M2, or Sn(II)) is greater than the amount of a dopant. The Sn-based p-type oxides disclosed herein may, however, be doped or undoped. Examples of dopants include hydrogen and metals.
A dopant is present at a much lower level than the metal cation components of the ternary or higher order metal oxide compound. For example, in a p-type metal oxide film having the chemical formula AxByO, B can be regarded as a dopant if y is less than 0.05. A dopant can also be characterized as constituting less than 1% (atomic) of the film.

Tin(II) oxide is a promising p-type metal oxide semiconductor due to its relatively high hole mobility, exceeding 1 cm2/V·s. Mobility characterizes how carriers (holes or electrons) move through a semiconductor in the presence of an electric field and is defined as μ = vd/E, where vd is the drift velocity of the carriers and E is the electric field. Mobility can be determined from Hall effect measurements (and reported as Hall mobility) or extracted from TFT performance measurements (and reported as field-effect mobility). For example, carrier mobility can be extracted from experimental measurements of drain current (Id) and gate bias (Vg). Field-effect mobility can be determined from saturation-mode or linear-regime measurements. (A sketch of such an extraction is given below.)

Figure 5 shows the partial density of states (DOS) of SnO, SnO2, and the mixed valence (Sn(II) and Sn(IV)) oxide Sn3O4. The partial DOS provides a qualitative picture of the electronic structure of the material and an understanding of the experimentally observed high mobility of SnO. The valence band of SnO is formed by hybridized O 2p and Sn 5s orbitals. As shown in FIG. 5, at the valence band maximum (VBM; at 0 eV), there is substantial overlap of the spherical Sn 5s orbital with the O 2p orbital. It is believed that the experimentally observed high hole mobility of SnO arises because the spherical Sn 5s orbital provides a main carrier path with significant overlap with the O 2p orbital. In contrast, at the VBM of SnO2, the O 2p orbitals dominate, with no Sn 5s contribution or overlap. Accordingly, SnO2 is an n-type conductor. The mixed valence oxide Sn3O4 also shows overlap of the Sn 5s orbital with the O 2p orbital; however, fabrication of the mixed valence Sn3O4 can be difficult.

In addition to mobility, semiconductor materials can be characterized by band gap and crystallization temperature. SnO has, for example, an indirect band gap of 0.8 eV and a crystallization temperature of about 300 °C.

In some embodiments, the ternary and higher order Sn-based p-type oxide compounds provided herein can be characterized by a contribution from a spherical orbital to the VBM that results in high hole mobility. In some embodiments, the Sn-based p-type oxide has a larger band gap than SnO. In some embodiments, the Sn-based p-type oxide has a higher crystallization temperature than SnO.

A Sn-based p-type oxide semiconductor can be implemented in a p-type TFT. In addition to carrier mobility, a TFT can be characterized by its threshold voltage (Vth), which is the minimum gate-to-source voltage difference that produces a conductive path between the source and the drain; by its on/off current ratio; and by its subthreshold slope, which is a measure of the switching behavior of the TFT. Further, a TFT may be characterized by its off current, which refers to the leakage current when the gate voltage is below the threshold voltage. Leakage current can degrade performance; for example, leakage current in the TFTs of a display device can produce changes in pixel brightness, increased noise, and a reduction in gray-level resolution.
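As an illustration of the field-effect mobility extraction flagged above, the following sketch fits the saturation-regime square-law model Id = (1/2)·μ·Cox·(W/L)·(Vg - Vth)^2 via a linear fit of sqrt(Id) versus |Vg|. The channel dimensions, gate capacitance (roughly that of 100 nm of SiO2), and the Id-Vg data points are all invented for illustration; a real extraction would use measured values:

    # Extract saturation-regime field-effect mobility and threshold voltage
    # from Id-Vg data via the square-law model (all numbers hypothetical).
    import math

    W, L = 100e-6, 10e-6        # channel width and length, m (assumed)
    C_ox = 3.45e-4              # gate capacitance per area, F/m^2 (~100 nm SiO2)

    vg = [-2.0, -3.0, -4.0, -5.0]        # gate-source voltages (p-channel), V
    id_ = [1e-7, 4e-7, 9e-7, 1.6e-6]     # drain currents in saturation, A

    # Least-squares line through (|Vg|, sqrt(Id)): the slope gives the mobility.
    xs = [abs(v) for v in vg]
    ys = [math.sqrt(i) for i in id_]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx

    mu_sat = 2 * L * slope**2 / (W * C_ox)  # from sqrt(Id) = slope*(|Vg| - |Vth|)
    vth_abs = -intercept / slope            # x-intercept of the fit, |Vth|
    print(f"mu_sat = {mu_sat * 1e4:.2f} cm^2/V.s, Vth = {-vth_abs:.2f} V")

On the idealized data above, this yields a saturation mobility of about 0.6 cm2/V·s and a threshold voltage of -1.0 V, values of the order reported for p-type oxide channels.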
According to various embodiments, TFT characteristics including high mobility and low off current can be provided.

Figure 6 provides band structure plots determined from density functional theory (DFT) calculations for various Sn(II)-based ternary oxides. The band structure plots show the change in energy E with the wave vector k. While DFT band structure calculations may be associated with systematic errors that can result in inaccurate quantitative band gap determination, the band structure plots in Figure 6 provide a qualitative assessment of the Sn-based p-type oxide semiconductors described herein. The following characteristics can be evaluated: the presence of a gap between the valence band (VB) and the conduction band (CB), and the shape of the band structure. The presence of a gap between the VB and CB indicates that the ternary Sn(II)-based oxide is a p-type material. The shape of the plot at the valence band maximum (VBM) provides information about the mobility. This is because the effective mass of the hole is inversely proportional to the curvature of the VBM, where a large curvature indicates a smaller effective mass and a higher mobility than a small curvature.

Of the plots for SnWO4, SnPb2O4, Sn2TiO4, SnB4O7, Sn6SiO8, SnTaO3, SnMo4O6, SnNb2O6, and Sn2Nb2O7, those for SnTaO3 and SnMo4O6 indicate that these two ternary oxides do not exhibit p-type conductivity. For SnMo4O6, this is evident from the lack of any band gap, indicating that the material is a conductor rather than a semiconductor. For SnTaO3, the band structure plot indicates that the ternary oxide is an n-type semiconductor, with the Fermi level lying above the conduction band minimum (CBM).

The remaining plots indicate that W, Ti, Nb, B, Pb, and Si may all be components of the Sn-based p-type oxide semiconductors disclosed herein. Turning first to SnWO4, a band gap with a reasonably large band dispersion is observed. The same is true for Sn2TiO4 and SnB4O7. This indicates that Sn-M-O oxides in which M is W, Ti, or B are good p-type metal oxide semiconductors. As discussed further below, although the band gap of SnB4O7 is relatively wide, it can be reduced by modulating the B content of the oxide.

Sn6SiO8 also has a band gap with a reasonably large band dispersion. The plots for SnNb2O6 and Sn2Nb2O7 indicate p-type conductivity, but the relatively small curvature at the VBM may indicate lower mobility. Similarly, the SnPb2O4 plot shows p-type conductivity with relatively small band dispersion. These results indicate that Sn-M-O oxides in which M is Nb, Pb, or Si may also be useful p-type semiconductors.

As discussed above with respect to Figure 5, the Sn 5s orbitals contribute to the VBM of SnO. According to various embodiments, the Sn-based p-type oxide includes a Sn 5s orbital contribution to the VBM of the oxide. Figure 7 shows the partial DOS of a Sn-B-O oxide. As can be seen from Figure 7, the Sn 5s orbital contributes to the VBM. This is atypical for oxide semiconductors, which generally do not have such an s-orbital contribution, and it indicates that the Sn-B-O oxide is a p-type oxide semiconductor.

The relative ratios of the components of the ternary oxide compounds may vary and are not limited to the examples in Figure 6. In some embodiments, the relative amount of M in a Sn-M-O compound can be adjusted to change the band gap. For example, referring to the SnB4O7 band structure plot in Figure 6, the band gap is relatively wide. Although a wide bandgap semiconductor can reduce the off current, the mobility suffers.
The relative amount of B in the Sn-B-O p-type oxide can therefore be reduced to narrow the band gap. Similarly, the relative amount of M in any Sn-M-O oxide can be modulated to widen or narrow the band gap.

For a Sn(x)M(1-x)Oz oxide, x may be at least 0.2 in some embodiments. In some embodiments, x can vary from about 0.2 to 0.95. In some embodiments, x can be at least 0.3. Further, in some embodiments, x can vary from 0.3 to 0.9. In some embodiments where M is B, x can vary from about 0.7 to 0.9. In some embodiments, the Sn:M (atomic) ratio can be between about 0.1 and 9.5, or between about 0.2 and 5, or between about 1 and 5, or between about 2 and 5.

For a Sn(x)M1(y1)M2(y2)Oz oxide, where x + y1 + y2 = 1, x may be at least 0.2 in some embodiments. In some embodiments, x can vary from about 0.2 to 0.95. In some embodiments, x can be at least 0.3. Further, in some embodiments, x can vary from 0.3 to 0.9. In some embodiments, the Sn:(M1+M2) (atomic) ratio can be between about 0.1 and 9.5, or between about 0.2 and 5, or between about 1 and 5, or between about 2 and 5.

As described above, in some embodiments a Sn-based p-type oxide semiconductor can be implemented in a TFT having a relatively low off current. Figure 8 shows the drain-source current (IDS) as a function of gate-source voltage (VGS) for p-channel TFTs having ternary Sn-based p-type oxide channels of a-Sn0.8B0.2O and a-Sn0.9B0.1O, compared with a p-channel TFT having a binary oxide nc-SnO:H channel. The films were deposited by pulsed laser deposition (PLD) at room temperature using ceramic targets of pure SnO and B-doped SnO. The B-doped SnO target was prepared by a standard solid state reaction method using SnO and B2O3. After deposition, the films were thermally annealed at 250 °C for 30 minutes in a hydrogen-containing atmosphere. As can be seen in Figure 8, the ternary Sn-B-O oxide TFTs have a lower off current than the SnO TFT. Figure 8 also shows that the off current can be modulated by increasing or decreasing the B content.

According to various embodiments, the Sn-based p-type oxide semiconductors disclosed herein may be amorphous or crystalline, including single crystal and polycrystalline materials. In some embodiments, the polycrystalline material can exhibit nanocrystalline properties. In some embodiments, the Sn-based p-type oxide semiconductors disclosed herein have a crystallization temperature higher than that of SnO, which is about 300 °C. This may be useful for the fabrication of, for example, p-channel TFTs having amorphous oxide channels. As discussed further below, in some embodiments the Sn-based p-type oxide semiconductor is annealed during TFT fabrication to, for example, reduce defects. A p-type oxide semiconductor material having a higher crystallization temperature allows a higher annealing temperature to be used without crystallization.

FIG. 9 is a flow chart illustrating an example of a method of fabricating a Sn-based p-type oxide semiconductor layer, in accordance with some embodiments. Process 900 can be performed in a different order and/or with different, fewer, or additional operations. In some implementations, process 900 can be described with reference to one or more processing chambers and controllers, where a controller can be programmed to control any of the operations described herein.

At block 910 of process 900, a substrate is provided. The substrate can comprise any substrate material, including a substantially transparent material such as glass or plastic.
Substantial transparency as used herein may be defined as a transmittance of visible light of about 70% or more, such as about 80% or more, or about 90% or more. The glass substrate (sometimes referred to as a glass sheet or panel) can be or can comprise borosilicate glass, soda lime glass, photovoltaic glass, quartz, or another suitable glass material. Non-glass substrates, such as polycarbonate, acrylic, polyimide, polyethylene terephthalate (PET), or polyetheretherketone (PEEK) substrates, can also be used. Other suitable substrate materials can include flexible substrate materials. In some embodiments, the substrate can have dimensions ranging from a few microns to hundreds of microns.

At block 920 of process 900, a Sn-based p-type oxide semiconductor layer is formed over the substrate. Examples of p-type metal oxide semiconductors are given above and include Sn-W-O, Sn-Ti-O, Sn-B-O, Sn-Nb-O, Sn-Al-O, Sn-Ga-O, Sn-Sc-O, Sn-Y-O, Sn-Zr-O, and Sn-Hf-O, where Sn refers to Sn(II). The Sn-based p-type oxide semiconductor layer may include a channel region aligned with or overlapping the gate electrode, wherein the channel region is located between a source region and a drain region of the oxide semiconductor layer. In some embodiments, the thickness of the Sn-based p-type oxide semiconductor layer can be between about 10 nm and about 100 nm. Block 920 may involve depositing the Sn-based p-type oxide layer by any method suitable for the material being deposited, including physical vapor deposition (PVD), chemical vapor deposition (CVD), and atomic layer deposition (ALD) processes. PVD processes include thermal evaporation, sputter deposition, and pulsed laser deposition (PLD) processes. For example, Sn-M-O can be deposited by co-sputtering a SnO target and an M-O target, or by sputtering a Sn-M-O target. ALD processes for forming a ternary Sn-based p-type oxide semiconductor are discussed further below with reference to FIGS. 10 and 11.

At block 930 of process 900, the Sn-based p-type oxide semiconductor layer is optionally thermally annealed. The Sn-based p-type oxide semiconductor layer may be annealed in any suitable atmosphere, such as an oxygen- or hydrogen-containing atmosphere. For example, the p-type oxide semiconductor layer can be exposed to an H2-containing process gas at a temperature between about 250 °C and 400 °C.

In some embodiments, the process continues with forming one or more dielectric or metal layers on the Sn-based p-type oxide semiconductor layer. For example, in some embodiments, a dielectric layer, such as an oxide or nitride, is formed over the Sn-based p-type oxide semiconductor layer such that the dielectric layer contacts the Sn-based p-type oxide semiconductor layer. The dielectric layer can be, for example, one of a passivation layer, a gate dielectric layer, and an etch stop layer. The dielectric layer may comprise any suitable material, including oxides and nitrides such as SiO2 or Al2O3. In some embodiments, the dielectric layer can have a thickness between about 10 nm and about 1000 nm, such as between about 300 nm and about 500 nm. The Sn-based p-type oxide semiconductor layer and the dielectric layer may form part of a TFT.

In some embodiments, process 900 further includes forming a source electrode on a source region of the Sn-based p-type oxide semiconductor layer, and forming a drain electrode on a drain region of the Sn-based p-type oxide semiconductor layer.
Forming the source and drain electrodes may involve etching. Accordingly, process 900 can further include etching the source and drain electrodes to expose a channel region of the Sn-based p-type oxide semiconductor layer. In some embodiments, forming a dielectric layer occurs before forming the source and drain electrodes. This may include instances where the dielectric layer is an etch stop layer or a gate dielectric. In some implementations, forming a dielectric layer can occur after forming the source and drain electrodes. This may include instances in which a dielectric layer formed over the source and drain electrodes serves as a passivation layer protecting the TFT.

In some implementations, process 900 further includes forming a gate electrode over the substrate. In some implementations, for a bottom gate TFT, a gate electrode can be formed on the substrate and a gate dielectric can be formed on the gate electrode. In some implementations, for a top gate TFT, the dielectric layer can act as the gate dielectric and the gate electrode can be formed over the gate dielectric.

In some implementations, block 920 of process 900 involves ALD of the Sn-based p-type oxide semiconductor layer. An ALD process can use surface-mediated deposition reactions to deposit a film on a layer-by-layer basis. A first reactant can be directed over the substrate, with at least some of the first reactant chemisorbing or physisorbing onto the surface of the substrate to form a layer. The layer may be, but is not necessarily, a monolayer or a sub-monolayer of adsorbed reactant molecules. The deposition can be self-limiting, such that once a saturated layer is deposited, the reactant does not continue to adsorb on the surface. In some embodiments, the ALD process can be performed in a subsaturation regime. In such processes, one or more of the reactants may be dose-limited such that a subsaturated adsorbed layer is formed on the surface of the substrate.

For the deposition of Sn-based p-type oxides, usable reactants include tin(II)-based organic precursors, such as bis[bis(trimethylsilyl)amino]tin(II) and tin(II) acetylacetonate (i.e., tin(II) 2,4-pentanedionate). Also usable are N-heterocyclic stannylene compounds, for example bis(N,N'-diisopropylacetamidinato)tin(II) and rac-1,3-di-tert-butyl-4,5-dimethyl-1,3-diaza-2-stannacyclopentan-2-ylidene, the synthesis of which is described by Sang Bok Kim et al. in Chem. Mater. 2014, 26, 3065-3073.

Examples of titanium precursors that can be used in the ALD deposition of Sn-Ti-O oxides include organotitanium compounds such as tetrakis(dimethylamido)titanium(IV), bis(tert-butylcyclopentadienyl)titanium(IV) dichloride, titanium(IV) diisopropoxide bis(2,2,6,6-tetramethyl-3,5-heptanedionate), titanium tetraethoxide, titanium tetramethoxide, and titanium tetraisopropoxide. Examples of tungsten precursors that can be used in the ALD deposition of Sn-W-O oxides include tungsten hexafluoride, tungsten hexachloride, and tungsten hexacarbonyl. In some embodiments, an organotungsten compound may be used, such as a tungsten bis(alkylimido)bis(alkylamido) compound (e.g., bis(tert-butylimido)bis(dimethylamido)tungsten(VI)) or hexakis(dimethylamido)tungsten(VI). Examples of boron precursors that can be used in the ALD deposition of Sn-B-O oxides include boron tribromide and boranes, including borane (BH3), diborane (B2H6), and triborane (B3H7).
Examples of niobium precursors that can be used in the ALD deposition of Sn-Nb-O oxides include organoniobium compounds such as niobium(V) ethoxide and tris(dimethylamido)(tert-butylimido)niobium(V).

Other metals that may be incorporated into an ALD-deposited Sn-based film include Hf, Si, Al, Ga, Sc, Y, and Zr. Any precursor suitable for ALD deposition of these metals can be used in such embodiments.

Examples of oxidizing agents that may be employed include oxygen (O2), ozone (O3), water (H2O), hydrogen peroxide (H2O2), and combinations thereof. In some embodiments, a hydrogen-containing oxidizing agent such as water or hydrogen peroxide is employed. Such an oxidizing agent can, in some embodiments, be a source of hydrogen for the deposited layer.

In some embodiments, depositing the Sn-based p-type oxide semiconductor layer involves controlling the temperature such that a p-type oxide semiconductor, rather than an n-type metal oxide semiconductor or an oxide insulator, is formed during deposition. In some embodiments, a relatively weak oxidant (either alone or as a diluent for a stronger oxidant) can be employed to promote the formation of a p-type semiconductor. Examples of weak oxidizing agents include water, carbon dioxide (CO2), carbon monoxide (CO), methanol (CH3OH), ethanol (C2H5OH), isopropanol (C3H7OH), and combinations thereof.

In some embodiments, plasma energy can be applied during one or more of the reactant pulses. For example, a cycle can include the following sequence: metal reactant (plasma off)/purge/oxidant (plasma on)/purge. In some embodiments, a plasma may be applied during a hydrogen-peroxide-containing pulse to promote hydrogen incorporation into the film. In some embodiments, a relatively low temperature (e.g., less than about 300 °C, less than about 250 °C, or less than about 200 °C) is used during ALD deposition of the p-type metal oxide film, with plasma energy applied during the oxidant pulse.

Various reactant pulse sequences can be employed to deposit a ternary or higher order Sn-based p-type oxide semiconductor layer by ALD. FIGS. 10 and 11 are flow diagrams illustrating examples of ALD methods of fabricating a ternary Sn-based p-type oxide semiconductor layer, in accordance with some embodiments. It will be appreciated that the methods of FIGS. 10 and 11 can be extended to deposit quaternary or higher order Sn-based p-type oxide semiconductor layers.

Turning first to FIG. 10, at block 1010 of process 1000, a substrate is provided. The substrate can comprise any of the substrate materials discussed above with respect to block 910 of FIG. 9. At block 1020 of process 1000, the substrate is exposed to a pulse of a Sn(II)-containing reactant to form an adsorbed layer of the reactant on the substrate.

At block 1030 of process 1000, the substrate, including the adsorbed layer of the Sn(II)-containing reactant, is exposed to a pulse of a reactant containing a second metal to form an adsorbed layer of the second-metal-containing reactant. Examples of reactants containing a second metal include the W, Ti, Nb, and B reactants, as well as the other examples, given above.

At block 1040 of process 1000, the substrate is exposed to an oxidant pulse to react with the adsorbed layers of the reactants and form a ternary Sn-based p-type oxide semiconductor layer.
At block 1050 of process 1000, blocks 1020 through 1040 are repeated until the desired thickness of the ternary Sn-based p-type oxide semiconductor layer is achieved.

In some embodiments, blocks 1020 and 1030 can be performed such that the Sn(II)-containing reactant and the second-metal-containing reactant are introduced simultaneously, thereby forming an adsorbed layer comprising both reactants.

In FIG. 11, at block 1110 of process 1100, a substrate is provided as described above with respect to FIGS. 9 and 10. At block 1120 of process 1100, the substrate is exposed to a pulse of a Sn(II)-containing reactant to form an adsorbed layer of the Sn(II)-containing reactant on the substrate. Examples of Sn(II)-containing reactants are given above. At block 1130 of process 1100, the substrate is exposed to an oxidant pulse to react with the adsorbed layer of the Sn(II)-containing reactant. At block 1140 of process 1100, blocks 1120 and 1130 are optionally repeated one or more times. At block 1150 of process 1100, the substrate is exposed to a pulse of a reactant containing a second metal to form an adsorbed layer of the second-metal-containing reactant. Examples of reactants containing a second metal include the W, Ti, Nb, and B reactants, as well as the other examples, given above. At block 1160 of process 1100, the substrate is exposed to an oxidant pulse to react with the adsorbed layer of the second-metal-containing reactant, forming a ternary Sn-based p-type oxide semiconductor layer. At block 1170 of process 1100, blocks 1150 and 1160 are optionally repeated one or more times. At block 1180 of process 1100, blocks 1120 through 1170 are repeated until the desired thickness of the ternary Sn-based p-type oxide semiconductor layer is achieved.

According to various embodiments, the relative ratio of the Sn(II)-containing reactant to the second-metal-containing reactant can be controlled by varying the flow rates, the number of doses, or the concentrations of the reactant pulses, as well as the number of cycles that include each of these pulses.

In some embodiments, a Sn-based p-type oxide semiconductor layer as disclosed above may form part of a CMOS TFT device including a p-type TFT and an n-type TFT. FIG. 12 is an example illustrating a cross-sectional view of a CMOS TFT device in accordance with some embodiments. In FIG. 12, CMOS TFT device 1200 includes a p-type top gate TFT 1202a and an n-type top gate TFT 1202b on a substrate 1210. Examples of the substrate are described above. In the example of FIG. 12, the p-type top gate TFT 1202a and the n-type top gate TFT 1202b are formed on a dielectric layer 1211; however, in some embodiments, they may be formed directly on the substrate 1210, as in the example of FIG. 4B.

The p-type top gate TFT 1202a includes a ternary or higher order Sn-based p-type oxide semiconductor layer including a channel region 1240a and source and drain regions 1242a. Source and drain electrodes 1270a contact the source and drain regions 1242a of the Sn-based p-type oxide semiconductor layer, and a gate electrode 1220a overlies a gate dielectric 1230a. The Sn-based p-type oxide semiconductor layer of the p-type TFT 1202a may comprise any of the Sn-based p-type oxides discussed above.

The n-type top gate TFT 1202b includes an n-type metal oxide semiconductor layer including a channel region 1240b and source and drain regions 1242b.
Source and drain electrodes 1270b contact the source and drain regions 1242b of the n-type metal oxide layer, and a gate electrode 1220b overlies a gate dielectric 1230b. Source and drain electrodes 1270a and 1270b may be formed in a dielectric layer 1280 that separates the p-type top gate TFT 1202a from the n-type top gate TFT 1202b.

In some embodiments, the n-type metal oxide semiconductor is amorphous and may include indium (In)-, zinc (Zn)-, tin (Sn)-, hafnium (Hf)-, and gallium (Ga)-containing oxide semiconductors. Examples of n-type amorphous oxide semiconductors include InGaZnO, InZnO, InHfZnO, InSnZnO, SnZnO, InSnO, GaZnO, and ZnO.

In some implementations, the CMOS TFT device includes bottom gate TFTs as discussed above with reference to FIG. 4A. The CMOS TFT device shown in the example of FIG. 12 can be used, for example, as part of the driving circuitry of a display device.

In some embodiments, the ternary Sn-based p-type oxide semiconductor layer can be implemented in a flexible all-oxide CMOS TFT device formed on a flexible substrate. FIG. 13A is a schematic diagram illustrating an all-oxide CMOS inverter on a flexible substrate, in accordance with some embodiments. The all-oxide CMOS inverter 1300 includes an a-IGZO n-channel TFT and a Sn-M-O p-channel TFT on a flexible PET substrate. All-oxide CMOS inverters can have high cutoff frequencies due to the relatively high mobility of the oxide channels.

Figures 13B and 13C show experimental data for an all-oxide CMOS inverter circuit 1350 comprising a p-channel SnO:H TFT and an n-channel a-IGZO TFT on a SiO2/n+ Si substrate. Figure 13B shows current-voltage curves 1352 and 1354 for the p-channel SnO:H TFT and the n-channel a-IGZO TFT, respectively. The saturation mobility μs is about 9 cm2/V·s for the n-channel and about 2 cm2/V·s for the p-channel. Figure 13C shows the voltage transfer characteristic (VTC) of the all-oxide CMOS inverter, which exhibits clear full-swing inverter action. The voltage gain, defined as dVOUT/dVIN, has a maximum greater than 13 at Vdd = 7 V. It should be noted that although the data in Figures 13B and 13C reflect a hydrogenated SnO p-channel, similar mobility and transfer characteristics are expected for the ternary Sn-based p-channel TFTs described above.

FIGS. 14A and 14B are system block diagrams illustrating a display device 40 that includes a plurality of IMOD display elements and TFTs as described herein. Display device 40 can be, for example, a smart phone, a cellular telephone, or a mobile telephone. However, the same components of display device 40, or slight variations thereof, are also illustrative of various types of display devices, such as televisions, computers, tablet computers, electronic readers, handheld devices, and portable media devices.

The display device 40 includes a housing 41, a display 30, an antenna 43, a speaker 45, an input device 48, and a microphone 46. The housing 41 can be formed by any of a variety of manufacturing processes, including injection molding and vacuum forming. In addition, the housing 41 can be made from any of a variety of materials, including, but not limited to, plastic, metal, glass, rubber, and ceramic, or a combination thereof.
The housing 41 can include a removable portion (not shown) that can be interchanged with other removable portions having different colors, or containing different indicia, pictures, or symbols.

Display 30 can be any of a variety of displays, including a bistable or analog display, as described herein. Display 30 also can be configured to include a flat-panel display, such as a plasma, EL, OLED, STN LCD, or TFT LCD display, or a non-flat-panel display, such as a CRT or other tube device. In addition, display 30 can include an IMOD-based display, as described herein.

The components of display device 40 are schematically illustrated in FIG. 14A. Display device 40 includes a housing 41 and can include additional components at least partially enclosed therein. For example, display device 40 includes a network interface 27 that includes an antenna 43, which can be coupled to a transceiver 47. Network interface 27 may be a source for image data that can be displayed on display device 40. Accordingly, network interface 27 is one example of an image source module, but processor 21 and input device 48 also can serve as an image source module. Transceiver 47 is connected to processor 21, which is connected to conditioning hardware 52. Conditioning hardware 52 can be configured to condition a signal (such as to filter or otherwise manipulate a signal). Conditioning hardware 52 can be connected to speaker 45 and microphone 46. Processor 21 also can be connected to input device 48 and driver controller 29. Driver controller 29 can be coupled to frame buffer 28 and to array driver 22, which in turn can be coupled to display array 30. One or more elements in display device 40 (including elements not specifically depicted in FIG. 14A) can be configured to function as a memory device and to communicate with processor 21. In some implementations, power supply 50 can provide power to substantially all components in the particular display device 40 design.

Network interface 27 includes antenna 43 and transceiver 47 so that display device 40 can communicate with one or more devices over a network. Network interface 27 also may have some processing capabilities to relieve, for example, the data processing requirements of processor 21. Antenna 43 can transmit and receive signals. In some implementations, antenna 43 transmits and receives RF signals according to the IEEE 16.11 standard (including IEEE 16.11(a), (b), or (g)) or the IEEE 802.11 standard (including IEEE 802.11a, b, g, n, and further implementations thereof). In some other implementations, antenna 43 transmits and receives RF signals according to the Bluetooth standard. In the case of a cellular telephone, antenna 43 can be designed to receive Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Global System for Mobile communications (GSM), GSM/General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Terrestrial Trunked Radio (TETRA), Wideband-CDMA (W-CDMA), Evolution Data Optimized (EV-DO), 1xEV-DO, EV-DO Rev. A, EV-DO Rev. B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS, or other known signals that are used to communicate within a wireless network, such as a system utilizing 3G, 4G, or 5G technology.
Transceiver 47 can pre-process the signals received from antenna 43 so that they can be received by, and further manipulated by, processor 21. Transceiver 47 also can process signals received from processor 21 so that they can be transmitted from display device 40 via antenna 43.

In some implementations, transceiver 47 can be replaced by a receiver. In addition, in some implementations, network interface 27 can be replaced by an image source, which can store or generate image data to be sent to processor 21. Processor 21 can control the overall operation of display device 40. Processor 21 receives data, such as compressed image data, from network interface 27 or an image source, and processes the data into raw image data or into a format that can be readily processed into raw image data. Processor 21 can send the processed data to driver controller 29 or to frame buffer 28 for storage. Raw data generally refers to information that identifies the image characteristics at each location within an image. For example, such image characteristics can include color, saturation, and gray-scale level.

Processor 21 can include a microcontroller, CPU, or logic unit to control operation of display device 40. Conditioning hardware 52 may include amplifiers and filters for transmitting signals to speaker 45, and for receiving signals from microphone 46. Conditioning hardware 52 can be discrete components within display device 40, or can be incorporated within processor 21 or other components.

Driver controller 29 can take the raw image data generated by processor 21 either directly from processor 21 or from frame buffer 28, and can re-format the raw image data appropriately for high-speed transmission to array driver 22. In some implementations, driver controller 29 can re-format the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across display array 30. Driver controller 29 then sends the formatted information to array driver 22. Although a driver controller 29, such as an LCD controller, is often associated with the system processor 21 as a stand-alone integrated circuit (IC), such controllers may be implemented in many ways. For example, controllers may be embedded in processor 21 as hardware, embedded in processor 21 as software, or fully integrated in hardware with array driver 22.

Array driver 22 can receive the formatted information from driver controller 29 and can re-format the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands (or more), of leads coming from the display's x-y matrix of display elements.

In some implementations, driver controller 29, array driver 22, and display array 30 are appropriate for any of the types of displays described herein. For example, driver controller 29 can be a conventional display controller or a bistable display controller (such as an IMOD display element controller). Additionally, array driver 22 can be a conventional driver or a bistable display driver (such as an IMOD display device driver). Moreover, display array 30 can be a conventional display array or a bistable display array (such as a display including an array of IMOD display elements). In some implementations, driver controller 29 can be integrated with array driver 22.
Such an implementation can be useful in highly integrated systems, for example, mobile phones, portable electronic devices, watches, or other small-area displays.

In some implementations, input device 48 can be configured to allow, for example, a user to control the operation of display device 40. Input device 48 can include a keypad, such as a QWERTY keyboard or a telephone keypad, a button, a switch, a rocker, a touch-sensitive screen, a touch-sensitive screen integrated with display array 30, or a pressure-sensitive or heat-sensitive membrane. Microphone 46 can be configured as an input device for display device 40. In some implementations, voice commands through microphone 46 can be used for controlling operations of display device 40.

Power supply 50 can include a variety of energy storage devices. For example, power supply 50 can be a rechargeable battery, such as a nickel-cadmium battery or a lithium-ion battery. In implementations using a rechargeable battery, the rechargeable battery can be chargeable using power coming from, for example, a wall socket or a photovoltaic device or array. Alternatively, the rechargeable battery can be wirelessly chargeable. Power supply 50 also can be a renewable energy source, a capacitor, or a solar cell, including a plastic solar cell or solar-cell paint. Power supply 50 also can be configured to receive power from a wall outlet.

In some implementations, control programmability resides in driver controller 29, which can be located in several places in the electronic display system. In some other implementations, control programmability resides in array driver 22. The above-described optimizations may be implemented in any number of hardware and/or software components and in various configurations.

As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.

The various illustrative logics, logical blocks, modules, circuits, and algorithm steps described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally in terms of functionality, and is illustrated in the various illustrative components, blocks, modules, circuits, and steps described above. Whether such functionality is implemented in hardware or software depends upon the particular application and the design constraints imposed on the overall system.

The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general-purpose single- or multi-chip processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
In some implementations, particular steps and methods may be performed by circuitry that is specific to a given function.

In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware (including the structures disclosed in this specification and their structural equivalents), or in any combination thereof. Implementations of the subject matter described in this specification also can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, a data processing apparatus.

If implemented in software, the functions may be stored on, or transmitted over as one or more instructions or code on, a computer-readable medium. The steps of a method or algorithm disclosed herein may be implemented in a processor-executable software module, which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that can enable transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection can be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above also may be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product.

Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure and the principles and novel features disclosed herein. Additionally, a person having ordinary skill in the art will readily appreciate that the terms "upper" and "lower" are sometimes used for ease of describing the figures, and indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of, for example, an IMOD display element as implemented.

Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination.
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown, or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flow diagram. However, other operations that are not depicted can be incorporated in the example processes that are schematically illustrated. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the illustrated operations. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results.
A multi-product integrated circuit die includes at least two different portions, of which at least one portion can be deliberately rendered non-operational in some manner (e.g., non-functional, inaccessible, and/or non-programmable) within the package. A selection code storage circuit stores a product selection code. A first value of the product selection code selects the option where both the first and second portions of the first die are operational. A second value of the product selection code selects the option where only the first portion of the first die is operational. The selection code storage circuit can include non-volatile memory or a fuse structure, or the product selection code can be configured as a package bonding option. The product selection code can also enable boundary scan for the operational portion of the die, and omit from the boundary scan chain any portions of the die that are deliberately rendered non-operational.
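To make the selection scheme summarized above concrete, the following minimal sketch models a two-portion die gated by a stored product selection code. The code values, portion names, and helper functions are hypothetical, chosen only to mirror the two options described; they do not correspond to any actual device or test interface:

    # Illustrative model of a product selection code gating die portions.
    # Code values and portion names are hypothetical.
    FULL_PRODUCT = 0b01    # first value: portions 1 and 2 both operational
    SMALL_PRODUCT = 0b10   # second value: only portion 1 operational

    def operational_portions(selection_code):
        if selection_code == FULL_PRODUCT:
            return ["portion 1", "portion 2"]
        if selection_code == SMALL_PRODUCT:
            return ["portion 1"]           # portion 2 rendered non-operational
        raise ValueError("unrecognized product selection code")

    def boundary_scan_chain(selection_code):
        # Portions rendered non-operational are omitted from the scan chain.
        return operational_portions(selection_code)

    print(operational_portions(FULL_PRODUCT))   # ['portion 1', 'portion 2']
    print(boundary_scan_chain(SMALL_PRODUCT))   # ['portion 1']

In hardware, the same gating would be realized by the selection code storage circuit itself (non-volatile memory, fuses, or a package bonding option) rather than by software.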
What is claimed is:

1. A programmable integrated circuit (IC), comprising:
a plurality of columns of configurable circuits;
wherein at least a first one of the columns includes a plurality of input/output (I/O) banks, each I/O bank including a plurality of I/O blocks, a plurality of the columns includes a plurality of rows of configurable logic blocks (CLBs), and a second one of the columns includes clock distribution circuitry coupled to the I/O blocks and configurable logic blocks;
wherein a first plurality of the configurable circuits includes: a first plurality of adjacent ones of the I/O banks, a first plurality of adjacent ones of the rows of CLBs, and a first portion of the second column of clock distribution circuitry that is coupled to the first plurality of adjacent ones of the I/O banks and to the first plurality of adjacent ones of the rows of CLBs;
wherein a second plurality of the configurable circuits includes: a second plurality of adjacent ones of the I/O banks, a second plurality of adjacent ones of the rows of CLBs, and a second portion of the second column of clock distribution circuitry that is coupled to the second plurality of adjacent ones of the I/O banks and to the second plurality of adjacent ones of the rows of CLBs;
a selection code storage circuit coupled to the first and second pluralities of configurable circuits and storing a product selection code, wherein:
a first value of the product selection code renders operational the first and second pluralities of configurable circuits, and
a second value of the product selection code renders operational the first plurality of configurable circuits while rendering non-operational the second plurality of configurable circuits.

2. The programmable IC of claim 1, wherein the first plurality of circuits comprises columns of the circuits, and the second plurality of circuits comprises a continuation of the columns of the circuits extending the columns included in the first plurality.

3. The programmable IC of claim 2, wherein the first plurality of circuits comprises a first rectangular area, and the second plurality of circuits comprises a second rectangular area adjacent to the first rectangular area.

4. The programmable IC of claim 1, further comprising third and fourth pluralities of configurable circuits, wherein:
a third value of the product selection code renders operational the first and third pluralities of configurable circuits while rendering non-operational the second and fourth pluralities of configurable circuits.

5. The programmable IC of claim 4, wherein a fourth value of the product selection code renders operational the first and second pluralities of configurable circuits while rendering non-operational the third and fourth pluralities of configurable circuits.

6. The programmable IC of claim 1, wherein the product selection code includes at least one "don't-care" bit.

7. The programmable IC of claim 1, wherein the selection code storage circuit comprises a set of fuses selectively fused to encode the product selection code.

8. The programmable IC of claim 1, wherein the programmable IC further comprises an integrated circuit (IC) package, and wherein the selection code storage circuit comprises at least one node selectively coupled to a constant-value pin inside the IC package to encode the product selection code.

9. The programmable IC of claim 1, wherein the selection code storage circuit comprises a set of non-volatile memory cells programmed with the product selection code.
10. The programmable IC of claim 1, wherein:
the first value of the product selection code enables a first boundary scan chain that encompasses the first and second pluralities of configurable circuits; and
the second value of the product selection code enables a second boundary scan chain that encompasses the first plurality of configurable circuits while bypassing the second plurality of configurable circuits.

11. A programmable integrated circuit (IC), comprising:
a plurality of configurable circuits;
wherein the configurable circuits are arranged in a plurality of columns;
wherein at least a first one of the columns includes a plurality of input/output (I/O) banks, each I/O bank including a plurality of I/O blocks, a plurality of the columns includes a plurality of rows of configurable logic blocks (CLBs), and a second one of the columns includes clock distribution circuitry coupled to the I/O blocks and configurable logic blocks;
a configuration interface coupled to program the configurable circuits; and
a selection code storage circuit coupled to the configuration interface, wherein:
a first value of the product selection code enables the configuration interface to program a first subset of the configurable circuits,
the first subset of the configurable circuits includes: a first plurality of adjacent ones of the I/O banks, a first plurality of adjacent ones of the rows of CLBs, and a first portion of the second column of clock distribution circuitry that is coupled to the first plurality of adjacent ones of the I/O banks and to the first plurality of adjacent ones of the rows of CLBs,
a second value of the product selection code enables the configuration interface to program a second subset of the configurable circuits, and
the second subset of configurable circuits includes: a second plurality of adjacent ones of the I/O banks, a second plurality of adjacent ones of the rows of CLBs, and a second portion of the second column of clock distribution circuitry that is coupled to the second plurality of adjacent ones of the I/O banks and to the second plurality of adjacent ones of the rows of CLBs.

12. The programmable IC of claim 11, wherein the first subset of the configurable circuits comprises an entirety of the configurable circuits in the programmable IC, and the second subset comprises a strict subset of the configurable circuits in the programmable IC.

13. The programmable IC of claim 12, wherein:
the plurality of configurable circuits are disposed in columns of similar configurable circuits; and
the second subset comprises a central portion of each of the columns.

14. The programmable IC of claim 11, wherein:
a third value of the product selection code enables the configuration interface to program a third subset of the configurable circuits.

15. The programmable IC of claim 11, wherein the selection code storage circuit comprises a set of fuses selectively fused to encode the product selection code.

16. The programmable IC of claim 11, wherein the programmable IC further comprises an IC package, and wherein the selection code storage circuit comprises at least one node selectively coupled to a constant-value pin inside the IC package to encode the product selection code.

17. The programmable IC of claim 11, wherein the selection code storage circuit comprises a set of non-volatile memory cells programmed with the product selection code.

18. The programmable IC of claim 11, wherein the product selection code includes at least one "don't-care" bit.
19. The programmable IC of claim 11, further comprising a boundary scan chain coupled to the selection code storage circuit, and wherein:
the first value of the product selection code configures the boundary scan chain to encompass the first and second subsets of the configurable circuits; and
the second value of the product selection code configures the boundary scan chain to encompass the first subset of the configurable circuits while bypassing the second subset of the configurable circuits.

20. A programmable integrated circuit (IC), comprising:
a plurality of columns of configurable circuits;
wherein at least a first one of the columns includes a plurality of input/output (I/O) banks, each I/O bank including a plurality of I/O blocks, a plurality of the columns includes a plurality of rows of configurable logic blocks (CLBs), and a second one of the columns includes clock distribution circuitry coupled to the I/O blocks and configurable logic blocks;
wherein a first plurality of the configurable circuits includes: a first plurality of adjacent ones of the I/O banks, a first plurality of adjacent ones of the rows of CLBs, and a first portion of the second column of clock distribution circuitry that is coupled to the first plurality of adjacent ones of the I/O banks and to the first plurality of adjacent ones of the rows of CLBs;
wherein a second plurality of the configurable circuits includes: a second plurality of adjacent ones of the I/O banks, a second plurality of adjacent ones of the rows of CLBs, and a second portion of the second column of clock distribution circuitry that is coupled to the second plurality of adjacent ones of the I/O banks and to the second plurality of adjacent ones of the rows of CLBs;
means for storing a product selection code for the programmable IC, the means for storing being coupled to the first and second pluralities of configurable circuits, wherein:
a first value of the product selection code enables programming of the first and second pluralities of configurable circuits, and
a second value of the product selection code enables programming of the first plurality of configurable circuits while disabling programming of the second plurality of configurable circuits.
FIELD OF THE INVENTION

The invention relates to integrated circuit devices (ICs). More particularly, the invention relates to structures and methods of providing multiple ICs of different sizes (e.g., a family of related programmable logic devices) from a single die.

BACKGROUND OF THE INVENTION

Programmable logic devices (PLDs) are a well-known type of integrated circuit that can be programmed to perform specified logic functions. One type of PLD, the field programmable gate array (FPGA), typically includes an array of programmable tiles. These programmable tiles can include, for example, input/output blocks (IOBs), configurable logic blocks (CLBs), dedicated random access memory blocks (BRAM), multipliers, digital signal processing blocks (DSPs), processors, clock managers, delay lock loops (DLLs), and so forth.

Each programmable tile typically includes both programmable interconnect and programmable logic. The programmable interconnect typically includes a large number of interconnect lines of varying lengths interconnected by programmable interconnect points (PIPs). The programmable logic implements the logic of a user design using programmable elements that can include, for example, function generators, registers, arithmetic logic, and so forth.

The programmable interconnect and programmable logic are typically programmed by loading a stream of configuration data into internal configuration memory cells that define how the programmable elements are configured. The configuration data can be read from memory (e.g., from an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.

Another type of PLD is the Complex Programmable Logic Device, or CPLD. A CPLD includes two or more "function blocks" connected together and to input/output (I/O) resources by an interconnect switch matrix. Each function block of the CPLD includes a two-level AND/OR structure similar to those used in Programmable Logic Arrays (PLAs) and Programmable Array Logic (PAL) devices. In CPLDs, configuration data is typically stored on-chip in non-volatile memory; in some CPLDs, the configuration data is then downloaded from the non-volatile memory to volatile memory as part of an initial configuration sequence.

For all of these programmable logic devices (PLDs), the functionality of the device is controlled by data bits provided to the device for that purpose. The data bits can be stored in volatile memory (e.g., static memory cells, as in FPGAs and some CPLDs), in non-volatile memory (e.g., FLASH memory, as in some CPLDs), or in any other type of memory cell.

Other PLDs are programmed by applying a processing layer, such as a metal layer, that programmably interconnects the various elements on the device. These PLDs are known as mask programmable devices. PLDs can also be implemented in other ways, e.g., using fuse or antifuse technology. The terms "PLD" and "programmable logic device" include but are not limited to these exemplary devices, as well as encompassing devices that are only partially programmable. For example, one type of PLD includes a combination of hard-coded transistor logic and a programmable switch fabric that programmably interconnects the hard-coded transistor logic.

PLD providers typically provide "families" of PLDs, i.e., groups of related PLD products of different sizes.
For example, a family of PLDs might all use the same basic tile, but include different numbers of the tiles, so the family members have different logic capacities. Therefore, a PLD user does not need to pay for a PLD with a much larger capacity than he or she actually requires to implement a particular design.

A typical method of generating a family of PLDs is to first manufacture the family member having the greatest anticipated customer demand. Once the first family member has been debugged and characterized and is deemed to meet the product specifications, other family members are manufactured, e.g., with each new family member differing in size from the previous ones. Each new member of the PLD family requires a new mask set, at a cost that can exceed one million dollars per mask set. Therefore, purchasing mask sets for a family of PLDs can be very costly.

It is desirable to provide structures and methods that can reduce the cost of manufacturing families of PLDs and/or other integrated circuits (ICs). It is also desirable to provide structures and methods that can reduce the production cost of an individual IC included in such a family.

SUMMARY OF THE INVENTION

The invention provides methods of manufacturing a family of packaged integrated circuits (ICs) having at least two different logic capacities, and multi-product dies that can be used to implement such methods. A first IC die includes at least two different portions, of which at least one portion can be deliberately rendered non-operational in some manner (e.g., non-functional, inaccessible, and/or non-programmable) within the package. A first set of the first IC dies are packaged such that both portions of the dies are operational. A second set of the first IC dies are packaged such that only the first portion of each die is operational. Once the first and second sets are packaged and the second set of ICs has been evaluated, a decision can be made whether or not to manufacture a second IC die that includes the first portion of the first die and excludes the second portion. Thus, for example, if the second set of packaged ICs proves to be popular with customers, or proves to be fully functional as desired, the second IC die can be manufactured as a cost-saving measure. If the second set of packaged ICs proves to be unpopular or contains a design defect, the cost of a mask set for the second IC die has been avoided.

In some embodiments, the ICs are programmable logic devices (PLDs). The first and second portions of the first PLD die can be configured together with a single configuration bit stream. When packaged such that the second portion is non-operational, the first PLD die can be configured with a second, smaller configuration bit stream, and this second bit stream can also be used to configure the second PLD die. Thus, by packaging the partially operational first die in the same type of package as the second die, the two products can be interchangeably supplied to customers, who can use either product in a system in a transparent manner.

In some embodiments of the first PLD die, the selection of a configuration path for the bit stream (e.g., configuring the entire PLD or only the first portion of the PLD) is controlled by a product selection code. A first value of the product selection code selects the option where both the first and second portions of the first die are operational. A second value of the product selection code selects the option where only the first portion of the first die is operational.
In one embodiment, the second IC die is also identified with the first value of the product selection code (i.e., the first product selection code signifies "configure the entire IC"). In another embodiment, the second IC die is identified with the second value of the product selection code (i.e., the second product selection code signifies "configure only the first portion of the IC"). In yet another embodiment, the second IC die is designed to accept either the first or the second value of the product selection code and, in either case, to configure the entirety of the second IC die.

The product selection code can be stored, for example, in non-volatile memory or a fuse structure, or can be configured as a package bonding option. For example, in one embodiment, each package includes jumpers (added elements such as wires external to the die) that tie each bit of the product selection code to a power high or ground pin of the package. In some embodiments, the product selection code also enables boundary scan for the operational portion of the die, and omits from the boundary scan chain any portions of the die that are deliberately rendered non-operational.

According to another aspect of the invention, an IC die (e.g., a PLD die) is manufactured that has the capability of being configured as at least two differently-sized family members, e.g., as described above. The IC die is tested prior to packaging. If the first portion of the IC die is fully functional, but the second portion includes a localized defect, then the IC die is packaged with a product selection code that configures the IC die to operate as only the first portion of the die. (A localized defect is a defect that affects only a small part of the IC functionality.) The second portion of the die is deliberately rendered non-operational. Therefore, the IC die can still be sold as a fully functional packaged IC.

According to yet another aspect of the invention, a method is provided of modeling two IC dies (e.g., two PLD dies) using the same software model, even though the two IC dies include physical differences. For example, a first PLD die includes first and second portions, e.g., as described above, and is encoded to render the first portion operational and the second portion non-operational. At a boundary between the two portions, interconnect lines traversing the boundary include a first section in the first portion and a second section in the second portion. The second PLD die includes the first portion of the first PLD die, and omits the second portion. To maintain consistent loading, the interconnect lines extending to the edge of the second die are coupled together in pairs, with each resulting piece of wire being of essentially the same length as the corresponding wires in the first PLD die.

The same software model can be used for the encoded first die and the second die, even though the physical structures are different. In one embodiment, this is accomplished by providing a termination model that omits the pair coupling, adds an RC load to compensate for the omitted connection, and (in the case of bidirectional interconnect lines) flags one interconnect line in each pair of interconnect lines as being invalid for use by routing software for the PLD.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the following figures.

FIG. 1 illustrates an FPGA architecture that includes several different types of programmable logic blocks.
FIG. 2 illustrates another FPGA architecture that includes several different types of programmable logic blocks.

FIG. 3 illustrates a multi-product IC die including several different portions, in which all of the portions are operational.

FIG. 4 illustrates the IC die of FIG. 3 in which first portions of the die are deliberately rendered non-operational.

FIG. 5 illustrates the IC die of FIG. 3 in which second portions of the die are deliberately rendered non-operational.

FIG. 6 illustrates the IC die of FIG. 3 in which third portions of the die are deliberately rendered non-operational.

FIG. 7 illustrates a second IC die that can be manufactured, if desired, which includes the operational portions of FIG. 4.

FIG. 8 illustrates a third IC die that can be manufactured, if desired, which includes the operational portions of FIG. 5.

FIG. 9 illustrates a fourth IC die that can be manufactured, if desired, which includes the operational portions of FIG. 6.

FIG. 10 illustrates an FPGA architecture that includes several different types of programmable logic blocks, and which can be used, for example, as the first IC die illustrated in FIG. 3.

FIG. 11 illustrates the steps of an exemplary method of manufacturing a family of related IC dies.

FIG. 12 illustrates a flip-chip package including a multi-product die.

FIG. 13 illustrates the flip-chip package of FIG. 12 including a smaller die.

FIG. 14 illustrates the steps of an exemplary method of providing a family of packaged ICs.

FIG. 15 illustrates the steps of an exemplary method of modeling multiple PLDs having similar functionality but different hardware implementations.

FIG. 16 illustrates how unidirectional interconnect lines can be implemented at the optional internal boundaries of a multi-product die.

FIG. 17 illustrates how the unidirectional interconnect lines of FIG. 16 can be modified when manufacturing a second die that includes only a portion of the multi-product die of FIG. 16.

FIG. 18 illustrates how the unidirectional interconnect lines of FIGS. 16 and 17 can be modeled using a single software model.

FIG. 19 illustrates how bidirectional interconnect lines can be implemented at the optional internal boundaries of a multi-product die.

FIG. 20 illustrates how the bidirectional interconnect lines of FIG. 19 can be modified when manufacturing a second die that includes only a portion of the multi-product die of FIG. 19.

FIG. 21 illustrates how the bidirectional interconnect lines of FIGS. 19 and 20 can be modeled using a single software model.

FIG. 22 illustrates a multi-product IC die in which a boundary scan chain includes only the operational portions of the multi-product die, based on a product selection code.

FIG. 23 illustrates how an exemplary multi-product PLD die is configured when a first product selection code enables programming of the entire die.

FIG. 24 illustrates how the multi-product PLD die of FIG. 23 is configured when a second product selection code enables programming of only first portions of the die.

FIG. 25 illustrates how the multi-product PLD die of FIG. 23 is configured when a third product selection code enables programming of only second portions of the die.

FIG. 26 illustrates how the multi-product PLD die of FIG. 23 is configured when a fourth product selection code enables programming of only third portions of the die.

FIG. 27 illustrates one way in which the configuration process can be controlled, for example, to configure a multi-product PLD die as shown in FIGS. 23-26.
FIG. 28 illustrates a first way in which rows of logic can be hidden in a multi-product PLD die.

FIG. 29 illustrates a second way in which rows of logic can be hidden in a multi-product PLD die.

DETAILED DESCRIPTION OF THE DRAWINGS

The present invention is applicable to a variety of integrated circuits (ICs), and has been found to be particularly applicable and beneficial for programmable logic devices (PLDs). An appreciation of the present invention is presented by way of specific examples utilizing PLDs such as field programmable gate arrays (FPGAs). However, many aspects of the present invention are not so limited.

As noted above, advanced FPGAs can include several different types of programmable logic blocks in the array. For example, FIG. 1 illustrates an FPGA architecture 100 that includes a large number of different programmable tiles including multi-gigabit transceivers (MGTs 101), configurable logic blocks (CLBs 102), random access memory blocks (BRAMs 103), input/output blocks (IOBs 104), configuration and clocking logic (CONFIG/CLOCKS 105), digital signal processing blocks (DSPs 106), specialized input/output blocks (I/O 107) (e.g., configuration ports and clock ports), and other programmable logic 108 such as digital clock managers, analog-to-digital converters, system monitoring logic, and so forth. Some FPGAs also include dedicated processor blocks (PROC 110).

In some FPGAs, each programmable tile includes a programmable interconnect element (INT 111) having standardized connections to and from a corresponding interconnect element in each adjacent tile. Therefore, the programmable interconnect elements taken together implement the programmable interconnect structure for the illustrated FPGA. The programmable interconnect element (INT 111) also includes the connections to and from the programmable logic element within the same tile, as shown by the examples included at the top of FIG. 1.

For example, a CLB 102 can include a configurable logic element (CLE 112) that can be programmed to implement user logic plus a single programmable interconnect element (INT 111). A BRAM 103 can include a BRAM logic element (BRL 113) in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured embodiment, a BRAM tile has the same height as five CLBs, but other numbers (e.g., four) can also be used. A DSP tile 106 can include a DSP logic element (DSPL 114) in addition to an appropriate number of programmable interconnect elements. An IOB 104 can include, for example, two instances of an input/output logic element (IOL 115) in addition to one instance of the programmable interconnect element (INT 111). As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 115 typically are not confined to the area of the input/output logic element 115.

In the pictured embodiment, a columnar area near the center of the die (shown shaded in FIG. 1) is used for configuration, clock, and other control logic. Horizontal areas 109 extending from this column are used to distribute the clocks and configuration signals across the breadth of the FPGA.

Some FPGAs utilizing the architecture illustrated in FIG. 1 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic.
For example, the processor block PROC 110 shown in FIG. 1 spans several columns of CLBs and BRAMs.

Note that FIG. 1 is intended to illustrate only an exemplary FPGA architecture. For example, the numbers of logic blocks in a column, the relative widths of the columns, the number and order of columns, the types of logic blocks included in the columns, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top of FIG. 1 are purely exemplary. For example, in an actual FPGA more than one adjacent column of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic, but the number of adjacent CLB columns varies with the overall size of the FPGA.

FIG. 2 illustrates an exemplary FPGA 200 utilizing the general architecture shown in FIG. 1. The FPGA of FIG. 2 includes CLBs 202, BRAMs 203, I/O blocks divided into "I/O Banks" 204 (each including 40 I/O pads and the accompanying logic), configuration and clocking logic 205, DSP blocks 206, clock I/O 207, clock management circuitry (CMT) 208, configuration I/O 217, and configuration and clock distribution areas 209.

In the FPGA of FIG. 2, an exemplary CLB 202 includes a single programmable interconnect element (INT 211) and two different "slices", slice L (SL 212) and slice M (SM 213). In some embodiments, the two slices are the same (e.g., two copies of slice L, or two copies of slice M). In other embodiments, the two slices have different capabilities. In some embodiments, some CLBs include two different slices and some CLBs include two similar slices. For example, in some embodiments some CLB columns include only CLBs with two different slices, while other CLB columns include only CLBs with two similar slices.

FIG. 3 illustrates a multi-product IC die 300, i.e., an IC die that can be configured as two or more different ICs. For example, the multi-product die of FIG. 3 can be a multi-product PLD die that can be configured as several different PLDs having different logic capacities. The operational portions are shown with a diagonal fill pattern. Thus, in the IC product shown in FIG. 3, the die 300 is configured so that all portions (A, B(1), B(2), C, D(1), and D(2)) are operational. Note that the die represented by this simple block diagram may also include small amounts of logic and/or interconnect lines that "finish off" the die edges. For example, rows or columns of termination tiles may be included on all four edges of the die. Termination tiles are described in more detail in conjunction with FIG. 15, below.

FIG. 4 shows the same multi-product die as FIG. 3, but only portions A, B(1), and B(2) are operational. Portions C, D(1), and D(2) are deliberately rendered non-operational. This can be accomplished in any of several ways. For example, when the IC die is a PLD die, the non-operational portions can be configured so that the configuration bit stream for the PLD bypasses these portions. Additionally or alternatively, the non-operational portions can be isolated from power sources. For example, an IC die typically includes many power pins. The different portions of the multi-product die can be coupled to different power pins, and the power pins of the non-operational portions can be coupled to one or more ground pins of the IC package, instead of being coupled to power pins of the package. In some embodiments, a boundary scan chain is modified to bypass the non-operational portions, as is later described in conjunction with FIG. 22.
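For illustration only, the following sketch (in Python; the one-power-pin-group-per-portion granularity and the pin names are assumptions for this sketch, not details taken from the text) shows how such a power-isolation bonding choice might be expressed:

# Sketch: a package bonding choice that isolates non-operational portions
# from power. One group of power pins per portion is an illustrative
# assumption; portion names follow FIG. 3.
PORTIONS = ["A", "B1", "B2", "C", "D1", "D2"]

def power_bonding(operational: set[str]) -> dict[str, str]:
    # Operational portions are bonded to VCC; the power pins of the
    # non-operational portions are tied to ground inside the package.
    return {p: ("VCC" if p in operational else "GND") for p in PORTIONS}

# FIG. 4 example: portions A, B(1), and B(2) remain powered.
print(power_bonding({"A", "B1", "B2"}))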
FIG. 5 also shows the same multi-product die as FIG. 3, but only portions A and C are operational. Portions B(1), B(2), D(1), and D(2) are deliberately rendered non-operational. FIG. 6 shows yet another configuration of the multi-product die of FIG. 3, in which only portion A is operational, while portions B(1), B(2), C, D(1), and D(2) are deliberately rendered non-operational.

Thus, it can be seen that the single die 300 yields four different products, each including different (and in this example, overlapping) portions of the complete die. When this approach is used, four products are obtained while manufacturing only one mask set. Clearly, where the cost of the three eliminated mask sets outweighs the ongoing cost of manufacturing the larger dies, a net cost saving is achieved. However, if one or more of these smaller products (e.g., the products illustrated in FIGS. 4-6) proves to be very popular with customers, a new mask set can be ordered, and the product can be manufactured as a smaller die, with a consequent reduction in costs. In one embodiment, early in the product cycle a family of products is manufactured from a single multi-product die, and later in the product cycle each family member is manufactured from an individual die. For example, engineering samples can be obtained from a multi-product die, while production ICs are obtained using prior art methods. In some embodiments, dies are obtained from both sources at the same time.

Note that more than four products could be obtained from the IC die of FIG. 3, if desired. For example, a product could be obtained that includes portions A and B(1) of IC die 300, or portions A, C, B(1), and D(1), or some other combination of the portions illustrated in FIG. 3.

FIG. 7 shows a smaller die 700 that can be manufactured, if desired, should the product illustrated in FIG. 4 prove to be highly popular. Similarly, FIG. 8 shows the smaller die 800 corresponding to the product of FIG. 5, and FIG. 9 shows the smaller die 900 corresponding to the product of FIG. 6. Note that in the smaller dies of FIGS. 7-9, the omission of portions of the larger die may require the addition of small amounts of logic and/or interconnect lines that "finish off" the die edges. For example, rows or columns of termination tiles can be added to these smaller dies, as described in more detail in conjunction with FIG. 15, below.

In some embodiments, the smaller dies (e.g., the dies of FIGS. 7-9) can be shipped to customers without significant impact on the customers' systems previously implemented using the corresponding products (e.g., from FIGS. 4-6). For example, the corresponding products can be packaged in IC packages having the same type and size, with the same pinouts, e.g., by utilizing flip-chip packaging. The same software model can be provided for the two corresponding products, as is later described in conjunction with FIGS. 15-21.

FIG. 10 illustrates an exemplary FPGA implemented as a multi-product die 1000. For example, comparing die 1000 with die 300 of FIG. 3, it can be seen that portion B(1) corresponds to the area delineated by vertical lines V1 and V2, and horizontal lines H1 and H2. Portion A corresponds to the area delineated by vertical lines V1 and V2, and horizontal lines H2 and H3. Portion B(2) corresponds to the area delineated by vertical lines V1 and V2, and horizontal lines H3 and H4. Portion D(1) corresponds to the area delineated by vertical lines V2 and V3, and horizontal lines H1 and H2.
Portion C corresponds to the area delineated by vertical lines V2 and V3, and horizontal lines H2 and H3. Portion D(2) corresponds to the area delineated by vertical lines V2 and V3, and horizontal lines H3 and H4.

Note that the exemplary multi-product FPGA die of FIG. 10 is implemented using a columnar architecture, as are the FPGAs of FIGS. 1 and 2. The columnar architecture (e.g., in which the input/output blocks occur in vertical columns) facilitates the implementation of the multi-product die by simplifying the logic that enables and disables portions of the die, and simplifies the effort to provide consistent packaging among the various products derived from the die.

As in the FPGAs of FIGS. 1 and 2, the multi-product die of FIG. 10 can include many different types of logic blocks. The exemplary die of FIG. 10 is selected for illustration because its relatively simple architecture does not obscure the inventive concept. However, additional types and sizes of logic blocks can also be included in the multi-product die, if desired. In addition to the logic blocks shown in FIG. 2, which are identified by the same element numbers in the two figures, the multi-product die of FIG. 10 includes the following design elements: logic block 1001, which implements the well-known PCI Express interface; EMAC logic blocks 1002, which implement the well-known Ethernet MAC interface; and GTP banks 1003, which include low-power versions of well-known multi-gigabit transceivers.

Clearly, the multi-product die of FIG. 10 includes several optional internal boundaries at which the operational portions can optionally be separated from portions that are deliberately rendered non-operational. For example, horizontal lines H2 and H3 provide such boundaries, as does vertical line V2. Care is preferably taken so that these boundaries do not intersect functional logic blocks. Note, however, that the boundaries between portions of the die are not necessarily straight lines like those shown in the examples herein. The boundaries can be adjusted to ensure that they include complete logic blocks, if necessary. However, the use of straight-line boundaries facilitates the manufacture of smaller dies that include only portions of the original multi-product die, e.g., as shown in FIGS. 7-9. The "row and column" organization illustrated in FIG. 10, for example, facilitates the use of straight boundary lines in a multi-product die.

The multi-product die of FIG. 10 is designed to be easily segmented. For example, the interconnect structure of the die is preferably designed to include only repetitive interconnect; e.g., the "high fanout" interconnect structure included in many known FPGAs can be omitted. The clock structure, also, is designed to be operable when only portions of the die are operational. The configuration and clock distribution logic 209 shown in FIG. 10 is designed as a "tree" that expands horizontally from a center column, then spreads vertically both up and down from the horizontal "branches". In the embodiment of FIG. 10, the optional boundaries H2 and H3 are placed such that they do not intersect the final vertical extensions of the clock and configuration tree.

FIG. 11 illustrates the steps of an exemplary method of manufacturing a family of related IC dies, starting with a multi-product die. In step 1101, two or more first IC dies are manufactured, where each of the first IC dies includes a first portion and a second portion.
In step 1102, at least one of the first IC dies is packaged so that both the first portion and the second portion are operational (e.g., fully functional, accessible to the user, and/or programmable by a configuration bit stream when the IC is a PLD). This step results in a packaged integrated circuit in which the entire multi-product die is fully operational. Note that the terms "packaging", "packaged IC", and other related terms refer to the process of assembling a die structure that includes an IC die and typically provides a physical interface between a system and the IC die, and/or the structure resulting from such a process. In a preferred embodiment, the IC dies are packaged in flip-chip packages. However, the methods and structures of the invention are not limited to processes and assemblies involving flip-chip packages, nor to other types of IC packages currently known, but can be applied to other die assemblies, including die assemblies that have not yet been developed.

In step 1103, at least one of the first IC dies is packaged so that the first portion is operational, and the second portion is rendered non-operational in some manner. For example, the second portion can be rendered non-functional by disconnecting it from the power supply, can be made inaccessible by a system and by other logic included on the die, and/or can be made non-programmable in the case of a PLD.

In step 1104, a second IC die is manufactured, based on an evaluation relating to the die produced by step 1103. The evaluation might be an evaluation of the die itself, or an assessment of other factors relating to the die. For example, the decision can be made to order a mask set and manufacture the second IC die when dies resulting from step 1103 prove to have very good sales figures, or when test results show that the dies from step 1103 are fully functional. The evaluations relating to the second IC die might include, additionally or alternatively, an assessment of customer demand, yield results, production risks, production costs, inventory costs, availability of engineering resources, and so forth.

In some embodiments, the first IC die is packaged in step 1102 with a first product selection code, and in step 1103 with a second product selection code. For example, a multi-product die configurable as four different products (e.g., as shown in FIGS. 3-6) can have a two-bit product selection code coded into the packaged part. For example, in one embodiment code "11" selects the configuration shown in FIG. 3, wherein neither rows nor columns are rendered non-operational. Code "10" selects the configuration shown in FIG. 5, where some rows (the topmost and bottommost rows of logic blocks) are rendered non-operational. Code "01" selects the configuration shown in FIG. 4, where some columns (the rightmost columns of logic blocks) are rendered non-operational. Code "00" selects the configuration shown in FIG. 6, where some rows (the topmost and bottommost rows) and some columns (the rightmost columns) of logic blocks are rendered non-operational.

In some embodiments, the second IC die is packaged with the first product selection code. For example, when the code "11" selects a fully operational die, the same code "11" can be used for the second IC die, which is also fully operational. In another embodiment, the code for the die packaged in step 1103 is also used in step 1104. In this embodiment, the code identifies the operational portion(s), no matter what the derivation of the final product.
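The two-bit example above can be summarized as a simple decode; the following sketch (Python; the dictionary structure is an illustrative assumption, while the code-to-figure mapping follows the text) makes the correspondence explicit:

# Sketch: decoding the two-bit product selection code of the example above
# into the set of operational portions of the die of FIG. 3.
OPERATIONAL_PORTIONS = {
    "11": {"A", "B1", "B2", "C", "D1", "D2"},  # FIG. 3: entire die
    "10": {"A", "C"},                          # FIG. 5: top/bottom rows off
    "01": {"A", "B1", "B2"},                   # FIG. 4: rightmost columns off
    "00": {"A"},                               # FIG. 6: rows and columns off
}

def operational_portions(code: str) -> set[str]:
    """Return the portions rendered operational by a product selection code."""
    return OPERATIONAL_PORTIONS[code]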
In some embodiments, the second IC die has no product selection code, because the die is not designed such that any portion(s) of the die can be rendered non-operational. In other embodiments, the second IC die is designed to render the entire die operational regardless of the value of the product selection code (e.g., the product selection code is a "don't-care" value).

A product selection code can be included in a packaged IC using any of several methods. For example, the code can be stored on the IC die itself in a non-volatile memory provided for that purpose. Alternatively or additionally, the code can be set by selectively fusing some or all of a set of fuses (e.g., polysilicon fuses) included in the IC die. Alternatively or additionally, the code can be set using a packaging option, e.g., by selectively coupling one or more nodes within the IC die (e.g., the bits of the product selection code) to one or more constant-value pins inside the packaged IC. In some embodiments, the code is set either by coupling the bits of the product selection code to ground within the package, or by leaving the bits unbonded. An internal pull-up ensures that an unbonded product selection code bit is pulled to a high value.

In some embodiments, multiple methods are provided for encoding the product selection, to ensure the success of the encoding process. In some embodiments, a fuse circuit output signal and a signal from a package bump (enabling a package option) are ORed together, such that either of these methods can be used to set the product selection code. For example, if the package bump is tied to ground, the output signal from the fuse circuit determines the value of the product selection code. Alternatively, if the fuse circuit output is grounded, the signal from the package bump determines the value of the product selection code. In some embodiments having multiple methods of setting the product selection code, one method overrides one or more other methods. For example, in one embodiment one or more blown fuses always determine the product selection code, regardless of a value designated by a connection within the package. This embodiment is useful when there is a need to redefine a die after packaging to meet changing customer demands.

In some embodiments, the product selection code also sets an IC identification code to uniquely describe the product. For example, the product selection code can control a multiplexer to select one of four stored identification codes identifying the four products that can be produced from the single multi-product die. This IC identification code can be used, for example, to identify the device, the fabrication facility, the date of fabrication, revision numbers, or the numbers of rows and/or columns of CLBs, or to provide other information regarding the device. The IC identification code can be used, for example, to provide information about the product via a JTAG boundary scan process.

In step 1105, the second IC die is packaged, preferably using the same package that was used to package the second set of first IC dies in step 1103. If the same package is used and the dies are carefully designed, the die produced in step 1105 can be used interchangeably with the dies produced in step 1103.

For example, flip-chip technology, which is well known, can be used to package the dies, as is now described in conjunction with FIGS. 12 and 13. In flip-chip packaging, no wire bonding is needed. The flip-chip package includes a piece part (1201) on which the IC die is placed.
Solder bumps on the IC die make contact with corresponding bump pads (e.g., bump pad 1203) on the piece part. When the larger die (1202) is packaged in step 1103, the die makes contact with a larger number of the bump pads; e.g., all of the bump pads in the piece part might be in contact with the IC die, as shown in FIG. 12. However, signals to and from the non-operational portions of the IC are not necessarily placed into contact with the rest of the system, because there is no reason to do so.

FIG. 13 shows the same piece part (1201) used with a smaller die (1302) that includes only the operational portions of the die of FIG. 12. The smaller IC die (e.g., the die packaged in step 1105), when packaged using the same piece part, makes contact with fewer of the bump pads on the piece part, and the unused bump pads are left without connections. For example, in FIG. 13 bump pad 1203 is not connected to the die 1302. In some embodiments, the I/Os that are critical to the functionality of the dies (e.g., configuration pins, global clock pins) are placed in the region that is always connected to the package, regardless of which die is used. In some embodiments, the unused bump pads are passivated using any of the known passivation methods, e.g., to prevent oxidation and/or other deterioration.

In some embodiments, the packages utilized in steps 1102, 1103, and 1105 all utilize the same piece part. In another embodiment, the package utilized in step 1105 is the same as the package utilized in step 1103, but different from the package utilized in step 1102.

In some embodiments, a family of PLDs includes the family members shown in Table 1. Four different dies can be packaged in various ways to produce four different products. A "Y" (for "yes") indicates that the die named at the top of the column can be used to generate the product named at the left of the row. Thus, each "Y" corresponds to an available packaged IC die.

The four products are shown in the leftmost column of the table. The LX50T product corresponds, for example, to the PLD structure shown in FIG. 3. The LX50 corresponds, for example, to the structures shown in FIGS. 4 and 7. Note that either of the two structures shown in FIGS. 4 and 7 yields the same product. The LX30T corresponds to the structures shown in FIGS. 5 and 8, and the LX30 corresponds to the structures shown in FIGS. 6 and 9.

The four dies are shown in the top row of the table. The "30" die corresponds, for example, to the structure shown in FIG. 9. The "30T" die corresponds, for example, to the structure shown in FIG. 8. The "50" die corresponds, for example, to the structure shown in FIG. 7. The "50T" die corresponds to any of the structures shown in FIGS. 3-6.

Note that in this exemplary PLD product family, all but the smallest of the three smaller dies can also be used to generate yet smaller products.
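This die/product relationship is a simple containment rule; the sketch below (Python; the portion sets are inferred from FIGS. 3-9, and the remaining names and structures are illustrative assumptions) reproduces the "Y" entries of Table 1, shown just below:

# Sketch: a die yields a product when it contains all of the product's
# portions; any extra portions are simply rendered non-operational.
DIE_PORTIONS = {
    "30":  {"A"},                               # FIG. 9
    "30T": {"A", "C"},                          # FIG. 8
    "50":  {"A", "B1", "B2"},                   # FIG. 7
    "50T": {"A", "B1", "B2", "C", "D1", "D2"},  # FIG. 3
}
PRODUCT_PORTIONS = {
    "LX30":  {"A"},
    "LX30T": {"A", "C"},
    "LX50":  {"A", "B1", "B2"},
    "LX50T": {"A", "B1", "B2", "C", "D1", "D2"},
}

def can_build(product: str, die: str) -> bool:
    return PRODUCT_PORTIONS[product] <= DIE_PORTIONS[die]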
For example, referring to FIG. 7, the "50" die can also be used to generate an "LX30" product in which only portion "A" is operational, and portions B(1) and B(2) are non-operational.

TABLE 1
Product\Die    30    30T    50    50T
LX30           Y     Y      Y     Y
LX30T          -     Y      -     Y
LX50           -     -      Y     Y
LX50T          -     -      -     Y

In some embodiments, all of the packaged IC dies (i.e., each box with a "Y" in Table 1) are packaged using packages of the same size and type.

In one embodiment, all of the packaged dies in each row use the same package, but each row uses a different package for the dies in that row. For example, the LX30 product uses a first package whether derived from any of the 30, 30T, 50, and 50T dies, but the LX30T product uses a second package different from the first package in at least one of size and type. In this embodiment, each product (e.g., LX30, LX30T, etc.) can have its own product selection code, and the product selection code can be included in the package rather than being encoded using fuses or external wiring. In this embodiment, the product selection codes might be, for example, as shown in Table 2.

TABLE 2
Product\Die    30    30T    50    50T
LX30           00    00     00    00
LX30T          -     10     -     10
LX50           -     -      01    01
LX50T          -     -      -     11

It might at some point be considered advantageous to reduce the number of unique package piece parts utilized by the family of PLDs shown in Table 2. To provide this additional flexibility, the product selection codes can be assigned in such a way that they include "don't-care" values, as shown in Table 3, where an "X" indicates a don't-care value. In the embodiment of Table 3, a "30" die can be packaged either in the "LX30-type" package used by the products in the first row (with a product selection code of "00") or in the "LX50-type" package used by the products in the third row (with a product selection code of "01"). Similarly, a "30T" die can be packaged either in the "LX30T-type" package used by the products in the second row (with a product selection code of "10") or in the "LX50T-type" package used by the products in the fourth row (with a product selection code of "11"). Thus, the product selection code in this embodiment indicates the package type and size, and not the product itself.

TABLE 3
Product\Die    30    30T    50    50T
LX30           0X    0X     00    00
LX30T          -     1X     -     10
LX50           -     -      01    01
LX50T          -     -      -     11
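The don't-care matching rule of Table 3 can be stated precisely; a minimal sketch (Python; the function and argument names are illustrative assumptions):

# Sketch: a die's decoded pattern ('X' = don't-care bit) matches a
# package-supplied code when every non-X bit agrees.
def code_matches(die_pattern: str, package_code: str) -> bool:
    return all(d in ("X", p) for d, p in zip(die_pattern, package_code))

# A "30" die decoding "0X" fits both the LX30-type package ("00") and the
# LX50-type package ("01"), but not an LX30T-type package ("10").
assert code_matches("0X", "00")
assert code_matches("0X", "01")
assert not code_matches("0X", "10")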
Returning now to FIG. 11, the same package pinout is preferably used for the packaged IC dies produced by steps 1103 and 1105, to enable the transparent substitution of one of the packaged dies for the other. Thus, for example, a system designed to accept the packaged IC from step 1103 exhibits the same functionality when the packaged IC from step 1105 is substituted. The larger die (the packaged IC from step 1103) is likely to have a higher leakage current than the smaller die (the packaged IC from step 1105), but unless the leakage current is close to the system tolerance, this minor variation should not affect the functionality of the system. Further, a system designed to accept the packaged IC with the larger leakage current (i.e., the first-manufactured die) should continue to function properly when and if a packaged IC with a smaller leakage current (i.e., the later-manufactured die) is used.

In some embodiments, when the IC dies are PLD dies, the packaged IC resulting from step 1103 is programmed with a configuration bit stream smaller than that of the packaged IC resulting from step 1102, because only the first portion of the packaged IC resulting from step 1103 is operational. In these embodiments, the packaged IC resulting from step 1105 preferably uses the same configuration bit stream as the packaged IC resulting from step 1103, again to enable the transparent substitution of one of the packaged dies for the other.

Note that some publicly available PLDs have previously been manufactured in which certain portions of the configurable logic were deliberately "blacked out" by instructing the PLD implementation software to ignore certain areas of the dies. For example, a PLD die including four columns of block RAM was sold as a less expensive version including only two columns of block RAM, to avoid incurring the higher costs of testing the extra two columns of block RAM. Thus, the end cost to the PLD user was reduced.

However, this known "family" of PLDs differed from those described herein in several important ways, including but not limited to the following. For example, the PLD with two columns of block RAM was not manufactured as a stand-alone product. Further, the packaged PLD was not encoded with a product selection code that controlled the behavior of the packaged PLD. Instead, the PLD implementation software determined whether or not to use the two additional columns of block RAM based on which PLD was being targeted. Yet further, the two additional columns of block RAM were always configured, whether or not they were being used by the design. Thus, the configuration bit stream for both products was the same size.

In addition to reducing initial manufacturing costs for products having a relatively small end demand, the methods disclosed herein can also reduce ongoing manufacturing costs due to defective dies. As IC dies grow larger, the likelihood that a defect will be included in a given die increases at a rate even faster than the die size. Therefore, it is generally desirable to improve the manufacturing yield by enabling the use of dies that include localized manufacturing defects.

FIG. 14 illustrates the steps of an exemplary method of enhancing the yield of IC devices using a multi-product die, e.g., such as those described herein. In step 1401, an IC die is fabricated that includes first and second portions. In step 1402, the IC die is tested. When no localized defects are detected, in step 1403 the IC die is packaged with a first product selection code that enables both portions of the IC die. For example, when the IC die is a PLD die, the first product selection code can enable programming of both portions of the PLD die. (Note that it is also possible to package an IC die with no localized defects using a product selection code that enables only a portion of the IC die, if desired.)

When localized defects are detected in the second portion of the die, in step 1404 the IC die is packaged with a second product selection code that enables the first portion, but disables the second portion, of the IC die. For example, when the IC die is a PLD die, the second product selection code can enable programming of the first portion, and disable programming of the second portion, of the PLD die.
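A minimal sketch of this binning decision (Python; the specific two-bit values reuse the earlier example and are otherwise assumptions, as is the discard convention):

# Sketch of the FIG. 14 flow: test results choose the product selection
# code the die is packaged with. None means the die is discarded.
def select_product_code(first_ok: bool, second_ok: bool) -> str | None:
    if first_ok and second_ok:
        return "11"  # step 1403: both portions enabled
    if first_ok:
        return "01"  # step 1404: first portion only
    return None      # defect in the first portion: discard (optional)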
When defects are detected that adversely affect the first portion of the die (not shown in FIG. 14), the IC die can be discarded, if desired.

Various approaches can be used to test the multi-product die and identify the location of any localized defect. For example, a first set of tests could test the functionality associated with the largest possible family member (i.e., the family member including all portions of the die). If the first set of tests passes, then the testing is complete. If the first set of tests fails, a second set of tests could be applied that tests the functionality associated with a smaller family member (i.e., a family member including only a subset of the die), and so forth. In another embodiment, the tests could be written to test all functionality of the entire multi-product die and to identify from the test results the location of any localized defects.

FIG. 15 illustrates the steps of an exemplary method of modeling multiple PLDs having similar functionality but different hardware implementations. For example, this method can be used to model corresponding PLDs as shown in FIGS. 4 and 7, FIGS. 5 and 8, or FIGS. 6 and 9.

In step 1501, a first PLD is provided that includes first configurable tiles and first interconnect lines coupled between the tiles. The first PLD is logically divisible into first and second portions, with each of the portions including some of the configurable tiles. (The first PLD cannot necessarily be physically divided into the first and second portions without disrupting the functionality of one or both of the two portions.) Each of the first interconnect lines has a first section included in the first portion and a second section included in the second portion.

In step 1502, a second PLD is provided that includes second configurable tiles and is substantially similar to the first portion of the first PLD. The second PLD also includes second interconnect lines coupled to the second configurable tiles. The second interconnect lines are substantially similar to the first sections of the first interconnect lines. Each of the second interconnect lines is coupled to another of the second interconnect lines at a boundary of the second PLD to form pairs of the second interconnect lines. These interconnections are further described below, in connection with FIGS. 17 and 20.

In step 1503, the first PLD is encoded to render the second portion of the first PLD non-operational. For example, the first PLD can be encoded to enable programming of the first portion and disable programming of the second portion, to disconnect power from the second portion, and/or to render the second portion transparent to a system including the first PLD. In one embodiment, the first PLD is similar to PLD 300 of FIG. 3, for example, and the second PLD is similar to the PLD illustrated in FIG. 8. In this embodiment, the encoded first PLD resulting from step 1503 resembles the PLD illustrated in FIG. 5.

In step 1504, a software model is provided that is correct for both the second PLD from step 1502 and the encoded first PLD from step 1503, even though the two PLDs have physical differences. Exemplary software models having this capability are described below in conjunction with FIGS. 16-21.

One implementation of step 1504 is shown in steps 1511-1513.
Each of the first and second PLDs includes configurable tiles that are substantially similar to one another. (Note that additional configurable tiles different from each other can also be included in one or both of the PLDs.) In step 1511, a tile model is provided for the configurable tile. This software model represents one of the configurable tiles in the first and second PLDs. Because the configurable tiles are substantially similar, the same tile model can be used for the configurable tiles in each PLD. This type of software model, modeling a standard configurable tile, is well known.

In step 1512, a first termination model is provided that can be applied above the upper edge of one of the configurable tiles. For example, in the second PLD the first termination model can be applied above the row of tiles along the upper edge of the second PLD. In the first PLD, the first termination model can be applied above the top row of configurable tiles included in the first portion of the PLD. As is clear from the above description, interconnect lines along the upper edge of the configurable tile can be coupled into pairs (e.g., at the top edge of the second PLD, as described above) or can continue on to an adjacent configurable tile above (e.g., interconnect lines at the top edge of the first portion of the first PLD continue into the second portion of the PLD). However, the first (upper) termination model is the same for both PLDs.

It will be understood that the terms "above", "below", "upper", "lower", "northward", "southward", "left", "right", and so forth as used herein are relative to one another and to the conventions followed in the figures and specification, and are not indicative of any particular orientation of or on the physical dies. Note also that the terms "column" and "row" are used to designate direction with respect to the figures herein, and that a "column" in one embodiment can be a "row" in another embodiment.

In step 1513, a second termination model is provided that can be applied below the lower edge of one of the configurable tiles. For example, in the second PLD the second termination model can be applied below the row of tiles along the lower edge of the second PLD. In the first PLD, the second termination model can be applied below the bottom row of configurable tiles included in the first portion of the PLD. As is clear from the above description, interconnect lines along the lower edge of the configurable tile can be coupled into pairs (e.g., at the bottom edge of the second PLD, as described above) or can continue on to an adjacent configurable tile below (e.g., interconnect lines at the bottom edge of the first portion of the first PLD continue into the second portion of the PLD). However, the second (lower) termination model is the same for both PLDs.

In some embodiments, a single speeds file is provided as part of, or along with, the upper and lower termination models. In other words, a single computer file can be used to describe the timing for both the second PLD and the first PLD, when the first PLD is encoded to behave in the same manner as the second PLD. In some embodiments, providing a single speeds file for both devices enables either of them to be shipped to a customer interchangeably, by ensuring that both products use the same software.
The timing is the same for the two PLD dies because of the manner in which the interconnect lines are implemented and modeled, as is now described.

Examples of upper termination models are now described in conjunction with FIGS. 16-21. FIGS. 16-18 illustrate how unidirectional interconnect lines can be modeled such that the same model can be applied to both continuing and paired interconnect lines. FIGS. 19-21 illustrate how bidirectional interconnect lines can be modeled such that the same model can be applied to both continuing and paired interconnect lines. Lower termination models can be similar to the upper termination models.

FIG. 16 illustrates how unidirectional interconnect lines can be implemented at the optional internal boundaries of a multi-product die, e.g., at the upper boundary between a first portion and a second portion of the die. The PLD illustrated in FIG. 16 includes a first portion (1601) and a second portion (1602A/B), and is encoded to render the first portion of the die operational and the second portion non-operational. Two interconnect lines N and S are programmably coupled to configurable tiles 1611 via programmable connections 1612. Northward-traveling interconnect line N begins in the first portion and extends into the second portion. Southward-traveling interconnect line S begins in the second portion and extends into the first portion. Interconnect lines N and S are of the same length (e.g., span the same number of tiles), and form a pair of interconnect lines in which the section of interconnect line N within the first portion of the PLD is the same length as the section of interconnect line S within the second portion of the PLD. For example, if the section of interconnect line N within the first portion spans three tiles, and the section of interconnect line N within the second portion spans two tiles, then interconnect line S displays the opposite configuration. That is, the section of interconnect line S within the second portion spans three tiles, and the section within the first portion spans two tiles.

FIG. 17 illustrates how unidirectional interconnect lines N and S appear in the second PLD, which includes the first portion 1601 of the first PLD, while the second portion 1602A/B of the first PLD die is physically omitted from the second PLD die. An upper row of termination tiles 1722 is added above the top row of configurable tiles in the first section 1601, and a lower row of termination tiles 1724 is added below the bottom row of tiles. Exemplary interconnect lines N and S are coupled together in the upper termination tile using interconnection 1723. Thus, interconnect line N/S has essentially the same length and the same loading as interconnect line N and interconnect line S in the first PLD of FIG. 16. In other words, the length and loading vary only insignificantly because of the added interconnection 1723.

Note that the numbers of N and S interconnect lines are preferably the same, and the two kinds of vertical interconnect lines are preferably symmetrical and arranged as described above to facilitate this implementation. For example, the spacing, the loading, and the processing layers (e.g., metal layers) used to implement the N and S interconnect lines are preferably well matched.

The coupling together of interconnect lines in pairs at a boundary is a known approach to the process of creating a family of differently-sized PLDs.
16 and 17, two different software models would be required for the two PLDs. For example, the software model for the PLD of FIG. 16 would show that interconnect line N extended northward into the second portion of the PLD. The software model for the PLD of FIG. 17 would show that interconnect line N looped around through interconnection 1723 and returned into the first portion. However, in this case, the two PLDs could not be used interchangeably in a system, because a design implemented in the second PLD, for example, might take advantage of the loop-back that is unavailable in the first PLD. Thus, some of the advantage of the multi-product die approach would be lost.

In order to enable software modeling of a multi-product die such as those described herein, a new modeling method has been developed in which a single software model can be applied to both the encoded first PLD (FIG. 16) and the second PLD (FIG. 17). Such a software model for the upper termination tile is illustrated in FIG. 18. The software model of FIG. 18 applies only to unidirectional interconnect lines at a boundary, in this example at the upper boundary of the first portion 1601.

The software model 1801 for the first portion includes interconnect lines MN and MS, and configurable tiles 1811, corresponding to the interconnect lines N, S and tiles 1611 of FIGS. 16 and 17. However, a model of the termination tile includes an RC circuit 1831 coupled to model MN for interconnect line N. This RC circuit models the loading of the portion of interconnect line N that extends upward into the second portion of the die. Notice that interconnection 1723 between interconnect lines N and S is not modeled. Therefore, this interconnection will not be used by the PLD implementation software, and resulting design implementations can be used by either the encoded first PLD of FIG. 16 or the second PLD of FIG. 17. However, interconnection 1723 is physically provided in the second PLD die, so that interconnect line N displays the same loading characteristics as the corresponding interconnect line in the PLD of FIG. 16. Because interconnect line S will not be used, no RC circuit need be modeled for interconnect line S.

In some embodiments, the MS interconnect line is simply omitted from the model. Because this interconnect line cannot be used, no useful information is lost.

FIGS. 19-21 illustrate how bidirectional interconnect lines can be modeled using a single software model when the multi-product die approach is used. FIG. 19 illustrates how bidirectional interconnect lines can be implemented at the optional internal boundaries of a multi-product die, e.g., at the upper boundary between a first portion and a second portion of the die. The PLD illustrated in FIG. 19 includes a first portion (1901) and a second portion (1902A/B), and is encoded to enable the first portion and disable the second portion of the die. Two interconnect lines J and K are programmably coupled to configurable tiles 1911 via programmable connections 1912. Interconnect lines J and K can be driven from either end, and can optionally be driven from some other point in the interconnect line as well. Interconnect lines J and K are of the same length (e.g., span the same number of tiles), and form a pair of interconnect lines in which the section of interconnect line J within the first portion of the PLD is the same length as the section of interconnect line K within the second portion of the PLD.
For example, if the section of interconnect line J within the first portion spans three tiles, and the section of interconnect line J within the second portion spans two tiles, then interconnect line K displays the opposite configuration. That is, the section of interconnect line K within the second portion spans three tiles, and the section within the first portion spans two tiles.

FIG. 20 illustrates how bidirectional interconnect lines J and K appear in the second PLD, which includes the first portion 1901 of the first PLD, while the second portion of the first PLD die is physically omitted from the second PLD die. An upper row of termination tiles 2022 is added above the top row of configurable tiles in the first section 1901, and a lower row of termination tiles 2024 is added below the bottom row of tiles. Exemplary interconnect lines J and K are coupled together in the upper termination tile using interconnection 2023. Thus, interconnect line J/K has essentially the same length and the same loading as interconnect line J and interconnect line K in the first PLD of FIG. 19. In other words, the length and loading will vary only insignificantly because of the added interconnection 2023.

Similar to the case of the unidirectional interconnect lines N and S, if traditional software modeling were applied to the two PLDs of FIGS. 19 and 20, two different software models would be required for the two PLDs. Thus, the two PLDs could not be used interchangeably in a system, and some of the advantage of the multi-product die approach would be lost.

FIG. 21 illustrates a software model for the bidirectional interconnect lines J and K of FIGS. 19 and 20. This software model can be applied to both the encoded first PLD (FIG. 19) and the second PLD (FIG. 20). The software model of FIG. 21 applies only to bidirectional interconnect lines at a boundary, in this example at the upper boundary of the first portion 1901.

The software model 2101 for the first portion includes interconnect lines MJ and MK, and configurable tiles 2111, corresponding to the interconnect lines J, K and tiles 1911 of FIGS. 19 and 20. A model of the termination tile includes an RC circuit 2131 coupled to model MJ for interconnect line J. This RC circuit models the loading of the portions of interconnect line J that extend upward into the second portion of the die. However, note that no RC circuit is included for model MK for interconnect line K. In the second PLD (FIG. 20), interconnect lines J and K are shorted together via interconnection 2023. This interconnection ensures that the loading for interconnect lines J and K is consistent between the encoded first PLD of FIG. 19 and the second PLD of FIG. 20. However, the interconnection also means that if interconnect line J is used, interconnect line K cannot be used to route a different signal. Therefore, the software model in FIG. 21 includes a flag 2141 that flags interconnect line K (modeled by MK) invalid for use by routing software for the PLD. Note that in the embodiment of FIG. 21, either of interconnect lines J and K could have been selected to be used in the model, with the other of the two interconnect lines being flagged as invalid. Preferably, the longer of the two interconnect lines is used, with the shorter interconnect line being flagged as invalid, to increase the usefulness of the modeled interconnect line.

In some embodiments, the flag 2141 is implemented by removing in the software model all programmable connections (programmable interconnect points, or PIPs) providing access onto and off of interconnect line K (model MK). In some embodiments, the MK interconnect line is simply omitted from the model. Because this interconnect line cannot be used, no useful information is lost.

Notice that interconnection 2023 between interconnect lines J and K is not modeled. Therefore, this interconnection will not be used by the PLD implementation software, and resulting design implementations can be used by either the encoded first PLD of FIG. 19 or the second PLD of FIG. 20. However, interconnection 2023 is physically provided in the second PLD die, so that interconnect lines J and K display essentially the same loading characteristics as the corresponding interconnect lines in the PLD of FIG. 19.
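By way of illustration only, the following minimal sketch captures the shape of such a single model: the used interconnect line carries an RC load standing in for RC circuits 1831/2131, and the paired line is flagged invalid (flag 2141) rather than modeled as a loop-back, so the same model is valid whether or not the second portion of the die is present. The names, span, and RC values are hypothetical, and the unidirectional case is treated the same way here purely for brevity (in that case the unused line may simply be omitted, as noted above).

from dataclasses import dataclass
from typing import Optional

@dataclass
class RCLoad:
    resistance_ohm: float
    capacitance_ff: float

@dataclass
class InterconnectModel:
    name: str
    span_tiles: int
    boundary_load: Optional[RCLoad] = None  # e.g., RC circuit 1831 / 2131
    usable: bool = True                     # flag 2141 clears this for MK

def upper_termination_model(bidirectional: bool) -> list[InterconnectModel]:
    # The used line carries an RC load standing in for its extension
    # into the (possibly absent) second portion of the die.
    used = InterconnectModel("MJ" if bidirectional else "MN",
                             span_tiles=5,
                             boundary_load=RCLoad(120.0, 35.0))
    # The paired line is flagged invalid rather than modeled as a
    # loop-back; the physical interconnection (1723/2023) is not
    # represented in the model at all.
    paired = InterconnectModel("MK" if bidirectional else "MS",
                               span_tiles=5,
                               usable=False)
    return [used, paired]

# Routing software would simply skip any line with usable == False.
for line in upper_termination_model(bidirectional=True):
    print(line.name, "usable" if line.usable else "flagged invalid")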
Returning to FIG. 18, note that the flag 2141 that is included for the bidirectional interconnect lines (FIG. 21) is not needed for the unidirectional interconnect line model shown in FIG. 18. The unidirectional interconnect lines have only one driver, at the source end of the interconnect line. Therefore, interconnect line S cannot be driven from within the first portion 1601 of the die. Hence, it is not necessary to flag the interconnect line as unusable when only the first portion of the die is in use.

In some embodiments, the left and right edges of the die also include termination tiles, which can be similar to the upper and lower termination tiles, or can be implemented in a different fashion. In one embodiment, the left termination tiles are modeled to accurately reflect the "U-turns" performed by the interconnect lines along the left edge of the die. This approach works correctly in this instance, because the left edge of the die is the same for each of the full and partial die. The right termination tiles cannot be modeled to accurately reflect the die, because there is an optional boundary along the right-hand edge of some partial die. Therefore, the right termination tiles are modeled in a fashion similar to the upper and lower termination tiles, for at least the partial die that terminate at the optional boundary.

As previously described, the multi-product die described herein can be encoded to render first portion(s) of the die operational, and to render second portion(s) of the die non-operational. As noted above, a non-operational portion of the die can be made non-operational in one or more of several ways. For example, the product selection code can disconnect the second portion from the IC power supply. In some embodiments, the product selection code simply disconnects signal sources in the second portion from destinations located in the first portion and/or from IC pads. Thus, the second portion continues to function, but is transparent to the system of which the IC forms a part. In other embodiments, the non-operational portion of the die is simply disabled using an enable/disable signal. These and other methods of rendering a circuit non-operational are encompassed by the present description.
Thus, the term "operational" as used herein generally refers to the ability to perform logical functions in a manner that affects a system, while the term "non-operational" generally means an inability to affect such a system.

In some embodiments, when a portion of an IC has been made transparent to a system, the boundary scan chain is configured to skip over the non-operational portion of the IC die, i.e., any parts of the boundary scan chain through the non-operational portion are bypassed. FIG. 22 illustrates an exemplary die in which a portion of the die can be removed from the boundary scan chain based on the value of a product selection code. Note that while the IC die pictured in FIG. 22 is a PLD die, this type of arrangement can also be applied to a non-programmable or partially-programmable integrated circuit.

The IC die of FIG. 22 is divided into several different portions, which correspond in this example to the portions A, B(1), B(2), C, D(1), and D(2) of FIG. 3. Thus, FIG. 22 provides an implementation of a configurable boundary scan chain that can be used in the multi-product die illustrated in FIG. 3. In the embodiment of FIG. 22, a product selection code (PSC[1:0]) value of 1,1 configures the boundary scan chain to include the input/output blocks in all of portions A, B(1), B(2), C, D(1), and D(2). A product selection code value of 1,0 configures the boundary scan chain to include the input/output blocks in portions A and C, and omit the input/output blocks in portions B(1), B(2), D(1), and D(2). A product selection code value of 0,1 configures the boundary scan chain to include the input/output blocks in portions A, B(1), and B(2), and omit the input/output blocks in portions C, D(1), and D(2). A product selection code value of 0,0 configures the boundary scan chain to include only the input/output blocks in portion A, omitting the input/output blocks in portions B(1), B(2), C, D(1), and D(2). In some embodiments, the boundary scan chain includes logic blocks other than the input/output blocks.

The multi-product die of FIG. 22 includes tiles 2201-2240, logic elements 2250 included in the tiles, multiplexers 2251-2258, boundary scan input pad TDI (Test Data In), boundary scan output pad TDO (Test Data Out), and selection code storage circuit PS_CODE, coupled together as shown in FIG. 22. The selection code storage circuit stores a product selection code PSC[1:0]. A low value for signal PSC[0] selects "hide rows", i.e., hide portions B(1), D(1), B(2), and D(2). Therefore, when signal PSC[0] has a low value, multiplexer 2251 selects the boundary scan signal that bypasses tiles 2201-2202, multiplexer 2252 selects the boundary scan signal that bypasses tiles 2211-2212, multiplexer 2253 selects the boundary scan signal that bypasses tiles 2221-2222, and multiplexer 2254 selects the boundary scan signal that bypasses tiles 2231-2232. Similarly, multiplexer 2255 selects the boundary scan signal that bypasses tiles 2209-2210, multiplexer 2256 selects the boundary scan signal that bypasses tiles 2219-2220, and multiplexer 2257 selects the boundary scan signal that bypasses tiles 2229-2230. 4-input multiplexer 2258 selects either the boundary scan output from portion C (when PSC[1] is high) or the boundary scan output from the last column of portion A (when PSC[1] is low).

A low value for signal PSC[1] selects "hide columns", i.e., hide portions D(1), C, and D(2).
Therefore, when signal PSC[1] has a low value, 4-input multiplexer 2259 selects either the boundary scan output from the last column of portion B(2) (when PSC[0] is high) or the boundary scan output from the last column of portion A (when PSC[0] is low).

In some embodiments, multiplexers similar to multiplexers 2251-2257 are added between additional rows of logic elements, in order to facilitate the design process. For example, a row of multiplexers similar to multiplexers 2251-2257 can be added wherever a column of input/output blocks intersects a clock regional boundary.

In some embodiments, more than one bit is provided for "hide rows" and/or "hide columns". In these embodiments, the number of variations increases with the number of bits provided, as will be clear to those of skill in the relevant arts.

Note that in some embodiments, signals other than boundary scan signals can span more than one portion of the multi-product dies, and the methods described herein can also be applied to these other signals. The multi-product die is preferably designed to allow these signals to operate in the same manner for all implementations of a given product. For example, vertical signals can be designed to enter the bottom edge of a tile in the same fashion whether coming from an adjacent tile included in the same portion, or from a tile in a different portion of the die.

In some embodiments, when a second portion of a multi-product PLD die is non-operational, the second portion is made transparent to the configuration bitstream. Thus, the second portion is not configured, or remains in a default, non-operational state. In some embodiments, the ability to change the configuration flow of a multi-product die is implemented in a fashion similar to that shown for the boundary scan chain in FIG. 22. However, in many PLDs the configuration bitstream is written as a series of frames, rather than as a serial bitstream.

In these embodiments, the size of a frame is preferably smaller than the full height of the PLD. Configuration data is provided to the PLD using a segmented configuration frame, such that only some of the rows of tiles are addressed by each frame. For example, FIGS. 23-26 illustrate four different configuration sequences that can be utilized for a single multi-product die, such that four different products result. FIG. 23 illustrates a configuration sequence that can be used, for example, to yield the fully-configured product of FIG. 3. FIG. 24 illustrates a configuration sequence that can be used to yield the product of FIG. 4. FIG. 25 illustrates a configuration sequence that can be used to yield the product of FIG. 5. FIG. 26 illustrates a configuration sequence that can be used to yield the product of FIG. 6.

In the product of FIG. 23, all portions of the die are configured. In the pictured embodiment, the configuration data is loaded by first following the pattern shown by the arrow "1st", then the pattern shown by the arrow "2nd", and so forth until the "6th" set of rows has been configured. The result is a multi-product die in which all of the portions (A, B(1), B(2), C, D(1), and D(2)) are configured and operational.

In the product of FIG. 24, portions A, B(1), and B(2) of the die are configured. In the pictured embodiment, the configuration data is loaded by first following the pattern shown by the arrow "1st", then the pattern shown by the arrow "2nd", and so forth until the "6th" set of rows has been configured.
The result is a multi-product die in which portions A, B(1), and B(2) are configured and operational, and portions C, D(1), and D(2) are unconfigured and non-operational.

When the configuration process reaches the last column to be configured, a signal (e.g., "last_col_rollover") is sent from the last column back to the configuration logic, and the process switches to the next row in the configuration sequence. The last column configured is determined, for example, by the product selection code. For example, in the embodiment of FIG. 24 the last column is the right-hand column of portions B(1), A, and B(2). Any column having the potential to be the last column to be configured (e.g., any column including the head of an arrow in any of FIGS. 23-26) includes an option cell that identifies the column as a possible last column. The option cell and the product selection code are used to determine when the column actually is the last column to be configured. When configuration of such a column is complete, the column sends the signal to the configuration logic that causes the process to switch to the next row in the configuration sequence.

In the product of FIG. 25, portions A and C of the die are configured. In the pictured embodiment, the configuration data is loaded by first following the pattern shown by the arrow "1st", then the pattern shown by the arrow "2nd", and so forth until the "4th" set of rows has been configured. The result is a multi-product die in which portions A and C are configured and operational, and portions B(1), B(2), D(1), and D(2) are unconfigured and non-operational.

In the product of FIG. 26, only portion A of the die is configured. In the pictured embodiment, the configuration data is loaded by first following the pattern shown by the arrow "1st", then the pattern shown by the arrow "2nd", and so forth until the "4th" set of rows has been configured. The result is a multi-product die in which portion A is configured and operational, and portions B(1), B(2), C, D(1), and D(2) are unconfigured and non-operational.

In some embodiments, unconfigured portions of the die default to a known state, in which all configuration memory cells are set to a low value. The PLD is designed such that when all configuration memory cells store a low value, there is no contention within the die. Therefore, in this state none of the nodes in the hidden portions of the die have an effect on the configured portions of the die. In some embodiments, a PLD includes a signal called "GHIGH" that sets all driven nodes to a low value during the configuration process, and the same signal simply remains asserted after configuration in the unconfigured regions of the die. Another signal called GPOWERDOWN is also asserted. Signal GPOWERDOWN reduces the power high value for the unconfigured regions, to avoid consuming unnecessary power. The power high value can be reduced to zero volts (ground), or to an intermediate level lower than the normal operating voltage of the die. In some embodiments, different unconfigured regions are reduced to different power high voltages.

Note that each of the four products illustrated in FIGS. 23-26 is configured with a configuration bitstream of a different size. The configuration begins with a row near the center of the die and follows the same pattern each time, from the row just above a horizontal centerline upward, then from the row just below the centerline downward.
The configuration logic switches to the row just below the centerline when it receives a high value on the "last_row_rollover" signal from the last row above the centerline, and it ends the configuration process when it receives a high value on the "last_row_rollover" signal from the last row below the centerline. This sequence simplifies the configuration logic. However, in other embodiments, configuration is performed following other patterns.

FIG. 27 illustrates one way in which the configuration process can be controlled, for example, to configure a multi-product PLD die as shown in FIGS. 23-26. Each row of tiles or logic elements (ROW 1-ROW 2 in FIG. 27) has a "hide_row" signal that is asserted whenever the row is hidden. When a row is hidden, the associated "hide_row" signal sets the "islast" signal of the previous row to a high value, thereby tagging this row as the last row to be configured. Note that the previous row is the row preceding the present row in the configuration process. In FIG. 27, the previous row is the row below the present row, and is also the adjacent row in the direction of the configuration control logic 2710.

When the "last_col_rollover" signal is asserted in the row before the hidden row closest to the configuration control logic 2710, a high value is generated on the "last_row_rollover" signal via AND gate 2701 and OR gate 2702, and is provided through the chain of OR gates to the configuration control logic 2710. Note that the last row is determined by the value of the "hide_row" signal in the next row in the configuration process, or by the VDD connection in the termination row (TERM ROW). Therefore, the selection of which row is the last row to be configured (e.g., which product will be generated from a multi-product die) can be changed simply by controlling the value of the "hide_row" signals.

Note that in the pictured embodiment, the row physically the farthest from the configuration logic still generates a correct "last_row_rollover" signal when no rows are hidden. Note also that when the hiding scheme of FIG. 27 is applied to the die of FIGS. 23-26, the logic is duplicated in the bottom half of the die (e.g., FIG. 27 shows the hiding logic only for the top half of the die). This duplication provides the added benefit of allowing asymmetrical hiding between the top and the bottom halves of the die.

FIG. 28 illustrates one way in which the "hide_row" signal shown in FIG. 27 can be generated in a multi-product PLD. In the embodiment of FIG. 28, a 2-bit vertical bus is used to control when rows are hidden and when they are not hidden. In some embodiments (not shown), the bus includes one bit or more than two bits. A via option (shown by a circle at the intersection of two lines) connects the "hide_row" signal of a row to one of the bits of the bus, or to ground. When the "hide_row" signal is connected to ground, the row is never hidden. When the "hide_row" signal is connected to one of the bits of the bus, the value of that bit determines whether or not the row is hidden. Thus, the association of a row with a hiding combination is fixed by the structure of the die, but whether or not that row is actually hidden depends on the value of the bus bit.

In one embodiment, each row hiding combination hides a contiguous set of rows, starting with the row(s) farthest from the configuration control logic.
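By way of illustration only, the row-hiding scheme of FIGS. 27 and 28 might be summarized as in the following sketch, in which the via options become a static mapping, the bus bits select the hiding combination, and the last configured row is derived in the spirit of the "islast"/"last_row_rollover" chain. All names are hypothetical; the row-to-connection mapping shown matches the FIG. 28 example described below, with row 0 taken as the row closest to the configuration control logic.

GROUND = "gnd"

# Via options: row index -> bus bit (or ground).  A row tied to ground
# is never hidden; hiding proceeds from the farthest row inward.
via_options = {0: GROUND, 1: "hide_rows_A", 2: "hide_rows_B"}

def hide_row(row: int, bus: dict) -> bool:
    source = via_options[row]
    return False if source == GROUND else bool(bus[source])

def last_configured_row(num_rows: int, bus: dict) -> int:
    """Return the index of the last row included in configuration,
    mimicking the islast / last_row_rollover chain of FIG. 27.
    (Returns -1 if every row is hidden.)"""
    for row in range(num_rows):
        if hide_row(row, bus):
            return row - 1        # previous row is tagged "islast"
    return num_rows - 1           # no rows hidden: farthest row is last

bus = {"hide_rows_A": 0, "hide_rows_B": 1}
print(last_configured_row(3, bus))   # row 2 hidden -> last configured row is 1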
In one embodiment, when the "hide_row" signal has a first value the row is not included in the configuration process and any I/O blocks in that row are bypassed by the boundary scan chain. When the "hide_row" signal has a second value, the row is included in the configuration process and any I/O blocks in that row are included in the boundary scan chain.

Note that in FIG. 28, some of the circles (via options) are black and some are white. The black circles indicate via options in which a via (a physical connection) is provided between the two lines that intersect at the via. Thus, in the exemplary embodiment of FIG. 28, the "hide_row" signal in row 0 is connected to ground, the "hide_row" signal in row 1 is connected to signal "hide_rows_A", and the "hide_row" signal in row 2 is connected to signal "hide_rows_B". Table 4 shows the different hiding combinations for the embodiment of FIG. 28. The first and second columns show the values of the "hide_rows_A" and "hide_rows_B" signals. The third column shows which rows are hidden in the pictured embodiment in response to the given values.

TABLE 4
hide_rows_A   hide_rows_B   Hidden Rows
0             0             No Hidden Rows
0             1             Hide Row 2
1             1             Hide Rows 1 and 2

FIG. 29 shows the same die in which different via options are selected. In the embodiment of FIG. 29, the "hide_row" signal in row 0 is connected to signal "hide_rows_A", the "hide_row" signal in row 1 is connected to signal "hide_rows_B", and the "hide_row" signal in row 2 is connected to signal "hide_rows_B". Table 5 shows the different hiding combinations for the embodiment of FIG. 29. The first and second columns show the values of the "hide_rows_A" and "hide_rows_B" signals. The third column shows which rows are hidden in the pictured embodiment in response to the given values.

TABLE 5
hide_rows_A   hide_rows_B   Hidden Rows
0             0             No Hidden Rows
0             1             Hide Rows 1 and 2
1             1             Hide Rows 0, 1, and 2

In other embodiments, the "hide_row" signals are generated using other methods, e.g., tied to power high or ground, or set to known values using fuses or package options. It will be apparent to those of skill in the art that these and many other methods can be utilized to provide known values for the "hide_row" signals. In some embodiments, the values of the "hide_row" signals are controlled by the previously-described product selection code. For example, in one embodiment the values of signals "hide_rows_A" and "hide_rows_B" are determined by the product selection code (which is set by fuses or package options, for example). Signals "hide_rows_A" and "hide_rows_B" then determine the values of the "hide_row" signals depending on the via options, as described above.

Those having skill in the relevant arts will now perceive various modifications and additions that can be made as a result of the disclosure herein. For example, the above text describes the circuits of the invention in the context of programmable logic devices (PLDs) such as FPGAs and CPLDs. However, certain aspects of the invention can also be implemented in other integrated circuits, including non-programmable circuits. Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents.
Some embodiments include an integrated structure having a stack of memory cell levels. A pair of channel-material-pillars extend through the stack. A source structure is under the stack. The source structure includes a portion having an upper region, a lower region, and an intermediate region between the upper and lower regions. The upper and lower regions have a same composition and join to one another at edge locations. The intermediate region has a different composition than the upper and lower regions. The edge locations are directly against the channel material of the channel-material-pillars. Some embodiments include methods of forming an integrated assembly.
CLAIMS

I/we claim:

1. An integrated structure, comprising: a stack of memory cell levels; a pair of channel-material-pillars extending through the stack; and a source structure under the stack; the source structure comprising a portion having an upper region, a lower region, and an intermediate region between the upper and lower regions; the upper and lower regions comprising a same composition and joining to one another at edge locations; the intermediate region comprising a different composition than the upper and lower regions; the edge locations being directly against the channel material of the channel-material-pillars.

2. The integrated structure of claim 1 wherein the intermediate region comprises semiconductor material.

3. The integrated structure of claim 1 wherein the intermediate region comprises insulative material.

4. The integrated structure of claim 1 wherein the intermediate region comprises conductive material.

5. The integrated structure of claim 1 wherein the intermediate region comprises amorphous carbon.

6. The integrated structure of claim 1 wherein the intermediate region comprises silicon dioxide.

7. The integrated structure of claim 1 wherein the intermediate region comprises silicon nitride.

8. The integrated structure of claim 1 wherein the intermediate region comprises metal.

9. The integrated structure of claim 1 wherein the intermediate region has a vertical thickness of less than or equal to about 30 nm.

10. The integrated structure of claim 1 wherein the intermediate region has a vertical thickness of less than or equal to about 10 nm.

11. The integrated structure of claim 1 wherein the upper and lower regions comprise conductively-doped semiconductor material.

12. The integrated structure of claim 1 wherein the upper and lower regions comprise conductively-doped silicon.

13. The integrated structure of claim 1 wherein the conductive levels comprise metal.

14. The integrated structure of claim 1 wherein the memory cell levels of the stack are spaced from one another by intervening levels comprising silicon dioxide.

15. An integrated structure, comprising: a stack of alternating insulative levels and conductive levels; a source structure under the stack; a panel extending through the conductive levels, the panel being between a first block region and a second block region; a first channel-material-pillar extending through the stack and being in the first block region; a bottom of the first channel-material-pillar extending into the source structure; a second channel-material-pillar extending through the stack and being in the second block region; a bottom of the second channel-material-pillar extending into the source structure; and the source structure comprising a portion having an upper region, a lower region, and an intermediate region between the upper and lower regions; the upper and lower regions comprising a same composition and joining to one another at edge locations; the intermediate region comprising a different composition than the upper and lower regions; the edge locations being directly against the channel material of the first and second channel-material-pillars.

16. The integrated structure of claim 15 wherein the panel comprises an outer liner region configured as a trough, and comprises an inner core region within said trough; and wherein the outer liner region comprises a same material as the intermediate region and is continuous with the material of the intermediate region.
17. The integrated structure of claim 16 wherein said same material wraps around terminal regions of the insulative levels adjacent the panel.

18. The integrated structure of claim 15 wherein the panel comprises an outer liner region configured as a trough, and comprises an inner core region within said trough; and wherein the outer liner region comprises a different material than the intermediate region.

19. The integrated structure of claim 15 wherein the intermediate region comprises semiconductor material.

20. The integrated structure of claim 15 wherein the intermediate region comprises insulative material.

21. The integrated structure of claim 15 wherein the intermediate region comprises conductive material.

22. The integrated structure of claim 15 wherein the intermediate region comprises one or more of amorphous carbon, silicon dioxide and silicon nitride.

23. A method of forming an integrated assembly, comprising: forming a construction to comprise a source structure, and to comprise a stack of alternating first and second levels over the source structure; the source structure including semiconductor material over metal-containing material, and including a sacrificial-material seam extending laterally within the semiconductor material; forming first and second openings to extend through the stack, through the semiconductor material and the sacrificial-material seam therein, and to the metal-containing material; forming first and second pillars within the first and second openings, respectively; the first and second pillars including first and second channel-material-cylinders, respectively, and including cell materials outwardly of the first and second channel-material-cylinders; forming a third opening between the first and second openings; the third opening extending to the sacrificial-material seam; removing the sacrificial material of the sacrificial-material seam to form a conduit extending from the first pillar to the second pillar; removing the cell materials adjacent the conduit to extend the conduit to the first and second channel-material-cylinders; forming conductively-doped semiconductor material within the conduit to line the conduit; the conductively-doped semiconductor material being directly against the first and second channel-material-cylinders; a void remaining within the lined conduit and being open to the third opening; forming a first material within the void and the third opening; out-diffusing dopant from the conductively-doped semiconductor material into the channel material of the first and second channel-material-cylinders, the out-diffused dopant extending upwardly to at least one of the first levels; and forming conductive material within the first levels.

24. The method of claim 23 further comprising recessing the first material within the third opening to a level beneath the lowest of the first levels; and then forming one or more insulative materials within the third opening and over the recessed first material.

25. The method of claim 24 wherein the third opening is a trench which separates a first block region from a second block region; wherein the first pillar is within the first block region; and wherein the second pillar is within the second block region.

26. The method of claim 23 wherein the first material comprises silicon dioxide.

27. The method of claim 23 wherein the first material comprises carbon.

28. The method of claim 23 wherein the first material is an insulative material.
29. The method of claim 23 wherein the first material is a conductive material.

30. The method of claim 23 comprising forming memory cells along the first levels, with the memory cells comprising regions of the first and second channel-material-cylinders.

31. The method of claim 23 wherein the metal-containing material of the source structure comprises WSi, where the chemical formula indicates primary constituents rather than a specific stoichiometry.
INTEGRATED ASSEMBLIES, AND METHODS OF FORMING INTEGRATED ASSEMBLIES

RELATED PATENT DATA

This application claims priority to and the benefit of U.S. Patent Application Serial No. 16/723,136, filed December 20, 2019, the disclosure of which is incorporated herein by reference.

TECHNICAL FIELD

Integrated assemblies (e.g., memory devices configured for NAND). Methods of forming integrated assemblies (e.g., integrated memory devices).

BACKGROUND

Memory provides data storage for electronic systems. Flash memory is one type of memory, and has numerous uses in modern computers and devices. For instance, modern personal computers may have BIOS stored on a flash memory chip. As another example, it is becoming increasingly common for computers and other devices to utilize flash memory in solid state drives to replace conventional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and to provide the ability to remotely upgrade the devices for enhanced features.

NAND may be a basic architecture of flash memory, and may be configured to comprise vertically-stacked memory cells.

Before describing NAND specifically, it may be helpful to more generally describe the relationship of a memory array within an integrated arrangement. FIG. 1 shows a block diagram of a prior art device 1000 which includes a memory array 1002 having a plurality of memory cells 1003 arranged in rows and columns along with access lines 1004 (e.g., wordlines to conduct signals WL0 through WLm) and first data lines 1006 (e.g., bitlines to conduct signals BL0 through BLn). Access lines 1004 and first data lines 1006 may be used to transfer information to and from the memory cells 1003. A row decoder 1007 and a column decoder 1008 decode address signals A0 through AX on address lines 1009 to determine which ones of the memory cells 1003 are to be accessed. A sense amplifier circuit 1015 operates to determine the values of information read from the memory cells 1003. An I/O circuit 1017 transfers values of information between the memory array 1002 and input/output (I/O) lines 1005. Signals DQ0 through DQN on the I/O lines 1005 can represent values of information read from or to be written into the memory cells 1003. Other devices can communicate with the device 1000 through the I/O lines 1005, the address lines 1009, or the control lines 1020. A memory control unit 1018 is used to control memory operations to be performed on the memory cells 1003, and utilizes signals on the control lines 1020. The device 1000 can receive supply voltage signals Vcc and Vss on a first supply line 1030 and a second supply line 1032, respectively. The device 1000 includes a select circuit 1040 and an input/output (I/O) circuit 1017. The select circuit 1040 can respond, via the I/O circuit 1017, to signals CSEL1 through CSELn to select signals on the first data lines 1006 and the second data lines 1013 that can represent the values of information to be read from or to be programmed into the memory cells 1003. The column decoder 1008 can selectively activate the CSEL1 through CSELn signals based on the A0 through AX address signals on the address lines 1009. The select circuit 1040 can select the signals on the first data lines 1006 and the second data lines 1013 to provide communication between the memory array 1002 and the I/O circuit 1017 during read and programming operations.

The memory array 1002 of FIG.
1 may be a NAND memory array, and FIG. 2 shows a schematic diagram of a three-dimensional NAND memory device 200 which may be utilized for the memory array 1002 of FIG. 1. The device 200 comprises a plurality of strings of charge-storage devices. In a first direction (Z-Z’), each string of charge-storage devices may comprise, for example, thirty-two charge-storage devices stacked over one another with each charge-storage device corresponding to one of, for example, thirty-two tiers (e.g., Tier0-Tier31). The charge-storage devices of a respective string may share a common channel region, such as one formed in a respective pillar of semiconductor material (e.g., polysilicon) about which the string of charge-storage devices is formed. In a second direction (X-X’), each first group of, for example, sixteen first groups of the plurality of strings may comprise, for example, eight strings sharing a plurality (e.g., thirty-two) of access lines (i.e., “global control gate (CG) lines”, also known as wordlines, WLs). Each of the access lines may couple the charge-storage devices within a tier. The charge-storage devices coupled by the same access line (and thus corresponding to the same tier) may be logically grouped into, for example, two pages, such as P0/P32, P1/P33, P2/P34 and so on, when each charge-storage device comprises a cell capable of storing two bits of information. In a third direction (Y-Y’), each second group of, for example, eight second groups of the plurality of strings, may comprise sixteen strings coupled by a corresponding one of eight data lines. The size of a memory block may comprise 1,024 pages and total about 16MB (e.g., 16 WLs x 32 tiers x 2 bits = 1,024 pages/block, block size = 1,024 pages x 16KB/page = 16MB). The number of the strings, tiers, access lines, data lines, first groups, second groups and/or pages may be greater or smaller than those shown in FIG. 2.

FIG. 3 shows a cross-sectional view of a memory block 300 of the 3D NAND memory device 200 of FIG. 2 in an X-X’ direction, including fifteen strings of charge-storage devices in one of the sixteen first groups of strings described with respect to FIG. 2. The plurality of strings of the memory block 300 may be grouped into a plurality of subsets 310, 320, 330 (e.g., tile columns), such as tile column I, tile column J and tile column K, with each subset (e.g., tile column) comprising a “partial block” (sub-block) of the memory block 300. A global drain-side select gate (SGD) line 340 may be coupled to the SGDs of the plurality of strings. For example, the global SGD line 340 may be coupled to a plurality (e.g., three) of sub-SGD lines 342, 344, 346 with each sub-SGD line corresponding to a respective subset (e.g., tile column), via a corresponding one of a plurality (e.g., three) of sub-SGD drivers 332, 334, 336. Each of the sub-SGD drivers 332, 334, 336 may concurrently couple or cut off the SGDs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks. A global source-side select gate (SGS) line 360 may be coupled to the SGSs of the plurality of strings. For example, the global SGS line 360 may be coupled to a plurality of sub-SGS lines 362, 364, 366 with each sub-SGS line corresponding to the respective subset (e.g., tile column), via a corresponding one of a plurality of sub-SGS drivers 322, 324, 326.
Each of the sub-SGS drivers 322, 324, 326 may concurrently couple or cut off the SGSs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks. A global access line (e.g., a global CG line) 350 may couple the charge-storage devices corresponding to the respective tier of each of the plurality of strings. Each global CG line (e.g., the global CG line 350) may be coupled to a plurality of sub-access lines (e.g., sub-CG lines) 352, 354, 356 via a corresponding one of a plurality of sub-string drivers 312, 314 and 316. Each of the sub-string drivers may concurrently couple or cut off the charge-storage devices corresponding to the respective partial block and/or tier independently of those of other partial blocks and/or other tiers. The charge-storage devices corresponding to the respective subset (e.g., partial block) and the respective tier may comprise a “partial tier” (e.g., a single “tile”) of charge-storage devices. The strings corresponding to the respective subset (e.g., partial block) may be coupled to a corresponding one of sub-sources 372, 374 and 376 (e.g., “tile source”) with each sub-source being coupled to a respective power source.

The NAND memory device 200 is alternatively described with reference to a schematic illustration of FIG. 4.

The memory array 200 includes wordlines 202_1 to 202_N, and bitlines 228_1 to 228_M. The memory array 200 also includes NAND strings 206_1 to 206_M. Each NAND string includes charge-storage transistors 208_1 to 208_N. The charge-storage transistors may use floating gate material (e.g., polysilicon) to store charge, or may use charge-trapping material (such as, for example, silicon nitride, metallic nanodots, etc.) to store charge.

The charge-storage transistors 208 are located at intersections of wordlines 202 and strings 206. The charge-storage transistors 208 represent non-volatile memory cells for storage of data. The charge-storage transistors 208 of each NAND string 206 are connected in series source-to-drain between a source-select device (e.g., source-side select gate, SGS) 210 and a drain-select device (e.g., drain-side select gate, SGD) 212. Each source-select device 210 is located at an intersection of a string 206 and a source-select line 214, while each drain-select device 212 is located at an intersection of a string 206 and a drain-select line 215. The select devices 210 and 212 may be any suitable access devices, and are generically illustrated with boxes in FIG. 4.

A source of each source-select device 210 is connected to a common source line 216. The drain of each source-select device 210 is connected to the source of the first charge-storage transistor 208 of the corresponding NAND string 206. For example, the drain of source-select device 210_1 is connected to the source of charge-storage transistor 208_1 of the corresponding NAND string 206_1. The source-select devices 210 are connected to source-select line 214.

The drain of each drain-select device 212 is connected to a bitline (i.e., digit line) 228 at a drain contact. For example, the drain of drain-select device 212_1 is connected to the bitline 228_1. The source of each drain-select device 212 is connected to the drain of the last charge-storage transistor 208 of the corresponding NAND string 206. For example, the source of drain-select device 212_1 is connected to the drain of charge-storage transistor 208_N of the corresponding NAND string 206_1.
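As a quick sanity check of the block-size arithmetic quoted above for FIG. 2 (16 wordlines per block, 32 tiers, two bits per cell, and 16KB pages), the following lines simply reproduce the calculation; the variable names are illustrative only and carry no significance beyond this example.

wordlines_per_block = 16
tiers = 32
bits_per_cell = 2
page_size_kb = 16

pages_per_block = wordlines_per_block * tiers * bits_per_cell   # 1024 pages
block_size_mb = pages_per_block * page_size_kb // 1024          # 16 MB
print(pages_per_block, "pages/block,", block_size_mb, "MB/block")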
The charge-storage transistors 208 include a source 230, a drain 232, a charge-storage region 234, and a control gate 236. The charge-storage transistors 208 have their control gates 236 coupled to a wordline 202. A column of the charge-storage transistors 208 are those transistors within a NAND string 206 coupled to a given bitline 228. A row of the charge-storage transistors 208 are those transistors commonly coupled to a given wordline 202.

The vertically-stacked memory cells of three-dimensional NAND architecture may be block-erased by generating hole carriers beneath them, and then utilizing an electric field to sweep the hole carriers upwardly along the memory cells.

Gating structures of transistors may be utilized to provide gate-induced drain leakage (GIDL) which generates the holes utilized for block-erase of the memory cells. The transistors may be the source-side select (SGS) devices described above. The channel material associated with a string of memory cells may be configured as a channel material pillar, and a region of such pillar may be gatedly coupled with an SGS device. The gatedly coupled portion of the channel material pillar is a portion that overlaps a gate of the SGS device.

It can be desired that at least some of the gatedly coupled portion of the channel material pillar be heavily doped. In some applications it can be desired that the gatedly coupled portion include both a heavily-doped lower region and a lightly-doped upper region; with both regions overlapping the gate of the SGS device. Specifically, overlap with the lightly-doped region provides a non-leaky “OFF” characteristic for the SGS device, and overlap with the heavily-doped region provides leaky GIDL characteristics for the SGS device. The terms “heavily-doped” and “lightly-doped” are utilized in relation to one another rather than relative to specific conventional meanings. Accordingly, a “heavily-doped” region is more heavily doped than an adjacent “lightly-doped” region, and may or may not comprise heavy doping in a conventional sense. Similarly, the “lightly-doped” region is less heavily doped than the adjacent “heavily-doped” region, and may or may not comprise light doping in a conventional sense. In some applications, the term “lightly-doped” refers to semiconductor material having less than or equal to about 10^18 atoms/cm^3 of dopant, and the term “heavily-doped” refers to semiconductor material having greater than or equal to about 10^22 atoms/cm^3 of dopant.

The channel material may be initially doped to the lightly-doped level, and then the heavily-doped region may be formed by out-diffusion from an underlying doped semiconductor material.

It is desired to develop improved methods of achieving desired heavily-doped regions of channel material pillars. It is also desired to develop improved memory devices.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a block diagram of a prior art memory device having a memory array with memory cells.

FIG. 2 shows a schematic diagram of the prior art memory device of FIG. 1 in the form of a 3D NAND memory device.

FIG. 3 shows a cross-sectional view of the prior art 3D NAND memory device of FIG. 2 in an X-X’ direction.

FIG. 4 is a schematic diagram of a prior art NAND memory array.

FIGS. 5 and 5A are diagrammatic views of a region of an example memory device (memory array, memory configuration). FIG. 5 is a diagrammatic cross-sectional side view. FIG. 5A is a diagrammatic top-down view along the line 5A-5A of FIG. 5. The cross-sectional side view of FIG.
5 is along the line 5-5 of FIG. 5A.

FIG. 6 is a diagrammatic cross-sectional side view of an example integrated assembly at an example process stage of an example embodiment method for forming an example memory device.

FIG. 7 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 6.

FIG. 7A is an enlarged view of the region “A” of FIG. 7.

FIG. 8 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 7.

FIG. 9 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 8.

FIG. 10 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 9.

FIG. 10A is an enlarged view of the region “A” of FIG. 10.

FIG. 11 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 10.

FIG. 12 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 11.

FIG. 13 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 12.

FIG. 14 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 13.

FIG. 15 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 14.

FIG. 16 is a diagrammatic cross-sectional side view of the example integrated assembly of FIG. 6 at an example process stage subsequent to that of FIG. 15.

DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

Some embodiments include integrated structures having a source-structure-portion which includes an upper region, a lower region, and an intermediate region between the upper and lower regions. The upper and lower regions have a same composition and join to one another at edge locations. The intermediate region has a different composition than the upper and lower regions. The edge locations may be directly against channel material of channel-material-pillars. Some embodiments include methods of forming integrated structures. Example embodiments are described with reference to FIGS. 5-16.

Referring to FIGS. 5 and 5A, an integrated assembly 10 includes a stack 12 of alternating conductive levels 14 and insulative levels 16.

The conductive levels 14 include conductive regions 70. The conductive regions 70 may comprise any suitable composition(s). In the shown embodiment, the conductive regions include a conductive core material 76 (e.g., tungsten), and a conductive liner material 74 (e.g., titanium nitride) which at least partially surrounds the core material.

Dielectric-barrier material 72 extends at least partially around the conductive regions 70. The dielectric-barrier material 72 may comprise any suitable composition(s); and in some embodiments comprises high-k material (e.g., AlO, where the chemical formula indicates primary constituents rather than a specific stoichiometry). The term “high-k” means a dielectric constant greater than that of silicon dioxide.

The insulative levels 16 comprise insulative material 22.
The insulative material 22 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide. In some embodiments, the levels 16 may be referred to as intervening levels provided between the conductive levels 14.

A source structure 18 is under the stack 12. The source structure 18 comprises materials 24 and 26. The material 24 may comprise conductively-doped semiconductor material (e.g., conductively-doped silicon), and the material 26 may be a metal-containing material (e.g., WSi, where the chemical formula indicates primary constituents rather than a specific stoichiometry). The source structure 18 also includes a portion 84 having an upper region 86, a lower region 88, and an intermediate region 90 between the upper and lower regions.

The upper and lower regions 86 and 88 comprise a same composition 58 as one another, and join to one another at edge locations 93. The intermediate region 90 comprises a composition 62 which is different from the composition 58.

In some embodiments, the composition 58 may comprise conductively-doped semiconductor material (e.g., conductively-doped silicon). In such embodiments, the composition 58 may be the same as the composition 24, or may be different than the composition 24.

The composition 62 may be semiconductive, insulative, or conductive. In some embodiments, the composition 62 includes semiconductor material (e.g., silicon, germanium, etc.). In some embodiments, the composition 62 includes one or more of amorphous carbon, silicon dioxide and silicon nitride. In some embodiments, the composition 62 includes metal (e.g., tungsten, titanium, etc.) and/or metal-containing compositions (e.g., metal carbide, metal nitride, metal silicide, etc.).

The intermediate region 90 may have any suitable vertical thickness T, and in some embodiments such vertical thickness may be less than or equal to about 30 nanometers (nm) or less than or equal to about 10 nm. In some embodiments, the vertical thickness T may be within a range of from about 5 nm to about 30 nm.

The source structure 18 may be analogous to the source structures 216 described in the “Background” section. The source structure may be coupled with control circuitry (e.g., CMOS). The control circuitry may be under the source structure 18 or may be in any other suitable location. A conductive material of the source structure 18 may be coupled with the control circuitry (e.g., CMOS) at any suitable process stage.

The source structure is shown to be supported by an insulative material 32. The insulative material 32 may comprise any suitable composition(s); including, for example, one or more of silicon dioxide, silicon nitride, etc. The insulative material 32 may be supported by a semiconductor substrate (base). Such substrate is not shown in FIG. 5 to simplify the drawing.

Pillars 50 extend through the stack 12, through the material 24 of the source structure 18, and to an upper surface of the metal-containing material 26 within the source structure 18. The pillars 50 along the cross-section of FIG. 5 are labeled as 50a and 50b so that they may be distinguished relative to one another. The pillars 50a and 50b may be referred to as first and second pillars, respectively.

The pillars 50 include channel-material 36, cell materials within a region 42 adjacent the channel material, and dielectric material 40.
In some embodiments, the channel material 36 may be considered to be configured as channel-material-pillars (or channel-material-cylinders) 38 which are comprised by the pillars 50. The channel-material-pillars 38 along the cross-section of FIG. 5 are labeled as 38a and 38b so that they may be distinguished relative to one another. The channel-material-cylinders 38a and 38b may be referred to as first and second channel-material-cylinders, respectively.

The memory cell materials within the regions 42 may comprise tunneling material, charge-trapping material and charge-blocking material, as described in more detail below with reference to FIGS. 7 and 7A.

Doped regions 66 (indicated by stippling) are within lower regions of the channel-material-cylinders 38a and 38b. The edge locations 93 described above are directly against portions of the doped regions 66 of the channel-material-cylinders 38.

The assembly 10 of FIG. 5 is shown as a memory device comprising memory cells 92 and source-select devices (SGS devices) 94. A lowermost of the conductive levels 14 is labeled 14a, and the doped region 66 extends to the conductive level 14a. The conductive level 14a comprises the SGS devices 94. In the shown embodiment, the dopant (indicated by stippling) extends partially across the level 14a to achieve the desired balance between non-leaky “OFF” characteristics for the SGS devices and leaky GIDL characteristics for the SGS devices. Although only one of the conductive levels is shown to be incorporated into the source-select devices, in other embodiments multiple conductive levels may be incorporated into the source-select devices. The conductive levels may be electrically coupled with one another (ganged) to be together incorporated into long-channel source-select devices. If multiple of the conductive levels are incorporated into the source-select devices, the out-diffused dopant may extend upwardly across two or more of the conductive levels 14 which are incorporated into the source-select devices.

The memory cells 92 (e.g., NAND memory cells) are vertically stacked one atop another. The memory cells 92 are along the first levels 14. Each of the memory cells 92 comprises a region of the semiconductor material (channel material) 36, and comprises regions (control gate regions) 96 of the conductive levels 14. The regions of the conductive levels which are not comprised by the memory cells 92 may be considered to be wordline regions (or routing regions) 98 which couple the control gate regions with driver circuitry and/or with other suitable circuitry. The memory cells 92 also comprise the cell materials (e.g., the tunneling material, charge-storage material, dielectric-barrier material and charge-blocking material) within the regions 42.

In some embodiments, the conductive levels 14 associated with the memory cells 92 may be referred to as wordline/control gate levels (or memory cell levels), in that they include wordlines and control gates associated with vertically-stacked memory cells of NAND strings. The NAND strings may comprise any suitable number of memory cell levels. For instance, the NAND strings may have 8 memory cell levels, 16 memory cell levels, 32 memory cell levels, 64 memory cell levels, 512 memory cell levels, 1024 memory cell levels, etc.

An opening (slit, trench) 52 extends through the conductive levels 14, and an insulative panel 100 is provided within such opening. The panel 100 may extend in and out of the page relative to the cross-sectional view of FIG.
In some embodiments, the pillars 50 may be considered to be representative of a large number of substantially identical channel material pillars extending across the memory device 10; with the term "substantially identical" meaning identical to within reasonable tolerances of fabrication and measurement. FIG. 5A shows the pillars 50 arranged within a matrix (with the pillars 50 being hexagonally-packed in the illustrated embodiment), and shows the slit 52 (and the panel 100 therein) extending through the matrix of the channel material pillars. In some embodiments, the slit 52 (and the panel 100 therein) may divide the pillars between a first block region 102 and a second block region 104. Accordingly, the memory cells 92 on one side of the slit 52 may be considered to be within the first block region 102, and the memory cells 92 on the other side of the slit 52 may be considered to be within a second block region 104. The block regions 102 and 104 may be analogous to the blocks (or sub-blocks) described above in the "Background" section of this disclosure. The integrated assembly 10 of FIGS. 5 and 5A may be considered to correspond to a memory device, or memory array, comprising the memory cells 92. Such integrated assembly may be formed with any suitable processing. Example processing is described with reference to FIGS. 6-16. Referring to FIG. 6, the integrated assembly 10 is shown at an initial process stage. The assembly 10 includes a preliminary stack 12 of first and second levels 14 and 16 over a preliminary source structure 18. The first levels 14 comprise a material 20, and the second levels 16 comprise a material 22. The materials 20 and 22 may comprise any suitable compositions. In some embodiments, the material 20 may comprise, consist essentially of, or consist of silicon nitride; and the material 22 may comprise, consist essentially of, or consist of silicon dioxide. The material 22 of FIG. 6 may be identical to that described above with reference to FIG. 5. The preliminary source structure 18 includes the semiconductor material 24 over the metal-containing material 26, and includes a seam 28 extending laterally within the semiconductor material 24. The seam 28 comprises sacrificial material 30, and may be referred to as a sacrificial-material-seam. The semiconductor material 24 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor material (e.g., gallium phosphide), semiconductor oxide, etc.; with the term III/V semiconductor material referring to semiconductor materials comprising elements selected from groups III and V of the periodic table (with groups III and V being old nomenclature, and now being referred to as groups 13 and 15). For instance, in some embodiments the semiconductor material 24 may comprise conductively-doped silicon (e.g., n-type silicon). The silicon may be in any suitable crystalline form or combination of crystalline forms (e.g., monocrystalline, polycrystalline, amorphous). The metal-containing material 26 may comprise any suitable metal-containing composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.) and/or metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.).
In some embodiments, the metal-containing material 26 may comprise, consist essentially of, or consist of WSi, where the chemical formula indicates primary constituents rather than a specific stoichiometry. The WSi may be alternatively referred to as WSix, where x is a number greater than zero. The sacrificial material 30 is a material which can be selectively removed relative to the semiconductor material 24. For purposes of interpreting this disclosure and the claims that follow, a material is considered to be selectively removable relative to another material if the material may be etched faster than the other material. In some embodiments, the sacrificial material 30 may comprise, consist essentially of, or consist of one or more metal nitrides (e.g., titanium nitride, tantalum nitride, tungsten nitride, etc.). For instance, the sacrificial material 30 may comprise TiN, where the chemical formula indicates primary constituents rather than a specific stoichiometry. The metal-containing material 26 is supported by the insulative material 32. The insulative material 32 may comprise any suitable composition(s); such as, for example, one or more of silicon dioxide, silicon nitride, aluminum oxide, etc. The insulative material 32 may be supported by a base (not shown). The base may comprise semiconductor material; and may, for example, comprise, consist essentially of, or consist of monocrystalline silicon (Si). The base may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. In some applications, the base may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials may include, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc. In some embodiments, the stack 12 and source structure 18 of FIG. 6 may be together considered to be a construction 34. Referring to FIG. 7, openings 34 are formed to extend through the stack 12, through the semiconductor material 24 and seam 28, and to the metal-containing material 26. The openings 34 of FIG. 7 may be referred to as first and second openings 34a and 34b to distinguish them from one another. The semiconductor material (channel material) 36 is formed within the openings 34. The semiconductor material 36 forms the channel-material-pillars (channel-material-cylinders) 38 within the openings 34. The illustrated channel-material-cylinders 38 of FIG. 7 may be referred to as first and second channel-material-cylinders 38a and 38b to distinguish them from one another. The semiconductor material 36 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor material (e.g., gallium phosphide), semiconductor oxide, etc.
In some embodiments, the semiconductor material 36 may comprise, consist essentially of, or consist of appropriately-doped silicon. In the illustrated embodiment, the channel-material-pillars 36 are annular rings (as shown in the top-down view of FIG. 5A), with such annular rings surrounding an insulative material 40. Such configuration of the channel-material-pillars may be considered to correspond to a "hollow" channel configuration, with the insulative material 40 being provided within the hollows of the channel material pillars. In other embodiments, the channel-material-pillars (channel-material-cylinders) may be configured as solid pillars (cylinders). The channel-material-pillars 36 are spaced from the materials 20 and 22 of the stack 12 by intervening regions 42. The regions 42 comprise one or more cell materials (memory cell materials), with such cell materials being formed within the openings 34 prior to the channel material 36. The cell materials of the regions 42 may comprise tunneling material 44, charge-storage material 46 and charge-blocking material 48, as shown in FIG. 7A relative to an enlarged view of a region "A" of FIG. 7. The tunneling material 44 may comprise any suitable composition(s); and in some embodiments may comprise one or more of silicon dioxide, aluminum oxide, hafnium oxide, zirconium oxide, etc. The charge-storage material 46 may comprise any suitable composition(s); and in some embodiments may comprise charge-trapping material (e.g., one or more of silicon nitride, silicon oxynitride, conductive nanodots, etc.). The charge-blocking material 48 may comprise any suitable composition(s); and in some embodiments may comprise one or more of silicon dioxide, silicon oxynitride, aluminum oxide, hafnium oxide, zirconium oxide, etc. The materials 36, 44, 46 and 48 may be together considered to form the pillars 50, with the channel-material-pillars 38 being included within such pillars 50. The illustrated pillars 50 of FIG. 7 may be referred to as first and second pillars 50a and 50b, respectively, to distinguish them from one another. Referring to FIG. 8, a third opening 52 is formed between the first and second openings 34a and 34b. The third opening 52 corresponds to the slit (trench) described above with reference to FIG. 5. The third opening 52 passes through the stack 12 and to the sacrificial-material-seam 28. The opening 52 may or may not penetrate the sacrificial material 30 of the seam 28. In some embodiments, the openings 34 are cylindrical openings and the opening 52 is a trench (slit) which extends in and out of the page relative to the cross-section of FIG. 8, as may be understood with reference to FIG. 5A. The opening 52 has sidewall surfaces 53 which extend along the materials 20 and 22 of the stack 12. In the shown embodiment, the sidewall surfaces 53 are tapered. In other embodiments, the sidewall surfaces 53 may be substantially vertically straight; with the term "substantially vertically straight" meaning vertically straight to within reasonable tolerances of fabrication and measurement. Referring to FIG. 9, protective material 54 is formed along the sidewall surfaces 53 of the opening 52. In some embodiments, the protective material 54 may be considered to line the sidewall surfaces 53. The protective material 54 may comprise any suitable composition(s).
In some embodiments, the protective material 54 may comprise, consist essentially of, or consist of silicon; and specifically may comprise silicon which is effectively undoped (e.g., comprising an intrinsic dopant concentration, and in some embodiments comprising a dopant concentration of less than or equal to about 10¹⁶ atoms/cm³). Referring to FIG. 10, the sacrificial material 30 of the seam 28 (FIG. 9) is selectively removed relative to the semiconductor material 24, and relative to the protective material 54. Such removal forms conduits 56. The conduits 56 are extended through the cell materials 44, 46 and 48 within the regions 42 (as shown in FIG. 10A) to expose sidewall surfaces 55 of the semiconductor material (channel material) 36. Accordingly, the conduits 56 are extended to the first and second channel-material-cylinders 38a and 38b of the first and second pillars 50a and 50b. In some embodiments, the conduits 56 may have vertical dimensions D1 within a range of from about 10 nanometers (nm) to about 50 nm. Referring to FIG. 11, the conductively-doped semiconductor material 58 is formed within the conduits 56 (FIG. 10) to line the conduits (i.e., to partially fill the conduits). The liner of the conductively-doped semiconductor material 58 within the conduits 56 may have a vertical thickness D2 within a range of from about 5 nm to about 20 nm. A void 60 is within the partially-filled conduit, with such void being open to (i.e., being continuous with) the third opening 52. The semiconductor material 58 may be referred to as a second semiconductor material to distinguish it from the first semiconductor material 24. The semiconductor material 58 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor material (e.g., gallium phosphide), semiconductor oxide, etc. In some embodiments, the semiconductor material 58 may comprise silicon which is heavily doped (e.g., doped to a concentration of at least about 10²¹ atoms/cm³) with n-type dopant (e.g., phosphorus). The conductively-doped semiconductor material 58 is directly against the first and second channel-material-cylinders 38a and 38b. In some embodiments, it is found that it can be difficult to uniformly fill the conduits 56 with semiconductor material 58 due to the semiconductor material 58 prematurely pinching off the conduits. Generally, semiconductor material deposits as a relatively rough material, and it can be difficult to uniformly fill the conduits 56 during deposition of such rough material. The partial fill of the conduits disclosed herein may enable the material 58 to entirely fill regions along the surfaces 55 of the channel-material-cylinders 38a and 38b without the premature pinch-off of the conduits. The material 58 is adjacent the material 54 along the sidewalls of the slit 52. In the shown embodiment, the materials 54 and 58 merge to form a material 54/58. In other embodiments, the material 58 may remain discrete from the material 54 so that the illustrated material 54/58 is actually a laminate of materials 54 and 58. Referring to FIG. 12, the opening 52 and voids 60 (FIG. 11) are filled with the material 62. In some embodiments, the material 62 may be referred to as a fill material.
In some embodiments, the material 62 may be referred to as a third material to distinguish it from the first and second materials 20 and 22 of the stack 12. The material 62 may comprise any suitable composition(s); and in some embodiments may include semiconductive material, insulative material and/or conductive material. In some example embodiments, the material 62 may comprise, consist essentially of, or consist of silicon dioxide; and may be formed as a spin-on dielectric (SOD) or a spin-on glass (SOG). Additionally, or alternatively, at least some of the silicon dioxide of the material 62 may be formed by atomic layer deposition (ALD) to ensure that the material 62 entirely fills the voids 60 (FIG. 11). Although the material 62 is shown entirely filling the voids 60 (which may be preferred in some applications), it is to be understood that some embodiments will have regions of the voids 60 remaining after the formation of the material 62 (i.e., the material 62 may not entirely fill the voids 60). In some example embodiments, the material 62 may comprise, consist essentially of, or consist of carbon (e.g., amorphous carbon). In some example embodiments, the material 62 may comprise, consist essentially of, or consist of metal (e.g., tungsten, titanium, etc.) and/or metal-containing compositions (e.g., metal nitride, metal carbide, metal silicide, tungsten nitride, titanium nitride, tungsten silicide, titanium silicide, etc.). In some example embodiments, the material 62 may comprise, consist essentially of, or consist of semiconductor material (either undoped, or conductively doped); such as, for example, silicon, germanium, semiconductor oxide, etc. Referring to FIG. 13, the material 62 is recessed within the third opening 52 (i.e., the slit 52) to a level beneath the lowest of the second levels 16 (i.e., to a level beneath the bottommost of the levels 16 of the stack 12). Referring to FIG. 14, the materials 54/58 are removed from along the sidewalls 53 of the opening (slit) 52, and protective material 64 is formed along the conductively-doped semiconductor material 24 at the bottom of the opening 52. The protective material 64 may protect the conductively-doped semiconductor material 24 (e.g., conductively-doped silicon) from being exposed to a subsequent etch (described below with reference to FIG. 15) which may otherwise undesirably remove the conductively-doped semiconductor material. The protective material 64 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide. The protective material 64 may be formed by oxidizing regions of the materials 54/58 (FIG. 13), oxidizing a region of the material 24, and/or by deposition (e.g., atomic layer deposition, chemical vapor deposition, etc.). Dopant is out-diffused from the conductively-doped semiconductor material 58 into the semiconductor material (channel material) 36 to form the heavily-doped regions 66 within lower portions of the channel-material-pillars 38. Stippling is utilized to indicate the dopant within the heavily-doped regions 66. The doped regions 66 extend upwardly to the lowest of the first levels 14. The out-diffusion from the doped material 58 into the semiconductor material 36 may be accomplished with any suitable processing, including, for example, suitable thermal processing (e.g., thermal processing at a temperature exceeding about 300°C for a duration of at least about two minutes).
Referring to FIG. 15, the material 20 (FIG. 14) of the first levels 14 is removed to leave voids 68 along the first levels 14. In some embodiments, the material 20 may comprise silicon nitride, and may be removed with an etch utilizing hot phosphoric acid. The material 64 protects the conductively-doped semiconductor material 24 from being exposed to such hot phosphoric acid. Referring to FIG. 16, the voids 68 (FIG. 15) are lined with the dielectric-barrier material 72, and are then filled with the conductive material 70. The conductive material 70 may comprise any suitable composition(s); and in the shown embodiment comprises the conductive core material 74 (e.g., tungsten) at least partially surrounded by the conductive liner material 76 (e.g., titanium nitride). The first levels 14 of FIG. 16 may be considered to be conductive levels, and the stack 12 may be considered to comprise alternating insulative levels 16 and conductive levels 14. The insulative materials 80 and 82 are formed within the opening (trench, slit) 52. The material 80 may comprise a same composition as the material 62, or may comprise a different composition than the material 62. Accordingly, a dashed line 81 is provided to indicate a possible interface where the materials 62 and 80 join to one another in embodiments in which the materials 62 and 80 comprise different compositions relative to one another. If the materials 62 and 80 comprise a same composition as one another, then such materials will merge into a single continuous material which extends within the opening 52 and within the source structure 18. The insulative materials 80 and 82 may comprise any suitable composition(s); and may comprise a same composition as one another, or different compositions. In some embodiments, the material 80 may comprise, consist essentially of, or consist of silicon dioxide; and the material 82 may comprise, consist essentially of, or consist of one or more of silicon, silicon nitride, carbon, etc. In some embodiments, the material 80 may be considered to form an outer liner region within the trench (slit) 52, and may be considered to be configured as a trough 83. The material 82 may be considered to be configured as an inner core region 85 within such trough. In the illustrated embodiment, the material 80 wraps around terminal regions 78 of the insulative levels 16. The assemblies and structures discussed above may be utilized within integrated circuits (with the term "integrated circuit" meaning an electronic circuit supported by a semiconductor substrate); and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc. Unless specified otherwise, the various materials, substances, compositions, etc. described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc. The terms "dielectric" and "insulative" may be utilized to describe materials having insulative electrical properties.
The terms are considered synonymous in this disclosure. The utilization of the term "dielectric" in some instances, and the term "insulative" (or "electrically insulative") in other instances, may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences. The terms "electrically connected" and "electrically coupled" may both be utilized in this disclosure. The terms are considered synonymous. The utilization of one term in some instances and the other in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow. The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation. The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings. When a structure is referred to above as being "on", "adjacent" or "against" another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being "directly on", "directly adjacent" or "directly against" another structure, there are no intervening structures present. The terms "directly under", "directly over", etc., do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate upright alignment. Structures (e.g., layers, materials, etc.) may be referred to as "extending vertically" to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate). The vertically-extending structures may extend substantially orthogonally relative to an upper surface of the base, or not. Some embodiments include an integrated structure having a stack of memory cell levels. A pair of channel-material-pillars extend through the stack. A source structure is under the stack. The source structure includes a portion having an upper region, a lower region, and an intermediate region between the upper and lower regions. The upper and lower regions have a same composition and join to one another at edge locations. The intermediate region has a different composition than the upper and lower regions. The edge locations are directly against the channel material of the channel-material-pillars. Some embodiments include an integrated structure comprising a stack of alternating insulative levels and conductive levels. A source structure is under the stack. A panel extends through the conductive levels. The panel is between a first block region and a second block region. A first channel-material-pillar extends through the stack and is in the first block region. A bottom of the first channel-material-pillar extends into the source structure. A second channel-material-pillar extends through the stack and is in the second block region. A bottom of the second channel-material-pillar extends into the source structure.
The source structure comprises a portion having an upper region, a lower region, and an intermediate region between the upper and lower regions. The upper and lower regions comprise a same composition and join to one another at edge locations. The intermediate region comprises a different composition than the upper and lower regions. The edge locations are directly against the channel material of the first and second channel-material-pillars. Some embodiments include a method of forming an integrated assembly. A construction is formed to comprise a source structure, and to comprise a stack of alternating first and second levels over the source structure. The source structure includes semiconductor material over metal-containing material, and includes a sacrificial-material seam extending laterally within the semiconductor material. First and second openings are formed to extend through the stack, through the semiconductor material and the sacrificial-material seam therein, and to the metal-containing material. First and second pillars are formed within the first and second openings, respectively. The first and second pillars include first and second channel-material-cylinders, respectively, and include cell materials outwardly of the first and second channel-material-cylinders. A third opening is formed between the first and second openings. The third opening extends to the sacrificial-material seam. The sacrificial material of the sacrificial-material seam is removed to form a conduit extending from the first pillar to the second pillar. The cell materials adjacent the conduit are removed to extend the conduit to the first and second channel-material-cylinders. Conductively-doped semiconductor material is formed within the conduit to line the conduit. The conductively-doped semiconductor material is directly against the first and second channel-material-cylinders. A void remains within the lined conduit and is open to the third opening. A first material is formed within the void and the third opening. Dopant is out-diffused from the conductively-doped semiconductor material into the channel material of the first and second channel-material-cylinders. The out-diffused dopant extends upwardly to at least one of the first levels. Conductive material is formed within the first levels. In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
Systems and methods are provided for dynamically managing a first-in/first-out (FIFO) command queue of a system controller. One or more commands are received into the command queue, a command being associated with a priority parameter. A current command first in line to be executed in the command queue is determined, the current command being associated with a first priority parameter. A second command associated with a second priority parameter is determined, the second priority parameter being largest among priority parameters associated with the one or more commands. A final priority parameter for the current command is computed based at least in part on the second priority parameter.
CLAIMS What is claimed is: 1. A method for dynamically managing a first-in/first-out (FIFO) command queue of a system controller, the method comprising: receiving one or more commands into the command queue, a command being associated with a priority parameter; determining a current command first in line to be executed in the command queue, the current command being associated with a first priority parameter; determining a second command associated with a second priority parameter, the second priority parameter being largest among priority parameters associated with the one or more commands; computing a final priority parameter for the current command based at least in part on the second priority parameter; and outputting the final priority parameter in order for the current command to be selected for execution when the final priority parameter satisfies a predetermined condition. 2. The method of claim 1, and further comprising: computing a second final priority parameter for a second current command in a second FIFO command queue in the system controller; wherein if the final priority parameter is larger than the second final priority parameter, the final priority parameter satisfies the predetermined condition so that the current command is selected for execution. 3. The method of claim 1, and further comprising: computing a second final priority parameter for a second current command in a second FIFO command queue in the system controller; computing a third final priority parameter for a third current command in a third FIFO command queue in the system controller; wherein if the final priority parameter is larger than both the second final priority parameter and the third final priority parameter, then the final priority parameter satisfies the predetermined condition so that the current command is selected for execution. 4. The method of claim 1, wherein: the current command is associated with a first wait-time parameter, the first wait-time parameter indicating a duration of the current command in the command queue; and the second command is associated with a second wait-time parameter, the second wait-time parameter indicating a duration of the second command in the command queue. 5. The method of claim 4, wherein: when the second wait-time parameter is larger than a predetermined threshold, the final priority parameter is computed to be equal to a first value; and when the second wait-time parameter is smaller than or equal to the predetermined threshold, the final priority parameter is computed to be equal to a second value. 6. The method of claim 5, wherein the first value is equal to the second priority parameter, and the second value is equal to half of a sum of the second priority parameter and the first priority parameter. 7.
The method of claim 4, wherein: when the second wait-time parameter is larger than a first threshold and the first wait-time parameter is larger than a second threshold, the final priority parameter is computed to be equal to a first value; when the second wait-time parameter is smaller than or equal to the first threshold and the first wait-time parameter is larger than the second threshold, the final priority parameter is computed to be equal to a second value; when the second wait-time parameter is larger than the first threshold and the first wait-time parameter is smaller than or equal to the second threshold, the final priority parameter is computed to be equal to a third value; and when the second wait-time parameter is smaller than or equal to the first threshold and the first wait-time parameter is smaller than or equal to the second threshold, the final priority parameter is computed to be equal to a fourth value. 8. The method of claim 1, wherein the final priority parameter is computed to be equal to the second priority parameter. 9. An integrated circuit for dynamically managing a first-in/first-out (FIFO) command queue of a system controller, the integrated circuit comprising: an interface circuit configured to receive one or more commands into the command queue, a command being associated with a priority parameter; a monitoring circuit configured to determine a current command first in line to be executed in the command queue, the current command being associated with a first priority parameter, and determine a second command associated with a second priority parameter, the second priority parameter being largest among priority parameters associated with the one or more commands; and a selection circuit configured to compute a final priority parameter for the current command based at least in part on the second priority parameter and output the final priority parameter in order for the current command to be selected for execution when the final priority parameter satisfies a predetermined condition. 10. The integrated circuit of claim 9, and further comprising: a second selection circuit configured to compute a second final priority parameter for a second current command in a second FIFO command queue in the system controller; wherein if the final priority parameter is larger than the second final priority parameter, the final priority parameter satisfies the predetermined condition so that the current command is selected for execution. 11. The integrated circuit of claim 9, and further comprising: a second selection circuit configured to compute a second final priority parameter for a second current command in a second FIFO command queue in the system controller; a third selection circuit configured to compute a third final priority parameter for a third current command in a third FIFO command queue in the system controller; wherein if the final priority parameter is larger than both the second final priority parameter and the third final priority parameter, the final priority parameter satisfies the predetermined condition so that the current command is selected for execution. 12. The integrated circuit of claim 9, wherein: the current command is associated with a first wait-time parameter, the first wait-time parameter indicating a duration of the current command in the command queue; and the second command is associated with a second wait-time parameter, the second wait-time parameter indicating a duration of the second command in the command queue. 13.
The integrated circuit of claim 12, wherein: when the second wait-time parameter is larger than a predetermined threshold, the selection circuit is further configured to compute the final priority parameter to be equal to a first value; and when the second wait-time parameter is smaller than or equal to the predetermined threshold, the selection circuit is further configured to compute the final priority parameter to be equal to a second value. 14. The integrated circuit of claim 12, wherein: when the second wait-time parameter is larger than a first threshold and the first wait-time parameter is larger than a second threshold, the selection circuit is further configured to compute the final priority parameter to be equal to a first value; when the second wait-time parameter is smaller than or equal to the first threshold and the first wait-time parameter is larger than the second threshold, the selection circuit is further configured to compute the final priority parameter to be equal to a second value; when the second wait-time parameter is larger than the first threshold and the first wait-time parameter is smaller than or equal to the second threshold, the selection circuit is further configured to compute the final priority parameter to be equal to a third value; and when the second wait-time parameter is smaller than or equal to the first threshold and the first wait-time parameter is smaller than or equal to the second threshold, the selection circuit is further configured to compute the final priority parameter to be equal to a fourth value. 15. The integrated circuit of claim 9, wherein the selection circuit is further configured to compute the final priority parameter to be equal to the second priority parameter. 16. A system for dynamically managing a first-in/first-out (FIFO) command queue of a system controller, the system comprising: one or more data processors; a computer-readable memory encoded with programming instructions for commanding the one or more data processors to perform steps comprising: receiving one or more commands into the command queue, a command being associated with a priority parameter; determining a current command first in line to be executed in the command queue, the current command being associated with a first priority parameter; determining a second command associated with a second priority parameter, the second priority parameter being largest among priority parameters associated with the one or more commands; computing a final priority parameter for the current command based at least in part on the second priority parameter; and outputting the final priority parameter in order for the current command to be selected for execution when the final priority parameter satisfies a predetermined condition. 17. The system of claim 16, wherein the programming instructions encoded in the computer-readable memory are adapted for commanding the one or more data processors to perform further steps comprising: computing a second final priority parameter for a second current command in a second FIFO command queue in the system controller; wherein if the final priority parameter is larger than the second final priority parameter, the final priority parameter satisfies the predetermined condition so that the current command is selected for execution. 18.
The system of claim 16, wherein the programming instructions encoded in the computer-readable memory are adapted for commanding the one or more data processors to perform further steps comprising: computing a second final priority parameter for a second current command in a second FIFO command queue in the system controller; computing a third final priority parameter for a third current command in a third FIFO command queue in the system controller; wherein if the final priority parameter is larger than both the second final priority parameter and the third final priority parameter, the final priority parameter satisfies the predetermined condition so that the current command is selected for execution. 19. The system of claim 16, wherein: the current command is associated with a first wait-time parameter, the first wait-time parameter indicating a duration of the current command in the command queue; and the second command is associated with a second wait-time parameter, the second wait-time parameter indicating a duration of the second command in the command queue. 20. The system of claim 19, wherein: when the second wait-time parameter is larger than a predetermined threshold, the final priority parameter is computed to be equal to a first value; and when the second wait-time parameter is smaller than or equal to the predetermined threshold, the final priority parameter is computed to be equal to a second value.
SYSTEMS AND METHODS FOR DYNAMIC PRIORITY CONTROL CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This disclosure claims priority to and benefit from U.S. Provisional Patent Application No. 61/591,705, filed on January 27, 2012, the entirety of which is incorporated herein by reference. FIELD [0002] The technology described in this patent document relates generally to data processing and more particularly to priority control in data processing. BACKGROUND [0003] A memory system often includes semiconductor memory devices, such as dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, etc. Various source devices, such as processors, peripheral devices (e.g., input/output devices), audio and video devices, may generate memory operation commands, including read memory operations to transfer data from memory devices to the source devices and write memory operations to transfer data from the source devices to the memory devices. Usually, a memory controller is implemented to receive the memory operation commands from the source devices and to control the memory devices to perform memory operations in response to the commands. The memory controller often includes command queues to capture the memory operation commands. [0004] Priority parameters (e.g., Quality of Service (QoS) parameters) of the memory operation commands may be transmitted as parts of the commands to the memory controller. The memory controller may arbitrate among memory operation commands from different command queues and schedule execution of such commands based on their respective priority parameters. FIG. 1 illustrates an example of a memory controller scheduling execution of memory operation commands. An arbiter component 108 in a memory controller 100 schedules execution of memory operation commands 104 from multiple command queues 102 based on priority parameters 106 of the memory operation commands 104. As shown in FIG. 1, the memory controller 100 includes multiple system interface ports (SIPs) 110 which correspond to multiple command queues 102 respectively. A command queue stores one or more memory operation commands 104 which each include a priority parameter 106 (e.g., QoS). Each command queue has a current command which is at the top of the command queue and thus first in line to be serviced. The arbiter component 108 compares the priority parameters (e.g., QoS) of the current commands in different command queues, and selects one current command with a highest priority parameter to be serviced. A command queue often operates in a first-in-first-out (FIFO) manner; that is, a current command of a command queue is the one that is received earlier than other commands in the command queue. SUMMARY [0005] In accordance with the teachings described herein, systems and methods are provided for dynamically managing a first-in/first-out (FIFO) command queue of a system controller. One or more commands are received into the command queue, a command being associated with a priority parameter.
A current command first in line to be executed in the command queue is determined, the current command being associated with a first priority parameter. A second command associated with a second priority parameter is determined, the second priority parameter being largest among priority parameters associated with the one or more commands. A final priority parameter for the current command is computed based at least in part on the second priority parameter. [0006] In another embodiment, an integrated circuit for dynamically managing a first-in/first-out (FIFO) command queue of a system controller includes an interface circuit configured to receive one or more commands into the command queue, a command being associated with a priority parameter, a monitoring circuit configured to determine a current command first in line to be executed in the command queue, the current command being associated with a first priority parameter, and determine a second command associated with a second priority parameter, the second priority parameter being largest among priority parameters associated with the one or more commands, and a selection circuit configured to compute a final priority parameter for the current command based at least in part on the second priority parameter and output the final priority parameter in order for the current command to be selected for execution when the final priority parameter satisfies a predetermined condition. [0007] In yet another embodiment, a system for dynamically managing a first-in/first-out (FIFO) command queue of a system controller includes one or more data processors, and a computer-readable memory encoded with programming instructions for commanding the one or more data processors to perform steps. The steps include receiving one or more commands into the command queue, a command being associated with a priority parameter, determining a current command first in line to be executed in the command queue, the current command being associated with a first priority parameter, and determining a second command associated with a second priority parameter, the second priority parameter being largest among priority parameters associated with the one or more commands. The steps further include computing a final priority parameter for the current command based at least in part on the second priority parameter, and outputting the final priority parameter in order for the current command to be selected for execution when the final priority parameter satisfies a predetermined condition. BRIEF DESCRIPTION OF THE DRAWINGS [0008] FIG. 1 illustrates an example of a memory controller scheduling execution of memory operation commands. [0009] FIG. 2 illustrates an example of a FIFO command queue. [0010] FIG. 3 illustrates an example of generating dynamic priority parameters for commands in a command queue. [0011] FIG. 4 illustrates another example of generating dynamic priority parameters for commands in a command queue. [0012] FIG. 5 illustrates example data fields of commands in a command queue for generating dynamic priority parameters. [0013] FIG. 6 illustrates an example of a memory controller scheduling execution of memory operation commands based on dynamic priority parameters associated with command queues. DETAILED DESCRIPTION [0014] Referring back to FIG. 1, the arbiter component 108 selects one of multiple current commands which has a highest priority parameter to be serviced.
Thus, if a current command of a particular command queue has a low priority parameter, then such a current command may need to wait for a long period of time before it can be serviced. Other commands in the command queue are blocked by the current command, even though they may have high priority parameters. [0015] FIG. 2 illustrates an example of a FIFO command queue. Commands with high priority parameters (e.g., command 204) are blocked by a current command 202 with a low priority parameter. As shown in FIG. 2, a memory operation command includes an identification number ("ID") for ordering control, an address ("Addr") indicating a memory location for accessing data in the memory, and a priority parameter ("QoS") indicating how urgent the command is. A memory operation command 202 with a low priority parameter "1" (e.g., QoS) stays at the top of the command queue 200 and is the current command of the command queue 200. Because the current command 202 has a low priority parameter, it may not be serviced for a long time. Thus, even though other commands in the command queue 200 may have high priority parameters, they cannot get serviced. For example, another memory operation command 204 has a very high priority parameter "15" (e.g., QoS). However, the command 204 is in the middle of the command queue 200, and thus it will not have a chance to be serviced until all commands before the command 204 have been serviced. [0016] As an example, a Liquid Crystal Display (LCD) controller sends commands to read data from a memory. At first, an LCD buffer has enough data to be displayed, and the LCD controller sends read commands with low priority parameters (e.g., QoS) to a command queue associated with the LCD. The memory controller does not service these read commands in time because commands from other command queues may have higher priority parameters. Later, when the buffer does not have enough data to be displayed, the LCD controller sends read commands with high priority parameters to the same command queue associated with the LCD. The previous read commands with low priority parameters are still in the command queue waiting for execution, and block the subsequent read commands with high priority parameters. An error may then occur when the buffer has no data to be displayed. [0017] A virtual channel approach or a multi-channel approach, which often uses multiple physical command queues for a particular system interface port, may ameliorate the problem, since commands with different priority parameters may be input into different command queues and the commands with high priority parameters may not be blocked by the commands with low priority parameters. However, the implementation of the virtual channel approach or the multi-channel approach is very expensive. In addition, such virtual channel approach or multi-channel approach will typically encounter a different problem. [0018] Often, a source device needs to access a number of consecutive locations of the memory. For each location, the source device usually sends out a command. These commands from the source device share a same identification number. Usually, it is preferred to execute these commands in the order that they are sent out, so that the target locations of the memory can be accessed consecutively. A single FIFO command queue for a particular system interface port can often achieve this without any problem because the commands received first will be serviced first.
However, under the virtual channel approach or the multi-channel approach, commands with the same identification number are often sent to different physical command queues. Additional mechanisms are usually needed to execute commands with the same identification number in order, which will increase the complexity and cost of the system. [0019] The present disclosure presents an approach allowing commands in a command queue to be serviced in time according to the status of the command queue. FIG. 3 illustrates an example of generating dynamic priority parameters for commands in a command queue. An arbiter component 302 receives a dynamic priority parameter 304 ("QoS_arb") determined based on the status of a command queue 306. If the dynamic priority parameter 304 is higher than other priority parameters associated with other command queues, the arbiter component 302 selects a current command 308 of the command queue 306 to be serviced. When commands with high priority parameters are received into the command queue 306 later than the current command 308, the dynamic priority parameter 304 is increased to speed up the service of the command queue 306. When the commands with high priority parameters are serviced, the dynamic priority parameter 304 is reduced to slow down the service of the command queue 306. [0020] Specifically, an algorithm may be implemented to dynamically determine a highest priority parameter in the command queue 306. How long the command with the highest priority parameter has stayed in the command queue 306 may be taken into account to determine the dynamic priority parameter 304. As an example, a command 318 is determined to have a highest priority parameter 316 ("QoS_Max") in the command queue 306. If the command 318 has stayed in the command queue 306 longer than a wait-time threshold, the dynamic priority parameter 304 is determined to be equal to the highest priority parameter 316 ("QoS_Max"). On the other hand, if the command 318 has stayed in the command queue 306 no longer than the wait-time threshold, the dynamic priority parameter 304 is determined to be equal to half of a sum of the highest priority parameter 316 ("QoS_Max") and a current priority parameter 314 of a current command 308. Alternatively, in some circumstances, the dynamic priority parameter 304 is determined to be equal to the highest priority parameter 316 ("QoS_Max") regardless of how long the command 318 has stayed in the command queue 306. [0021] FIG. 4 illustrates another example of generating dynamic priority parameters for commands in a command queue. As shown in FIG. 4, a selection component 610 (e.g., a programmable register) outputs a signal 622 ("QoS_sel") to a multiplexer 612 to select one of three modes for generating a dynamic priority parameter 604 for a command queue 606. Under a first mode, the dynamic priority parameter 604 is always determined to be equal to a current priority parameter 614 of a current command 608 in the command queue 606. Under a second mode, the dynamic priority parameter 604 is always determined to be equal to a highest priority parameter 616 in the command queue 606. Further, under a third mode, the multiplexer 612 outputs a modified priority parameter 620 ("QoS'") as the dynamic priority parameter 604.
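The wait-time rule of paragraph [0020] can be summarized in software form. The following C sketch is illustrative only; the type and function names (queue_entry_t, compute_qos_arb, and the field names) are hypothetical and are not identifiers from this disclosure, which would typically realize the same logic in combinational hardware.

    #include <stdbool.h>

    /* One queue entry: a validity flag, a wait-time counter, and the
     * original priority parameter (QoS_org), per the FIG. 5 fields. */
    typedef struct {
        bool     valid;
        unsigned wait_time;  /* cycles spent in the queue */
        unsigned qos_org;    /* original priority parameter */
    } queue_entry_t;

    /* Dynamic priority (QoS_arb) per paragraph [0020]: if the command
     * holding the highest priority (QoS_Max) has waited longer than the
     * threshold, output QoS_Max; otherwise output the average of QoS_Max
     * and the current command's priority. */
    unsigned compute_qos_arb(unsigned qos_max, unsigned qos_max_wait,
                             unsigned qos_cur, unsigned wait_threshold)
    {
        if (qos_max_wait > wait_threshold)
            return qos_max;
        return (qos_max + qos_cur) / 2;
    }

Called with the head-of-queue command's priority as qos_cur, the result rises toward QoS_Max as a blocked high-priority command ages, which is the un-blocking behavior motivated by FIG. 2.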
[0022] For example, the modified priority parameter 620 may be determined based on how long a command 618 with the highest priority parameter 616 has stayed in the command queue 606. If the command 618 has stayed in the command queue 606 longer than a first wait-time threshold, the modified priority parameter 620 is determined to be equal to the maximum priority parameter 616. On the other hand, if the command 618 has stayed in the command queue 606 no longer than the first wait-time threshold, the modified priority parameter 620 is determined to be equal to half of a sum of the maximum priority parameter 616 and the current priority parameter 614. [0023] Further, how long the current command 608 has stayed in the command queue 606 may also be taken into account to determine the modified priority parameter 620. As an example, if the command 618 has stayed in the command queue 606 longer than the first wait-time threshold and the current command 608 has stayed in the command queue 606 longer than a second wait-time threshold, the modified priority parameter 620 is determined to be equal to a first value. If the command 618 has stayed in the command queue 606 no longer than the first wait-time threshold and the current command 608 has stayed in the command queue 606 longer than the second wait-time threshold, the modified priority parameter 620 is determined to be equal to a second value. If the command 618 has stayed in the command queue 606 longer than the first wait-time threshold and the current command 608 has stayed in the command queue 606 no longer than the second wait-time threshold, the modified priority parameter 620 is determined to be equal to a third value. In addition, if the command 618 has stayed in the command queue 606 no longer than the first wait-time threshold and the current command 608 has stayed in the command queue 606 no longer than the second wait-time threshold, the modified priority parameter 620 is determined to be equal to a fourth value. For example, the first value and the third value are equal to the maximum priority parameter 616, and the second value and the fourth value are equal to half of the sum of the maximum priority parameter 616 and the current priority parameter 614. [0024] FIG. 5 illustrates example data fields of commands in a command queue for generating dynamic priority parameters. Each command in a command queue 400 includes three data fields related to generating dynamic priority parameters - a validity factor ("V") indicating whether the command is valid, a wait-time factor ("WT") indicating a wait time of the command (i.e., how long the command stays in the command queue 400), and an original priority parameter ("QoS_org"). For example, when the validity factor of a command is 1, the command is valid, and when the validity factor is 0, the command is invalid. The wait-time factor of a valid command begins to increase when the command is received into the command queue 400 until reaching a maximum value, and is cleared when the command is popped out of the command queue 400. A read pointer 410 ("rd_ptr") points to a current command 412, and increases by one when the current command 412 is popped out of the command queue 400. A write pointer 408 ("wr_ptr") points to a next available location in the command queue 400 for receiving a new command, and increases by one when a new command is received. As an example, the command queue 400 is managed in a circular FIFO manner.
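Combining the FIG. 5 data fields with the third-mode rules of paragraph [0023], the per-queue computation can be sketched as below. This is a hypothetical C rendering of the logic described in the surrounding paragraphs (the disclosure itself states the computations at the register level); it reuses the queue_entry_t type from the sketch above, and the names qos_info, find_qos_max, and modified_qos, as well as the queue depth, are illustrative assumptions.

    #define Q_SIZE 16  /* assumed queue depth for illustration */

    /* QoS_Info-style storage: one entry per queue slot (see FIG. 5). */
    static queue_entry_t qos_info[Q_SIZE];

    /* Scan the valid entries for the largest original priority parameter
     * (QoS_Max) and report how long that command has waited. */
    static unsigned find_qos_max(const queue_entry_t *q, unsigned *max_wait)
    {
        unsigned qos_max = 0;
        *max_wait = 0;
        for (int i = 0; i < Q_SIZE; i++) {
            if (q[i].valid && q[i].qos_org > qos_max) {
                qos_max   = q[i].qos_org;
                *max_wait = q[i].wait_time;
            }
        }
        return qos_max;
    }

    /* Third-mode modified priority (QoS') per paragraph [0023], using the
     * example value assignments from that paragraph: the first and third
     * values are QoS_Max, and the second and fourth values are the average
     * of QoS_Max and the current command's priority. */
    static unsigned modified_qos(unsigned qos_max, unsigned max_wait,
                                 unsigned qos_cur, unsigned cur_wait,
                                 unsigned thr1, unsigned thr2)
    {
        unsigned avg = (qos_max + qos_cur) / 2;
        if (max_wait > thr1 && cur_wait > thr2)
            return qos_max;   /* first value (example choice)  */
        if (max_wait <= thr1 && cur_wait > thr2)
            return avg;       /* second value (example choice) */
        if (max_wait > thr1 && cur_wait <= thr2)
            return qos_max;   /* third value (example choice)  */
        return avg;           /* fourth value (example choice) */
    }

Under the example value assignments of paragraph [0023], only the first threshold actually changes the result; the second threshold becomes significant when the four values are programmed independently.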
[0025] A two-dimensional array, QoS_Info[Q_Size-1:0][Entry_Size-1:0], may be defined to store information of the above-noted data fields for generating dynamic priority parameters, where Q_Size indicates how many commands can be stored in the command queue 400, and Entry_Size represents a sum of sizes of a validity factor, a wait-time factor and an original priority parameter. [0026] A maximum priority parameter of valid commands in the command queue 400 can be determined by scanning the QoS_org fields of the entries of QoS_Info whose validity factors indicate valid commands and selecting the largest value. [0027] A wait-time factor of the command having the maximum priority parameter, and a wait-time factor of the current command, can be obtained from the WT fields of the corresponding entries of QoS_Info. [0028] For the first mode as discussed above, the dynamic priority parameter is determined to be equal to the current priority parameter; for the second mode, the dynamic priority parameter is determined to be equal to the maximum priority parameter; and for the third mode, the dynamic priority parameter is determined based on the first wait-time threshold ("THR1") and the second wait-time threshold ("THR2") in the manner described above with reference to FIG. 4. [0029] FIG. 6 illustrates an example of a memory controller scheduling execution of memory operation commands based on dynamic priority parameters associated with command queues. An arbiter component 502 in a memory controller 500 schedules execution of memory operation commands from multiple command queues 504 based on dynamic priority parameters 506 ("QoS_arb") associated with the command queues 504 respectively. The arbiter component 502 compares the dynamic priority parameters 506 ("QoS_arb") associated with the command queues 504, and selects, through a multiplexer 510, a current command of a command queue that has a highest dynamic priority parameter. The selected current command is output to a memory command scheduler 512 (e.g., a DDR command scheduler) to be serviced. The command queues 504 correspond to multiple system interface ports (SIPs) 508 respectively. [0030] This written description uses examples to disclose the invention, including the best mode, and also to enable a person skilled in the art to make and use the invention. The patentable scope of the invention may include other examples that occur to those skilled in the art. For example, the systems and methods described herein may be implemented for priority control in any system controller with a single-command-queue structure. As an example, the systems and methods described herein may be implemented for priority control in modules or components of a system-on-a-chip (SOC), such as SOC fabrics (bus interconnects), PCIe modules, and USB modules in the SOC. [0031] For example, the systems and methods described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. Other implementations may also be used, however, such as firmware or appropriately designed hardware configured to carry out the methods and systems described herein. In another example, the systems and methods described herein may be implemented in an independent processing engine, as a co-processor, or as a hardware accelerator. In yet another example, the systems and methods described herein may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods' operations and implement the systems described herein.
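As one concrete illustration of such program code, the FIG. 6 arbitration across per-SIP queues might be modeled as follows. This sketch assumes fixed queue and port counts and reuses the types and helpers from the earlier sketches; none of the identifiers are taken from the disclosure.

    /* Model of the FIG. 6 arbiter: compute QoS_arb for each queue and
     * pick the queue whose current command has the highest dynamic
     * priority. */
    #define N_QUEUES 4  /* assumed number of SIP command queues */

    typedef struct {
        queue_entry_t entries[Q_SIZE];
        unsigned      rd_ptr;  /* index of the current command */
        unsigned      thr1;    /* first wait-time threshold  (THR1) */
        unsigned      thr2;    /* second wait-time threshold (THR2) */
    } command_queue_t;

    /* Returns the index of the winning queue, or -1 if all are empty. */
    int arbitrate(const command_queue_t q[N_QUEUES])
    {
        int winner = -1;
        unsigned best = 0;
        for (int i = 0; i < N_QUEUES; i++) {
            const queue_entry_t *cur = &q[i].entries[q[i].rd_ptr];
            if (!cur->valid)
                continue;
            unsigned max_wait;
            unsigned qos_max = find_qos_max(q[i].entries, &max_wait);
            unsigned qos_arb = modified_qos(qos_max, max_wait,
                                            cur->qos_org, cur->wait_time,
                                            q[i].thr1, q[i].thr2);
            if (winner < 0 || qos_arb > best) {
                winner = i;
                best = qos_arb;
            }
        }
        return winner;
    }

The winning index would drive the multiplexer 510 so that the selected current command is forwarded to the memory command scheduler 512.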
A data processing system includes a memory channel and a data processor coupled to the memory channel. The data processor includes a memory controller coupled to the memory channel and adapted to access at least one rank of double data rate memory. The memory controller includes a command queue for storing received memory access requests, and an arbiter for picking memory access requests from the command queue and providing the memory access requests to the memory channel. The memory access requests are selected based on predetermined criteria, and pending operations are quiesced in response to a mode register access request. Additionally, the memory controller includes a mode register access controller that, in response to the mode register access request, generates at least one corresponding mode register set command to a memory bus and thereafter relinquishes control of the memory bus to the arbiter.
WHAT IS CLAIMED IS:
1. A data processing system (100), comprising: a memory channel (130) comprising at least one rank of double data rate memory (134/136/138) comprising mode registers; and a data processor (110) having a memory controller (500) coupled to said memory channel (130) and adapted to access said at least one rank of double data rate memory, wherein said memory controller comprises: a command queue (520) for storing received memory access requests; an arbiter (538) for picking memory access requests from said command queue (520) based on predetermined criteria and providing said memory access requests to said memory channel, and in response to a mode register access request, to quiesce pending operations (606); and a mode register access controller (562) that in response to said mode register access request generates at least one corresponding mode register set command to a memory bus, and relinquishes control of said memory bus to said arbiter (538) thereafter.
2. The data processing system (100) of claim 1, further comprising generating said mode register set command (608), wherein said mode register set command is one of a DRAM mode register write command (608) sequence, a register control word command sequence, and a data buffer control word write command (714) sequence.
3. The data processing system (100) of claim 2, further comprising distributing said DRAM mode register write command sequence to a plurality of DRAMs in at least one rank of DDR memory (610).
4. The data processing system of claim 3, further comprising generating (608) said DRAM mode register write command sequence to said at least one rank (134/136/138) of double data rate memory, wherein said DRAM mode register write command sequence enables at least one of a voltage reference parameter, a timing parameter, and a predetermined alternate device parameter associated with said at least one rank (134/136/138) of DDR memory to be updated.
5. The data processing system of claim 4, further comprising: receiving at least part of said DRAM mode register write command sequence at a DRAM mode register six of said at least one rank (134/136/138) of double data rate memory; and waiting a predetermined number of voltage reference memory clock cycles before executing a subsequent DRAM mode register write command sequence (612).
6. The data processing system (100) of claim 4, further comprising receiving a subsequent DRAM mode register write command sequence (612) at said at least one rank (134/136/138) of double data rate memory, and in response to receiving said subsequent DRAM mode register write command sequence (612), disabling updates of said voltage reference parameter (812).
7. The data processing system (100) of claim 2, wherein in response to generating said data buffer control word write command (714) sequence, said data buffer control word write command (714) sequence is distributed to at least one data buffer of said at least one rank (908) of double data rate memory.
8. The data processing system (100) of claim 7, wherein said data buffer control word write command (714) sequence modifies at least one data buffer parameter (908), at a time subsequent to initialization (704/706/708) of said at least one data buffer of said at least one rank (908) of double data rate memory.
9. The data processing system (100) of claim 8, wherein said at least one data buffer parameter is selected from among a voltage reference parameter, a timing control parameter (714/908/912), a predetermined alternate buffer parameter, and an alternate data buffer parameter.
10. The data processing system (100) of claim 9, wherein said timing control parameter is received at a timing control register of said at least one data buffer of said at least one rank (908) of double data rate memory (714).
11. The data processing system of claim 1, wherein in response to an immediate mode register access request, said arbiter is bypassed and a direct mode register set command is generated to said memory bus to obtain immediate control of said memory bus (602/610).
12. The data processing system (100) of claim 1, wherein said memory channel (130) comprises a plurality of ranks (134/136/138) of double data rate (DDR) version four (DDR4) memory.
13. A data processor (110/200) comprising: a memory accessing agent (210/220); and a memory controller (292/500) coupled to said memory accessing agent (210/212) and adapted to couple to a memory system (120), wherein said memory controller (292/500) comprises: a command queue (520) for storing received memory access requests; an arbiter (538) for selectively picking memory access requests from said command queue (520) and providing said memory access requests to a memory channel, and in response to a mode register access request, to quiesce pending operations; and a mode register access controller (562) for generating at least one corresponding mode register set command to a memory bus, in response to said mode register access request, and relinquishing control of said memory bus to said arbiter (538) thereafter.
14. The data processor (110/200) of claim 13, wherein said mode register access controller (562) generates a mode register set command (608), wherein said mode register set command is one of a dynamic random-access memory (DRAM) mode register write command sequence (608), and a buffer control word write command sequence (714).
15. The data processor (100/200) of claim 14, wherein said mode register access controller distributes said DRAM mode register write command sequence (608) to a plurality of DRAMs in at least one rank of double data rate memory (610).
16. The data processor (110/200) of claim 15, wherein said mode register access controller (562) generates (608) said DRAM mode register write command sequence to said at least one rank (134/136/138) of double data rate memory, wherein said DRAM mode register write command sequence updates a voltage parameter, wherein said voltage parameter is associated with said at least one rank (134/136/138) of double data rate memory.
17. The data processor of claim 16, wherein: said mode register access controller (562) generates at least part of said DRAM mode register write command sequence (614) to a DRAM mode register six of said at least one rank (134/136/138) of double data rate memory; and waits a predetermined number of reference voltage memory clock cycles before executing a subsequent DRAM mode register write command sequence (612).
18. The data processor (110/200) of claim 17, wherein: said memory bus receives said subsequent DRAM mode register write command sequence (614) at said at least one rank (134/136/138) of double data rate memory, and in response to receiving said subsequent DRAM mode register write command, disables updates of said voltage parameter (812).
19. The data processor (110/200) of claim 15, wherein in response to generating said buffer control word write command (714) sequence, said memory bus distributes said buffer control word write command (714) sequence to at least one data buffer of said at least one rank of double data rate memory (908).
20. The data processor (110/200) of claim 16, wherein said mode register access controller updates at least one parameter, wherein said at least one parameter is a DQ reference voltage value (812).
21. The data processor (110/200) of claim 13, wherein said mode register access controller enables a bypass of said arbiter, and generates a direct mode register set command to said memory bus to obtain immediate control of said memory bus, in response to an immediate mode register access request (602/610).
22. A method (800/900) for margining and testing a double data rate interface in a memory system via a mode register access controller (292/500), the method comprising: receiving a request to generate double data rate operations at a time subsequent to system initialization; generating a request to quiesce current and pending double data rate operations of a rank (802); receiving a mode register command sequence (806) at a memory bus; in response to receipt of said mode register command sequence, sending a first mode register command sequence to a plurality of banks in said rank (802) to obtain control of a first parameter associated with said rank; and sending a subsequent mode register (806) command sequence to said plurality of banks to update said first parameter associated with said rank (812).
23. The method (800) of claim 22, further comprising initiating a wait cycle (810), wherein said wait cycle is a predetermined count of reference voltage memory clock cycles (612), initiated at a time succeeding execution of said subsequent mode register command sequence (810).
24. The method (800) of claim 22, further comprising releasing a quiesce of current and pending double data rate operations of said rank, and enabling operative access to said rank (822).
25. The method (800) of claim 23, wherein sending a subsequent mode register command sequence further comprises: sending a subsequent mode register six write command to disable control of a reference voltage subsequent to completion of a predetermined number of reference voltage memory clock cycles; and initiating a reference voltage memory clock cycle (612) in response to receipt of said subsequent mode register command sequence (820).
26. The method (800) of claim 22, wherein in response to said mode register access controller generating said subsequent mode register command sequence to said plurality of banks, said mode register access controller retrieves contents of a multi-purpose register associated with said double data rate operations, wherein said contents correspond to at least one operational capability of an associated double data rate DRAM (1000).
27. A method (900) for margining and testing a double data rate data buffer interface in a memory system via a mode register access controller, the method comprising: receiving a request to generate a buffer control word write command sequence subsequent to initialization of a double data rate data buffer (902); enabling a quiesce of current and pending double data rate data buffer operations (904); and distributing said buffer control word write command sequence to a double data rate data buffer to modify at least one parameter from amongst a voltage parameter and a data buffer timing parameter (908).
28. The method (900) of claim 27, further comprising: initiating a wait cycle that is subsequent to each buffer control word write command sequence, wherein said wait cycle is a first predefined number of wait cycles (910); and in response to a modification of said data buffer timing parameter, initiating a second predefined number of wait cycles (918).
29. The method (900) of claim 27, further comprising: sending a subsequent buffer control word write command sequence to disable said quiesce of current and pending double data rate operations of said double data rate data buffer, wherein arbiter access to said double data rate data buffer is enabled (920).
30. The method (900) of claim 27, wherein said voltage parameter is a DQ reference voltage level (908/912).
SOFTWARE MODE REGISTER ACCESS FOR PLATFORM MARGINING AND DEBUG
Kevin M. Brandl
Scott P. Murphy
James R. Magro
Paramjit K. Lubana
BACKGROUND
[0001] Dynamic random access memory (DRAM) chips, formed of large arrays of capacitors with sub-micron features, are utilized for main memory in computer systems. DRAM is relatively inexpensive and high density, thereby enabling large amounts of DRAM to be integrated per device. Most DRAM chips sold today are compatible with various double data rate (DDR) DRAM standards promulgated by the Joint Electron Devices Engineering Council (JEDEC).
[0002] Some DDR memory chips can be periodically recalibrated to adjust certain operating parameters for changes in operating conditions such as temperature and voltage. For example, DDR3 and DDR4 allow periodic recalibration of output buffer impedance, known as "ZQ calibration", and DDR4 allows periodic internal reference voltage recalibration, known as "VREFDQ training". Moreover, when the DRAM chips are included in dual inline memory modules (DIMMs) they may optionally include a data buffer that itself has timing parameters that need to be recalibrated.
[0003] For example, in DDR4 DRAM chips, the VREFDQ values are configured by a host DDR controller during initialization and may be recalibrated during operation. The VREFDQ values are configured via certain mode register set commands. VREFDQ is preferably retrained during operation as conditions change, such as the board heating up, power supply drift, etc. Retraining can be disruptive and cause poor performance when done through existing software mechanisms. Additionally, in order to update a VREFDQ value on DDR4 DRAM chips, the JEDEC specification requires a specific sequence of multiple mode register set commands, and does not allow other intervening DRAM commands during the sequence. The current JEDEC standard makes it difficult to utilize single-test mode register commands via scripting tools, such as Hardware Debug Tool, for example.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 illustrates in block diagram form a data processing system according to some embodiments;
[0005] FIG. 2 illustrates in block diagram form an accelerated processing unit (APU) suitable for use in the data processing system of FIG. 1;
[0006] FIG. 3 illustrates in block diagram form a memory controller and associated physical interface (PHY) suitable for use in the APU of FIG. 2 according to some embodiments;
[0007] FIG. 4 illustrates in block diagram form another memory controller and associated PHY suitable for use in the APU of FIG. 2 according to some embodiments;
[0008] FIG. 5 illustrates in block diagram form a memory controller according to some embodiments;
[0009] FIG. 6 illustrates in state diagram form state transitions to enable mode register set write commands according to some embodiments;
[0010] FIG. 7 illustrates in state diagram form state transitions for double data rate data buffer operations according to some embodiments;
[0011] FIG. 8 illustrates a flow diagram for double data rate operations that may be used by the memory controller of FIG. 5 according to some embodiments;
[0012] FIG. 9 illustrates a flow diagram for data buffer control operations that may be used by the memory controller of FIG. 5 according to some embodiments;
[0013] FIG. 10 illustrates a flow diagram for content retrieval of a multi-purpose register that may be used by the memory controller of FIG. 5 according to some embodiments; and
[0014] FIG. 11 illustrates a 2-dimensional data eye graphically displaying output values of a DRAM in response to a range of input values according to some embodiments.
[0015] In the following description, the use of the same reference numerals in different drawings indicates similar or identical items. Unless otherwise noted, the word "coupled" and its associated verb forms include both direct connection and indirect electrical connection by means known in the art, and unless otherwise noted any description of direct connection implies alternate embodiments using suitable forms of indirect electrical connection as well.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
[0016] As will be described below in one form, a data processing system includes a memory channel and a data processor coupled to the memory channel. The data processor includes a memory controller coupled to the memory channel and adapted to access at least one rank of double data rate memory. The memory controller includes a command queue for storing received memory access requests, and an arbiter for picking memory access requests from the command queue and providing the memory access requests to the memory channel. The memory access requests are selected based on predetermined criteria, and pending operations are quiesced in response to a mode register access request. Additionally, the memory controller includes a mode register access controller that, in response to the mode register access request, generates at least one corresponding mode register set command to a memory bus and thereafter relinquishes control of the memory bus to the arbiter.
[0017] In another form, a data processor includes a memory accessing agent and a memory controller coupled to the memory accessing agent. The memory controller is adapted to couple to a memory system, and includes a command queue for storing received memory access requests, an arbiter, and a mode register access controller. The arbiter selectively picks memory access requests from the command queue and provides them to a memory channel, and quiesces pending operations in response to a mode register access request. In response to the mode register access request, the mode register access controller generates at least one corresponding mode register set command to a memory bus, and relinquishes control of the memory bus to the arbiter thereafter.
[0018] In still another form, there is described a method for margining and testing a double data rate interface in a memory system via a mode register access controller. A request to generate double data rate operations is received at a time subsequent to system initialization. A request to quiesce current and pending double data rate operations of a rank is generated. A mode register set command sequence is received at a memory bus. In response to receipt of the mode register command sequence, a first mode register command sequence is sent to a plurality of banks in the rank to obtain control of a first parameter associated with the rank. A subsequent mode register command sequence is sent to the plurality of banks to update the first parameter associated with the rank.
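The hand-off protocol described above can be summarized in a few lines of C. The hooks below are hypothetical stand-ins for internal controller logic; the document does not define a software interface, so every name here is an assumption made for illustration.

```c
#include <stdbool.h>

/* Hypothetical hooks modelling the bus hand-off: the arbiter quiesces
 * traffic, the mode register access controller runs its MRS sequence,
 * then control returns to the arbiter. */
extern bool arbiter_request_quiesce(void);  /* drain current/pending ops */
extern void mra_run_mrs_sequence(void);     /* body supplied elsewhere   */
extern void arbiter_release(void);

static void service_mode_register_access(void)
{
    if (!arbiter_request_quiesce())
        return;               /* channel could not be idled; retry later */
    mra_run_mrs_sequence();   /* MRA controller owns the memory bus now  */
    arbiter_release();        /* relinquish the bus back to the arbiter  */
}
```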
[0019] Moreover, in yet another form, there is described a method for margining and testing a double data rate data buffer interface in a memory system via a mode register access controller. A request to generate a buffer control word write command sequence is received at a time subsequent to initialization of a double data rate data buffer. The arbiter enables the quiesce of current and pending double data rate buffer operations. Buffer control word write command sequences are distributed to modify at least one parameter from amongst a voltage parameter and a data buffer timing parameter.
[0020] FIG. 1 illustrates in block diagram form a data processing system 100 according to some embodiments. Data processing system 100 includes generally a data processor 110 in the form of an accelerated processing unit (APU), a memory system 120, a peripheral component interconnect express (PCIe) system 150, a universal serial bus (USB) system 160, and a disk drive 170. Data processor 110 operates as the central processing unit (CPU) of data processing system 100 and provides various buses and interfaces useful in modern computer systems. These interfaces include two double data rate (DDRx) memory channels, a PCIe root complex for connection to a PCIe link, a USB controller for connection to a USB network, and an interface to a Serial Advanced Technology Attachment (SATA) mass storage device.
[0021] Memory system 120 includes a memory channel 130 and a memory channel 140. Memory channel 130 includes a set of dual inline memory modules (DIMMs) connected to a DDRx bus 132, including representative DIMMs 134, 136, and 138 that in this example correspond to separate ranks. Likewise, memory channel 140 includes a set of DIMMs connected to a DDRx bus 142, including representative DIMMs 144, 146, and 148.
[0022] PCIe system 150 includes a PCIe switch 152 connected to the PCIe root complex in data processor 110, a PCIe device 154, a PCIe device 156, and a PCIe device 158. PCIe device 156 in turn is connected to a system basic input/output system (BIOS) memory 157. System BIOS memory 157 can be any of a variety of nonvolatile memory types, such as read-only memory (ROM), flash electrically erasable programmable ROM (EEPROM), and the like.
[0023] USB system 160 includes a USB hub 162 connected to a USB master in data processor 110, and representative USB devices 164, 166, and 168 each connected to USB hub 162. USB devices 164, 166, and 168 could be devices such as a keyboard, a mouse, a flash EEPROM port, and the like.
[0024] Disk drive 170 is connected to data processor 110 over a SATA bus and provides mass storage for the operating system, application programs, application files, and the like.
[0025] Data processing system 100 is suitable for use in modern computing applications by providing a memory channel 130 and a memory channel 140. Each of memory channels 130 and 140 can connect to state-of-the-art DDR memories such as DDR version 4 (DDR4), low power DDR4 (LPDDR4), graphics DDR version five (gDDR5), and high bandwidth memory (HBM), and can be adapted for future memory technologies. These memories provide high bus bandwidth and high speed operation. At the same time, they also provide low power modes to save power for battery-powered applications such as laptop computers, and also provide built-in thermal monitoring.
[0026] FIG. 2 illustrates in block diagram form an APU 200 suitable for use in data processing system 100 of FIG. 1.
APU 200 includes generally a central processing unit (CPU) core complex 210, a graphics core 220, a set of display engines 230, a memory management hub 240, a data fabric 250, a set of peripheral controllers 260, a set of peripheral bus controllers 270, a system management unit (SMU) 280, and a set of memory controllers 290.
[0027] CPU core complex 210 includes a CPU core 212 and a CPU core 214. In this example, CPU core complex 210 includes two CPU cores, but in other embodiments CPU core complex 210 can include an arbitrary number of CPU cores. Each of CPU cores 212 and 214 is bi-directionally connected to a system management network (SMN), which forms a control fabric, and to data fabric 250, and is capable of providing memory access requests to data fabric 250. Each of CPU cores 212 and 214 may be unitary cores, or may further be a core complex with two or more unitary cores sharing certain resources such as caches.
[0028] Graphics core 220 is a high performance graphics processing unit (GPU) capable of performing graphics operations such as vertex processing, fragment processing, shading, texture blending, and the like in a highly integrated and parallel fashion. Graphics core 220 is bidirectionally connected to the SMN and to data fabric 250, and is capable of providing memory access requests to data fabric 250.
[0029] In this regard, APU 200 may either support a unified memory architecture in which CPU core complex 210 and graphics core 220 share the same memory space, or a memory architecture in which CPU core complex 210 and graphics core 220 share a portion of the memory space, while graphics core 220 also uses a private graphics memory not accessible by CPU core complex 210.
[0030] Display engines 230 render and rasterize objects generated by graphics core 220 for display on a monitor. Graphics core 220 and display engines 230 are bi-directionally connected to a common memory management hub 240 for uniform translation into appropriate addresses in memory system 120, and memory management hub 240 is bi-directionally connected to data fabric 250 for generating such memory accesses and receiving read data returned from the memory system.
[0031] Data fabric 250 includes a crossbar switch for routing memory access requests and memory responses between any memory accessing agent and memory controllers 290. It also includes a system memory map, defined by BIOS, for determining destinations of memory accesses based on the system configuration, as well as buffers for each virtual connection.
[0032] Peripheral controllers 260 include a USB controller 262 and a SATA interface controller 264, each of which is bi-directionally connected to a system hub 266 and to the SMN bus. These two controllers are merely exemplary of peripheral controllers that may be used in APU 200.
[0033] Peripheral bus controllers 270 include a system controller or "Southbridge" (SB) 272 and a PCIe controller 274, each of which is bi-directionally connected to an input/output (I/O) hub 276 and to the SMN bus. I/O hub 276 is also bi-directionally connected to system hub 266 and to data fabric 250. Thus, for example, a CPU core can program registers in USB controller 262, SATA interface controller 264, SB 272, or PCIe controller 274 through accesses that data fabric 250 routes through I/O hub 276.
[0034] SMU 280 is a local controller that controls the operation of the resources on APU 200 and synchronizes communication among them.
SMU 280 manages power-up sequencing of the various processors on APU 200 and controls multiple off-chip devices via reset, enable and other signals. SMU 280 includes one or more clock sources not shown in FIG. 2, such as a phase-locked loop (PLL), to provide clock signals for each of the components of APU 200. SMU 280 also manages power for the various processors and other functional blocks, and may receive measured power consumption values from CPU cores 212 and 214 and graphics core 220 to determine appropriate power states.
[0035] APU 200 also implements various system monitoring and power saving functions. In particular, one system monitoring function is thermal monitoring. For example, if APU 200 becomes hot, then SMU 280 can reduce the frequency and voltage of CPU cores 212 and 214 and/or graphics core 220. If APU 200 becomes too hot, then it can be shut down entirely. Thermal events can also be received from external sensors by SMU 280 via the SMN bus, and SMU 280 can reduce the clock frequency and/or power supply voltage in response.
[0036] FIG. 3 illustrates in block diagram form a memory controller 300 and an associated physical interface (PHY) 330 suitable for use in APU 200 of FIG. 2 according to some embodiments. Memory controller 300 includes a memory channel 310 and a power engine 320. Memory channel 310 includes a host interface 312, a memory channel controller 314, and a physical interface 316. Host interface 312 bi-directionally connects memory channel controller 314 to data fabric 250 over a scalable data port (SDP). Physical interface 316 bi-directionally connects memory channel controller 314 to PHY 330 over a bus that conforms to the DDR-PHY Interface Specification (DFI). Power engine 320 is bi-directionally connected to SMU 280 over the SMN bus, to PHY 330 over the Advanced Peripheral Bus (APB), and is also bi-directionally connected to memory channel controller 314. PHY 330 has a bidirectional connection to a memory channel such as memory channel 130 or memory channel 140 of FIG. 1. Memory controller 300 is an instantiation of a memory controller for a single memory channel using a single memory channel controller 314, and has a power engine 320 to control operation of memory channel controller 314 in a manner that will be described further below.
[0037] FIG. 4 illustrates in block diagram form another memory controller 400 and associated PHYs 440 and 450 suitable for use in APU 200 of FIG. 2 according to some embodiments. Memory controller 400 includes memory channels 410 and 420 and a power engine 430. Memory channel 410 includes a host interface 412, a memory channel controller 414, and a physical interface 416. Host interface 412 bi-directionally connects memory channel controller 414 to data fabric 250 over an SDP. Physical interface 416 bi-directionally connects memory channel controller 414 to PHY 440, and conforms to the DFI Specification. Memory channel 420 includes a host interface 422, a memory channel controller 424, and a physical interface 426. Host interface 422 bi-directionally connects memory channel controller 424 to data fabric 250 over another SDP. Physical interface 426 bi-directionally connects memory channel controller 424 to PHY 450, and conforms to the DFI Specification. Power engine 430 is bi-directionally connected to SMU 280 over the SMN bus, to PHYs 440 and 450 over the APB, and is also bi-directionally connected to memory channel controllers 414 and 424.
PHY 440 has a bidirectional connection to a memory channel such as memory channel 130 of FIG. 1. PHY 450 has a bidirectional connection to a memory channel such as memory channel 140 of FIG. 1. Memory controller 400 is an instantiation of a memory controller having two memory channel controllers and uses a shared power engine 430 to control operation of both memory channel controller 414 and memory channel controller 424 in a manner that will be described further below.
[0038] FIG. 5 illustrates in block diagram form a memory controller 500 according to some embodiments. Memory controller 500 includes generally a memory channel controller 510 and a power controller 550. Memory channel controller 510 includes generally an interface 512, a queue 514, a command queue 520, an address generator 522, a content addressable memory (CAM) 524, a replay queue 530, a refresh logic block 532, a timing block 534, a page table 536, an arbiter 538, an error correction code (ECC) check block 542, an ECC generation block 544, and a data buffer (DB) 546.
[0039] Interface 512 has a first bidirectional connection to data fabric 250 over an external bus, and has an output. In memory controller 500, this external bus is compatible with the advanced extensible interface version four specified by ARM Holdings, PLC of Cambridge, England, known as "AXI4", but can be other types of interfaces in other embodiments. Interface 512 translates memory access requests from a first clock domain known as the FCLK (or MEMCLK) domain to a second clock domain internal to memory controller 500 known as the UCLK domain. Similarly, queue 514 provides memory accesses from the UCLK domain to the DFICLK domain associated with the DFI interface.
[0040] Address generator 522 decodes addresses of memory access requests received from data fabric 250 over the AXI4 bus. The memory access requests include access addresses in the physical address space represented in a normalized format. Address generator 522 converts the normalized addresses into a format that can be used to address the actual memory devices in memory system 120, as well as to efficiently schedule related accesses. This format includes a region identifier that associates the memory access request with a particular rank, a row address, a column address, a bank address, and a bank group. On startup, the system BIOS queries the memory devices in memory system 120 to determine their size and configuration, and programs a set of configuration registers associated with address generator 522. Address generator 522 uses the configuration stored in the configuration registers to translate the normalized addresses into the appropriate format. Command queue 520 is a queue of memory access requests received from the memory accessing agents in data processing system 100, such as CPU cores 212 and 214 and graphics core 220. Command queue 520 stores the address fields decoded by address generator 522 as well as other address information that allows arbiter 538 to select memory accesses efficiently, including access type and quality of service (QoS) identifiers. CAM 524 includes information to enforce ordering rules, such as write after write (WAW) and read after write (RAW) ordering rules.
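As an illustration of the decode step just described, the following C sketch splits a normalized address into rank, bank group, bank, row, and column fields. The bit positions here are invented for the example; in the real controller the mapping comes from BIOS-programmed configuration registers and depends on the installed devices.

```c
#include <stdint.h>

/* Decoded form of a normalized physical address.  Field widths and bit
 * positions below are purely illustrative assumptions. */
typedef struct {
    uint8_t  rank;        /* region identifier -> rank */
    uint8_t  bank_group;
    uint8_t  bank;
    uint32_t row;
    uint16_t column;
} decoded_addr_t;

static decoded_addr_t decode_normalized(uint64_t addr)
{
    decoded_addr_t d;
    d.column     = (addr >> 3)  & 0x3FF;    /* 10-bit column (assumed) */
    d.bank       = (addr >> 13) & 0x3;      /* 2-bit bank              */
    d.bank_group = (addr >> 15) & 0x3;      /* 2-bit bank group        */
    d.row        = (addr >> 17) & 0x3FFFF;  /* 18-bit row              */
    d.rank       = (addr >> 35) & 0x3;      /* up to four ranks        */
    return d;
}
```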
[0041] Replay queue 530 is a temporary queue for storing memory accesses picked by arbiter 538 that are awaiting responses, such as address and command parity responses, write cyclic redundancy check (CRC) responses for DDR4 DRAM or write and read CRC responses for gDDR5 DRAM. Replay queue 530 accesses ECC check block 542 to determine whether the returned ECC is correct or indicates an error. Replay queue 530 allows the accesses to be replayed in the case of a parity or CRC error of one of these cycles.
[0042] Refresh logic 532 includes state machines for various power-down, refresh, and termination resistance (ZQ) calibration cycles that are generated separately from normal read and write memory access requests received from memory accessing agents. For example, if a memory rank is in pre-charge power-down, it must be periodically awakened to run refresh cycles. Refresh logic 532 generates refresh commands periodically to prevent data errors caused by leaking of charge off storage capacitors of memory cells in DRAM chips. In addition, refresh logic 532 periodically calibrates ZQ to prevent mismatch in on-die termination resistance due to thermal changes in the system.
[0043] Arbiter 538 is bi-directionally connected to command queue 520 and configuration registers 562. Arbiter 538 is the heart of memory channel controller 510. It improves efficiency by intelligent scheduling of accesses to improve the usage of the memory bus. Arbiter 538 uses timing block 534 to enforce proper timing relationships by determining whether certain accesses in command queue 520 are eligible for issuance based on DRAM timing parameters. For example, each DRAM has a minimum specified time between active commands, known as "tRC". Timing block 534 maintains a set of counters that determine eligibility based on this and other timing parameters specified in the JEDEC specification, and is bi-directionally connected to replay queue 530. Page table 536 maintains state information about active pages in each bank and rank of the memory channel for arbiter 538, and is bi-directionally connected to replay queue 530.
[0044] In response to write memory access requests received from interface 512, ECC generation block 544 computes an ECC according to the write data. DB 546 stores the write data and ECC for received memory access requests. It outputs the combined write data/ECC to queue 514 when arbiter 538 picks the corresponding write access for dispatch to the memory channel.
[0045] Power controller 550 generally includes an interface 552 to an advanced extensible interface, version one (AXI), an APB interface 554, and a power engine 560. Interface 552 has a first bidirectional connection to the SMN, which includes an input for receiving an event signal labeled "EVENT_n" shown separately in FIG. 5, and an output. APB interface 554 has an input connected to the output of interface 552, and an output for connection to a PHY over an APB. Power engine 560 has an input connected to the output of interface 552, and an output connected to an input of queue 514. Power engine 560 includes a set of configuration registers 562, a microcontroller (μC) 564, a self-refresh controller (SLFREF/PE) 566, and a reliable read/write mode register access (RRW/MRA) controller 568. Configuration registers 562 are bi-directionally connected to queue 514. Configuration registers 562 are programmed over the AXI bus, and store configuration information to control the operation of various blocks in memory controller 500. Accordingly, configuration registers 562 have additional outputs connected to these blocks that are not shown in detail in FIG. 5. Self-refresh controller 566 is an engine that allows the manual generation of refreshes in addition to the automatic generation of refreshes by refresh logic 532.
RRW/MRA controller 568 provides a continuous memory access stream to memory or I/O devices for such purposes as DDR interface maximum read latency (MRL) training and loopback testing. RRW/MRA controller 568 additionally provides the logic that controls select operations of interface 552.
[0046] Memory channel controller 510 includes circuitry that allows it to pick memory accesses for dispatch to the associated memory channel. In order to make the desired arbitration decisions, address generator 522 decodes the address information into predecoded information including rank, row address, column address, bank address, and bank group in the memory system, and command queue 520 stores the predecoded information. Configuration registers 562 store configuration information to determine how address generator 522 decodes the received address information. Arbiter 538 uses the decoded address information, timing eligibility information indicated by timing block 534, and active page information indicated by page table 536 to efficiently schedule memory accesses while observing other criteria such as QoS requirements. For example, arbiter 538 implements a preference for accesses to open pages to avoid the overhead of precharge and activation commands required to change memory pages, and hides overhead accesses to one bank by interleaving them with read and write accesses to another bank. In particular during normal operation, arbiter 538 normally keeps pages open in different banks until they are required to be precharged prior to selecting a different page.
[0047] In operation, a memory controller such as memory controller 500 of FIG. 5 is connected to and receives memory access requests from a memory accessing agent, such as a CPU core in CPU core complex 210 or graphics core 220 of FIG. 2. Memory controller 500 is also adapted to connect to memory system 120 of FIG. 1. As described above, memory system 120 can include multiple ranks of memory implemented as DIMMs 134, 136, and 138 in FIG. 1. Arbiter 538, within memory controller 500, picks memory access requests from command queue 520 based on predetermined criteria. Arbiter 538 performs read and write accesses, as well as refreshes, and is responsive to requests from the RRW/MRA controller 568. In response to at least one DDR memory bus access request, RRW/MRA controller 568 generates a request to arbiter 538 to relinquish control of the DDR memory bus. Arbiter 538 relinquishes control of the memory bus to the RRW/MRA controller 568 to generate a series of specified operations, e.g., particular Mode Register Set (MRS) commands to implement VREFDQ training.
[0048] In general, configuration registers 562 receive recalibration write command requests via interface 552. During mission-mode high-bandwidth data transfer, recalibration write command requests are provided through interface 512. In response to the write command requests, RRW/MRA controller 568 submits a request to arbiter 538 to obtain control of the memory bus. In response to the request from RRW/MRA controller 568, arbiter 538 quiesces pending operations. Quiescing the pending operations may include, but is not limited to, completing and halting current and pending operations associated with the target bus. Quiescing the bus may additionally include determining that no urgent refresh commands are pending. Subsequent to arbiter 538 quiescing current and pending operations associated with a target rank, RRW/MRA controller 568 executes the series of MRS commands.
After completion of the series of MRS commands, RRW/MRA controller 568 returns control of the memory bus to arbiter 538 to resume normal read and write memory bus operations. By providing a side channel to take control of the memory bus with only a small amount of disruption to the flow of memory access requests and refreshes to the memory, memory controller 500 allows periodic recalibration of parameters without significantly sacrificing performance or increasing access latency.
[0049] FIG. 6 illustrates a state diagram 600 that may be used by memory controller 500 of FIG. 5 according to some embodiments. State diagram 600 is a diagram of states that correspond to margining and test commands to be utilized by memory controller 500 to write and read parameters of double data rate memory. State diagram 600 includes a request state 602, a detect state 604, a quiesce state 606, a generate state 610, an MRA control state 608, a wait state 612, a distribution state 614, a disable state 616, and an arbiter control state 618. State diagram 600 represents state transitions by arrows, and memory controller 500 performs the state transitions in response to corresponding requests.
[0050] State diagram 600 presents states of a state machine that correspond to the previously described memory controller operations. At request state 602, RRW/MRA controller 568 receives a request from the operating system. Request state 602 corresponds to a request to access configuration registers 562, in order to execute at least one MRS command. The request selects the rank of DRAMs to which the state machine will send the MRS command. In one embodiment, DRAM devices provide the support to generate MRS commands to a particular DRAM device of a rank utilizing per-DRAM accessibility (PDA) MRS commands. The PDA feature is utilized to program predetermined parameters, for example, on-die termination (ODT) values and internal reference voltage values on DRAM devices on a given rank.
[0051] At detect state 604, arbiter 538 detects any active system operations. In response to the request to access a mode register, during active system operations, arbiter 538 quiesces pending operations (including normal read and write operations as well as pending refreshes) to the memory channel by waiting for them to complete. The quiesce of current and pending memory rank operations corresponds with quiesce state 606. Quiescing the memory bus temporarily places the DRAM(s) in an idle state. The mode register contents can be changed during normal operation of the operating system when the DRAM is in the idle state, or the DIMMs are in the precharged state with timing parameters satisfied. Subsequent to quiesce of the current and pending operations, arbiter 538 relinquishes control of the memory bus to configuration registers 562.
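The states and transitions of state diagram 600 can be modeled as a simple C state machine. The transition conditions below are simplified paraphrases of the surrounding paragraphs, not a complete description of the hardware; the enum values reuse the figure's reference numerals for readability.

```c
#include <stdbool.h>

/* States of diagram 600 in FIG. 6. */
typedef enum {
    ST_REQUEST = 602, ST_DETECT = 604, ST_QUIESCE = 606,
    ST_MRA_CONTROL = 608, ST_GENERATE = 610, ST_WAIT = 612,
    ST_DISTRIBUTE = 614, ST_DISABLE = 616, ST_ARB_CONTROL = 618
} mra_state_t;

/* One step of a software model of the state machine. */
static mra_state_t next_state(mra_state_t s, bool more_mrs_pending)
{
    switch (s) {
    case ST_REQUEST:     return ST_DETECT;       /* request received       */
    case ST_DETECT:      return ST_QUIESCE;      /* active traffic drained */
    case ST_QUIESCE:     return ST_MRA_CONTROL;  /* bus handed to MRA ctrl */
    case ST_MRA_CONTROL: return ST_GENERATE;     /* program mode registers */
    case ST_GENERATE:    return ST_WAIT;         /* satisfy MRS cycle time */
    case ST_WAIT:        return more_mrs_pending ? ST_DISTRIBUTE
                                                 : ST_DISABLE;
    case ST_DISTRIBUTE:  return ST_WAIT;         /* one wait per MRS       */
    case ST_DISABLE:     return ST_ARB_CONTROL;  /* stop parameter updates */
    default:             return ST_REQUEST;      /* arbiter back in control*/
    }
}
```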
[0052] At MRA control state 608, the mode register access controller assumes control of the targeted rank, and the mode registers are programmed to execute the modified parameter values. Responsively, at generate state 610, RRW/MRA controller 568 generates at least one corresponding mode register set command to a memory bus associated with DIMMs 134, 136, and 138, for example DDRx bus 132. The MRS command, at generate state 610, is one of a DRAM mode register write command sequence, a register control word command sequence, and a data buffer control word write command sequence. The MRS command cycle time is required to complete the write operation to the mode register and is the minimum time required between MRS commands.
[0053] Therefore, at wait state 612 mode register set command cycle times are satisfied for each MRS command. When programming the mode registers, address fields within the accessed mode register are redefined when the RRW/MRA controller 568 issues the MRS commands. MRS commands are distributed and redistributed to the memory bus at distribution state 614. RRW/MRA controller 568 cycles between distribution state 614 and wait state 612 following execution of each MRS command to satisfy the minimum time required between executions of MRS commands. Although some mode registers have default values defined, not all mode registers have default values defined, and therefore contents of the mode registers are also initialized and/or reinitialized (i.e., written) at distribution state 614, when necessary. A predetermined number of clock cycles are executed before more mode register set commands are executed.
[0054] In response to execution of the final received mode register set command, wait state 612 executes a determined number of clock cycles, and transitions to disable state 616. At disable state 616, modifications of the target parameter are stopped. RRW/MRA controller 568 returns control of the memory bus to the arbiter at arbiter control state 618. Returning to request state 602, the memory controller waits for the next request to modify and/or read at least one DRAM parameter.
[0055] In one embodiment, when the MRS command is a mode register write command sequence, the write command sequence is distributed to all DRAMs in a rank of double data rate memory. The mode register write command sequence enables at least one of a voltage parameter, a timing parameter, and a predetermined alternate device parameter associated with the DRAMs in the rank of double data rate memory to be updated. In one example, when a request is made to update an internal reference voltage, at receipt of the generated MRS command, at least part of the MRS command is received at mode register six (MR6) associated with at least one rank of double data rate memory. In response to receipt of the MRS command, the process waits a predetermined number of voltage reference memory clock cycles at wait state 612 before executing a subsequent MRS command.
[0056] In another embodiment, a request is received to take immediate control of a parameter associated with the DDR device. In response to the request to access the memory bus immediately, arbiter 538 is bypassed. A direct MRS command is generated to the memory bus (DDRx bus 132), enabling immediate control of the memory bus to be obtained.
[0057] FIG. 7 illustrates state diagram 700 that may be used by memory controller 500 of FIG. 5 according to some embodiments. State diagram 700 is a diagram of states that correspond to margining and test commands to be utilized by memory controller 500 to read and update parameters of a data buffer associated with double data rate memory.
State diagram 700 includes a request state 702, an initialization state 704, an active state 706, an idle state 708, an arbiter control state 710, a wait state 712, a write state 714, a quiesce state 716, an enable state 718, a wait time state 720, and a wait state 722.
[0058] In operation, buffer control word (BCW) writes are sent to the data buffer associated with DIMMs 134, 136, and 138 from the registering clock driver (RCD) as a command sequence through a bus associated with the data buffer, for example a buffer control bus, or DDRx bus 132. Configuration registers 562 receive the command sequence. Arbiter 538 picks memory access requests to read or write commands to the data buffer from command queue 520 based on predetermined criteria. The predetermined criteria may include refresh requests, urgent refresh requests, chronological ordered requests, and prescheduled distribution requests. In response to receiving the request to access the data buffer, the RRW/MRA controller 568 instructs arbiter 538 to quiesce the current and pending activity of the memory bus. The RRW/MRA controller 568 takes control of the memory bus to modify predetermined parameters, such as internal reference voltage parameters and data buffer timing values, via BCW write command sequences. BCW write command sequences that modify predetermined data buffer parameters are sent to the data buffers associated with DIMMs 134, 136, and 138 from the registering clock driver (RCD) as a command sequence through the buffer control bus. Changes to the data buffer parameters utilizing control words within the BCW write command sequence require time for the device to settle. When executing the BCW write command sequences, the memory controller 500 waits a predetermined number of clock cycles (tMRC) after the last control word access before further access to the DRAM can take place. For changes or writes to the clock timing, the settling may take up to tDLLK time.
[0059] State diagram 700 further presents states of a state machine that correspond to DIMM data buffer training and margining operations previously described for execution via memory controller 500. As further illustrated by state diagram 700, the RCD receives a request to access and update a parameter associated with DIMMs 134, 136, and 138 at initialization state 704, active state 706, or idle state 708 of the operating system. Initialization state 704 corresponds to the start-up state of the operating system. Multiple functions may be associated with active state 706. In one embodiment, the activity associated with data processing system 100 may range from full mission mode traffic operation to minimal mission mode traffic operation. Idle state 708 corresponds to an inactive, yet functional data processing system 100. In response to receipt of the request to access the data buffer during initialization state 704, active state 706, or idle state 708, memory controller 500 makes a transition to wait state 712. Wait state 712 is a predefined wait time utilized to prepare the data buffer for execution of the BCW write command sequence. Arbiter 538 enables the RRW/MRA controller 568 to take control of the memory bus at state 718. Subsequent to the completion of predetermined wait cycles associated with wait state 712, arbiter 538 quiesces the current and pending operations associated with the data buffer. A transition is made back to wait state 712.
[0060] At state 716, configuration registers 562 request control of the memory bus from arbiter 538.
When arbiter 538 has temporarily halted activity associated with the memory bus, arbiter 538 relinquishes control of the memory bus to configuration registers 562. Data buffer write operations are executed via the RRW/MRA controller 568 utilizing BCW write command sequences generated at write state 714. In one embodiment, the BCW command sequence corresponds to a BCW read command sequence. In one embodiment, subsequent to each BCW write and read command sequence, five data transfer cycles and a parity data transfer cycle are executed. The generated BCW write command sequences are sent to the DDR data buffers of a rank from the DDR4 registering clock driver, at write state 714, as a command sequence through the buffer control bus. Changes to the buffer control words and parameter settings require time for the device to settle. The BCW time (tBCW) parameter indicates the time utilized to transition from the active BCW write command to a subsequent valid BCW write command. The predetermined number of cycle transitions corresponds to a number of transitions to wait state 712 in between BCW write commands, at write state 714.
[0061] The DDR BCW write command sequence is distributed to at least one rank of DDR memory subsequent to changes of the buffer control settings. At write state 714, memory controller 500 transitions to wait state 720 for execution of the predetermined number of clock cycles, tMRC. The transition to wait state 720 occurs after the last control word access, before further access to the DRAM can occur. For changes to the clock timing, at write state 714, memory controller 500 transitions to wait state 722. In response to a final transition to wait state 712, a transition is made to an additional cycle time, at least one of wait state 720 and wait state 722. Wait state 722 corresponds to a predetermined number of clock cycles, tDLLK, executed following changes to timing parameters; memory controller 500 then enables arbiter 538 to regain control of the memory bus, at state 710.
[0062] In one embodiment, at least one data buffer parameter is selected from among a voltage reference parameter, a timing control parameter, a predetermined alternate buffer parameter, and an alternate data buffer parameter. For example, DIMM data buffer parameters may include, but are not limited to, reference voltage operating range, reference voltage step size, reference voltage set tolerance, reference voltage step time, reference voltage valid tolerance, and clock timing. Timing control parameters are received at a timing control register of the data buffer of the associated rank of double data rate memory. Further, the DIMM data buffers support a feature in the buffer control word access transactions, called per-buffer addressability (PBA). Each data buffer can be configured independently from the others. PBA allows independent parameter modification and training per buffer, or independent ODT impedance settings for predetermined DIMM data buffers. The PBA feature is enabled by a BCW bit stored in a word that does not contain any registers that need to be programmed in PBA mode, thereby enabling the buffers to get in and out of PBA mode without having to modify BCW bits that have been programmed specifically per buffer.
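A rough C model of the BCW write sequencing just described: issue each control word, wait between accesses, wait again after the last one, and wait longer when clock timing was changed. All timing constants and helper functions here are placeholders, not values taken from the document or any specification.

```c
#include <stdbool.h>

extern void send_bcw(int rank, unsigned control_word, unsigned value);
extern void wait_clocks(unsigned n);

/* Assumed settling times, in memory clocks; the text names tBCW between
 * control-word accesses and tMRC after the last access, plus a longer
 * settle after clock-timing changes.  Values are invented. */
#define T_BCW       16u
#define T_MRC       8u
#define T_CK_SETTLE 32u

static void bcw_write_sequence(int rank, const unsigned cw[],
                               const unsigned val[], int n,
                               bool timing_changed)
{
    for (int i = 0; i < n; i++) {
        send_bcw(rank, cw[i], val[i]);
        wait_clocks(T_BCW);         /* settle before the next BCW write */
    }
    wait_clocks(T_MRC);             /* after the last control word      */
    if (timing_changed)
        wait_clocks(T_CK_SETTLE);   /* extra settle for clock timing    */
}
```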
[0063] Any of a number of conditions for switching between states of FIG. 6 and FIG. 7 can be used alone or in various combinations. In the illustrated embodiment, these conditions include the generation of read or write command sequences, timing wait cycles, memory bus access, arbiter control, and quiesce of pending operations. Moreover, while FIG. 6 and FIG. 7 may show the margining and training for a single device in a given rank of memory (i.e., PDA and PBA), the FIG. 6 and FIG. 7 state machines can be extended to larger subsets of the memory system in various ways, such as for a single rank and multiple ranks.
[0064] FIG. 8 illustrates a flow diagram of method 800 that may be used by memory controller 500 of FIG. 5. At block 802 a request is received from the operating system to generate at least one DDR training and margining operation at a time subsequent to system initialization. RRW/MRA controller 568 generates a request to arbiter 538 to quiesce current and pending double data rate operations of a rank, at block 804. At block 806 an MRS command is received at a memory bus, for example DDRx bus 132. In response to receipt of the MRS command at the memory bus, at block 808, a first MRS command is sent to a rank, via the RRW/MRA controller 568, to obtain control of the predetermined DDR memory device parameter, for example the VREFDQ parameter, of the rank. A predetermined number of clock cycles (tVREF) are executed at block 810. At block 812, a second mode register command sequence is sent to the rank to update at least one DDR4 memory device parameter associated with the rank. Subsequent to execution of the DDR4 memory device parameter modification, at block 814, a predetermined number of clock cycles are executed. RRW/MRA controller 568 sends a subsequent MRS command to the rank to disable parameter control, at block 816. An additional predetermined number of clock cycles are executed at block 818. At block 820 the arbiter is enabled to regain access to the memory bus. The process concludes at the end block.
[0065] FIG. 9 illustrates a flow diagram of method 900 that may be used by memory controller 500 of FIG. 5. At block 902 a request is received to generate a BCW write command sequence to modify a predetermined data buffer parameter, at a time subsequent to system initialization. The RRW/MRA controller 568 requests control of the memory bus from arbiter 538. Memory controller 500 enables arbiter 538 to quiesce current and pending buffer operations, at block 904. At block 906 a predetermined number of clock cycles are executed. A BCW write command sequence is generated at block 908. In response to generation of the BCW write command sequence, at block 910, the BCW write command sequence is distributed to at least one data buffer to modify a first parameter. At block 912 a predetermined number of clock cycles are executed. A subsequent BCW write command sequence is sent to the data buffer to update a second parameter associated with the rank, at block 914. Subsequent to execution of the parameter update, at block 916, another predetermined number of clock cycles (up to tDLLK) are executed. At block 918 a subsequent BCW write command sequence is sent to disable control of the data buffer. An additional number of predetermined clock cycles are executed at block 920. At block 922 the arbiter is enabled to regain access to current and pending data bus operations. The process concludes at the end block.
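Method 800 reduces to a short command sequence in software. The C sketch below follows blocks 802 through 820 for a VREFDQ update through mode register six; the MR6 bit encoding and the tVREF count are assumptions, since the document does not give the register layout.

```c
extern void arbiter_quiesce_rank(int rank);
extern void arbiter_resume(void);
extern void send_mr6(int rank, unsigned value);
extern void wait_clocks(unsigned n);

#define MR6_VREF_TRAIN_EN (1u << 7)   /* assumed training-enable bit   */
#define T_VREF            150u        /* assumed tVREF, in mem clocks  */

/* Enable VREFDQ control, update the value, disable control, and hand
 * the bus back to the arbiter (blocks 802-820 of FIG. 8). */
static void train_vrefdq(int rank, unsigned vref_code)
{
    arbiter_quiesce_rank(rank);                     /* block 804 */
    send_mr6(rank, MR6_VREF_TRAIN_EN);              /* block 808 */
    wait_clocks(T_VREF);                            /* block 810 */
    send_mr6(rank, MR6_VREF_TRAIN_EN | vref_code);  /* block 812 */
    wait_clocks(T_VREF);                            /* block 814 */
    send_mr6(rank, vref_code);   /* disable control, block 816   */
    wait_clocks(T_VREF);                            /* block 818 */
    arbiter_resume();                               /* block 820 */
}
```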
[0066] FIG. 10 illustrates a flow diagram of method 1000 that may be used by memory controller 500 of FIG. 5. At block 1002 a request is received at the RRW/MRA controller 568 to retrieve/read contents of a multi-purpose register. A request is generated by the RRW/MRA controller 568 and received at arbiter 538 to quiesce current and pending DDR memory bus operations of a DDR memory device, at block 1004. Memory controller 500 enables arbiter 538 to quiesce current and pending DDR4 memory bus operations, at block 1006. At block 1008 an MRS command is received at the memory bus. At least one MRS command is distributed to a rank to read predetermined parameter values associated with at least one DRAM device within the rank, at block 1010. At block 1012 a predetermined number of clock cycles are executed. Contents of the associated rank are retrieved/read, at block 1014. Subsequent to a read of the contents, at block 1016, a predetermined number of clock cycles are executed. At block 1018, the memory controller enables the arbiter to regain access to current and pending memory data bus operations. The process concludes at the end block.
[0067] FIG. 11 illustrates a 2-dimensional data eye graphically displaying output values of a DRAM in response to a range of input values. Data eye 1100 includes data eye 1106, axis 1102, and axis 1104. Axis 1102 displays a minimum and a maximum y-axis value, and axis 1104 displays a minimum and maximum x-axis value.
[0068] In one embodiment, data eye 1100 is a 2-dimensional data eye utilized to optimize predetermined parameters of the DRAM device. Subsequent to arbiter 538 quiescing the memory bus, the RRW/MRA controller 568 enables MRS commands to be distributed to the DRAM device to dynamically move the predetermined parameter values to the center of the data eye. Moving the predetermined parameter values to the center of the eye dynamically optimizes the parameter values associated with the DRAM device during testing and margining.
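A minimal sketch of the centering idea, reduced to one axis of the data eye of FIG. 11: find the passing interval and return its midpoint. A real training tool would sweep both axes and pick the center of the 2-dimensional passing region; `passes_at` is a hypothetical test hook, not part of the described controller.

```c
#include <stdbool.h>

extern bool passes_at(int x, int y);   /* run a pass/fail test at (x, y) */

/* 1-D center search at a fixed y (e.g., a fixed reference voltage):
 * sweep x (e.g., a delay setting), record the passing interval, and
 * return its midpoint as the centered setting. */
static int center_of_eye(int y_fixed, int x_min, int x_max)
{
    int first = -1, last = -1;
    for (int x = x_min; x <= x_max; x++) {
        if (passes_at(x, y_fixed)) {
            if (first < 0) first = x;
            last = x;
        }
    }
    return (first < 0) ? x_min : (first + last) / 2;
}
```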
Also, while the operation of the MRA controller was described with reference to particular types of calibration, such as ZQ calibration or VREFDQ training, it should be apparent that it could be used for other types of calibration and training that are performed during operation.
[0071] Accordingly, it is intended by the appended claims to cover all modifications of the disclosed embodiments that fall within the scope of the disclosed embodiments.
A memory device (14) includes a plurality of memory components (24, 26, 28) that store data and a processor (22) communicatively coupled to the plurality of memory components (24, 26, 28). The processor (22) may receive a plurality of packets (30) associated with a plurality of data operations, such that each of the plurality of packets (30) includes a transaction window field (42) indicating a type of memory component associated with a respective data operation of the respective packet (30). The processor (22) may also perform the plurality of data operations in a first order based on the type of memory component indicated in the transaction window field (42) of each of the plurality of packets (30).
CLAIMS
1. A memory device comprising: a plurality of memory components configured to store data; a processor communicatively coupled to the plurality of memory components, wherein the processor is configured to: receive a plurality of packets associated with a plurality of data operations, wherein each of the plurality of packets comprises a transaction window field indicating a type of memory component associated with a respective data operation of the respective packet; and perform the plurality of data operations in an order based on the type of memory component indicated in the transaction window field of each of the plurality of packets.
2. The memory device of claim 1, wherein the processor is configured to perform a first portion of the plurality of data operations before a second portion of the plurality of data operations, wherein the first portion is associated with a first memory type of the memory types and the second portion is associated with a second memory type of the memory types.
3. The memory device of claim 2, wherein the first memory type is associated with a first set of requests having a first latency and the second memory type is associated with a second set of requests having a second latency that is larger than the first latency.
4. The memory device of claim 1, wherein the processor is configured to perform the plurality of data operations in the order by: determining whether a first data operation of the plurality of data operations can be performed, wherein the first data operation corresponds to a first packet of the plurality of packets and a first transaction window; identifying a second data operation of the plurality of data operations, wherein the second data operation corresponds to a second packet of the plurality of packets and a second transaction window; and performing the second data operation when the first data operation cannot be performed, wherein the first transaction window is different from the second transaction window.
5. The memory device of claim 4, wherein the first data operation cannot be performed when a memory address associated with the first data operation is busy.
6. The memory device of claim 1, wherein the memory types comprise a Dynamic Random-Access Memory (DRAM), a Static Random-Access Memory (SRAM), a NAND memory, or any combination thereof.
7. The memory device of claim 1, wherein the transaction window field comprises a minimum transaction size for each of the plurality of packets.
8. The memory device of claim 1, wherein the processor is configured to send a reorder message to another processor that transmitted the plurality of packets, wherein the reorder message indicates the order, and wherein the order is different from an order in which the plurality of packets are transmitted.
9. A system, comprising: a memory device comprising a processor; a receiving component communicatively coupled to the processor, wherein the receiving component is configured to: receive a plurality of packets from the processor, wherein the plurality of packets is transmitted in a first order; determine whether a plurality of data operations that corresponds to the plurality of packets should be performed in the first order based on availability of a memory component associated with the plurality of data operations; determine a second order to perform the data operations when the plurality of data operations should not be performed in the first order; and send a reorder message comprising the second order to the processor.
10.
The system of claim 9, wherein the receiving component determines that the plurality of data operations should not be performed in the first order when at least one of the plurality of data operations cannot be performed due to an unavailable memory address or a busy memory address.
11. The system of claim 9, wherein the receiving component determines the second order based on whether at least one of the plurality of data operations is dependent on another one of the plurality of data operations being performed before the at least one of the plurality of data operations.
12. The system of claim 9, wherein each of the plurality of packets is associated with a transaction window, and wherein the receiving component determines the second order by identifying a portion of the plurality of packets having a same transaction window.
13. The system of claim 9, wherein the receiving component is configured to send a plurality of response packets to the processor according to the second order after sending the reorder message.
14. The system of claim 9, wherein the processor is configured to associate each of a plurality of response packets received from the receiving component after receiving the reorder message to a respective packet of the plurality of packets according to an order indicated in the reorder message.
15. The system of claim 9, wherein the reorder message comprises a new order number for each packet of a portion of the plurality of packets for which the processor has not received a corresponding response packet from the receiving component.
16. The system of claim 15, wherein the new order number is associated with a relative position in a queue of a plurality of response packets expected to be received by the processor.
17. A method comprising: transmitting, via a processor, a plurality of packets to a receiving component configured to perform a plurality of data operations based on the plurality of packets; receiving, via the processor, a reorder message regarding a plurality of response packets being transmitted from the receiving component, wherein the reorder message is associated with a portion of the plurality of packets transmitted to the receiving component, and wherein the processor has not received a response packet associated with any packet of the portion of the plurality of packets; receiving, via the processor, the plurality of response packets from the receiving component; and associating, via the processor, each of the plurality of response packets with a corresponding packet of the portion of the plurality of packets.
18. The method of claim 17, wherein the reorder message comprises an order in which each of the plurality of response packets is associated with the corresponding packet of the portion of the plurality of packets based on a relative order of the portion of the plurality of packets in a queue.
19. The method of claim 17, comprising: renaming each packet of the portion of the plurality of packets based on a relative order of the portion of the plurality of packets in a queue; generating a modified order of the relative order based on a preferred order to perform a portion of the plurality of data operations that correspond to the portion of the plurality of packets; and generating the reorder message based on the modified order.
20.
A system, comprising: a processor configured to generate a plurality of packets associated with a plurality of data operations; and a receiving component configured to: receive the plurality of packets from the processor, wherein the plurality of packets is received in a first order that corresponds to an order in which the plurality of data operations are to be performed; send a plurality of reorder messages when the plurality of data operations cannot be performed in the first order; append each received packet of a portion of the plurality of packets with a sequence number when the plurality of reorder messages exceeds a threshold; generate a response packet for each received packet of the portion, wherein the response packet for each received packet of the portion comprises a respective sequence number; and transmit the response packet for each received packet of the portion to the processor.
21. The system of claim 20, wherein a respective sequence number is assigned to each received packet of the portion of the plurality of packets in a round-robin fashion based on a type of memory associated with a respective data operation that corresponds to a respective received packet.
22. The system of claim 21, wherein the processor is configured to: receive the response packet for each received packet of the portion of the plurality of packets; and associate the response packet for each received packet of the portion of the plurality of packets to a respective packet of the portion of the plurality of packets based on the respective sequence number.
23. The system of claim 22, wherein the receiving component is configured to transmit a not-acknowledge packet when an error is identified in one of the plurality of packets, wherein the not-acknowledge packet comprises a second sequence number.
24. A tangible, non-transitory, machine-readable medium, comprising instructions configured to: receive a plurality of packets in a first order; determine whether a plurality of data operations that corresponds to the plurality of packets should be performed in the first order based on information regarding reordering preferences provided in each packet of the plurality of packets; determine a second order to perform the data operations when the plurality of data operations should not be performed in the first order; and send a reorder message comprising the second order to a processor that transmitted the plurality of packets.
25. The tangible, non-transitory, machine-readable medium of claim 24, wherein the information indicates whether reordering is allowed or not.
26. The tangible, non-transitory, machine-readable medium of claim 24, comprising instructions configured to generate the second order based on the information, wherein the information indicates a degree to which reordering of the plurality of data operations is permitted.
27. The tangible, non-transitory, machine-readable medium of claim 24, wherein the information is provided in a 2-bit field of each packet of the plurality of packets.
SYSTEMS AND METHODS FOR REORDERING PACKET TRANSMISSIONS IN A SCALABLE MEMORY SYSTEM PROTOCOL
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application is a Non-Provisional Application claiming priority to U.S. Provisional Patent Application No. 62/006,668, entitled "Systems and Methods for a Scalable Memory System Protocol," filed June 2, 2014, which is herein incorporated by reference. This application is also related to U.S. Patent Application No. 14/724,558, entitled "Systems and Methods for Improving Efficiencies of a Memory System," filed May 28, 2015, which is also herein incorporated by reference.
BACKGROUND
1. Field Of The Invention
[0002] The present disclosure is generally related to a memory system protocol used for performing data operations (e.g., read, write) using memory devices. More specifically, the present disclosure is related to a packet-based scalable protocol that enables a number of memory and processing combinations, provides bit-efficient data transfer operations, and is concordant with a variety of bus types (e.g., electrical, optical).
2. Description Of The Related Art
[0003] This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
[0004] Conventional protocols generally transmit packets between memory devices with relatively low failure rates as compared with their predecessors. However, as industries aim to minimize the amount of energy involved in moving packets of data between memory devices and other components, it is desirable to use protocols that efficiently move packets of data using a minimal amount of energy, while maintaining the integrity of the packet transmission.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Various aspects of this disclosure may better be understood upon reading the following detailed description and upon reference to the drawings in which:
[0006] FIG. 1 illustrates a block diagram of an example of a computing system, in accordance with an embodiment;
[0007] FIG. 2 illustrates a block diagram of an example of a memory device which may be part of the computing system of FIG. 1, in accordance with an embodiment;
[0008] FIG. 3 illustrates a packet level view of a packet that may be transmitted within the computing system of FIG. 1, in accordance with an embodiment;
[0009] FIG. 4 illustrates a detailed packet level view of the packet that may be transmitted within the computing system of FIG. 1, in accordance with an embodiment;
[0010] FIG. 5 illustrates a flow chart of a method for assigning transaction windows for various types of memories that are part of the memory device of FIG. 2, in accordance with an embodiment;
[0011] FIG. 6 illustrates an example of a two-stage response for high latency read operations, in accordance with an embodiment;
[0012] FIG. 7 illustrates an example of a one-stage response for high latency direct memory access operation, in accordance with an embodiment;
[0013] FIG. 8 illustrates a lane packing example in which a scalable protocol packs two 18-bit requests together, in accordance with an embodiment;
[0014] FIG. 9 illustrates a flow chart of a method for generating a packet for transmission, in accordance with an embodiment;
[0015] FIG. 10 illustrates a block diagram depicting a number of packets that may be transmitted according to the lane packing scheme, in accordance with an embodiment;
[0016] FIG. 11 illustrates a flow chart of a method for receiving packets according to the lane packing scheme, in accordance with an embodiment;
[0017] FIG. 12 illustrates a flow chart of a method for reordering operations that are performed by a component receiving packets, in accordance with an embodiment;
[0018] FIG. 13 illustrates a block diagram showing how packets are reordered with reference to the method of FIG. 12, in accordance with an embodiment;
[0019] FIG. 14 illustrates a flow chart of another method for reordering operations that are performed by a component receiving packets, in accordance with an embodiment;
[0020] FIG. 15 illustrates a flow chart of a method for throttling back the transmission rate of requests sent from a transmitting component, in accordance with an embodiment;
[0021] FIG. 16 illustrates a graph that depicts a linear throttle-back curve, in accordance with an embodiment; and
[0022] FIG. 17 illustrates a graph that depicts a non-linear throttle-back curve, in accordance with an embodiment.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
[0023] One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
Scalable Memory System Protocol
[0024] As will be discussed in detail below, the present disclosure generally relates to a scalable memory system protocol. That is, the scalable memory system protocol may adjust certain operations based on characteristics of the data packets (e.g., requests, responses) being transferred. In one embodiment, the scalable memory system protocol ("scalable protocol") may be a packet-based protocol that enables an efficient (e.g., power efficient, bit efficient) transmittal of packets of data between memory devices, computing devices, and the like. The scalable protocol may be implemented in a number of combinations with various types of memory and processors such as Automata processors, a Processor-in-Memory, network devices, storage appliances, hierarchical memory, abstracted memory, and the like. As used herein, processors may include any suitable processor capable of performing executable instructions on a corresponding electrical device.
The scalable protocol may also facilitate a broad range of devices including data center switches/routers, network routers, mobile devices, storage devices, Automata processors, Stream processors, processor-in-memory, work-moving-processors, Big Data, Big Graph, secure memory, virtual network, general abstracted memory (e.g., Dynamic Random-Access Memory (DRAM), NAND, and emerging memories), and the like.
[0025] In certain embodiments, the scalable protocol may be designed to facilitate communication of data packets between various memories and processors while maintaining the lowest reasonable scalable protocol overhead. In other words, the scalable protocol may be designed to provide a bit-efficient transfer of data packets in that most, if not all, bits transferred via the scalable protocol are directly part of a corresponding data packet being transmitted. For instance, as will be discussed in more detail below, the scalable protocol may enable request packets to be packed together without padding a signal with zeros unrelated to the respective packets, thereby maximizing the bit efficiency of data packets being transferred via transmission lanes of a bus.
[0026] In addition to providing a bit-efficient mechanism to transfer data packets, the scalable protocol may be concordant with a number of bus types, such as electrical or optical buses. Moreover, the scalable protocol may be capable of providing various operations with regard to the respective bus including encoding, lane counting, channel counting, speed, style, instantiation count of a system, and the like.
Scalable Protocol
[0027] Keeping the foregoing in mind, the scalable protocol may be optimized to provide for successful transactions such that packet failures are rare (e.g., < 1e-6). The scalable protocol may also provide a careful tradeoff between packet transmission types, sizes, and the number of different packet sizes that may be handled.
[0028] As discussed above, industries are increasingly focused on minimizing data movement energy. That is, the energy consumed moving data packets between memory devices should be minimized. As such, the scalable protocol may, within reason, eliminate certain bits and messages that may be discerned from other bits or messages or may otherwise be unnecessary. For example, the scalable protocol may obviate the need for a device to transmit data related to information that may already be known to the receiver.
[0029] Moreover, to provide efficient data movement operations, the scalable protocol may facilitate transactions that are "sent to the memory." The scalable protocol may also transfer local operations, where internal data flow is relatively low as compared to external control operations, with the external control operations. Furthermore, the scalable protocol may implement an error control strategy that minimizes overhead using a dynamic field size that adjusts based on the amount of data (e.g., payload) being transmitted in the respective packet.
[0030] The scalable protocol may also be designed to use a minimum number of fields to convey data. As such, the scalable protocol may allow field size tuning and flexibility, since every packet may not make use of all available fields.
[0031] The scalable protocol may also be designed to facilitate the coexistence of low-latency and high-latency data.
For example, the scalable protocol may provide the ability to interlace the transmittal of low-latency data between the transmittal of high-latency data.
[0032] The design of the scalable protocol may be characterized as simple and generic in that the variable packet size may be determined from a single field of the respective packet. Further, the scalable protocol may maintain simplicity in terms of its operations while remaining capable of performing complex transactions and operations. In addition, the scalable protocol may be flexible enough to enable future functions that it may not currently be designed to provide.
[0033] In certain embodiments, the scalable protocol may limit the order in which packets are sent using local ordering schemes. That is, the scalable protocol may not enforce certain global synchronization ordering rules or the like. To stay true to the notion that the scalable protocol remains abstract, the scalable protocol may facilitate operations with a special device or with different types of channel properties.
[0034] Keeping the foregoing in mind, the present disclosure describes a number of systems and techniques that may be implemented within the scalable protocol to provide for the aforementioned advantages. Although certain systems or techniques detailed below are described independently with respect to other systems or techniques, it should be noted that each of the systems and techniques described herein may be implemented with various other systems and techniques also described herein.
Computing and Memory Systems Using the Scalable Protocol
[0035] Turning now to the drawings, FIG. 1 illustrates a block diagram of a computing system 10 that may employ various techniques and systems described herein. The computing system 10 may be any of a variety of computing devices, such as a computer, pager, cellular phone, personal organizer, control circuit, etc. The computing system 10 may include a host system on chip (SoC) 12 that may be coupled to a number of memory devices 14. The host SoC 12 may be an integrated circuit (IC) that integrates all components of a computer or other electronic system into a single chip. As such, the host SoC 12 may include one or more processors, such as a microprocessor, that may control the processing of system functions and requests in the computing system 10.
[0036] As mentioned above, the host SoC 12 may be coupled to the memory devices 14. In certain embodiments, the host SoC 12 may be coupled to the memory devices 14 via channels 16. The channels 16 may include buses, electrical wiring, or the like.
[0037] FIG. 2 depicts a block diagram of an embodiment of the memory device 14. The memory device 14 may include any storage device designed to retain digital data. The memory device 14 may encompass a wide variety of memory components including volatile memory and non-volatile memory. Volatile memory may include Dynamic Random Access Memory (DRAM) and/or Static Random Access Memory (SRAM). Moreover, the volatile memory may include a number of memory modules, such as single inline memory modules (SIMMs) or dual inline memory modules (DIMMs).
[0038] The non-volatile memory may include a read-only memory (ROM), such as an EPROM, and/or flash memory (e.g., NAND) to be used in conjunction with the volatile memory. Additionally, the non-volatile memory may include a high capacity memory such as a tape or disk drive memory.
As will be appreciated, the volatile memory or the non-volatile memory may be considered a non-transitory tangible machine-readable medium for storing code (e.g., instructions).
[0039] As shown in FIG. 2, in certain embodiments, the memory device 14 may include a system on chip (SoC) 22 that may be any suitable processor, such as a processor-in-memory (PIM) or a computer processor (CPU), tightly coupled to the memory components stored on the memory device 14. Generally, the memory SoC 22 may be on the same silicon chip as the memory components of the memory device 14. By merging the processing and memory components into the memory device 14, the memory SoC 22 may manage the manner in which data requests and responses are transmitted and received between the memory components and the host SoC 12. In certain embodiments, the memory SoC 22 may control the traffic between the memory components to reduce latency and increase bandwidth. As will be appreciated, the host SoC 12 and the memory SoC 22 may employ a scalable memory system protocol when controlling the transmissions between memory components and other devices in accordance with embodiments described herein. As such, the scalable memory system protocol may be operating on the channels 16 between the memory device 14 and the host SoC 12, as well as on channels 29 between the memory components and the memory SoC 22.
[0040] In certain embodiments, the memory device 14 may also include a buffer 23. The buffer 23 may store one or more packets received by the memory SoC 22. Additional details with regard to how the memory SoC 22 may use the buffer 23 will be described below with reference to FIGS. 15-17. By way of example, the memory device 14 may include memory types such as NAND memories 24, Reduced-Latency Dynamic Random Access Memory (RLDRAM) 26, double data rate fourth generation synchronous dynamic random-access memory (DDR4) 28, and the like.
[0041] In certain embodiments, the host SoC 12 and the memory SoC 22 may perform various operations based on computer-executable instructions provided via memory components, registers, and the like. The memory components or storage may be any suitable articles of manufacture that can serve as media to store processor-executable code, data, or the like. These articles of manufacture may represent computer-readable media (i.e., any suitable form of memory or storage) that may store the processor-executable code used by the host SoC 12 or the memory SoC 22 to perform the presently disclosed techniques. The memory and the storage may also be used to store the data, analysis of the data, and the like. The memory and the storage may represent non-transitory computer-readable media (i.e., any suitable form of memory or storage) that may store the processor-executable code used by the host SoC 12 or the memory SoC 22 to perform various techniques described herein. It should be noted that non-transitory merely indicates that the media is tangible and not a signal.
[0042] Although the following description of various aspects related to the scalable protocol is described herein as being performed with respect to the host SoC 12 and the memory SoC 22, it should be noted that all of the systems and techniques described herein may be performed using any suitable device.
That is, the scalable protocol may facilitate communication between any two devices, such as communications between two processors, two memory modules, a processor and a memory module, and the like.
Packet-Level View of Packets in Scalable Protocol
[0043] To employ the scalable memory system protocol when transmitting requests and responses involving the memory components, the memory SoC 22 may send packets of data structured according to the packet level view of a packet 30 illustrated in FIG. 3. As shown in FIG. 3, the packet 30 may include a transaction type field 32, a payload field 34, and an error control code (ECC) field 36. The transaction type field 32 may include data indicative of the type of transmittance, a type of packet being transmitted, or both. The transaction type field 32 may also indicate a packet size, identifying the number of bits in the data payload and the number of bits in the ECC field, thereby indicating the number of bits in the entire packet. In certain embodiments, the transaction type field 32 may indicate the size of the payload field 34 and the ECC field 36 in an indirect manner. For example, the data stored in the transaction type field 32 may serve as an index to a lookup table. The lookup table may provide information regarding the sizes of the payload field 34 and the ECC field 36. As such, the memory SoC 22 may, in one example, receive the packet 30 and use the data stored in the transaction type field 32 as an index to a lookup table that may be stored within the memory device 14 to determine the sizes of the payload field 34 and the ECC field 36.
[0044] In certain embodiments, the transaction type field 32 may specify different types of packets based on whether the packet is being transmitted on a request bus Q or a response bus S, which may include the channels 16, the channels 29, or the like. Generally, the request bus Q and the response bus S may be separate, unidirectional, or common inputs/outputs. The request bus Q generally includes q lanes, and the response bus S generally includes s lanes.
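As a rough illustration of the indirect size lookup just described, the Python sketch below maps a transaction type code to assumed payload and ECC widths. The table contents are hypothetical stand-ins; only the idea that the transaction type field alone implies the total packet length comes from the text.

```python
# type code -> (name, payload bits, ECC bits); all widths here are assumed
TRANSACTION_SIZES = {
    0b01011: ("8uRead",   45,   6),
    0b10110: ("8uWrite",  117,  8),
    0b11111: ("256Write", 2093, 37),
}

def packet_bits(transaction_type: int) -> int:
    """Total packet length implied by the transaction type field alone,
    assuming a 5-bit transaction type field."""
    _name, payload_bits, ecc_bits = TRANSACTION_SIZES[transaction_type]
    return 5 + payload_bits + ecc_bits

print(packet_bits(0b01011))  # 56 bits under these assumptions
```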
[0045] Example transaction type fields 32 for packets 30 transmitted on the request bus Q may include read operations (e.g., 8uRead, 8uRead2, varRead, where u might be an 8-bit unit or a 9-bit unit or possibly a non-integer unit size of data), message data (e.g., message), read-modify-write (RMW) operations (e.g., RMW1A, RMW2A, RMW3A, RMW4A), datasets (e.g., 32uData, 64uData, 128uData, 256uData), pattern write operations (e.g., 8uPatternWrite, 16uPatternWrite), write-with-enable operations (e.g., 8uWriteWithEnables, 16uWriteWithEnables), write operations (e.g., 8uWrite, 16uWrite, 32Write, 48uWrite, 64Write, 80uWrite, 96uWrite, 112uWrite, 128Write, 256Write), and the like. Providing 32Write operations and 64Write operations may provide more flexibility to a system designer in picking a maximum packet size. The scalable protocol may, in one embodiment, have a limit of 256Unit, but using a smaller maximum packet size may help with system latency. It should be understood that the difference between 32uWrite and 32Write is that 32uWrite is a single fixed size and the TransactionSize is not included in the packet. On the other hand, 32Write includes a TransactionSize and thus can involve additional 32U chunks of data, not just the 32U chunk included in the original request packet.
Noting the listed transaction type examples above for the request bus Q, the packets 30 transmitted via the request bus Q may include a total of 26 native transactions (e.g., 8uRead, message, RMW1A, etc.), each of which may be represented using a 5-bit field for global (i.e., systems that include numerous CPU modules and/or numerous memory device modules in which packets may be relayed from unit to unit) or local systems (i.e., systems that include few modules in which packets move point to point between units without relaying). As such, in one embodiment, the transaction type field 32 for a packet 30 on the request bus Q may be 5 bits.
[0046] In the same manner, example transaction type fields 32 for packets 30 transmitted on the response bus S may include message data (e.g., message), datasets (e.g., 8uData, 16uData, 32uData, 48uData, 64uData, 80uData, 96uData, 112uData, 128uData, 256uData), and the like. Again, noting the listed transaction type examples above for the response bus S, the packets 30 transmitted via the response bus S may include a total of 11 native transactions (e.g., message, 8uData, etc.), each of which may be represented using a 4-bit or 5-bit field for a local system. As such, in one embodiment, the transaction type field 32 for a packet 30 on the response bus S may be 4 bits.
[0047] Since the 26 request bus Q transaction types and the 11 response bus S transaction types include 5 of the same transaction types (e.g., message, 128uData, 256uData), the total number of transaction types used by the request bus Q and the response bus S may be 32. These 32 transaction types may thus be represented in a 5-bit field. Additional details regarding the transaction types will be discussed further below.
[0048] Referring again to FIG. 3, the packet 30 may also include a payload field 34 and an error control code (ECC) field 36. As mentioned above, the respective size of the payload field 34 and the ECC field 36 may be determined based on the data in the transaction type field 32. By way of example, the payload field 34 may be approximately between 45 bits and 2093 bits, and the ECC field 36 may be approximately between 6 bits and 37 bits. The payload field 34 may include the data representative of the request or response being sent via the request or response bus, respectively.
[0049] The ECC field 36 may include the error control code to determine whether the packet 30 received by the receiving component includes any errors. As such, the error control code may include various algorithms, such as adding redundant data or parity data to a message, such that the original data may be recovered by the receiving component even when a number of errors were introduced, either during the process of transmission or on storage. Generally, the error control code may provide the ability to detect an error within the limits of the code and indicate a further action, such as retransmitting the errant packet, when the error is detected.
Transaction Type Field
[0050] As mentioned above, the scalable protocol may use packets that have a transaction type field to perform various types of operations more efficiently. Generally, the scalable protocol may enable an abstracted memory architecture to employ any memory type and incorporate various types of data processing using a single abstraction protocol.
Keeping this in mind, the transaction type field 32 may be a useful piece of data to allow the scalable protocol to perform various types of data processing, since the transaction type field 32 provides two distinct pieces of information. That is, the transaction type field 32 combines two data fields (i.e., type and size) into one for the minimum possible bit count occupancy in the protocol.
[0051] As will be shown below, the scalable protocol may support variable size packets for transmission efficiency. As such, it may be useful to indicate the size of the packet to the receiving component to prevent the system from becoming unsynchronized. Here, the transaction type field 32 may provide a single field that identifies the type of system transaction being performed and may implicitly define the packet size by virtue of the transaction type. In other words, the transaction type field 32 may indicate a type of transaction being requested by the transmitting component, and the receiving component may then determine the size of the corresponding packet (e.g., payload field 34 and ECC field 36) based on the specified transaction type. As such, the transaction type field 32 may be a dual-purpose field employed by the scalable protocol to provide a bit-efficient manner to convey information.
[0052] In certain embodiments, the transaction type field 32 may also indicate additional information regarding data that may be provided in the payload field 34. For instance, based on the value of the transaction type field 32, transaction window information (window), address information (address), levels of indirection (levels) information, message type information, raw data, and other types of information may be ascertained to be part of the payload field 34. Details regarding the information that may be part of the payload field 34 will be discussed in greater detail below.
[0053] The scalable protocol may be employed in a system having one or more request bus Q transactions and one or more response bus S transactions. Although the request bus Q and the response bus S have been described above as having a 5-bit field and a 4-bit field, respectively, it should be noted that the request bus Q and the response bus S may be designed to have a variety of different bit sizes. By way of example, request bus Q transactions may be indicated using a 5-bit field (e.g., 00000, 00001, ..., 11110, 11111), such that possible transaction types that may be associated with the 5-bit field are as follows (where data unit u size is 8 bits):
01011 - 8uRead - 8B data read operation, provide additional fields (e.g., sub-fields within the payload field 34): Window, Address, Levels (levels of indirection)
01101 - varRead - variable data size read operation, provide additional fields: TransactionSize, Window, Address, Levels
00000 - Message - general message, provide additional fields: Window, MessageType, Data (Data is constrained only by the field size; e.g., data for the Nack message type may include DataSequence, OriginatingTransactionType, OriginatingWindow)
01110 - RMW1A - read-modify-write request with single address incorporated, provide additional fields: TransactionSize, Window, Address, OpCode, ImmediateData
01100 - 8uRead2 - two 8B data read operations, provide additional fields: First_Window, First_Address, First_Levels, Second_Levels, Second_Address
10110 - 8uWrite - write request including 8B data, provide additional fields: Window, Address, Levels, 8B data
10010 - 8uWriteP - write request including 8B data to be written once or more, provide additional fields: Window, Address, TransactionSize, Levels, 8B data
01111 - RMW2A - read-modify-write request with two addresses incorporated, provide additional fields: TransactionSize, First_Window, First_Address, OpCode, ImmediateData, Second_Window, Second_Address
10100 - 8uWriteEn - write with WriteEnableBits and 8B data, provide additional fields: Window, Address, Levels, 8 enable bits, 8B data
10000 - RMW3A - read-modify-write request with three addresses incorporated, provide additional fields: TransactionSize, First_Window, First_Address, OpCode, ImmediateData, Second_Window, Second_Address, Third_Window, Third_Address
10111 - 16uWrite - write request including 16B data, provide additional fields: Window, Address, Levels, 16B data
10011 - 16uWriteP - write request including 16B data to be written once or more, provide additional fields: Window, Address, TransactionSize, Levels, 16B data
10101 - 16uWriteEn - write with WriteEnableBits and 16B data, provide additional fields: Window, Address, Levels, 16 enable bits, 16B data
10001 - RMW4A - read-modify-write request with four addresses incorporated, provide additional fields: TransactionSize, First_Window, First_Address, OpCode, ImmediateData, Second_Window, Second_Address, Third_Window, Third_Address, Fourth_Window, Fourth_Address
00011 - 32uData - extended data packet, provide additional fields: Window, 32B data. Note that a data sequence number is not explicitly transmitted because the extended data packets are transmitted in order; thus, the receiver can append a sequence. If a subsequent NACK is required, the implicit sequence number is used as a reference.
11000 - 32Write - write request including 32B data, provide additional fields: Window, Address, Levels, 32B data, TransactionSize
11001 - 48uWrite - write request including 48B data, provide additional fields: Window, Address, Levels, 48B data
00101 - 64uData - extended data packet, provide additional fields: Window, 64B data. Note that a data sequence number is not explicitly transmitted because the extended data packets are transmitted in order; thus, the receiver can append a sequence. If a subsequent NACK is required, the implicit sequence number is used as a reference.
11010 - 64Write - write request including 64B data, provide additional fields: Window, Address, Levels, 64B data, TransactionSize
11011 - 80uWrite - write request including 80B data, provide additional fields: Window, Address, Levels, 80B data
11100 - 96uWrite - write request including 96B data, provide additional fields: Window, Address, Levels, 96B data
11101 - 112uWrite - write request including 112B data, provide additional fields: Window, Address, Levels, 112B data
01001 - 128uData - extended data packet, provide additional fields: Window, 128B data. Note that a data sequence number is not explicitly transmitted because the extended data packets are transmitted in order; thus, the receiver can append a sequence. If a subsequent NACK is required, the implicit sequence number is used as a reference.
11110 - 128Write - write request including 128B data, provide additional fields: Window, Address, Levels, 128B data, TransactionSize
01010 - 256uData - extended data packet, provide additional fields: Window, 256B data. Note that a data sequence number is not explicitly transmitted because the extended data packets are transmitted in order; thus, the receiver can append a sequence. If a subsequent NACK is required, the implicit sequence number is used as a reference.
11111 - 256Write - write request including 256B data, provide additional fields: Window, Address, Levels, 256B data, TransactionSize
[0054] The listed example transaction types are provided in order of the ensuing packet size (barring any unintentional ordering errors) assuming a 5-bit transaction type, a 4-bit transaction size, a 3-bit window, a 48-bit address, a 7-bit data sequence number, and extra bits in the data field which are specifically stated for each transaction type. Moreover, as mentioned above, the packet 30 may include the ECC field 36, which may be a fixed size as in conventional protocols. However, as will be appreciated, in certain embodiments, the ECC field 36 may be a variable size, as will be discussed in greater detail below.
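Under the field-width assumptions just stated, a packet's bit count can be composed mechanically, as in this hedged Python sketch. Which optional fields a given transaction type carries, and the helper names, are illustrative guesses rather than protocol definitions; the 2-bit levels width is taken from the levels-of-indirection discussion later in this document.

```python
FIELD_BITS = {"type": 5, "tsize": 4, "window": 3, "address": 48, "levels": 2}

def request_bits(data_bytes: int, has_tsize: bool = False) -> int:
    """Bits in a write-style request before the ECC field is appended."""
    bits = (FIELD_BITS["type"] + FIELD_BITS["window"]
            + FIELD_BITS["address"] + FIELD_BITS["levels"]
            + 8 * data_bytes)
    if has_tsize:                 # e.g., 32Write also carries a TransactionSize
        bits += FIELD_BITS["tsize"]
    return bits

print(request_bits(8))   # an 8uWrite-style request: 122 bits under these assumptions
```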
Conversely, in a protocol having 8-bit flits, the flit count of the request bus Q packets would be, in order of transaction type, as follows: 8, 8, 9, 11 , 13, 16, 16, 17, 18, 21 , 24, 25, 26, 27, 41, 57, 73, 89, 105, 121 , 132, 138, 260, 266. This protocol may then include a packet size field that may be 9 bits in size to indicate the flit count of each packet. Alternatively, the packet size field may be 5 bits in size to differentiate each of the 24 different lengths and then a translation function may be used to determine an exact flit count. Unlike conventional protocols, the scalable protocol may not employ a packet size field. Instead, the system may use a translation function to determine a packet's size based on the transaction type and may then save the protocol bits.Transaction Windows[0058] In addition to providing improved bit-efficiency with regard to error control codes, the scalable protocol may organize packets according to their respective transaction types and transmit the organized packets according a particular order based on their respective transaction types. In conventional protocols, requests may be ordered according to a time at which they have been transmitted. In this case, if the first request involves a high latency and the following request (i.e., second request) involves a low latency, the second request may have to wait for the first request to finish even though it may be completed more quickly than the first request. As a result, the first request may choke the bus. In other words, the first request may prevent the bus from responding to relatively low latency requests, even though the low latency requests may be resolved more quickly than the higher latency requests.[0059] To provide a more efficient manner in which to mix different types of transaction requests within the bus, the scalable protocol may use transaction windows to determine an order in which requests are serviced. A transaction window may be a virtual channel implemented using a virtual address space. Each transaction window may be associated with a respective memory device, such as NAND and DRAM. As such, a single transaction window may be associated with a memory or memories having the same characteristics, such as latency, bandwidth, granularity, persistence, and the like.[0060] Generally, the transaction window may provide information related to a certain set of rules of engagement for each particular transaction. As mentioned above, the transaction window data may specify a set of lanes of a physical bus (e.g., channels 29) being used to transmit and receive packets for particular transactions. The set of lanes specified by the transaction window may be referred to as a virtual channel accessible to the memory device 14. It should be noted that the channels 29 described herein includes one or more lanes in which data may be transferred. Using the transaction window data to characterize certain features (e.g., ordering) related to the transmission or reception of packets, the scalable protocol may better manage the transmission of packets between processors.[0061] For instance, since each type of memory device has a different latency, it may be beneficial to manage the flow of bus traffic between various types of memory devices 14 and the host SoC 12 based on respective latencies of the respective memory devices. By way of example, DRAM devices generally have fast latencies (e.g. 50ns from a random request), while NAND devices generally have slow latencies (e.g. 
500 µs) with error correction after a random request. SRAM buffers have a faster latency of 10 ns. Keeping this in mind, the scalable protocol may designate a transaction window for each memory device. In one embodiment, the scalable protocol may use two fields to designate each transaction window: a 48-bit Address and a 3-bit Window (i.e., addressing Windows 0 through 7). FIG. 4 illustrates a block diagram that depicts the two fields that designate the transaction window in the packet 30. As shown in FIG. 4, a transaction window field 42 and an address window field 44 may be part of the payload field 34. The transaction window field 42 may specify a designated transaction window, and the address window field 44 may specify the 48-bit address associated with the specified transaction window. The 48-bit address may be a virtual address assigned to a virtual channel (i.e., window). In one embodiment, the virtual address space may reference a physical address located on a hard disk drive or some other storage device. As such, the memory device may have the ability to store more data than physically available.
[0062] In addition to the transaction window field 42 and the address window field 44, the packet may include a start bit 46 and a level of indirection field 48. The start bit 46 may indicate the beginning of a packet in a stream of bits. The level of indirection field 48 may be part of the payload field 34 and may provide a value that indicates a number of levels of indirection the respective transaction may include. Additional details regarding the start bit field 46 and the level of indirection field 48 will be discussed in greater detail in other sections below.
[0063] Generally, each type of memory device may be assigned to a different transaction window. By way of example, DRAM0 may be assigned into Window0, DRAM1 into Window1, DRAM2 into Window2, NAND0 into Window3, NAND1 into Window4, and SRAM buffers and control registers into Window7. With this in mind, an example set of transactions may be sent according to the following sequence:
(1) Read.Window0.AddressA
(2) Read.Window3.AddressB
(3) Read.Window0.AddressC
(4) Read.Window0.AddressD
(5) Read.Window0.AddressE
(6) Read.Window0.AddressF
(7) Read.Window3.AddressG
(8) Read.Window0.AddressH
(9) Read.Window0.AddressI
[0064] As shown above, transactions 1, 3-6, 8, and 9 are part of Window0, which corresponds to a DRAM memory device. Transactions 2 and 7, on the other hand, are part of Window3, which corresponds to a NAND memory device. Upon receiving the above requests, the receiving component may respond to the received requests using ordering rules established according to the respective transaction windows specified for each transaction. As such, the receiving component may use the transaction windows to provide a local ordering protocol between the transmitting component and the receiving component.
[0065] In one embodiment, the ordering rules specified for a particular transaction window may be based on the respective latency associated with the respective transaction window. That is, the receiving component may respond to the requests involving lower latencies first before responding to the requests having longer latencies. Since the receiving component may be aware of the latency differences between each transaction window, the receiving component may decide to receive the transactions according to their window designations.
As such, referring again to the example transactions described above, the receiving component implementing the scalable protocol may respond to the above requests as follows:
(1) Data.Window0.AddressA
(3) Data.Window0.AddressC
(4) Data.Window0.AddressD
(5) Data.Window0.AddressE
(6) Data.Window0.AddressF
(8) Data.Window0.AddressH
(9) Data.Window0.AddressI
(2) Data.Window3.AddressB
(7) Data.Window3.AddressG
[0066] As shown above, the receiving component may first respond to the low-latency requests of Window0 before responding to the higher latency requests of Window3. That is, the long latency requests may be transmitted later than the short latency requests. As a result, the system bus servicing the requests is not hampered by the presence of different classes of memory on the same bus, and this is achieved without adding various elaborate protocol complications, such as adding a field with REQUEST PRIORITY. In this way, the scalable protocol provides a complex system operation using a minimal number of bits in a relatively simple manner.
[0067] In another example, the receiving component may employ a local ordering scheme based on a corresponding transaction window specified for each transaction. Consider the following transactions:
(1) Read8b.Window1.AddressA
(2) Read8b.Window2.AddressB
(3) Read8b.Window1.AddressC
The receiving component may first receive transaction (1) and determine whether AddressA is available. If AddressA is busy, the receiving component may store transaction (1) in a queue and wait for AddressA to become available. In the meantime, the receiving component may receive transaction (2) and perform the read operation if AddressB is available. The receiving component may then receive transaction (3), and since it is associated with the same window as transaction (1), the receiving component may determine whether there are any ordering conflicts with regard to performing transaction (3) before transaction (1), because they are part of the same transaction window. In the same manner, the receiving component may disregard any potential ordering conflict, or the determination of any potential ordering conflict, with transaction (2) because it is part of a different transaction window. As such, the transaction windows may provide a more efficient way for data operations to be performed while different transactions are being performed. That is, since the transaction windows allow operations to be logically grouped with related operations or memory devices, operations may be performed in a variety of orders, thereby providing a flexible way to complete transactions. In contrast, conventional protocols typically enforce a strict order of data operations to be performed according to the order in which the transactions were sent, even though different transactions may be performed in a variety of orders, or may process transactions based on the inclusion of priority information sent in a dedicated protocol field.
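The per-window conflict check described in this example can be sketched as follows. The Transaction class, the busy-address set, and the scheduling helper are invented for illustration; they are not fields or functions of the scalable protocol itself.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    window: int
    address: int

def next_serviceable(queue: deque, busy: set) -> Optional[Transaction]:
    """Return the first transaction that can run now. Ordering is enforced
    only within a window: a stalled window blocks its own later entries,
    while transactions in other windows may be serviced ahead of it."""
    blocked_windows = set()
    for txn in queue:
        if txn.window in blocked_windows:
            continue                          # keep per-window order
        if txn.address in busy:
            blocked_windows.add(txn.window)   # stall this window only
            continue
        queue.remove(txn)
        return txn
    return None

# Mirrors the example: (1) Window1.AddressA is busy, so (2) Window2.AddressB
# runs first while (3) Window1.AddressC stays queued behind (1).
q = deque([Transaction(1, 0xA), Transaction(2, 0xB), Transaction(1, 0xC)])
print(next_serviceable(q, busy={0xA}))        # -> Transaction(window=2, address=11)
```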
[0068] In one embodiment, the scalable protocol may provide an ability to assign a minimum transaction size for each window (e.g., Window0.Size = 8 bytes, Window3.Size = 128 bytes). For example, if the minimum transfer size for Window0 is 8 bytes, then with a 48-bit address field, Window0 may store 2^48 × 8 bytes ≈ 2.25×10^15 bytes. In the same manner, if the minimum transfer size for Window3 is 128 bytes, Window3 may support ≈3.6×10^16 bytes. As such, both Window0 and Window3 support considerably more bytes than the address space implies.
[0069] Another feature associated with the transaction window includes a simple system-level addressability of other spaces, such as Window0 SRAM and system control registers, without creating additional commands in the protocol. That is, SRAM and system control registers may be addressed by simply using Window0. Prior protocols, on the other hand, may use additional commands such as register.read and register.write to interact with these types of memories. With the designated transaction window for these memory types, the same read and write commands used for other memory devices may also be used for SRAM and system control registers. That is, the read and write commands may simply point to an appropriate window. As such, the scalable protocol may employ fewer commands, thereby reducing the number of bits used in the protocol.
[0070] By organizing data transactions according to transaction types, multiple transaction windows may provide multiple avenues of access to the same memory type. For example, a typical DDR3 DRAM may include eight banks, and an internal bus may include eight such DRAMs. With this in mind, the eight DRAMs may be organized such that Window1 represents bank 0 of a group of eight DDR3 DRAMs and Window2 provides access to bank 1 of this same group. In this way, each window may specify a particular virtual address space of each DRAM. With this in mind, it is clear that a number of suitable grouping methods are available, since there could be any number of DRAMs grouped in a lock-step operation, each with pages, banks, and ranks. In the same manner, NANDs may also be grouped with pages, planes, and blocks. Furthermore, multichannel devices can be further separated per channel and in various aggregations thereof. Generally, the grouping options may be determined based on the complexity of the logic chip design.
[0071] By supporting multiple transaction windows having multiple virtual address spaces and virtual channels, the scalable protocol may use the transaction windows to establish predictable data ordering in a system that contains memories that have different latencies. As a result, the scalable protocol may support high and low priority requests without an explicit protocol field that specifies how the high and low priority requests are ordered.
[0072] With the foregoing in mind, FIG. 5 illustrates a flow chart of a method 50 for assigning transaction windows for various types of memories that are part of the memory device 14. Although the method 50 is depicted in a particular order, it should be noted that the method 50 may be performed in any suitable order, and thus is not limited to the order depicted in the figure. Additionally, the following description of the method 50 will be described as being performed by the memory SoC 22 for discussion purposes. As such, any suitable processor that is communicatively coupled to various types of memories may perform the operations described in the method 50.
[0073] Referring now to FIG. 5, at block 52, the memory SoC 22 may receive an initialization signal from registers or other memory components stored within the memory SoC 22 itself. In one embodiment, the initialization signal may be received by the memory SoC 22 upon power up or when the memory device 14 initially receives power.
[0074] At block 54, the memory SoC 22 may determine the memory types that it may be able to access.
[0069] Another feature associated with the transaction window is simple system-level addressability of other spaces, such as Window0 SRAM and system control registers, without creating additional commands in the protocol. That is, SRAM and system control registers may be addressed simply by using Window0. Prior protocols, on the other hand, may use additional commands such as register.read and register.write to interact with these types of memories. With a designated transaction window for these memory types, the same read and write commands used for other memory devices may also be used for SRAM and system control registers. That is, the read and write commands may simply point to the appropriate window. As such, the scalable protocol may employ fewer commands, thereby reducing the number of bits used in the protocol.

[0070] By organizing data transactions according to transaction types, multiple transaction windows may provide multiple avenues of access to the same memory type. For example, a typical DDR3 DRAM may include eight banks, and an internal bus may include eight such DRAMs. With this in mind, the eight DRAMs may be organized such that Window1 represents bank 0 of a group of eight DDR3 DRAMs and Window2 provides access to bank 1 of this same group. In this way, each window may specify a particular virtual address space of each DRAM. With this in mind, it is clear that a number of suitable grouping methods are available, since there could be any number of DRAMs grouped in a lock-step operation, each with pages, banks, and ranks. In the same manner, NANDs may also be grouped with pages, planes, and blocks. Furthermore, multichannel devices can be further separated per channel and into various aggregations thereof. Generally, the grouping options may be determined based on the complexity of the logic chip design.

[0071] By supporting multiple transaction windows having multiple virtual address spaces and virtual channels, the scalable protocol may use the transaction windows to establish predictable data ordering in a system that contains memories with different latencies. As a result, the scalable protocol may support high- and low-priority requests without an explicit protocol field that specifies how the high- and low-priority requests are ordered.

[0072] With the foregoing in mind, FIG. 5 illustrates a flow chart of a method 50 for assigning transaction windows to the various types of memories that are part of the memory device 14. Although the method 50 is depicted in a particular order, it should be noted that the method 50 may be performed in any suitable order, and thus is not limited to the order depicted in the figure. Additionally, the following description of the method 50 will be described as being performed by the memory SoC 22 for discussion purposes. As such, any suitable processor that is communicatively coupled to various types of memories may perform the operations described in the method 50.

[0073] Referring now to FIG. 5, at block 52, the memory SoC 22 may receive an initialization signal from registers or other memory components stored within the memory SoC 22 itself. In one embodiment, the initialization signal may be received by the memory SoC 22 upon power up or when the memory device 14 initially receives power.

[0074] At block 54, the memory SoC 22 may determine the memory types that it may be able to access. That is, the memory SoC 22 may scan its communication lanes (e.g., channels 29) and identify the different types of memories that may be communicatively coupled to the memory SoC 22. Referring back to the example memory device 14 depicted in FIG. 2, the memory SoC 22 may determine that the RLDRAM 26, the DDR4 28, and the NAND 24 memory types are coupled to the memory SoC 22.

[0075] At block 56, the memory SoC 22 may determine the capabilities of each of the memory types identified at block 54. The capabilities of a memory type may include its capacity, an expected latency for a read operation using the memory type, an expected latency for a write operation using the memory type, and the like. Other capabilities that may be identified by the memory SoC 22 for use in assigning transaction windows may include read latency, write latency, bandwidth, minimum read transaction size, minimum write transaction size, device cycle time, whether the memory is writeable in place, whether it has byte write capability, and the like. In certain embodiments, each different type of memory may be associated with a different set of capabilities. The associations between the different types of memories and the different sets of capabilities may be stored in a register of the memory SoC 22 or may be provided by each respective memory type.

[0076] After determining the capabilities of the memory types, the memory SoC 22 may, at block 58, assign a transaction window to each memory type identified at block 54 based on the respective capabilities of each memory type. Generally, the memory SoC 22 may assign each similar memory type to the same transaction window. That is, since each similar memory type has similar capabilities, the memory SoC 22 may assign those memory types to the same transaction window. For example, referring again to the example memory device 14 of FIG. 2, the memory SoC 22 may assign the two DDR4 28 memories to the same transaction window because they are identical memory types. In the same manner, if two different memory types have a certain number of similar capabilities, the memory SoC 22 may also assign the two memory types to the same transaction window.

[0077] In one embodiment, the memory SoC 22 may assign a memory type to a corresponding transaction window based on the desired operations of the memory SoC 22. For instance, if the memory SoC 22 desires that all read operations have at least a particular latency, the memory SoC 22 may assign each identified memory type either to a first transaction window that meets this latency threshold or to a second transaction window that does not meet this latency threshold.

[0078] After assigning a transaction window to each identified memory type, the memory SoC 22 may proceed to block 60 and store the properties of each transaction window in a storage device. The storage device may include any suitable device capable of storing data. As such, the storage device may include a local register, a table, or some other information storage unit. In this way, the memory SoC 22 may perform operations for each memory type according to the ordering rules described above. In some cases, the stored properties may detail certain capabilities of each transaction window along with other relevant information regarding the operation of each transaction window.
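A minimal sketch of blocks 54 through 60 is shown below, assuming a simplified capability model; the memory names and capability fields are illustrative only and do not come from the protocol specification.

# A minimal sketch of method 50: group discovered memories by their
# capability set and record each window's properties (block 60).
def assign_transaction_windows(memories):
    windows = {}        # capability signature -> window number
    assignments = {}    # memory name -> window number
    properties = {}     # window number -> stored capability set
    next_window = 1     # Window0 is reserved for SRAM/control registers
    for name, caps in memories.items():
        signature = tuple(sorted(caps.items()))
        if signature not in windows:
            windows[signature] = next_window
            properties[next_window] = caps  # block 60: store properties
            next_window += 1
        assignments[name] = windows[signature]
    return assignments, properties

memories = {
    "DDR4_a":  {"read_ns": 15, "min_read_B": 8},
    "DDR4_b":  {"read_ns": 15, "min_read_B": 8},   # identical -> same window
    "RLDRAM":  {"read_ns": 8,  "min_read_B": 8},
    "NAND":    {"read_ns": 25000, "min_read_B": 4096},
}
print(assign_transaction_windows(memories))

As in the DDR4 example of FIG. 2, the two identical DDR4 entries land in the same transaction window because their capability signatures match.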
Programmable Number of Levels of Indirection

[0079] Although the packet 30 has been described above as having the transaction type field 32, the payload field 34, and the ECC field 36, in certain embodiments, the scalable protocol may include other optional fields in the packet 30 to condition a request, such as a read, write, move, read-modify-write, and the like. One such condition may include indicating a number of levels of indirection to apply to a request.

[0080] Levels of indirection may indicate a number of pointers between the request and the data being requested. Given the sheer amount of data available in computing systems (e.g., Big Data), data is often indexed via multiple tables and stored in one location. That is, in a Big Data system, a request for a particular dataset may include a pointer that points to a second pointer (e.g., a linked list), which points to a third pointer, and so forth. Eventually, the last pointer in the pointer sequence may point to the address of the requested dataset. Each pointer-to-pointer link may be referred to as a level of indirection. The process of identifying the requested dataset through each level of indirection is often referred to as "pointer chasing."

[0081] From the perspective of the requesting component, the requesting component may initially send a request for the particular dataset with a first pointer. In response to the request with the first pointer, the requesting component may receive the second pointer. As such, the requesting component may then send a second request for the particular dataset with the second pointer. This process may continue until the requesting component receives the particular dataset. Accordingly, the traffic on the request bus Q may involve multiple requests before the dataset requested by one single initial request is actually received.

[0082] To reduce the amount of bus traffic for requests involving levels of indirection, the scalable protocol may specify, within the design of an application-specific integrated circuit (ASIC), the memory SoC 22, the host SoC 12, or the like that implements the scalable protocol, an indication of the number of pointers that would otherwise be returned to the requesting component before it actually receives the requested data. As such, the memory system implementing the scalable protocol may identify the pointer chain between the original request and the location of the data and may service the request based on the initial request from the requesting component. That is, one request involving any number of levels of indirection may result in just one response that includes the requested data.

[0083] Keeping this in mind, the optional field indicating the number of levels of indirection may include 2 bits. In one embodiment, binary 00 may indicate no levels of indirection, or that the supplied address in the request is the actual address of the intended operand. Binary 01 may indicate 1 level of indirection, or that the data at the location specified by the address within the request is actually the address (e.g., final address) of a pointer, and the intended operand address is contained in that pointer. For example, in a read request having 1 level of indirection, the actual function performed by the memory system may first include reading the contents of the address contained in the request. In this example, the content of the address may be Address2.
The memory system implementing the scalable protocol may then read the contents at the memory location of Address2, and the content of the memory location of Address2 is supplied as the result of the read request.

[0084] In the same manner, binary 10 may indicate 2 levels of indirection. Here, the supplied address may point to Address2, which may be a pointer. That is, Address2 may include a pointer that points to Address3. The data content at Address3 may then be supplied to the requesting component as the result of the read request.

[0085] Binary 11 may indicate 3 levels of indirection. As such, the supplied address may point to Address2, which may point to Address3, which may point to Address4, which may include the data content. The memory system implementing the scalable protocol may provide the data content to the requesting component as the result of the read request.

[0086] In the instance of a write request, the process performed by the memory system implementing the scalable protocol may be the same as in the read example described above. For instance, with an indirection level field set to binary 11, the memory system may perform a write operation by first reading the address of the write request (e.g., Address2). Knowing that the indirection level field is 11, the memory system may continue by reading the content of Address2, which may refer to Address3. The memory system may then read the content of Address3, which may refer to Address4. The memory system may then write the data of the write request into the memory at Address4. As such, in this example, the write request may involve 3 reads before the write, but all 3 reads are initiated by the single write request. Although the indirection field has been described as having two bits, it should be noted that the indirection field may include any number of bits to indicate any number of levels of indirection.
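The read and write examples above may be summarized with the following sketch, which models memory as a simple dictionary of addresses (an assumption made purely for illustration) and chases the number of pointers given by the indirection field before touching the operand.

# Illustrative memory model: each address maps either to a pointer
# (another address) or to the operand itself, per paragraphs [0083]-[0086].
memory = {0x10: 0x20, 0x20: 0x30, 0x30: 0x40, 0x40: "operand data"}

def resolve(address, levels):
    # Chase 'levels' pointers; levels == 0 means the supplied address
    # already names the operand.
    for _ in range(levels):
        address = memory[address]
    return address

def read(address, levels):
    return memory[resolve(address, levels)]

def write(address, levels, data):
    # For a write with indirection 0b11, three reads locate the final
    # address, then the single write is performed there.
    memory[resolve(address, levels)] = data

print(read(0x10, 3))        # chases 0x10 -> 0x20 -> 0x30 -> 0x40
write(0x10, 3, "new data")  # same chase, then one write at 0x40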
[0087] As mentioned above, the levels of indirection may be specified within the level of indirection field 48 of the payload field 34, as illustrated in FIG. 4. The number of levels of indirection specified within the level of indirection field 48 corresponds to the number of levels of indirection that the memory system may expect to encounter when retrieving the contents of the memory location.

[0088] In one embodiment, the number of bits (e.g., size) used by the level of indirection field 48 may be determined based on a preference provided by the host SoC 12. For instance, upon power up, the host SoC 12 may discover the memory SoC 22 and determine that the memory SoC 22 is operating using the scalable protocol described herein. As such, the host SoC 12 may determine a maximum number of levels of indirection that it may be able to accommodate without compromising its performance. The maximum number of levels of indirection may be determined based on the write and/or read latencies of the host SoC 12 or other operating parameters of the host SoC 12. If, for example, the host SoC 12 determines that the maximum number of levels of indirection is 3, it may specify to the memory SoC 22 to use a 2-bit field for the level of indirection field 48. In some instances, the host SoC 12 may not have a preference with regard to operations involving any number of levels of indirection. In this case, the host SoC 12 may specify to the memory SoC 22 not to include the level of indirection field 48.

[0089] When preparing the packet 30 for transmission, the memory SoC 22 may determine the cause for the packet 30 to be transmitted. As such, the memory SoC 22 may determine what software command was used for the transfer of the packet 30. The software command that generates the packet may correspond, for example, to a command to look up a pointer of a pointer. The memory SoC 22 may interpret this command as having two levels of indirection and thus may provide a binary value of 10 in the level of indirection field 48 when preparing the packet 30 for transmission.

[0090] The levels of indirection may be useful for various types of operations. By way of example, arrays of arbitrary dimensions may use levels of indirection to assist requesting components in identifying the content of their respective requests without adding unnecessary traffic to the respective bus. For instance, a 3-dimensional array may use three pointers to access data. Records of some defined structures may use pointers; one example of such a record is a linked list that has a head and tail pointer for every structure in the list. For linked lists, the abstraction of levels of indirection may enable the parsing of the linked list to occur more efficiently. That is, by knowing an address at which to start and that the requested data is located at a destination that is the 8th element of the list (i.e., involving 8 levels of indirection), the memory system may retrieve the requested data, or the 8th element of the list, using the single request provided by the requesting component. Here, the memory system may parse each of the 8 levels of indirection to determine the location of the requested data. Upon identifying the location of the requested data, the memory system may provide the requesting component with the requested data, thus limiting the bus traffic to one request from the requesting component and one response from the location of the requested data.

Not Acknowledging Received Packets

[0091] Another technique for reducing bus traffic may include not acknowledging received packets. That is, in conventional protocols, a recipient component may send an acknowledgment packet back to the transmitting component for each packet it receives. Since the vast majority of transmitted packets are received by the corresponding recipient component, sending acknowledgment packets may add to the traffic on the respective bus without providing much of a benefit.

[0092] For instance, if an acknowledge bit is sent in response to every successfully received packet, and considering that the transmissions have a bit error rate (BER) of 1e-12, which is common in very high speed interfaces, a large number of unnecessary bits are transmitted just to indicate that each packet has been received. Keeping this in mind, and assuming that an average packet includes 100 bits and that the average packet error rate is approximately 1e-10, the recipient component may transmit an acknowledge bit indicating success for 1×10^10 packets and 1 packet indicating an error. Effectively, the recipient component may have sent about 1×10^10 bits to indicate one error.

[0093] To reduce the number of bits flowing within a bus, the recipient component may not send an acknowledgment packet for every received packet. Instead, the transmitting component may assume that a sent packet has been received unless otherwise notified by the recipient component. Examples of not sending acknowledgment packets for each received packet are illustrated in FIGS. 6 and 7. Referring to FIG. 6, the request bus Q may send a read request of 2 kilobytes.
Upon receiving the read request, the response bus S may transmit a packet indicating that the 2KB message is ready for reading. The request bus Q may then retransmit the read request, which may cause the response bus S to send the requested data in different packets. As shown in FIG. 6, upon receiving each packet of the data, the request bus Q does not send an acknowledgment packet indicating that the packet was received successfully. Here, since the request bus Q may be operating with high-latency read operations, the response bus S may include two stages for the operations. That is, the response bus S may first indicate that the message is ready, and then the response bus S may send the corresponding data related to the read request.

[0094] In the same manner, high-latency direct memory access subsystems may employ a one-stage response for various write operations. For instance, FIG. 7 illustrates an example in which a read-modify-write request is transmitted on the request bus Q and is responded to with a message that the read-modify-write request is complete.

[0095] Keeping the foregoing in mind, the recipient component may still receive packets that have errors. As such, the recipient component may notify the transmitting component that a packet has not been received, or that the received packet contains an error, by sending a NOT_ACKNOWLEDGE packet to the transmitting component. In addition to indicating that the sent packet has not been received, the NOT_ACKNOWLEDGE packet may indicate the most recent known-to-be-good bus transaction. As such, when an error is detected via an ECC subsystem, the packet having the error should be retransmitted. The recipient component may identify to the transmitting component the most recent successful bus transaction as a reference so that a retransmission can occur.

[0096] In certain embodiments, the scalable protocol may use 4 relevant fields to indicate to a transmitting component the identity of the last known-to-be-good bus transaction. The relevant fields include a window, an address, a transaction, and an optional data sequence number. These four fields may identify any request/response in the system. In certain embodiments, an additional ECC field may be used to detect an error in the transmission (e.g., a code that is guaranteed to detect the presence of 1, 2, 3, 4, or 5 random errors in the transmission packet, also known as an HD6 code, as will be described in more detail below).

[0097] Upon detecting an error, the recipient component may send a NOT_ACKNOWLEDGE message to the transmitting component. This packet may have many possible field sizes. For instance, the NOT_ACKNOWLEDGE message may include a 4-bit transaction type, a 3-bit window, a 48-bit address, a 7-bit data sequence number, and a 5-bit original transaction type, for a sum of 67 bits. A 15-bit ECC field may then be added, bringing the total to 82 bits. Referring back to the example above, 82 bits is significantly lower than the 1×10^10 bits sent for indicating one error in 1×10^10 packets, and thus is a more efficient way to indicate packets received in error. It should be noted that the data sequence number mentioned above may identify the erroneous packet. Additional details regarding the data sequence number and how it may be generated will be discussed below with reference to FIGS. 12-14.
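The 82-bit figure can be verified by tallying the field widths listed above, as in the following sketch (the field names are descriptive labels for illustration, not protocol-defined identifiers).

# A worked tally (field widths from the preceding paragraph) of one
# possible NOT_ACKNOWLEDGE layout identifying the last known-to-be-good
# transaction.
fields = {
    "transaction_type":      4,   # NOT_ACKNOWLEDGE type code
    "window":                3,
    "address":              48,
    "data_sequence_number":  7,   # identifies the erroneous packet
    "original_transaction":  5,
}
payload_bits = sum(fields.values())
ecc_bits = 15
print(payload_bits, payload_bits + ecc_bits)  # 67, 82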
[0098] Upon detecting the error in the system, the transmitting component should retransmit the data. However, since there is some latency in detecting the error, the transmitting component may have already transmitted other packets before the recipient component determined that an error was present in a received packet. Since the scalable protocol uses variable packet sizes sent with the data packing techniques described herein, a previous transmission error could cause the recipient component to assume a wrong packet length and hence misinterpret every data packet after the packet containing the error. As such, the receiving component may indicate to the transmitting component the identity of the most recent bus transaction known to be good at the receiving component. The transmitting component and receiving component may then return to the point at which the packet in error was received and prevent any action from occurring on the potentially erroneous packet and the packets that follow it.

[0099] Due to this rule of referencing the last known good bus transaction, the recipient component may accurately indicate to the transmitting component the correct point at which a retransmission may occur. However, the recipient component may incorporate one exception to the above rule when there has been no good transaction (e.g., the first transaction since power-on or reset was unsuccessful). In this case, the recipient component may populate all fields with 0's, such that all elements of the system will interpret the field of 0's as a "first transaction."

[00100] As mentioned above, the scalable protocol may include an optional data sequence number field. This field may support transactions that are larger than the largest response packet supported by the protocol. For example, considering a minimum transaction in a window of 128 bytes and another field, called Size, that dictates the size of a transaction, the total transaction size may be determined as 2^Size × windowMinTransactionSize. If Size is a 3-bit field, the maximum transaction could be 2^7 × 128 = 16,384 bytes. To prevent any bus from being tied up too long by one request, the largest single packet supported by the protocol may carry 128B of data. Hence, the 16,384-byte transaction may be satisfied by 128 data packets of 128B each. In one embodiment, the optional data sequence number field may include 7 bits that reference any one of these 128 data packets. In this manner, if a NOT_ACKNOWLEDGE message is issued, the NOT_ACKNOWLEDGE message may correctly identify the exact point at which the transmission became unsuccessful. In another embodiment, with a minimumTransactionSize of 8B, the transaction sizes for TransactionSize 0 through 15 may be 8 bytes, 16 bytes, 32 bytes, 48 bytes, 64 bytes, 80 bytes, 96 bytes, 112 bytes, 128 bytes, and so on, as opposed to 2^N bytes, to conserve bits on the lower end.
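The following sketch works through the arithmetic of the preceding paragraph under the stated assumptions: a 128-byte window minimum, a 3-bit Size field, and a 128-byte maximum data packet.

import math

# Worked example of paragraph [00100]: the Size field scales the
# window's minimum transaction size, and transfers larger than the
# biggest packet are split into 128-byte data packets numbered by the
# 7-bit data sequence number.
def transaction_bytes(size_field, window_min_bytes):
    return (2 ** size_field) * window_min_bytes

MAX_PACKET_DATA_BYTES = 128
total = transaction_bytes(7, 128)                   # 16,384 bytes
packets = math.ceil(total / MAX_PACKET_DATA_BYTES)  # 128 data packets
print(total, packets)  # the 7-bit sequence number spans all 128 packets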
Data Packing

[00101] Keeping the foregoing in mind, to provide flexible communication buses, the scalable protocol may employ data packing techniques when transmitting packets using any type of bus communication. Generally, since packet sizes are determined based on the type of request or response being sent, the data being sent, the operations being requested, and so forth, it may be difficult to anticipate what type of data channels to use before knowing more details regarding the packet. As such, the scalable protocol may be designed to maximize the use of the available channels by packing the data packets being transmitted together, without padding each individual packet with zeros as is done in conventional protocols. As used herein, the term "without padding" means that between the transmission of data packets, zeros (i.e., bits having the value of zero) are not transmitted across a respective channel. Instead, the next scheduled packet ready to be transmitted will be transmitted on the clock cycle immediately after the previous packet is transmitted.

[00102] For example, consider a request bus Q that includes 10 signal lanes and a response bus S that includes 8 signal lanes. The present example assumes that there is no data encoding and that the transactions include only simple bit transmissions (i.e., no symbol transmissions). If the sizes of occupancy on the Q bus are 4.3, 7.3, 9.7, 13.5, 14.3, 14.9, 20.0, 20.1, 21.6, 33.0, 36.2, 58.8, 65.2, 105.4, 110.5, and 123.0, a conventional protocol may pad the values having fractional components. That is, the conventional protocol may add zeros to the remaining portion of each fractional value such that the sizes of occupancy on the Q bus become 5, 8, 10, 14, 15, 15, 20, 21, 22, 33, 37, 59, 66, 106, 111, and 123, respectively. In some cases, as many as 9 zeros may be added to a transmission, which may adversely impact the overall bus utilization efficiency because the transmitted zeros are not truly representative of data being transmitted. In this manner, these zeros utilize the bus without conveying information, thereby reducing the bus utilization efficiency.

[00103] In one embodiment, instead of padding the data being transmitted, the scalable protocol may allow requests to be packed together. The bus signal is thus left without padded zeros. For example, FIG. 8 illustrates a lane packing example 61 in which the scalable protocol packs two 18-bit requests together. Referring to FIG. 8, the scalable protocol may regard transmissions as symbols instead of bits. In the example of FIG. 8, one bit may represent one symbol. Since the bus 62 in FIG. 8 includes 12 lanes (i.e., it may transmit 12 bits in one flit), the scalable protocol may transmit the two 18-bit requests by packing the requests together. That is, a second 18-bit request 66 may be transmitted immediately after a first 18-bit request 64. As such, the transmission bus includes no wasted bits (e.g., padded zeros).

[00104] In certain embodiments, to ensure that the receiving component can identify the start of a new packet in the packed lane, the transmitting component may start each new packet 30 with a start bit, which may be specified in the start bit field 46, as mentioned above. As such, when the receiving component receives the packed data packets as a stream of bits, it may identify the start of each packet based on when the start bit is detected. With this in mind, each packet that is transmitted may include a start bit (e.g., a value of 1) to indicate the presence of a new packet. In this way, when a receiving component receives the packets packed together, it may identify the beginning of each new packet, determine the transaction type of the packet based on the transaction type field 32, the transaction window based on the transaction window field 42, the address for the operation based on the address field 44, the number of levels of indirection based on the level of indirection field 48, and the error checking code based on the ECC field 36.
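A minimal sketch of this packing behavior is given below for a hypothetical 12-lane bus like the one in FIG. 8; the start bit of the preceding paragraph is included, and zeros appear only at the tail of the stream where no packet is ready, never between packets.

BUS_LANES = 12

def pack(packets):
    """Concatenate start-bit-prefixed packets, then cut into flits."""
    stream = []
    for bits in packets:
        stream.append(1)        # start bit marks a new packet
        stream.extend(bits)
    # Pad only the tail of the final flit (the bus cannot be stalled).
    while len(stream) % BUS_LANES:
        stream.append(0)
    return [stream[i:i + BUS_LANES]
            for i in range(0, len(stream), BUS_LANES)]

first = [0] * 18    # two 18-bit requests, contents elided
second = [1] * 18
for flit in pack([first, second]):
    print(flit)
# The second packet's start bit follows the first packet's last bit
# immediately; only the tail of the final flit is zero-filled while no
# further packet is ready.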
[00105] With this in mind, FIG. 9 illustrates a flow chart of a method 70 for generating a packet for transmission, such that the packet can be transmitted using the lane-packing scheme described above. For the purposes of discussion, the following description of the method 70 will be discussed as being performed by the memory SoC 22 (i.e., the transmitting/requesting component), but it should be understood that any processor that is part of the memory device 14 may perform the operations described in the method 70.

[00106] Referring now to FIG. 9, at block 72, the memory SoC 22 may receive an indication of a data operation to be transmitted. The data operation may include a message to be sent, a read operation, a write operation, or the like. At block 74, the memory SoC 22 may identify a transaction type that corresponds to the data operation. In certain embodiments, the software requesting that the data operation be performed may specify the transaction type. Alternatively, the memory SoC 22 may receive a command from the software and determine the corresponding transaction type from a look-up table or a storage unit locally accessible by the memory SoC 22. That is, the memory SoC 22 may consult a look-up table that may include a number of transaction types indexed according to a number of possible data operations that may be requested.

[00107] At block 76, the memory SoC 22 may determine a transaction window based on the memory type associated with the requested data operation. That is, the memory SoC 22 may determine what type of memory will be accessed when performing the data operation and determine a corresponding transaction window based on the type of memory, using a look-up table or the like. In addition to the transaction window, the memory SoC 22 may determine a memory address that refers to the location of data related to the data operation and the transaction window. For example, for a read operation, the address may refer to the location of the data that is to be read from a specified memory.

[00108] At block 78, the memory SoC 22 may determine a number of levels of indirection that corresponds to the requested data operation. As discussed above, the number of levels of indirection may be specified by the data operation itself or by the software requesting that the data operation be performed.

[00109] At block 80, the memory SoC 22 may generate an error control code (ECC) value for the packet 30. The ECC value may be used by the receiving component to ensure that the packet 30 is received without error. As such, the memory SoC 22 may first determine an appropriate ECC algorithm to use to encode the packet 30. In one embodiment, the software application requesting the transmission may specify the ECC algorithm to use. Alternatively, the host SoC 12 or the memory SoC 22 may specify a particular ECC algorithm to use to encode and decode all of the transmitted and received packets. In any case, the ECC value for the packet 30 may be determined based on the bits provided in the transaction type field 32 and the payload field 34.

[00110] After determining bit values that represent the transaction type, the transaction window, the number of levels of indirection, and the ECC value mentioned above, the memory SoC 22 may, at block 82, generate the packet 30 according to the values determined at blocks 72, 74, 76, and 80. When generating the packet 30, the memory SoC 22 may initially provide a 1 in the start bit field 46 to indicate to a receiving component that a new packet is being transmitted.
After inserting the 1 in the start bit field 46, the memory SoC 22 may provide a value that represents the transaction type identified at block 74 in the transaction type field 32.

[00111] The memory SoC 22 may then generate the payload field 34 of the packet 30 using the transaction window and address determined at block 76 and the number of levels of indirection determined at block 78. That is, the memory SoC 22 may enter the transaction window value after the transaction type field 32, into the transaction window field 42. The memory SoC 22 may then enter the address for the data operation into the address field 44 and the number of levels of indirection into the level of indirection field 48.

[00112] After the packet 30 is generated, the memory SoC 22 may, at block 84, transmit the packet 30 via the channels 16, the channels 29, or the like, depending on the destination of the packet 30. After the generated packet 30 is transmitted, the memory SoC 22 may proceed to block 86 and determine whether the next packet to be transmitted is ready for transmission. Generally, the next packet for transmission may be generated according to the process described above with regard to blocks 72-82. If the next packet is ready for transmission, the memory SoC 22 may proceed to block 84 again and transmit the next packet immediately after the previous packet is transmitted. By transmitting each subsequent packet immediately after another packet is transmitted, the memory SoC 22 may transmit packets according to a packed-lane scheme, which does not involve padding a bus with zeros when not all of the lanes of the bus are utilized.
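For illustration, the following sketch assembles a packet in the spirit of blocks 72 through 84; the 5/3/48/2-bit field widths follow examples used elsewhere in this document, and the parity-style check value is a stand-in for a real ECC algorithm, which the protocol would select as described at block 80.

# A sketch of packet assembly: start bit, transaction type, window,
# address, indirection, then a placeholder check field. Field widths
# here are assumptions for illustration only.
def build_packet(ttype, window, address, indirection):
    body = (format(ttype, "05b") + format(window, "03b") +
            format(address, "048b") + format(indirection, "02b"))
    ecc = format(body.count("1") % 64, "06b")  # placeholder 6-bit check
    return "1" + body + ecc                    # leading 1 is the start bit

pkt = build_packet(ttype=4, window=2, address=0x1000, indirection=0)
print(len(pkt), pkt)  # 65 bits: 1 + 5 + 3 + 48 + 2 + 6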
[00113] To better illustrate how packets may be transmitted according to the packed-lane scheme, FIG. 10 illustrates a number of packets transmitted according to the packed-lane scheme described herein. As shown in FIG. 10, the first packet 92 being transmitted on the bus 62 includes a start bit (1), 5 bits for the transaction type field 32, 45 bits for the payload field 34, and 6 bits for the ECC field 36. Immediately after the first packet 92 is transmitted, the second packet 94 is transmitted on the bus 62. As such, in bit lane 9 at bit time 3, immediately after the last bit of the ECC field 36 of the first packet 92, a start bit (1) is present. Moreover, the remaining bit lanes (i.e., bit lanes 10-15) include data associated with the second packet 94.

[00114] In contrast to other packet transmission schemes, none of the bit lanes of the bus 62 are padded with zeros or left unutilized for the transmission of a packet. That is, in other packet transmission schemes, since the first packet 92 occupied just 9 bit lanes of the available 16, the remaining bit lanes (i.e., bit lanes 10-15) would be padded with zeros and the second packet 94 would be transmitted beginning at bit time 4. In this way, the memory SoC 22 may maximize the efficiency of the bus utilized for sending packets.

[00115] It should be noted that there are still instances when the memory SoC 22 may transmit zeros between sending packets. For instance, referring back to block 86 of FIG. 9, if the next packet is not ready for transmission, the memory SoC 22 may proceed to block 88 and transmit a zero in the next available bit lane. That is, since the bus 62 operates continuously, the memory SoC 22 may not be able to stall the bus 62 and thus may transmit zeros on the bus 62 until the next packet is ready for transmission. As such, after the memory SoC 22 transmits a zero along the bus in the next available bit lane, the memory SoC 22 may return to block 86 and again determine whether a next packet is ready for transmission. This scenario is also illustrated in FIG. 10.

[00116] Referring again to FIG. 10, after the second packet 94 is transmitted, the memory SoC 22 may not have another packet ready for transmission. As such, at bit time 8, the memory SoC 22 may begin transmitting zeros until the third packet 96 is ready for transmission. That is, the memory SoC 22 may transmit zeros on bit lanes 6-15 at bit time 8 until the third packet 96 is ready for transmission at bit time 9. To ensure that the receiving component does not misinterpret the zeros padded onto the bus as data, the receiving component may continuously receive the bits from the memory SoC 22 and determine that a valid packet is being transmitted only after receiving a one, i.e., the start bit of the next packet.

[00117] In certain embodiments, if another packet is not ready for transmission, the memory SoC 22 may power down the bus 62 until the next packet is ready for transmission. In this case, the memory SoC 22 may conserve energy used to power the bus 62 when the bus 62 is not being utilized to transmit packets.

[00118] To illustrate the efficiency of transmitting packets using the lane-packing scheme, the following example is presented. A transmission sequence on a 10-lane bus may include the following bus activity: 73 bits, then 652 bits, then 73 bits, then 652 bits. This group of 4 requests includes a total of 1450 bits, which corresponds to exactly 145 signal intervals (formally called unit intervals, or UIs) on the bus with no wasted bits. A UI may refer to one clocked group of data including a certain number of bits. For instance, on an 8-bit bus (i.e., an 8-lane link), one clocked group of 8 bits transmitted via the link corresponds to one flit, and that flit may be referred to as one UI including 8 bits of data. As such, the UI may be used to evaluate the efficiency with which a bus is being utilized. That is, the UI occupancy of a packet is calculated by dividing the packet bit count (including the StartBit, the transaction type field 32, the payload field 34, and the ECC field 36) by the bus width of 8b. As such, if the 8-lane link is used to send 6 bits of data, the UI occupancy is 0.75 (6/8).

[00119] Keeping the foregoing in mind, the example presented below assumes the following conditions: an ECC Hamming Distance of 3; a transaction type field 32 of 5 bits on both the request bus Q and the response bus S; a 7-bit dataSequenceNumber; an 8-bit unit size; a 4-bit transactionSize; a 3-bit Window; a 48-bit address; a 2-bit levelsOfIndirection; a 24-bit RMWopcode+data; and a 4-bit messageType. With these sizing assumptions, 11 sample transaction types, which may appear on the response bus S, may include packet sizes of 79b, 83b, 144b, 273b, 401b, 530b, 658b, 786b, 914b, 1043b, and 2067b. These packet sizes include the transaction type field 32, the payload field 34, and the ECC field 36, but exclude the StartBit mentioned above. On a conventional 8b bus, zero padding would be added to bring each packet up to an even 8b boundary, and no StartBit would be required. As such, the number of bus flits, or the number of unit intervals, used to transmit these 11 transaction types after adding the zero padding will respectively be 10 (79/8), 11 (83/8), 18 (144/8), 35 (273/8), 51 (401/8), 67 (530/8), 83 (658/8), 99 (786/8), 115 (914/8), 131 (1043/8), and 259 (2067/8).
That is, for the first packet of 79 bits, one zero will be padded onto the last 8 bits of the packet, such that 10 flits of the 8-lane link will be employed to send the 79-bit packet.

[00120] However, using the techniques described herein, such as adding the StartBit and packing the responses together, the number of UIs used to transmit the same packets is respectively 10 (80/8), 10.5 (84/8), 18.125 (145/8), 34.25 (274/8), 50.25 (402/8), 66.375 (531/8), 82.375 (659/8), 98.375 (787/8), 114.375 (915/8), 130.5 (1044/8), and 258.5 (2068/8). As such, the average savings for randomly selected packet sizes is about 0.5 UI per transaction, and the bit savings grows as the number of lanes is increased. This example is indicative of any width of the request bus Q or the response bus S, whether the two buses have equal or unequal widths. To enable the scalable protocol to pack the lanes as described above, the host SoC 12 or any other receiver may use the following transmission/receiving scheme: receive the packet 30; parse the contents of the packet 30 to identify the transaction type, the size of the payload, and the location of the ECC field 36 within the packet 30; verify the correctness of the packet 30 based on the ECC; and then act upon the transmission with certitude.
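The comparison above can be reproduced with the short calculation below, which pads each sample packet to an 8-bit boundary in the conventional case and adds one StartBit per packet in the packed case.

import math

# Reproduces the bus-occupancy comparison of the preceding paragraphs
# for an 8-lane bus: zero-padded packets versus packed packets carrying
# one extra StartBit each.
packet_bits = [79, 83, 144, 273, 401, 530, 658, 786, 914, 1043, 2067]

padded_UIs = [math.ceil(b / 8) for b in packet_bits]  # pad to 8b boundary
packed_UIs = [(b + 1) / 8 for b in packet_bits]       # +1 StartBit, no pad

print(padded_UIs)   # 10, 11, 18, 35, 51, 67, 83, 99, 115, 131, 259
print(packed_UIs)   # 10, 10.5, 18.125, ..., 258.5
print(sum(padded_UIs) - sum(packed_UIs))  # UIs saved across the sample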
[00121] In this manner, a received transmission packet may be captured in its entirety into a receiver buffer (e.g., the buffer 23) before its contents are parsed. Moreover, the receiver may not use the received packet unless the packet is verified as error-free. The buffer 23 may be operated as a first-in-first-out (FIFO) buffer with an added ability for selective flushing in the event that a transmission error is detected. The scalable protocol may include a variable-bit-length ability for pulling data out of the buffer 23 and for packet bit shifting. As discussed above with reference to FIG. 3, the beginning of the packet 30 may include the transaction type field 32, which may specify a packet size based on the transaction type indicated in the transaction type field 32. As such, the transaction type field 32 includes information that the scalable protocol may use to determine the packet size, including the size and relative location of the ECC field 36 within the packet 30. After the ECC is checked, the receiver employing the scalable protocol may determine whether the packet 30 is error-free. If the packet is deemed error-free, then the receiver may know that the transaction type was properly decoded and that the packet size was interpreted correctly. The receiver may then proceed onward to the next packet, received immediately after the recently parsed packet. This scalable protocol may be used with any bus variations, whether full or half duplex, regardless of sizes, lengths, encoding/decoding methods, and the like. Additional details of the process that occurs after the receiving component receives the packets packed according to the lane-packing scheme will be discussed with reference to FIG. 11 below.

[00122] For reference, the scalable protocol may include transmissions that vary in length. That is, on the request bus Q, the scalable protocol may use 16 different lengths. For example, the request bus may include length bit counts of 43, 73, 97, 135, 143, 149, 200, 201, 216, 330, 362, 588, 652, 1054, 1105, and 1230, with no padding to create any particular optimized length, such as all being increments of 8 or the like. In the same manner, the response bus S may include 8 different lengths, such as length bit counts of 33, 42, 85, 101, 167, 297, 555, and 1069, again with no padding.

Parsing Packets for Data Packing

[00123] As mentioned above, the scalable protocol may be designed to facilitate a maximum bit efficiency. As such, in certain embodiments, the packet 30 may have an arbitrary size that does not correspond to an integer multiple of the utilized physical bus. The transmission of arbitrarily sized packets maintains bit efficiency by packing the packets tightly together, such that each succeeding packet is transmitted immediately after the preceding packet without padding either packet with zeros. However, for the receiver (e.g., host SoC 12) to determine where the first packet ends and the second packet begins, the receiver may implement certain techniques described herein for parsing the received packets. In certain embodiments, the scalable protocol may specify a parsing method for the receiver to employ on received packets. This parsing method may include shift operations, error detection, and buffer management as pipelined operations at the head of the logical operations utilized in a system implementation.

[00124] Keeping the foregoing in mind, an example of a physical bus of 8 bits unidirectional in the ingress direction and 8 bits in the egress direction, full duplex, is described below to clarify certain aspects of the parsing method. In this example, one flit is considered to be one unit interval of data being present on the bus. That is, one flit may include 8 bits of data being transferred via the bus. Moreover, the smallest packet, with a 36b Address, a 3b Window, and Hamming Distance (HD6) error coverage of 59 bits, may include a 5-bit transaction type, a 41-bit data payload, and a 13-bit ECC. Assuming that an endless stream of similarly sized small packets may be packed together, leaving no bit gaps, the transmission may reflect the following sequence for a first packet being transmitted, starting from lane 0 and going to lane 7 (name.0 means bit 0 of that field):

flit 1   TT.0  TT.1  TT.2  TT.3  TT.4  D.0   D.1   D.2
flit 2   D.3   D.4   D.5   D.6   D.7   D.8   D.9   D.10
flit 3   D.11  D.12  D.13  D.14  D.15  D.16  D.17  D.18
flit 4   D.19  D.20  D.21  D.22  D.23  D.24  D.25  D.26
flit 5   D.27  D.28  D.29  D.30  D.31  D.32  D.33  D.34
flit 6   D.35  D.36  D.37  D.38  D.39  D.40  ECC.0 ECC.1
flit 7   ECC.2 ECC.3 ECC.4 ECC.5 ECC.6 ECC.7 ECC.8 ECC.9
flit 8   ECC.10 ECC.11 ECC.12

[00125] The second packet may then be set starting with flit 8, lane 3, as follows:

flit 9   TT.0  TT.1  TT.2  TT.3  TT.4
flit 10  D.0   D.1   D.2   D.3   D.4   D.5   D.6   D.7
flit 11  D.8   D.9   D.10  D.11  D.12  D.13  D.14  D.15
flit 12  D.16  D.17  D.18  D.19  D.20  D.21  D.22  D.23
flit 13  D.24  D.25  D.26  D.27  D.28  D.29  D.30  D.31
flit 14  D.32  D.33  D.34  D.35  D.36  D.37  D.38  D.39
flit 15  D.40  ECC.0 ECC.1 ECC.2 ECC.3 ECC.4 ECC.5 ECC.6
flit 16  ECC.7 ECC.8 ECC.9 ECC.10 ECC.11 ECC.12

[00126] The third packet may then start in flit 16, lane 6, as follows:

flit 16  TT.0  TT.1
flit 17  TT.2  TT.3  TT.4  D.0   D.1   D.2   D.3   D.4
flit 18  D.5   D.6   D.7   D.8   D.9   D.10  D.11  D.12
flit 19  D.13  D.14  D.15  D.16  D.17  D.18  D.19  D.20
flit 20  D.21  D.22  D.23  D.24  D.25  D.26  D.27  D.28
flit 21  D.29  D.30  D.31  D.32  D.33  D.34  D.35  D.36
flit 22  D.37  D.38  D.39  D.40  ECC.0 ECC.1 ECC.2 ECC.3
flit 23  ECC.4 ECC.5 ECC.6 ECC.7 ECC.8 ECC.9 ECC.10 ECC.11
flit 24  ECC.12

[00127] Keeping the three example packets illustrated above in mind, incoming bits may be placed into a receive FIFO once received by the receiver.
Since in the above example there are 8 lanes, the bits may be moved 8 at a time. However, since the incoming bus may be extremely fast (e.g., too fast to cycle the FIFO), the FIFO may also be made considerably wider, and the data may be sent to each successive 8b width of the FIFO in succession until reaching the last unit of width. At that time, the FIFO address is incremented in accordance with usual FIFO operations, and the fill begins again at FIFO lanes 0-7, then 8-15, etc., until the last unit of width is received again. This allows slower logic to keep up with very fast serializer/deserializer (SERDES) components (e.g., a 40Gb/s SERDES has a unit interval of 25ps). If a logical clock of 2GHz is used, the FIFO may be 20x the 8-bit lane width, or 160 bits wide. As such, the ECC logic could naturally be built in 160-bit blocks using XOR gates for each block (e.g., block 0 processes bits 0 through 159, block 1 processes bits 160 through 319, etc.), such that the total number of ECC blocks may be 14, where each ECC block may include a different interconnection of 2-input XOR gates.

[00128] Since each of the three packets described above is transmitted successively, and since the arriving bits do not include any framing information, it is the responsibility of the receiving circuitry (e.g., host SoC 12) to first determine the length of the packet so that the packet can be properly framed. Referring again to the example above, the receiver may first receive the 160-bit value immediately available from the FIFO. In the particular example described above, the entire first packet resides within that 160-bit zone.

[00129] As mentioned above, the first part of the packet 30 may include the start bit field 46 indicating the beginning of the packet 30. The next part of the packet 30 may include the transaction type field 32, which may include a value of 0 through 31. The value of the transaction type field 32 may be used to index a table that indicates the size of the data payload and the size of the ECC (in bits). In certain embodiments, the receiver may use a simple logic function for the same purpose. Although it is not known immediately that all of the received bits are error-free, the receiver may initially assume that they are, in order to use the transaction type specified in the transaction type field 32. The receiver may then, in a pipeline stage, check the ECC to determine whether the received packet is error-free. In one embodiment, to check the ECC, the transaction type of the transaction type field 32 and the data payload of the payload field 34 may be examined in the ECC block(s), such that the incoming ECC bits are provided to all ECC blocks. In one embodiment, the ECC block may check the ECC using a scalable error control code algorithm that employs a Hamming Distance algorithm. For example, the ECC block may employ an error control code algorithm having a Hamming Distance of 6 (HD6). As such, the ECC block may provide error coverage of 59 bits (5b transaction type, 41b data payload, 13b ECC). That is, the ECC block may provide 59 known-to-be-correct bits. Additional details regarding the scalable error control algorithm and algorithms using a Hamming Distance will be described in greater detail below.

[00130] After the receiver verifies that the packet is error-free, the receiver may then know with certainty that the transaction type value was correct, and hence the receiver may have the proper framing of the received packet.
The 59 known-to-be-correct bits may then be forwarded to the next pipeline stage for further packet processing (i.e., to determine the exact request being made and to process that request). After determining that the 59-bit first packet is correct and after forwarding it for further processing, the receiver may then barrel-shift the remaining 101 bits of the 160-bit-wide FIFO to align to bit 0 and repeat the above process.

[00131] In some circumstances, the receiver may have too little data available to parse (i.e., everything from the transaction type field 32, through the payload field 34, to the ECC field 36 should be available). Here, the receiver may continue fetching information until it is all available. Although large packets may exceed a single 160-bit section, since the receiver knows from the transaction type where the ECC starts and ends, the receiver may forward the ECC bits to the appropriate ECC logical blocks. Moreover, since the transaction type is at the head of the packet, the receiver easily knows to look for it. Further, the receiver may determine that the payload field 34 includes everything between the transaction type field 32 and the ECC field 36. Upon identifying the payload field 34, the receiver may send the data payload to the appropriate ECC logical blocks. In certain embodiments, instead of a physical move, the ECC logic may be implemented in situ at the register bits that temporarily store the data, depending on physical layout optimization uses.

[00132] An advantage of the above-described technique is that it supports fast generation of an error message. As such, if the ECC detects an error, a logic signal is passed on to an egress queue manager, and an error message is formulated and transmitted on the appropriate channel.
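A simplified software model of this parse-check-shift pipeline is sketched below; the transaction-type-to-length table and the parity check are placeholders for the protocol's real size table and HD6 ECC, and the bit stream is modeled as a character string for clarity.

# Read the transaction type at the head of the buffered bit stream,
# look up the packet length it implies, verify a (placeholder) check,
# then shift past the packet and repeat.
PACKET_BITS = {0b00001: 59, 0b00010: 85}   # TT -> total packet length

def parse(stream):
    pos = 0
    while pos + 5 <= len(stream):
        ttype = int(stream[pos:pos + 5], 2)    # head: transaction type
        length = PACKET_BITS[ttype]            # implies the framing
        packet = stream[pos:pos + length]
        if len(packet) < length:
            break                              # too little data; wait
        # Placeholder integrity check standing in for the HD6 ECC:
        if packet.count("1") % 2 == 0:
            yield ttype, packet                # known-to-be-correct bits
        else:
            yield "NOT_ACKNOWLEDGE", packet    # trigger an error message
        pos += length                          # shift to the next packet

p1 = "00001" + "1" + "0" * 53   # 59-bit packet, passes the check
p2 = "00010" + "0" * 80         # 85-bit packet, fails the check
for result in parse(p1 + p2):
    print(result[0], len(result[1]))  # -> "1 59" then "NOT_ACKNOWLEDGE 85"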
[00133] With the foregoing in mind, FIG. 11 illustrates a flow chart of a method 100 that may be employed by a receiving component (e.g., host SoC 12) that receives packets according to the lane-packing scheme mentioned above. Although the following description of the method 100 is described as being performed by the host SoC 12, it should be noted that the method 100 may be performed by any suitable receiving component that receives packets that have been lane-packed according to the embodiments described herein.

[00134] Referring now to FIG. 11, at block 102, the host SoC 12 may receive a stream of bits via the bus 62, the channels 16, or the like. As depicted in FIG. 10, the host SoC 12 may receive a number of bits at a time based on the number of bit lanes available on the bus 62.

[00135] Upon receiving the stream of bits, at block 104, the host SoC 12 may identify a start bit of a new packet. As such, the host SoC 12 may monitor the stream of bits until it receives a 1. For example, at bit time 0, the host SoC 12 may detect the start bit and begin parsing the first packet 92.

[00136] At block 106, the host SoC 12 may determine the transaction type of the first packet 92 based on the five bits following the start bit. As discussed above, the host SoC 12 may use a look-up table or consult a key stored in a local storage component to determine the transaction type associated with the first packet 92 based on the binary value received in the transaction type field 32.

[00137] After determining the corresponding transaction type for a respective packet, at block 108, the host SoC 12 may identify the payload field 34 and the ECC field 36 of the respective packet. That is, the transaction type of the respective packet may indicate to the host SoC 12 the number of bits to expect in the payload field 34 and the ECC field 36. As such, the host SoC 12 may designate a first number of bits after the transaction type field 32 to be the payload field 34 and a second number of bits after the payload field 34 to be the ECC field 36.

[00138] After receiving the ECC field 36 for a packet, the host SoC 12 may, at block 110, verify whether the received packet is free of errors based on the data provided in the ECC field 36. That is, the host SoC 12 may use the data provided in the ECC field 36 to check the accuracy of the data provided in the transaction type field 32 and the data provided in the payload field 34.

[00139] At block 112, the host SoC 12 may determine whether the respective packet is free of errors. If the host SoC 12 verifies that the respective packet is error-free, the host SoC 12 returns to block 102 and continues receiving the stream of bits. However, if the host SoC 12 determines that the respective packet is not error-free, the host SoC 12 may proceed to block 114 and send a NOT_ACKNOWLEDGE packet back to the component that transmitted the respective packet. As discussed above, the NOT_ACKNOWLEDGE packet may indicate the most recent known-to-be-good bus transaction. As such, the NOT_ACKNOWLEDGE packet may indicate the transaction type and the address of the last successfully received packet. Since the transmitting component knows the order in which each packet was transmitted, the transmitting component may then resend the packet immediately following the packet referenced in the NOT_ACKNOWLEDGE packet.

[00140] To ensure that the transmitting component is able to resend a certain number of packets upon receiving the NOT_ACKNOWLEDGE packet, in certain embodiments, the transmitting component may not disregard, delete, erase, or write over sent packets in its buffer until a certain amount of time has passed after a respective packet has been transmitted. In other words, after a packet has been transmitted, the transmitting component (e.g., memory SoC 22) may wait a certain amount of time before it deletes the transmitted packet from its buffer component.

[00141] The amount of time that the transmitting component may wait after transmitting each packet before deleting it from its buffer may vary from packet to packet. Since each packet may include a different number of bits, the amount of time involved in transmitting the packet and receiving a NOT_ACKNOWLEDGE packet in response may be different for each packet. Generally, the amount of time that the transmitting component may wait may depend on the worst-case lag time for the packet to be transmitted across the bus 62, the worst-case lag time for the receiving component to detect the error in the packet, and the worst-case lag time for the transmitting component to receive the NOT_ACKNOWLEDGE packet.
The worst-case lag time for each situation mentioned above may be determined based on an expected time for the operation to be performed, with some percentage of the expected time added to provide a margin of error in the expected-time calculation.

[00142] Some of the factors involved in determining the expected time for the various operations described above to be performed include the size of the packet being transmitted, the number of lanes on the request bus Q and the response bus S, the amount of time for a UI of data to be transmitted across each bus, the number of pipeline delays expected in the receiving component before the receiving component verifies that the received packet is error-free, the maximum depth of the queues in the transmitting component, information related to the transmitting component's policy for sending urgent messages (e.g., whether urgent messages are placed at the front of the queue), and the like. It should be noted that the factors listed above are provided as examples and do not limit the scope of the factors that may be used to determine the expected time for the various operations to be performed.

Data Reordering Operations

[00143] Although the transaction windows may be used to indicate an order for a given transaction window, in some instances, performing the transaction operations according to the order of the respective transaction windows may be undesirable. For example, a DRAM might involve a refresh operation, which cannot be postponed by other DRAM operations. Another example may occur when a NAND memory is shuffling data to prepare for an erase operation. Here, a range of addresses associated with the data being shuffled may be temporarily unavailable to a transaction operation that is trying to access the same range of addresses. As such, it may be beneficial for the scalable protocol to reorder the operations despite the order specified according to the transaction windows.

[00144] In conventional systems, various techniques are used to allow reordering. For instance, the system may send a transaction identification with a request operation. The response operation may then include the same transaction identification. The transaction identification may be 8 bits, which means that an additional 8 bits is sent with every request and again with every response. As such, the overhead bits on both the request bus Q and the response bus S may be relatively large as compared to not sending the transaction identification with every request and response.

[00145] Keeping the foregoing in mind, in certain embodiments, the scalable protocol may preserve the order specified according to the transaction windows unless it is determined that the transaction operations may be performed more efficiently if reordered. Once the scalable protocol (e.g., the receiving component) makes this determination, it may send a reorder message that gives a new relative order to a particular transaction zone. The transaction zone may include a subset of all of the transaction operations being sent. Upon receiving the reorder message, the transmitting component may reorder the transaction operations according to the new relative order provided by the reorder message. The new relative order may indicate an order in which each transaction operation may be performed with respect to the other transaction operations being performed.
The respective transaction zone that includes the reordered transaction operations may then maintain the new order until otherwise reordered.

[00146] As mentioned above, the receiving component may send a data reorder message when it is desirable to depart from the natural response sequence. In one embodiment, the receiving component may determine that reordering is preferred based on the transaction type indicated in the transaction type field 32. That is, the transaction type field 32 may inherently indicate that a reordering is preferred. Accompanying the transaction type field 32 may be a 64-bit message that includes 16 4-bit order identifiers. These identifiers may indicate the order of the next 16 responses, if there are 16 responses pending.

[00147] When operating under a normal flow, the receiving component may transmit responses in the order of the commands according to a given transaction window. When the receiving component determines that reordering the received requests is preferred, the receiving component may wait until all of the responses that can remain in order are first sent before sending a reorder message. If the system was expecting the next group of responses in the sequence 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15, the reorder message may alter anything within that sequence. For example, a new order of 1, 2, 3, 4, 5, 6, 7, 0, 8, 9, 10, 11, 12, 13, 14, and 15 may be preferred, such that each value is represented with a respective 4-bit value. If there are fewer than 16 responses pending, the non-existent future responses may be listed in order. That is, referring again to the example above, if responses 0 through 7 were pending and response 0 was preferred to be delayed until after all of the others, then identifiers 8 through 15 may remain in order at the end, so long as 0 is provided after all of the others.

[00148] In one embodiment, the reorder message may be sent any time that a new ordering is preferred. Referring again to the example above, if responses are sent in the order 1, 2, 3, 4, 5, 6, 7, and 0, and it is then determined that the remaining items cannot be sent in the anticipated order, a new reorder message may be sent. Here, the very next response would be response 0, not response 8, because an order counter is reset to zero any time a reorder message is sent. As such, upon sending the new reorder message, the new relative order of 0 through 15 may be determined according to the most advantageous ordering. In the absence of any reorder messages, all data may be in the "natural" order of the requests received per window. In any case, by supporting data reordering in the system without routinely transmitting request identifications or response identifications, the scalable protocol may save a large amount of overhead that is otherwise used in conventional protocols.

[00149] With the foregoing in mind, FIG. 12 illustrates a flow chart of a method 120 that may be employed by the receiving component (e.g., host SoC 12) for reordering received packets relative to the original order in which the packets were transmitted by the transmitting component (e.g., memory SoC 22). The following description of the method 120 will be discussed with reference to a diagram 140 of FIG. 13. The diagram 140 is provided to help illustrate the operations that occur at various stages of the method 120.
[00149] With the foregoing in mind, FIG. 12 illustrates a flow chart of a method 120 that may be employed by the receiving component (e.g., host SoC 12) for reordering received packets relative to the original order in which the packets were transmitted by the transmitting component (e.g., memory SoC 22). The following description of the method 120 will be discussed with reference to a diagram 140 of FIG. 13. The diagram 140 is provided to help illustrate the operations that occur at various stages of the method 120. For the purposes of discussion, the following description of the method 120 will be described as being performed by the host SoC 12, but it should be understood that any suitable receiving component may perform the operations described herein.

[00150] Referring first to FIG. 12, at block 122, the host SoC 12 may receive a number of packets from the transmitting component (e.g., memory SoC 22). The received packets may generally include operations requested to be performed by the host SoC 12 in a preferred order. That is, the transmitting component may send packets that correspond to data operations in a particular order, which reflects the preferred order of operations. The diagram 140 of FIG. 13 illustrates an example original order of packets received by the host SoC 12 in row 142. As shown in FIG. 13, ten packets transmitted by the transmitting component may be initially numbered 1-10.

[00151] At block 124, the host SoC 12 may determine whether the operations indicated in the received packets should be performed in a different order. For example, if the host SoC 12 is unable to perform a particular operation for some reason (e.g., the requested memory address is busy, unavailable, etc.), the host SoC 12 may instead perform a later operation before performing the previously requested operation. If the host SoC 12 determines that the operations should not be performed in a different order, the host SoC 12 may proceed to block 126 and perform the operations of the received packets in the preferred order (e.g., as transmitted by the transmitting component).

[00152] If, on the other hand, the host SoC 12 determines that the operations should not be performed in the preferred order, at block 128 the host SoC 12 may determine a new order in which to perform the requested operations. To perform operations in a different order, the host SoC 12 may identify a particular packet that corresponds to an operation that cannot be performed in the requested order. The host SoC 12 may then determine whether any subsequent operation is dependent on the results of the identified operation. That is, the host SoC 12 may determine whether performing the identified operation at a later time may cause an error in any remaining operations to be performed. In certain embodiments, the host SoC 12 may evaluate the transaction windows of each packet to determine whether operations may be reordered. For instance, if the order of the transaction windows is as follows: Win2, Win2, Win2, Win3, Win3, Win2, and Win3, the host SoC 12 may delay the third Win2 request to perform the first Win3 request, because the two requests refer to different transaction windows and thus likely operate on different memory types. Using the transaction windows of each packet, the host SoC 12 may then determine a new order in which to perform the requested operations.

[00153] After determining the new order to perform the operations, at block 130, the host SoC 12 may rename a number of packets that are received after a packet immediately preceding the packet that corresponds with the identified operation. In one embodiment, the host SoC 12 may rename the packets according to their current position in the queue. For instance, referring again to FIG. 13, if the host SoC 12 identifies original packet 5 as a packet containing an operation that should be performed at a later time, the host SoC 12 may rename the packets after packet 4 according to their current position in the queue.
As such, packets 5-10 may be renamed to packets 0-5, as illustrated in row 144 of the diagram 140. In this manner, the remaining packets may be renamed according to their relative position in the queue.

[00154] After renaming the remaining packets, at block 132, the host SoC 12 may generate a reorder message that indicates a new order in which the remaining packets will be addressed by the host SoC 12, that is, the order in which the corresponding operations will be performed by the host SoC 12. The reorder message may be determined based on the new order determined at block 128 and according to the renamed packets, as provided in block 130. For instance, referring to the example in FIG. 13 again, if the host SoC 12 determined that the original 5th packet operation should be performed after the original 7th packet operation, the reorder message may be presented as 1, 2, 3, 0, 4, 5, as shown in row 146. Row 146 indicates the new order of operation according to the renamed packets. For illustrative purposes, row 148 indicates, in terms of the original packet numbers, the order in which the reorder message specifies that the remaining packet operations will be performed.

[00155] At block 134, the host SoC 12 may transmit the reorder message to the transmitting component. As such, the transmitting component may use the reorder message to adjust the manner in which the response packets transmitted from the host SoC 12 are associated with their respective request packets. That is, the transmitting component may associate each response packet received after the reorder message according to the renamed relative order indicated in the reorder message.

[00156] By renaming the packets after the packet that corresponds to the last implemented operation, the host SoC 12 may provide a reference order to the transmitting component that is relative to the remaining response packets that are to be received by the transmitting component. As such, since the host SoC 12 and the transmitting component both know the order in which packets have already been sent, renaming the packets according to their relative order enables the host SoC 12 to associate the response packets without having to send a packet identification number with each packet, thereby providing a more bit-efficient communication scheme.
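As a rough illustration of the renaming step at block 130, the sketch below reproduces the FIG. 13 example in code; the helper name and data structures are hypothetical and are not part of the protocol itself.

```python
# Sketch of the renaming step in method 120: packets after the last in-order
# packet are renamed by queue position, so the reorder message can reference
# small relative numbers instead of absolute packet identifiers.

def rename_remaining(original_ids: list[int], last_in_order: int) -> dict[int, int]:
    """Map each original packet ID after `last_in_order` to its queue position."""
    remaining = [p for p in original_ids if p > last_in_order]
    return {orig: pos for pos, orig in enumerate(remaining)}

packets = list(range(1, 11))            # original packets 1-10 (FIG. 13, row 142)
renamed = rename_remaining(packets, 4)  # packets 5-10 become 0-5 (row 144)
print(renamed)                          # {5: 0, 6: 1, 7: 2, 8: 3, 9: 4, 10: 5}
```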
[00157] In circumstances where there are multiple request and response buses, the scalable protocol may determine the order in which transaction operations are performed as follows. If there are 4 request buses associated with 4 respective response buses, an associated pair of request and response buses may be named by the scalable protocol as a channel. As such, in one embodiment, a transaction operation may be identified as "channel.window.address," and the ordering may then be defined as "channel.window.dataSequenceNumber." Oftentimes, just one datum may be part of the transaction operation, such that the data sequence number is unimportant save for transaction requests larger than the largest supported packet size. Otherwise, the scalable protocol may follow an ordering within the channel.window. Even when two channels are using the same window, the scalable protocol may not impose any ordering between them. Instead, the scalable protocol may provide an order within each channel.window combination. As a result, the scalable protocol may greatly simplify the operation of the system, because channels have the possibility of asynchronous timing inter-relationships. By ordering the transaction operations according to the channel.window, the scalable protocol keeps the ordering simple and also reduces the number of times arbitration may be performed. Moreover, this ordering technique may also reduce the number of reorder messages that might otherwise be sent.

Data Reordering Operations - High Frequency

[00158] Although the scalable protocol has been described as being capable of providing a new relative order for transaction operations being sent, it may be difficult to incorporate this type of reordering scheme in large systems that may have a high frequency of reordering requests. That is, if reorder messages are sent at some high frequency (i.e., above a certain threshold), it may no longer be an efficient use of time and resources to send reorder messages and reorder the transaction operations. In other words, for some types of systems the frequency of data reordering could become so high that the amount of communication between the transmitting component and the receiving component becomes inefficient. For such systems, the scalable protocol may reduce the bit traffic of transaction identifications even when large numbers of reorder events are expected.

[00159] In one embodiment, the transmitting component may determine whether the current reorder technique is operating inefficiently. For instance, the transmitting component may determine a frequency at which the reorder messages are being received from the receiving component. If the frequency is above some threshold, the transmitting component may determine that the current reorder scheme is operating inefficiently. At this time, the transaction identification (ID) of each transaction operation may be extended to include a new field: a request bus Q sequence number. Since the receiving component knows the order in which requests are received, the receiving component may assign a round-robin sequence number to each received request (i.e., request bus Q sequence number, Qsequence or Qseq). The request bus Q sequence number may apply to the combination of the respective channel and the respective window of each request. As such, the request bus Q sequence number may be denoted as "channel.window.Qseq," such that Qseq is assigned in round-robin order for each respective channel and respective window, thereby preserving bandwidth by not transmitting known data. For instance, if an order of requests (all on channel 0) is as follows: Win2, Win2, Win2, Win3, Win3, Win2, and Win3, and these are the first transactions, the Qseq numbers assigned by the receiver would be 0, 1, 2, 0, 1, 3, and 2, respectively. That is, each window may be associated with a round-robin Qseq sequence based on the receipt of each type (i.e., channel/window) of request.

[00160] After receiving the requests, and when a response is to be sent on the response bus S, the receiving component may tag each respective response with its corresponding Qseq value. As such, the transmitting component may associate each received response with its respective request. As shown above, this technique avoids transmitting a Qseq value on the request bus Q. By not sending the Qseq value on the Q bus, the scalable protocol provides an additional way in which to provide bit-efficient transfer.
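The implied round-robin Qseq assignment can be sketched as follows. The class name is hypothetical; the wrap-around behavior (modulo the field size discussed later) is only noted in a comment, since the field width varies by embodiment.

```python
# Minimal sketch of implied round-robin Qseq assignment: both sides count
# requests per (channel, window), so no sequence number ever crosses the
# request bus Q.

from collections import defaultdict

class QseqCounter:
    def __init__(self):
        self._next = defaultdict(int)   # (channel, window) -> next Qseq

    def assign(self, channel: int, window: int) -> int:
        key = (channel, window)
        qseq = self._next[key]
        self._next[key] += 1            # a real design would wrap modulo the field size
        return qseq

# Example from the text: all requests on channel 0.
ctr = QseqCounter()
windows = [2, 2, 2, 3, 3, 2, 3]         # Win2, Win2, Win2, Win3, Win3, Win2, Win3
print([ctr.assign(0, w) for w in windows])  # [0, 1, 2, 0, 1, 3, 2]
```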
[00161] Keeping this in mind, FIG. 14 illustrates a method 160 for reordering operations performed by a receiving component. Again, as mentioned above with regard to the method 120, the following method 160 will be described as being performed by the host SoC 12. However, it should be understood that the following method 160 may be performed by any suitable receiving component.

[00162] Referring now to FIG. 14, at block 162, the host SoC 12 may determine whether the number of reordering messages transmitted to the transmitting component over some period of time exceeds some threshold. The threshold may be related to a declining performance of the memory device 14, an average number of cycles involved when performing an operation, an average queue depth for each requested operation, or the like.

[00163] If the number of reordering requests is not greater than the threshold, the host SoC 12 may continue sending reorder messages according to the method 120 described above. However, if the host SoC 12 determines that the number of reordering requests is greater than the threshold, the host SoC 12 may proceed to block 164. At block 164, the host SoC 12 may add a sequence value to each received packet in a round-robin fashion according to the transaction window of each packet. The transmitting component may store the order in which each packet has been transmitted, such that the order of transmission corresponds to the order in which each packet was received.

[00164] At block 166, the host SoC 12 may send response packets in the order in which their respective operations have been performed. The response packets may include the sequence value added to the received packet at block 164. Since the transmitting component is aware of the order in which each packet has been sent, it may use the added sequence value to apply the response packet to the appropriate request packet. Using the method 160 to transmit response packets, the host SoC 12 and the transmitting component add a sequence number to packets that cross the bus 62 just once, as opposed to keeping the sequence number on both transmissions. In this way, the scalable protocol provides bit-efficient data transfers by leveraging information known to the transmitting component, such as the order in which packets were transmitted.

[00165] In certain embodiments, in an event such as a long transaction requiring multiple packets, the receiving component may use a request bus Q sequence number (Qseq) and a data sequence number (DataSequence) to identify each packet when an error has occurred, so that the pipeline may be flushed and the corresponding packets within the pipeline may be resent. For instance, if the error occurred in a packet on the response bus S, the last known-to-be-good packet received by the transmitting component may include a Qseq number to use as a reference. As a result of employing this technique, some of the messages are actually now shorter, since a transaction type is not referenced to indicate a transaction. That is, otherwise indicating the transaction type, window, and address within a packet may use up to 52 bits. In contrast, sending the Qseq value and the DataSequence value may involve 23 bits (e.g., 16+7=23 bits), thereby further improving the bit efficiency of transfers.

[00166] As compared to the reorder message techniques described earlier, appending packets with a Qseq value may result in a lower overall number of bits transmitted when the number of times that a reorder is performed is above some frequency threshold. Although the option of providing a Qseq value has been described as being incorporated within the scalable protocol dynamically, in certain embodiments, the ability of the scalable protocol to provide the Qseq value may be a static choice built into the scalable protocol at the time the SoC that implements the scalable protocol is designed. The type of system using the scalable protocol may provide information to indicate which ordering method may provide more bit-efficient transfers.
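One way the transmitting component might match tagged responses back to their requests is sketched below; the structures and names are hypothetical, and error handling is omitted for brevity.

```python
# Sketch: the receiver tags each response with the Qseq of its request; the
# transmitter, which remembers send order per (channel, window), matches
# responses back without any identifier having crossed the request bus Q.

outstanding = {}   # (channel, window, qseq) -> original request record

def record_request(channel: int, window: int, qseq: int, request: str) -> None:
    """Remember a request when it is sent, keyed by its implied Qseq."""
    outstanding[(channel, window, qseq)] = request

def match_response(channel: int, window: int, qseq: int, payload: bytes):
    """Pair an arriving (possibly out-of-order) response with its request."""
    request = outstanding.pop((channel, window, qseq))
    return request, payload

record_request(0, 2, 0, "read A")
record_request(0, 2, 1, "read B")
print(match_response(0, 2, 1, b"data-B"))  # responses may arrive out of order
```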
[00167] Keeping the foregoing in mind, in one embodiment, the request bus Q sequence number field may be an 18-bit field that may be used to identify each transaction operation of a 4 kilobyte transaction. Although the request bus Q sequence number field has been described as an 18-bit field, the size of the request bus Q sequence number field may be any suitable value. Generally, the size of the request bus Q sequence number field may be large enough to identify each transaction operation of a particular transaction, and the field may be used to indicate an order in which the request or response may be performed. Although the addition of the request bus Q sequence number field to a respective packet may increase the size of that packet, the increase in packet sizes is still more efficient than sending a transaction identification with every request and response operation, as performed in conventional protocols. Moreover, since the request bus Q sequence number field may be added only after determining that sending reordering messages is inefficient, the present technique is limited to use in specific instances, as opposed to being used for every transaction operation as in conventional protocols.

[00168] In some embodiments, when requests have an implied sequence number (e.g., for a given channel.window, the first request is 0, the next is 1, the next is 2, etc.), the scalable protocol may not add a request bus Q sequence number field to the transaction operation. That is, since the transaction operations are in a natural implied order, the scalable protocol may save bits by not transmitting the sequence numbers.

[00169] However, when responses are preferred to flow in an order other than that natural implied order, as mentioned above, the scalable protocol may append each received transaction operation with a corresponding sequence number in the request bus Q sequence number field. In some cases, the sequence number may potentially use a large bit field. For example, in a window that supports NAND, a response could require 0.01 seconds. Here, if the packet rate is 5x10^9 packets per second, there could be 5x10^7 responses in flight, which may use 26 bits to identify each of the responses. A more practical scenario anticipates larger transactions of approximately 4 kilobytes, where there may be approximately 100,000 outstanding transactions. Here, each transaction may be identified in just under 17 bits. To allow better performance with small transactions, and also to ensure there is no identification aliasing, the bit count may be rounded up to 18 bits. That is, the numbers may modulo-wrap around to zero, and so there may be an obvious gap in the sequence that is "alive" at any time to avoid confusion.
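The field-size arithmetic in the preceding paragraph can be checked directly; the bits needed to distinguish N outstanding responses is the ceiling of log2(N).

```python
# Verifying the sequence-number field widths quoted above.

from math import ceil, log2

print(ceil(log2(5e7)))      # 26 bits for ~5x10^7 in-flight NAND responses
print(ceil(log2(100_000)))  # 17 bits for ~100,000 outstanding 4 KB transactions
# The text rounds up to 18 bits so the numbers can modulo-wrap with a safe
# "dead" gap in the live sequence, avoiding identification aliasing.
```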
[00170] In any case, when providing a reordering sequence, the scalable protocol may add a request bus Q sequence number field to a corresponding packet. As such, some of the fields described above may change. For example, on the request bus Q, the not-acknowledge command may change such that it has the same transaction type and the same transaction window. Previously, the not-acknowledge command may have included an address, a data sequence number, and an original transaction type. In one embodiment, the not-acknowledge command may now have a request bus Q sequence number and a data sequence number. As a result, the not-acknowledge command may be a smaller packet than previously described.

[00171] On the response bus S, the general message transaction type may be unchanged. However, the remaining items of the packet may change as follows:

• "Complete" message may have a transaction type, a window, a request sequence number, and an ECC.
• "Not-Acknowledged" (NACK) message may have a transaction type, a window, a request sequence number, a data sequence number, and an ECC.
• "Message" may be unchanged, and thus may include a transaction type, a window, 8B data, and an ECC.
• 8uData may include a transaction type, a window, a request sequence number, 8B data, and an ECC.
• 16uData may include a transaction type, a window, a request sequence number, 16B data, and an ECC.
• 32uData may include a transaction type, a window, a request sequence number, 32B data, and an ECC.
• 48uData may include a transaction type, a window, a request sequence number, 48B data, and an ECC.
• 64uData may include a transaction type, a window, a request sequence number, 64B data, and an ECC.
• 80uData may include a transaction type, a window, a request sequence number, 80B data, and an ECC.
• 96uData may include a transaction type, a window, a request sequence number, 96B data, and an ECC.
• 112uData may include a transaction type, a window, a request sequence number, 112B data, and an ECC.
• 128uData may include a transaction type, a window, a request sequence number, 128B data, and an ECC.
• 256uData may include a transaction type, a window, a request sequence number, 256B data, and an ECC.

[00172] As mentioned above, although the data transaction types may have increased in packet size by the amount of the request sequence number, even in systems with high-performance NAND the resulting sequence number may be just 16 bits. As such, the presently disclosed technique, for transaction operations that are reordered at a high frequency or designed as such, may still be economical as compared with conventional protocols, which may add 16 bits to every response. Moreover, since the presently disclosed technique includes a sequence number with each response, the scalable protocol may not issue reorder messages or packets. Further, since each transaction operation is associated with a particular sequence number, the transaction operations may be transmitted in a round-robin order to ensure that known data is not transmitted.
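The revised response-bus S packets listed above share a common shape, which the sketch below captures as a single record type. The field names and example type codes are assumptions for illustration; the actual encodings (field widths, ECC coverage, type values) are not specified here.

```python
# Illustrative layout of the revised response-bus S packets, once a request
# sequence number (Qseq) replaces per-packet transaction identifications.

from dataclasses import dataclass

@dataclass
class SResponse:
    transaction_type: int   # distinguishes Complete, NACK, NuData, etc. (codes assumed)
    window: int             # transaction window
    request_seq: int        # request bus Q sequence number (Qseq)
    data: bytes             # 8B-256B for the NuData types; empty for Complete
    ecc: int                # error-correcting code over the packet

complete = SResponse(transaction_type=0x1, window=2, request_seq=5, data=b"", ecc=0)
data64 = SResponse(transaction_type=0x8, window=2, request_seq=6, data=bytes(64), ecc=0)
```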
Ordering Effort Field

[00173] As discussed above, situations arise when transaction operations in one transaction window are preferred in order, but it may be beneficial to deviate from that order. Keeping this in mind, in addition to the two techniques for reordering transaction operations described above, in one embodiment the scalable protocol may provide a flexible programming option for ordering transaction operations or packets in a system. The flexible programming option (e.g., ordering effort field) may set a degree of effort that the scalable protocol should use in maintaining the original order of transactions. That is, the flexible ordering effort field may indicate to the scalable protocol how hard it should work to ensure that the packets are transmitted in order. As such, the flexible ordering effort field may be associated with a range of values between a first value that corresponds to keeping every packet in order and a second value that corresponds to allowing anything to be reordered.

[00174] Keeping this in mind, transaction window 0 may be used as a general-purpose control area for the memory SoC 22. As such, transaction window 0 may reside in registers, SRAM buffers, cache SRAM, and other addressable control features. For each transaction window, the scalable protocol may enable configurable information that can be user programmed. As mentioned above, one type of the configurable information (e.g., ordering effort field) may include a degree of effort in maintaining original order (i.e., ordering effort). The ordering effort field may have a large variation in implementations. For instance, in a 2-bit field, the ordering effort may be characterized as follows:

00 - allow re-ordering at every opportunity
01 - allow considerable re-ordering
10 - allow some re-ordering
11 - allow no re-ordering, wait until resources are available

[00175] In certain embodiments, the scalable protocol may associate certain packets with specific ordering zones. The ordering zone may indicate that the corresponding packets are to be treated similarly. For example, requests in the same ordering zone may be expected to be in order, and if it is not possible for them to be in order, then the transmitting component (e.g., memory SoC 22) may apply the ordering effort, as specified by the ordering effort field, to determine a degree to which the requests may be transmitted out of order.

[00176] The ordering zone may be related to a combination of a channel, a system window, and a transaction window (e.g., channel.syswin.window). Channel may be a channel number from which the request was received. System window may be an optional pair of fields that, for example, specifies which SoC in the system originated the request.

[00177] Keeping the foregoing in mind, a reasonable implementation of specifying the ordering effort in a 2-bit field, assuming that the queue depth is 16 for an ordering zone, may be as follows:

00 - allow re-ordering at every opportunity: allow result slots to be swapped anywhere in the queue depth of 16
01 - allow considerable re-ordering: allow result slots to be swapped anywhere in the queue depth of 11
10 - allow some re-ordering: allow result slots to be swapped anywhere in the queue depth of 6
11 - no re-ordering: allow no swapping, allow resources to idle

[00178] In certain embodiments, an ordering effort function that defines the ordering effort may include additional variables, such as the age of the request. For example:

00 - allow re-ordering at every opportunity: allow result slots to be swapped anywhere in the queue depth of 16
01 - allow considerable re-ordering: allow result slots to be swapped anywhere in the queue depth of 8 if the request is old and 14 if the request is young
10 - allow some re-ordering: allow result slots to be swapped anywhere in the queue depth of 4 if the request is old and 8 if the request is young
11 - no re-ordering: allow no swapping, allow resources to idle

[00179] Here, the scalable protocol may enable the requests to be designated as being old or young. For instance, a request may be considered to be old if the request has existed for 7 or more request slots, while the request may be considered to be young if the request has existed for 6 or fewer request slots.
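A minimal sketch of the age-aware ordering-effort lookup described above follows; the function name is hypothetical, and the queue depths and the old/young boundary at 7 slots are taken from the example values in the text.

```python
# Sketch of an ordering-effort lookup for the 2-bit encodings above.

def max_swap_depth(effort: int, age_slots: int, queue_depth: int = 16) -> int:
    """How far a result slot may be swapped within the queue for an ordering zone."""
    old = age_slots >= 7                # 7+ request slots -> "old" (example value)
    if effort == 0b00:
        return queue_depth              # re-order at every opportunity
    if effort == 0b01:
        return 8 if old else 14         # considerable re-ordering
    if effort == 0b10:
        return 4 if old else 8          # some re-ordering
    return 0                            # 0b11: no re-ordering; let resources idle

print(max_swap_depth(0b01, age_slots=9))  # an old request under effort 01 -> 8
```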
[00180] The above-listed examples illustrate a small subset of the possible ways in which an ordering effort may be quantified in a 2-bit field. Additional degrees of ordering effort may be specified using a larger ordering effort field. In any case, the ordering effort field may provide simple programmability that makes ordering effort useful in tuning overall system performance. In certain embodiments, the ordering effort employed by the host SoC 12 may be determined or specified when the host SoC 12 is powered on. That is, the host SoC 12 may determine the type of device it is connected to, or the type of industry it is designed for, and determine an ordering effort accordingly.

Backpressure Function for Bus Traffic Throttling

[00181] Backpressure may refer to an amount of bus traffic on a respective bus with respect to the available capacity of the buffer 23 (e.g., a first-in-first-out (FIFO) buffer) receiving the bus traffic. As such, the backpressure of a respective bus may be considered to be high when the buffer 23 receiving the bus traffic is close to its depth limit. Once the buffer 23 becomes full, the receiving component in conventional systems may either ignore future incoming packets or accept an incoming packet and delete a packet presently in the buffer 23. In either of these cases, packets may not be processed, and thus the integrity of the communication link may be compromised.

[00182] Keeping this in mind, FIG. 15 illustrates a flow chart of a method 180 for throttling back the transmission rate of requests sent from a transmitter. Again, the following method 180 is described as being performed by the host SoC 12 for illustrative purposes, but it may be performed by any suitable receiving component.

[00183] At block 182, the host SoC 12 (e.g., receiving component) may monitor the buffer 23 and determine whether the remaining capacity of the buffer 23 is less than or equal to some threshold. If the capacity of the buffer 23 is above the threshold, the host SoC 12 may proceed to block 184 and continue receiving packets at the present transmission rate from the transmitting component.
The windowMax field may correspond to a number of outstanding transaction operations. An example backpressure function may include linearly reducing a request rate after thewindowMax or channelMax is at 90% capacity. At that point, the transmittal rate may be 100% at 0.900*Max and vary linearly to 0% at 0.995*Max. FIG. 16 graphically illustrates how the transmittal rate may be scaled back according to the above-described linear function.[00186] In addition to linearly scaling back the transmission rate, the transmitting component may also scale back its transmissions according to a non-linear function. FIG. 17, for example, illustrates one possible non-linear curve that may be employed by the transmitting component when scaling back its transmission rate. It should be understood that the transmitting component is not limited to employing a non-linear transmission rate according to the curve depicted in FIG. 17. In another example, the non-linear curve may include a step down curve that incrementally scales back the transmission rate by finite steps.[00187] In cases where just one transaction window is present on a channel, the windowMax field may not be relevant or may be considered to be equal to the channelMax field. In the case where there are multiple transaction windows, different backpressure functions may be defined for each respective transaction window. For instance, consider the following 4 examples of transaction windows that use a variety of different memory types as described below. windowO - control and registry window 1 - lowest latency DRAMwindow2 - regular DRAM window 3 - NAND[00188] Keeping this in mind, an example of how the backpressure function may be throttled based on the traffic of a channel may include defining a channel max (e.g., lxl 09requests per second), defining when the backpressure function may begin (e.g., RollbackStart 0.9 p.u.), and defining when the backpressure function may end (e.g., RoUbackEnd 1 p.u.). In this example, the Rollback function may apply to the variable called Max, which may correspond to the channel max. Generally, the channel max corresponds to the rate at which requests (or transaction orders) are sent while the channel request rate is less than or equal to 0.9 * channel max (e.g., up to RollbackStart).[00189] In the same manner, each respective transaction window may employ a respective backpressure function. For instance, the backpressure functions of the four example transaction windows defined above may be implemented as follows: windowOwindowOmax 0.05 p.u. of max windowORollbackStart 0.045 p.u. of max windowORollbackEnd 0.05 p.u. of max windowlwindowlmax 0.9 p.u. of max windowl RollbackStart 0.81 p.u. of max windowl RoUbackEnd 0.9 p.u. of maxwindow2 window2max 0.3 p. u. of maxwindow2RollbackStart 0.27 p.u. of maxwindow2RollbackEnd 0.3 p.u. of maxwindow3window3max 0.1 p.u. of maxwindow3RollbackStart 0.09 p.u. of maxwindow3RollbackEnd 0.1 p.u. of max[00190] As shown above, the backpressure function may gradually roll back request rates when there are many transaction windows (i.e., many simultaneous processes) interacting. In any case, by performing the throttling operations according to a function, as opposed to using transmitted signals, the scalable protocol may not be concerned with whether transmitted signals are in-band or out of band. 
[00190] As shown above, the backpressure function may gradually roll back request rates when there are many transaction windows (i.e., many simultaneous processes) interacting. In any case, by performing the throttling operations according to a function, as opposed to using transmitted signals, the scalable protocol need not be concerned with whether transmitted signals are in-band or out-of-band. Moreover, since the receiving component and the transmitting component may implement the same mathematical function without having to communicate when to implement the function, the scalable protocol may further reduce the number of bits transferred across each respective bus.

[00191] In certain embodiments, the backpressure function may also account for the age of each request. For instance, if older requests are pooling in a transaction window, the receiving component may adjust the value of windowMax or modify the rollback limits for that particular transaction window.

[00192] In yet another embodiment, the backpressure function may also account for queue depth. That is, at power up, the memory SoC 22 may have the ability to discover the capabilities of the module(s) connected to the memory SoC 22 based on information provided in the transaction window or the like. Part of discovering these capabilities may include observing the queue depth of the receiver(s) connected to the memory SoC 22, and perhaps also discovering the nominal packet-processing rate of a connected channel. Although the memory SoC 22 may not be able to track a receiver's queues, the memory SoC 22 may make some determinations regarding the status of the receiver's queues. For example, if the memory SoC 22 sends many packets in rapid succession, exceeding the packet-processing rate of the receiving component, the memory SoC 22 may predict that a queue in the receiving component will grow. As such, if the memory SoC 22 determines that packets are being sent faster than the packet-processing rate of the receiver, the memory SoC 22 may begin to apply the backpressure functions described above without receiving explicit feedback from the receiver. In other words, if the packet transmission rate exceeds the packet-processing rate, the memory SoC 22 may begin to reduce the packet transmission rate. In this way, the transmission rate may be reduced without adding messages to the channels. In some embodiments, the receiving component may send a message to the memory SoC 22 as a failsafe when the receiving component is not processing packets at its expected rate.

[00193] In another embodiment, the receiving component may include a system failsafe mechanism to indicate to the transmitting component that the buffer 23 is about to be overrun or exceed its capacity. Here, the receiving component may send a message similar to the not-acknowledged message described above. This message may have the same effect as the not-acknowledged message, except that it may create an entry in a data log of the transmitting component to note that a message was rejected because the buffer 23 was unable to accept the packet. As such, the transmitting component may determine a reason for the delay in bus traffic.

[00194] While the embodiments described herein may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the disclosure is not intended to be limited to the particular forms disclosed. Rather, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the techniques and system described in the disclosure as defined by the following appended claims.
Software for use on a client device that is configured for communications with at least one remote source via a communications network instantiates a custom installer function that installs the software on the client device and that generates a distributor identifier that identifies a distributor that distributed the software, and an ad download function that downloads advertisements from the at least one remote source via the communications network.
CLAIMS
1. Software for use on a client device that is configured for communications with at least one remote source via a communications network, the software comprising: a custom installer function that installs the software on the client device, and that generates a distributor identifier that identifies a distributor that distributed the software; and an ad download function that downloads advertisements from the at least one remote source, via the communications network.
2. The software as set forth in Claim 1, wherein the ad download function downloads the advertisements over one or more advertisement download sessions.
3. The software as set forth in Claim 2, wherein each advertisement download session is limited to a prescribed maximum time duration.
4. The software as set forth in Claim 1, further comprising: an advertisement storage function for storing the downloaded advertisements on a storage medium associated with the client device; and an advertisement display function that effects display of at least selected ones of the stored advertisements on a display associated with the client device.
5. The software as set forth in Claim 1, wherein the at least one remote source includes a plurality of ad servers.
6. The software as set forth in Claim 1, wherein the software is subsidized by revenues attributable to the downloaded advertisements.
7. The software as set forth in Claim 1, wherein the at least one remote source includes at least one ad server.
8. The software as set forth in Claim 1, wherein the software is e-mail software.
9. The software as set forth in Claim 2, wherein the ad download function includes an ad fetch timer function that limits each advertisement download session to the prescribed maximum time duration.
10. The software as set forth in Claim 1, wherein the communications network is the Internet.
11. The software as set forth in Claim 1, further comprising a client information transmit function that transmits the distributor identifier to a prescribed server to be used in apportioning advertising revenue attributable to the software distributed by that distributor.
12. The software as set forth in Claim 11, wherein the prescribed server is associated with the at least one remote source.
13. The software as set forth in Claim 12, wherein the at least one remote source includes a plurality of ad servers.
14. The software as set forth in Claim 13, wherein the plurality of ad servers are controlled by the producer of the software.
15. The software as set forth in Claim 1, further comprising a client information transmit function that transmits the distributor identifier to a prescribed server operated by the producer of the software to be used by the producer of the software in apportioning advertising revenue attributable to copies of the software distributed by that distributor.
16. The software as set forth in Claim 1, wherein the distributor is an Internet Service Provider.
17. The software as set forth in Claim 1, wherein the distributor is an e-mail service provider.
18. Software for use on a client device that is configured for communications with a multiplicity of other client devices via a communications network, the software comprising: a custom installer function that installs the software on the client device, and that generates a distributor identifier that identifies a distributor that distributed the software; an ad download function that downloads advertisements from at least one remote source, via the communications network; an e-mail composition function for enabling a user of the client device to compose e-mail messages; an e-mail send function that enables the user to send e-mail messages to other client devices via the communications network; and an e-mail receive function that enables the user to receive e-mail messages from other client devices via the communications network.
19. The software as set forth in Claim 18, wherein the ad download function downloads the advertisements over one or more advertisement download sessions.
20. The software as set forth in Claim 19, wherein each advertisement download session is limited to a prescribed maximum time duration.
21. The software as set forth in Claim 18, further comprising: an advertisement storage function for storing the downloaded advertisements on a storage medium associated with the client device; and an advertisement display function that effects display of at least selected ones of the stored advertisements on a display associated with the client device.
22. The software as set forth in Claim 18, wherein the at least one remote source includes a plurality of ad servers.
23. The software as set forth in Claim 18, wherein the software is subsidized by revenues attributable to the downloaded advertisements.
24. The software as set forth in Claim 18, wherein the at least one remote source includes at least one ad server.
25. The software as set forth in Claim 19, wherein the ad download function includes an ad fetch timer function that limits each advertisement download session to the prescribed maximum time duration.
26. The software as set forth in Claim 18, wherein the communications network is the Internet.
27. The software as set forth in Claim 18, further comprising a client information transmit function that transmits the distributor identifier to a prescribed server to be used in apportioning advertising revenue attributable to the software distributed by that distributor.
28. The software as set forth in Claim 27, wherein the prescribed server is associated with the at least one remote source.
29. The software as set forth in Claim 28, wherein the at least one remote source includes a plurality of ad servers.
30. The software as set forth in Claim 29, wherein the plurality of ad servers are controlled by the producer of the software.
31. The software as set forth in Claim 18, further comprising a client information transmit function that transmits the distributor identifier to a prescribed server operated by the producer of the software to be used by the producer of the software in apportioning advertising revenue attributable to copies of the software distributed by that distributor.
32. The software as set forth in Claim 18, wherein the distributor is an Internet Service Provider.
33. The software as set forth in Claim 18, wherein the distributor is an e-mail service provider.
34. Software for use on a client device that is configured for communications via a communications network, comprising: a custom installer function that installs the software on the client device, and that generates a distributor identifier that identifies a distributor that distributed the software; a playlist request function that generates a playlist request that includes the distributor identifier, and that transmits the playlist request to at least one playlist server, via the communications network; a playlist response handling function that receives and processes a playlist response transmitted to the client device by the at least one playlist server in response to the playlist request, wherein the playlist response includes a playlist(s) that identifies advertisements to be downloaded; an ad download function that downloads at least selected ones of the advertisements identified in the playlist(s) from at least one remote source, via the communications network, over one or more advertisement download sessions; an advertisement storage function for storing the downloaded advertisements on a storage medium associated with the client device; and an advertisement display function that effects display of at least selected ones of the stored advertisements on a display associated with the client device.
35. The software as set forth in Claim 34, wherein the playlist(s) contains a list of ad identifiers that identify respective ones of the advertisements to be downloaded.
36. The software as set forth in Claim 35, wherein the playlist(s) further contains a list of source addresses where respective ones of the advertisements to be downloaded can be fetched.
37. The software as set forth in Claim 34, wherein the at least one remote source includes at least one ad server, each of which stores at least one of the advertisements to be downloaded.
38. The software as set forth in Claim 34, wherein the software is subsidized by revenues attributable to the downloaded advertisements.
39. The software as set forth in Claim 37, wherein the at least one ad server comprises a plurality of ad servers that each store at least one of the advertisements to be downloaded.
40. The software as set forth in Claim 37, wherein: the at least one playlist server is controlled by a vendor of the software; the at least one ad server comprises a plurality of ad servers that each store one or more advertisements to be distributed to clients of the vendor of the software; and at least one of the plurality of ad servers is controlled by the vendor of the software.
41. The software as set forth in Claim 37, wherein: the at least one playlist server is controlled by a vendor of the software; the at least one ad server comprises a plurality of ad servers that each store one or more advertisements to be distributed to clients of the vendor of the software; and at least one of the plurality of ad servers is controlled by an entity other than the vendor of the software that has granted the vendor of the software and its clients access to its ad server(s).
42. The software as set forth in Claim 34, wherein the at least one remote source includes a plurality of ad servers, each of which stores one or more of the advertisements to be downloaded, each advertisement being stored in a storage location designated by a URI.
43. The software as set forth in Claim 42, wherein the playlist(s) contains a list of ad identifiers and corresponding URIs that identify respective ones of the advertisements to be downloaded, and the corresponding storage location from which each respective advertisement can be fetched.
44. The software as set forth in Claim 43, wherein the playlist request function transmits the playlist request to the at least one playlist server at prescribed playlist check intervals.
45. The software as set forth in Claim 34, wherein the playlist request function transmits the playlist request to the at least one playlist server at prescribed playlist check intervals.
46. The software as set forth in Claim 44, wherein the playlist check intervals and the advertisement download sessions are scheduled independently.
47. The software as set forth in Claim 45, wherein the playlist check intervals and the advertisement download sessions are scheduled independently.
48. The software as set forth in Claim 34, wherein each advertisement download session is limited to a prescribed maximum time duration.
49. The software as set forth in Claim 34, wherein the software is e-mail software.
50. The software as set forth in Claim 48, wherein the ad download function includes an ad fetch timer function that limits each advertisement download session to the prescribed maximum time duration.
51. The software as set forth in Claim 34, wherein the communications network is the Internet.
52. The software as set forth in Claim 37, wherein: the at least one playlist server is controlled by a producer of the software; the at least one ad server comprises a plurality of ad servers that each store one or more advertisements to be distributed to clients of the producer of the software; and at least one of the plurality of ad servers is controlled by the producer of the software.
53. The software as set forth in Claim 37, wherein: the at least one playlist server is controlled by a producer of the software; the at least one ad server comprises a plurality of ad servers that each store one or more advertisements to be distributed to clients of the producer of the software; and at least one of the plurality of ad servers is controlled by an entity other than the producer of the software that has granted the producer of the software and its clients access to its ad server(s).
54. The software as set forth in Claim 53, wherein the entity other than the producer of the software comprises a distributor of the software.
55. The software as set forth in Claim 34, wherein the distributor is an Internet Service Provider.
56. The software as set forth in Claim 34, wherein the distributor is an e-mail service provider.
57. The software as set forth in Claim 34, wherein the distributor identifier is used in apportioning advertising revenue attributable to the software distributed by that distributor.
58. The software as set forth in Claim 1, further comprising: an advertisement storage function for storing the downloaded advertisements on a storage medium associated with the client device; and an advertisement display function that effects display of at least selected ones of the stored advertisements on a display associated with the client device when the client device is offline.
59. The software as set forth in Claim 18, further comprising: an advertisement storage function for storing the downloaded advertisements on a storage medium associated with the client device; and an advertisement display function that effects display of at least selected ones of the stored advertisements on a display associated with the client device when the client device is offline.
60. The software as set forth in Claim 34, wherein the advertisement display function effects display of the at least selected ones of the stored advertisements when the client device is offline.
61. The software as set forth in Claim 1, further comprising: an advertisement storage function for storing the downloaded advertisements on a storage medium associated with the client device; and an advertisement display function that effects display of at least selected ones of the stored advertisements on a display associated with the client device while the user is composing and/or reading e-mail messages.
62. The software as set forth in Claim 18, further comprising: an advertisement storage function for storing the downloaded advertisements on a storage medium associated with the client device; and an advertisement display function that effects display of at least selected ones of the stored advertisements on a display associated with the client device while the user is composing and/or reading e-mail messages.
63. The software as set forth in Claim 34, wherein the advertisement display function effects display of the at least selected ones of the stored advertisements while the user is composing and/or reading e-mail messages.
METHOD AND SYSTEM FOR DISTRIBUTING ADVERTISEMENTS TO CLIENT DEVICES

COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by any one of the patent document or patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

BACKGROUND OF THE INVENTION

The present invention relates generally to the field of electronic mail ("e-mail") software and systems. More particularly, the present invention is related to advertiser-supported e-mail software for delivering advertisements to client computers having this advertiser-supported e-mail software installed thereon. This application is based on Provisional Patent Application No. 60/169,622, which was filed on December 8, 1999. This Provisional Patent Application is incorporated herein by reference in its entirety.

Electronic mail ("e-mail") has become a ubiquitous form of communication in recent years. In general, e-mail works as follows. E-mail software is installed on a client device, e.g., a personal computer (PC), equipped or configured for communications with a multiplicity of other client devices via a communications network. Access to the communications network can be provided by a communications network service provider, e.g., an Internet Service Provider (ISP) and/or a proprietary network e-mail service provider, with whom the user establishes one or more e-mail accounts, each identified by a unique e-mail address, e.g., president@whitehouse.gov. The e-mail software, e.g., the e-mail client, enables a user of the client device to compose e-mail messages, to send e-mail messages to other client devices via the communications network, and to read e-mail messages received from other client devices via the communications network. A user can send e-mail messages to multiple recipients at a time, a capability sometimes referred to as using a mailing list or, in extreme cases, bulk mailing. The typical e-mail client supports Post Office Protocol Version 3 (POP3), Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol, Version 4 (IMAP4), and/or Multipurpose Internet Mail Extensions (MIME).

Each ISP and each proprietary network e-mail service provider independently operates and controls an e-mail communication system (or, simply, "e-mail system"). These independently-operated e-mail systems are bidirectional store-and-forward communication systems that are interconnected to one another via the Internet. Each e-mail system generally includes a number of e-mail servers that store inbound and outbound e-mail messages and then forward them, route them, or simply make them available to the users/intended recipients. Different e-mail systems are operated and controlled by independent control entities. With the advent of the Internet, the user is not restricted to a single system providing both an incoming e-mail server (or server cluster) and an outgoing e-mail server (cluster), i.e., both the incoming and outgoing e-mail servers under the control of a single entity. Most e-mail clients, other than proprietary e-mail systems such as AOL and JUNO, can be configured to receive e-mail from an incoming e-mail server (cluster) controlled by a first entity and an outgoing e-mail server (cluster) controlled by a second, totally independent entity.
It will be appreciated that most casual e-mail users download from and upload to respective servers operated by a single entity. Generally, when a user desires to send e-mail messages, or to check for received messages (which operations can occur automatically according to a prescribed schedule), the e-mail software is activated. Upon being activated, the e-mail software: (i) effects a connection or communications session with the host ISP or e-mail service provider via a prescribed communication link by invoking a prescribed communications mechanism, e.g., a dial-up modem, an ISDN connection, a DSL or ADSL connection, etc.; (ii) electronically transmits or transports any e-mail messages desired to be sent to the e-mail server system operated by the host ISP or e-mail service provider, e.g., via an SMTP server; (iii) receives any inbound e-mail messages forwarded to the client device by the host ISP or e-mail service provider, e.g., via a POP3 or IMAP4 server; and (iv) stores any received e-mail messages in a prescribed memory location within the client device, e.g., at either the default location established by the e-mail client or a user-selected location.

Exemplary e-mail software is the commercially available e-mail software marketed by the present assignee, QUALCOMM INCORPORATED, under the registered trademarks EUDORA PRO® and EUDORA LIGHT (hereinafter sometimes referred to generically as "Eudora"). In general, the EUDORA PRO e-mail software provides the user with a "full feature set," and the EUDORA LIGHT e-mail software provides the user with a "reduced feature set" that is a subset of the "full feature set" provided by the EUDORA PRO e-mail software. The EUDORA PRO e-mail software (the previous version of which is referred to as "EP4" in this document) must be paid for by the user (or by someone else on behalf of the user), and can thus be regarded as "Payware," whereas the EUDORA LIGHT e-mail software is provided free of charge to registered users, and thus can be regarded as "Freeware." Each of the client devices that has any version of Eudora installed thereon can be regarded as a "Eudora client." Presently, there is a very large installed base of Eudora clients.

The present assignee, QUALCOMM INCORPORATED, has recently released a new version of its popular EUDORA e-mail software that is popularly known as EUDORA Adware (hereinafter sometimes referred to simply as "Adware"). This new Adware version of Eudora is contained within, i.e., is an integral part of, a new Eudora software product that contains the previously-referenced Payware and Freeware versions of Eudora. In general, each version of Eudora contained within this Eudora product release constitutes a separate operating mode of a single software product. Advantageously, the Adware version of EUDORA PRO® can be activated or switched between modes either automatically, in accordance with prescribed criteria or conditions, or manually, in accordance with prescribed user actions, e.g., registration, payment, selection, etc. This new Adware version of Eudora and the multi-moded Eudora e-mail software product that contains the same were motivated by a desire on the part of the present assignee to provide users with the "full feature set" afforded by the Payware version of Eudora free of charge to the users, by means of distributing advertisements paid for by advertisers to Eudora clients, thereby effectively shifting the source of payment/revenue from the users to the advertisers. Thus, this new Eudora software product can be regarded as "advertiser-supported" or "advertiser-subsidized" or simply "sponsored" software.
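For readers unfamiliar with the open standards named above, the activation sequence (connect, send via SMTP, receive via POP3, store locally) might be sketched as follows; the host names, credentials, and storage format are hypothetical, and a real client would add error handling, authentication options, and MIME parsing.

```python
# Minimal sketch of the send/receive/store sequence using the open standards
# the text names (SMTP for outgoing mail, POP3 for incoming mail).

import smtplib
import poplib
from email.message import EmailMessage

def send_and_check(outbox: list[EmailMessage], store_path: str) -> None:
    with smtplib.SMTP("smtp.example.net") as smtp:   # outgoing server (assumed)
        for msg in outbox:                           # transmit any queued messages
            smtp.send_message(msg)
    pop = poplib.POP3("pop3.example.net")            # incoming server (assumed)
    pop.user("user")
    pop.pass_("secret")
    with open(store_path, "ab") as mailbox:          # store received messages locally
        for i in range(len(pop.list()[1])):
            mailbox.write(b"\n".join(pop.retr(i + 1)[1]) + b"\n")
    pop.quit()
```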
Most Internet service providers (ISPs) and e-mail service providers charge users a flat monthly subscription fee, although some providers still charge users based on usage, e.g., additional charges for on-line time beyond a prescribed level. However, there exists a population of users who desire to have basic e-mail service, but who do not require or want to pay for Internet access. A few companies have addressed the needs of this market segment by providing free e-mail service to users/subscribers who agree to receive advertisements along with their received e-mail messages. In this way, the advertisers support or sponsor the free e-mail service.

Based upon the relevant literature, it appears that the first company to propose and offer such a free e-mail service was FreeMark Communications (a.k.a. "ProductView Interactive"). The FreeMark system and method for providing free e-mail service is disclosed in PCT published patent application International Publication Number WO 96/24213, having a priority date of February 1, 1995, based on U.S. Application Serial Number 08/382,118, naming as inventors Marv Goldschmitt and Robert A. Young. The disclosure of this published PCT patent application is expressly incorporated herein by reference. In short, this free e-mail system was subsidized by advertisers that appended advertisements as attachments, e.g., graphics interchange format (GIF) image file attachments, to e-mail messages transmitted to subscribers. The advertisements were stored on the subscriber's computer for viewing while the subscriber was off-line reading the received e-mail messages. In some of their promotional literature, FreeMark referred to the appended advertisements as "postage stamps." In FreeMark's literature, each message received by the subscriber was depicted as an envelope bearing a postage stamp; the postage stamp was the advertisement.

Subsequently, a company by the name of Juno Online Services, L.P. (hereinafter simply "JUNO") introduced a free e-mail service. The JUNO system and method for providing free e-mail service is disclosed in U.S. Patent Number 5,809,242, which issued to Marsh et al. on December 8, 1998, the disclosure of which is also expressly incorporated herein by reference. With the proprietary JUNO e-mail system, a plurality of advertisements are downloaded to subscribers when they connect to the proprietary JUNO e-mail server system to send and/or receive e-mail messages, with the advertisements being stored locally on the subscriber's computer for display when the subscriber is off-line composing or reading e-mail messages, i.e., when the subscriber activates Juno e-mail software previously installed on the subscriber's computer. The locally stored advertisements are displayed under the control of a display scheduler resident on the subscriber's computer, to thereby enable the advertisements to be rotated or changed in a dynamic manner. This results in a continuously changing display of advertisements being presented to the subscriber. Various other aspects and features of the proprietary JUNO e-mail system are disclosed in U.S. Patent Number 5,838,790, which issued to McAuliffe et al. on November 17, 1998, and in U.S. Patent Number 5,848,397, which issued to Marsh et al. on December 8, 1998; the disclosures of both of these patents are also expressly incorporated herein by reference.
With both the FreeMark and JUNO proprietary free e-mail systems, both the advertisements and the e-mail messages are stored on a single e-mail system (e.g., JUNO stores both on a single, unique server which is assigned (bound) to the user when he/she first signs up for service), and are distributed to subscribers under the direction of a common control entity that controls all parts of the e-mail system. While this may be a desirable system architecture for providing free e-mail service, it is not a suitable system architecture for a system whose purpose is to distribute advertiser-supported e-mail software that is e-mail system-independent, i.e., which is not tied to a particular proprietary e-mail service provider but, rather, supports public standards, e.g., POP3, SMTP, IMAP4, etc. Moreover, the free e-mail system architecture is not suitable for the many people who maintain multiple e-mail accounts, e.g., business and personal e-mail accounts. As mentioned previously, the present inventors were motivated by a desire to provide a system and method for distributing advertisements to Eudora clients in order to generate advertising revenues that would allow a fully-featured version of the Eudora e-mail software to be widely distributed free of charge to end-users. Moreover, the present inventors were motivated by a desire to provide e-mail software that is both universal and e-mail system-independent, i.e., not tied to any particular proprietary e-mail service or service provider. Accordingly, the present inventors have developed a novel multi-moded Eudora e-mail software product that contains the Payware, Freeware and Adware versions, and have also devised a novel system and method for distributing advertisements to clients equipped with this new software product. As will become fully apparent hereinafter, the purpose and architecture of this novel system are radically different from those of the proprietary FreeMark and JUNO e-mail systems. In this regard, the multi-moded Eudora e-mail software product, and the novel system and method for distributing advertisements to clients equipped with this new software product, embrace a number of different inventions that will become fully apparent from the following disclosure and the documents referenced therein.

SUMMARY OF THE INVENTION

Based on the above and foregoing, it can be appreciated that there presently exists a need in the art for a subsidized e-mail client which overcomes the above-described deficiencies. The present invention was motivated by a desire to overcome the drawbacks and shortcomings of the presently available technology, and thereby fulfill this need in the art. In one of its aspects, the present invention encompasses e-mail software which incorporates an automatic advertisement download function for automatically downloading advertisements to be displayed when the e-mail software is activated, for the purpose of subsidizing the full e-mail software product (e.g., to provide a "Freeware" version of the e-mail software product to end-users), wherein the e-mail software is e-mail system-independent. Preferably, the e-mail software is a stand-alone product which is universal, i.e., works in conjunction with virtually any e-mail service provider or e-mail system, including those services which comply with open standards. The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices which have this e-mail software installed thereon.
According to one aspect, the present invention provides an e-mail client for receiving and sending e-mail messages to at least one of a plurality of e-mail servers operated by respective e-mail operators, wherein the e-mail client receives at least one ad from an ad server operated by a control entity different from the control entity operating the one or more e-mail systems. According to another aspect, the present invention provides a recording medium storing e-mail client software for instantiating an e-mail client which receives e-mail messages from and sends e-mail messages to at least one of a plurality of e-mail servers operated by their respective e-mail operators, wherein the e-mail client automatically receives ads from an ad server which operates independently of the e-mail servers. According to still another aspect, the present invention encompasses a method of operating an e-mail client, provided by an ad server operator, compatible with a plurality of independently operated e-mail servers, including ones based on open e-mail standards. Preferably, the method includes steps for periodically at least one of sending and receiving e-mail from selected ones of the e-mail servers, periodically receiving ads from the ad server operator, and displaying the received ads responsive to instructions provided by the ad server operator. According to a still further aspect, the present invention provides an e-mail system including an incoming e-mail server storing incoming e-mail messages addressed to a plurality of users, an outgoing e-mail server for forwarding or routing outgoing e-mail messages generated by the users, an ad server operating independently of the e-mail servers, and a plurality of e-mail clients operated by respective users. Preferably, each of the e-mail clients checks for respective e-mail messages stored on the incoming e-mail server, transmits any outgoing e-mail messages stored on the e-mail client to the outgoing e-mail server, and downloads available ads from the ad server while the e-mail client is online. In one aspect, the present invention provides software for use on a client device that is configured for communications with at least one remote source via a communications network, the software instantiating a custom installer function that installs the software on the client device and that generates a distributor identifier that identifies a distributor that distributed the software, and an ad download function that downloads advertisements from the at least one remote source, via the communications network. In another aspect, the present invention provides software for use on a client device that is configured for communications with a multiplicity of other client devices via a communications network. Preferably, the software instantiates a custom installer function that installs the software on the client device and that generates a distributor identifier that identifies a distributor that distributed the software, an ad download function that downloads advertisements from at least one remote source, via the communications network, an e-mail composition function for enabling a user of the client device to compose e-mail messages, an e-mail send function that enables the user to send e-mail messages to other client devices via the communications network, and an e-mail receive function that enables the user to receive e-mail messages from other client devices via the communications network.
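As a rough, non-authoritative sketch of the client-side behavior recited in these aspects, the following Python fragment requests available ads from an independently operated ad server (passing the distributor identifier) and downloads whatever the response identifies. The URL, the JSON response shape, and every field name here are illustrative assumptions rather than the actual protocol.

```python
# Hedged sketch of the ad-download cycle: while online, the client asks an
# independently operated ad server which ads are available, then fetches
# each one. The server address and response format are assumptions.
import json
import urllib.request

AD_SERVER = "http://ads.example.com"  # assumed ad-server address

def fetch_playlist(distributor_id):
    """Ask the ad server for a playlist, passing the distributor identifier."""
    url = f"{AD_SERVER}/playlist?dist={distributor_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)  # e.g. {"ads": [{"id": "...", "url": "..."}]}

def download_ads(playlist, store):
    """Download each ad the playlist identifies and store it locally."""
    for ad in playlist.get("ads", []):
        with urllib.request.urlopen(ad["url"]) as resp:
            store[ad["id"]] = resp.read()
```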
In a further aspect, the present invention provides software, for use on a client device that is configured for communications via a communications network, instantiating a custom installer function that installs the software on the client device and that generates a distributor identifier that identifies a distributor that distributed the software; a playlist request function that generates a playlist request that includes the distributor identifier, and that transmits the playlist request to at least one playlist server, via the communications network; a playlist response handling function that receives and processes a playlist response transmitted to the client device by the at least one playlist server in response to the playlist request, wherein the playlist response includes a playlist(s) that identifies advertisements to be downloaded; an ad download function that downloads at least selected ones of the advertisements identified in the playlist(s) from at least one remote source, via the communications network, over one or more advertisement download sessions; an advertisement storage function for storing the downloaded advertisements on a storage medium associated with the client device; and an advertisement display function that effects display of at least selected ones of the stored advertisements on a display associated with the client device. Many other features, aspects, uses, applications, advantages, modifications, variations, and alternative embodiments of the foregoing inventive concepts will become apparent from the technical documentation that follows. This technical documentation constitutes an integral part of this application for all purposes. Moreover, additional inventive concepts that have not been discussed above are disclosed in this technical documentation, and it is intended that this application cover such additional inventive concepts. Furthermore, certain terms that have been used in the foregoing and following descriptions of the present invention are defined as follows:

Advertisement(s): This term is intended to broadly encompass any secondary content that is delivered or distributed to client devices in addition to the primary content, e.g., e-mail messages, which the software product instantiated by the client device is designed to receive, transmit, process, display, and/or utilize. For example, this term is intended to cover, without limitation, paid advertisements, community service messages, public service announcements, system information messages or announcements, cross-promo spots, artwork, and any other graphical, multimedia, audio, video, text, or other secondary digital content. Nevertheless, it will be recognized that the primary purpose of the presently contemplated commercial embodiment of the present invention is to distribute paid advertisements, and thus, in accordance with the preferred embodiment of the present invention, the advertisements will be exclusively, or at least primarily, paid advertisements.

Client Device: This term is intended to broadly encompass any device that has digital data processing and output, e.g., display, capabilities, including, but not limited to, desktop computers, laptop computers, hand-held computers, notebook computers, Personal Digital Assistants (PDAs), palm-top computing devices, intelligent devices, information appliances, video game consoles, information kiosks, wired and wireless Personal Communications Systems (PCS) devices, smart phones, intelligent cellular telephones with built-in web browsers, intelligent remote controllers for cable, satellite, and/or terrestrial broadcast television, and any other device that has the requisite capabilities.

Information: This term is intended to broadly encompass any intelligible form of information which can be presented by a client device, i.e., an information client device, including, without limitation, text, documents, files, graphical objects, data objects, multimedia content, audio/sound files, video files, MPEG files, JPEG files, GIF files, PNG files, HTML documents, applications, formatted documents (e.g., word processor and/or spreadsheet documents or files), MP3 files, animations, photographs, and any other document, file, digital, or multimedia content that can be transmitted over a communications network such as the Internet.

E-mail Messages: This term is intended to broadly encompass the e-mail message and any attachments thereto, including, without limitation, text, documents, files, graphical objects, data objects, multimedia content, audio/sound files, video files, MPEG files, JPEG files, GIF files, PNG files, HTML documents, applications, formatted documents (e.g., word processor and/or spreadsheet documents or files), MP3 files, animations, photographs, and any other document, file, digital, or multimedia content that can be transmitted over a communications network such as the Internet.

Software Provider: This term is intended to broadly encompass the developer (or developers), sellers, distributors, etc., of the multi-mode software product(s) installed on the client device.

Memory: This term is intended to broadly encompass any device capable of storing and/or incorporating computer readable code for instantiating the client device referred to immediately above. Thus, the term encompasses all types of recording media, e.g., a CD-ROM, a disk drive (hard or soft), magnetic tape, and recording devices, e.g., memory devices including DRAM, SRAM, EEPROM, FRAM, and Flash memory. It should be noted that the term is intended to include any type of device which could be deemed persistent storage. To the extent that an Application Specific Integrated Circuit (ASIC) can be considered to incorporate instructions for instantiating a client device, an ASIC is also considered to be within the scope of the term "memory."

BRIEF DESCRIPTION OF THE DRAWINGS

These and various other features and aspects of the present invention will be readily understood with reference to the following detailed description taken in conjunction with the accompanying drawings, in which like or similar numbers are used throughout, and in which:
Fig. 1 is a high-level diagram of a computer system including a plurality of client devices connected to a plurality of independently-operated server devices via a network, which computer system is suitable for implementing various functions according to the present invention;
Fig. 2 is a high-level diagram of a representative one of the client devices illustrated in Fig. 1;
Figs. 3A and 3B illustrate alternative and non-limiting placement of ads in the main navigation screen of an exemplary e-mail software application according to the present invention;
Fig. 4A depicts state transitions when a version of the software is installed by one of a new user, an old user, and an EP4 user;
Fig. 4B illustrates a dialog box associated with the state flow diagram illustrated in Fig. 4A;
Fig. 5A illustrates an exemplary state flow diagram of a process by which the Ad user becomes a registered Ad user, while Figs. 5B through 5G illustrate several dialog boxes associated with Fig. 5A;
Fig. 6A illustrates an exemplary state flow diagram of a process by which a Free user can become a registered Free user, while Fig. 6B illustrates an additional dialog box associated with Fig. 6A;
Fig. 7A illustrates an exemplary state flow diagram of a process by which all users are reminded to update the software according to the present invention, while Fig. 7B depicts an exemplary dialog box corresponding to an Update Nag;
Fig. 8 illustrates an exemplary state flow diagram of a process by which a Box user can become a Paid user;
Fig. 9 illustrates an exemplary state flow diagram of a process by which the Paid user becomes an Unpaid user;
Fig. 10 illustrates an exemplary Nag Window display timeline for MacOS versions of the Eudora e-mail software according to an exemplary embodiment of the present invention;
Fig. 11 illustrates a Nag Schedule employed by the software according to the present invention;
Fig. 12A is a simulated screen capture of a link history window employed in an exemplary software embodiment of the present invention, while Fig. 12B is a dialog box reminding the user that the e-mail client according to the present invention is off-line;
Fig. 13A illustrates the assumptions used in determining the impact of ad transmission on e-mail program operations, while Fig. 13B is a table listing the bandwidth requirements in terms of subscriber base versus the number of new ads to be downloaded each day;
Fig. 14 is a state flow diagram of an exemplary ad fetch process according to the present invention;
Figs. 15A-15H collectively illustrate an algorithm controlling ad scheduling in an exemplary embodiment according to the present invention;
Figs. 16A and 16B illustrate parameter variations in alternative modes of ad display possible in an exemplary embodiment according to the present invention;
Figs. 17A through 17C illustrate additional dialog boxes which advantageously can be generated by the e-mail client software according to one aspect of the present invention;
Fig. 18A illustrates an exemplary dialog box associated with auditing the operation of the Adware software according to the present invention, while Figs. 18B through 18E list useful parameters for auditing the software's performance;
Fig. 19 is a table summarizing the features of a plurality of web pages that advantageously can be employed in conjunction with an exemplary e-mail system according to one aspect of the present invention;
Fig. 20 is a class diagram illustrating the mapping of XML code to objects and the task flow when another exemplary embodiment according to the present invention is operating in accordance with doPost methodology;
Figs. 21A and 21B collectively constitute a pseudo-code listing which can be employed by the server 302 in Fig. 1 in generating a PlayList in accordance with the present invention;
Fig. 22 is another class diagram illustrating handling of requests and writes between a server and at least one of the client computers depicted in Fig. 1; and
Fig. 23 illustrates database accesses in accordance with another aspect of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Illustrative embodiments and exemplary applications will now be described with reference to the accompanying drawings to disclose the advantageous teachings of the present invention. While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those having ordinary skill in the art and access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which the present invention would be of significant utility. Referring now to specific drawings, Fig. 1 illustrates an exemplary system configuration 10 which is suitable for carrying out the functions according to representative embodiments of the present invention. Although the representative embodiment will be generally described with respect to an electronic mail (e-mail) system where a number of users can create, send, receive and read e-mail messages, the present invention is not so limited. For example, the present invention is equally applicable to a personal digital assistant (PDA) incorporating specialized software for receiving stock quotations via a wireless network. Thus, the principles of the present invention should not be regarded as limited solely to e-mail systems; the principles of the present invention apply to on-line services where a provider, e.g., a software provider, desires to make its software available to users using a variety of payment options for a core set of software functions. As shown in Fig. 1, the system 10 includes a plurality of client computers 100a, 100b, ..., 100n, where n denotes any positive integer. Preferably, each of the client computers, generally denoted 100, can be either a workstation or a personal computer executing a client program according to the present invention. In an exemplary case, the client computers 100a, 100b, ..., 100n advantageously can be connected to a plurality of servers 301-304, which servers will be described in greater detail below, via a network 200, e.g., the Internet. Alternatively, the network 200 can be one of a local area network (LAN), a wide area network (WAN), an Intranet, or a wireless network, or some combination thereof. It will be appreciated that Fig. 1 illustrates a non-limiting exemplary system; any number of clients can be connected to any number of servers. Fig. 2 illustrates in further detail the hardware configuration of an exemplary one of the client computers 100a, 100b, ..., 100n illustrated in Fig. 1.
In the representative embodiment, the client computer 100a includes a central processing unit 209 for executing computer programs (including the client program according to one exemplary embodiment of the present invention) and managing and controlling the operation of the client computer 100a. A storage device 205, such as a floppy disk drive, is coupled to the central processing unit 209 for, e.g., reading and writing data and computer programs to and from removable storage media such as floppy disks. Storage device 206, coupled to the central processing unit 209, also provides a mechanism for storing computer programs and data. Storage device 206 is preferably a hard disk having a high storage capacity. A dynamic memory device 207, such as a RAM, is also coupled to the central processing unit 209. It will be noted that storage devices 205 and 206, as well as dynamic memory device 207, are non-limiting examples of a memory, which term was defined previously. The client computer 100a includes typical input/output devices, such as, for example, a keyboard 203, a mouse 204, a monitor 208, and a communications device 201. It will be appreciated that the communications device advantageously can be a modem, an ethernet interface card, etc. Referring again to Fig. 1, each of the client computers 100a, 100b, ..., 100n can selectively communicate with any of the servers, e.g., servers 301-304, via the network 200. In the computer system 10 depicted in Fig. 1, each of the servers performs a specialized function. In an exemplary case, server 301 performs a registration function, i.e., accepts registration information from each client computer (as discussed in greater detail below), server 302 provides PlayLists to the client computers 100a, 100b, ..., 100n, server 303 provides the advertisements designated in the PlayLists, and server 304 acts as a conventional e-mail server system, i.e., provides both the incoming e-mail server and the outgoing e-mail server. It should be mentioned that only servers 301 and 302 need actually be under the direct control of the software provider, e.g., QUALCOMM INCORPORATED in the preferred embodiment, although server 303 advantageously may be under the control of the software provider as well. It should also be mentioned that the reference to software should not be construed as limited to disk-based software; the term "software" should be broadly interpreted as instructions carried out by a processor, whether these instructions are read from a dynamic memory or stored as firmware in a read-only memory (ROM) or other variants of such a device. According to one aspect of the present invention, the "software" advantageously can be provided as a single binary (per client device) file containing the software, e.g., the Eudora software, which can be employed by all users. This binary file will operate in one of three major modes of operation: Payware; Freeware; and Adware. In the Payware mode of operation, the user must pay the software provider to use the software. Freeware is free for all to use, but has fewer features than either Payware or Adware.
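The three-mode, single-binary concept lends itself to a compact illustration. The sketch below is only a toy model of that idea, assuming an enum for the mode and invented feature names; the actual Eudora feature sets are not enumerated in this form anywhere in this disclosure.

```python
# Toy model of one binary operating in three modes. The mode names come
# from the text; the feature names are illustrative assumptions.
from enum import Enum

class Mode(Enum):
    PAYWARE = "payware"    # full feature set, paid
    FREEWARE = "freeware"  # reduced feature set, free
    ADWARE = "adware"      # full feature set, ad-supported

def feature_set(mode):
    full = {"filters", "multiple_personalities", "stationery"}  # assumed names
    return full if mode in (Mode.PAYWARE, Mode.ADWARE) else {"basic_mail"}

def shows_ads(mode):
    return mode is Mode.ADWARE
```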
Preferably, Payware users will prove their payment by a registration code that the software provider will provide to them at time of payment. This code will be self-validating, and will contain enough data to identify what version(s) the user is entitled to operate. It should be noted that users of the Payware version of Eudora will be entitled to all versions of Eudora that are produced during the calendar year following their payment. The software preferably polls a predetermined site, e.g., a site maintained by QUALCOMM INCORPORATED, on a periodic basis in order to determine if an update for the software is available; if an update is available, the software advantageously can present the user with a small web page of options for obtaining the software update, as discussed in greater detail below. It will be noted that Adware has all the features of Payware, but does not require payment from the user. What Adware does require is that the user display and view ads, which the user will download from the software provider's site and/or one or more sites designated by the software provider. It will also be noted that the initial state of the software is Adware. In an exemplary preferred embodiment, each client computer downloads ads from the ad server 303 unobtrusively and without drawing significant bandwidth, as discussed in greater detail below. Moreover, the ads advantageously can be displayed in a manner that doesn't significantly detract from the use of the software, e.g., Eudora. Figs. 3A and 3B illustrate advertisements integrated into the main screen of the exemplary Eudora e-mail software. Some of the terminology employed in describing the functions and novel features of exemplary embodiments of the present invention was presented above. Additional terminology which facilitates a full understanding of the present invention in terms of the Eudora software is presented immediately below.

Applications: QUALCOMM INCORPORATED has several versions of the Eudora software, including:
EP4: Eudora Pro 4.x, either Windows or Macintosh.
Eudora: The new three-modal version of Eudora, running in any of its modes.
Payware: Eudora running in full-feature mode, after the user has paid.
Freeware: Eudora running in reduced-feature mode.
Adware: Eudora running in full-feature mode with ads.
Paid App: Any version of Payware to which the user's registration entitles him/her.
Unpaid App: Any version of Payware newer than that to which the user is registered and entitled.
Old Eudora: Eudora versions prior to Eudora Pro 4.x.

User States: A user state is the most basic concept to understanding how the various modes of the application are interrelated. The user state determines how the program treats the user. The states are defined as follows:
EP4 User: A user of EP4 who has not registered via the old (non-Adware) registration process.
Registered EP4 User: A registered user of EP4.
New User: A user using Eudora for the first time, but who has not obtained a boxed copy, e.g., bundled with a newly purchased computer system, etc.
Payware User: A user who has paid for Eudora, entered his/her registration code, and is using a version of Eudora to which he/she is entitled.
Box User: A user who has been given their RegCode by an installer, either from the box product or from an EP4 updater, and whose registration information is therefore unknown.
Free User: A user who has chosen to use Freeware but who has not entered a Freeware registration code.
Adware User: A user who is using the Adware version that displays ads.
Registered Freeware User: A Freeware ("Free") user who has entered a Freeware registration code.
Registered Adware User: An Adware user who has entered an Ad registration code.
Deadbeat User: A former Adware user who has been shut off due to Eudora's failure to receive ads (or less than a prescribed minimum number of ads).

Windows and Dialogs: Several windows and dialogs are used in the process.
A fuller description of these will be given later, but the major ones are briefly described immediately below:
Intro Dialog: A dialog presented to new users explaining the software options.
Registration Nag: A window presented to the user every so often to suggest that the user register his/her software.
Full-Feature Nag: A window presented to Freeware users requesting them to try Eudora Pro again.
Free Downgrade: A dialog that tells the user the features that will no longer be available to him/her if they switch to Freeware, but allows them to do so if they really wish.
Code Entry Dialog: A dialog allowing the user to enter their registration code.
Ad Window: A window or portion of a screen displaying an ad. See Figs. 3A and 3B.
Link History Window: A window that will display links the user has clicked on, i.e., ads the user has seen.

Web Pages: The software provider advantageously can elect to restrict interactions between the user and the software provider to the Internet to the maximum extent possible. This will allow the software provider the most flexibility in how the software provider deals with actual users. One potential list of the major pages is provided immediately below, although these "pages" advantageously may be groups of pages, or pages customized to match the demographics of a given user, e.g., a customized and/or branded version of Eudora provided by a major retailer, e.g., a private-label version of Eudora provided to its users by an ISP.
Freeware Reg Page: A page that allows the user to register Freeware.
Payware Reg Page: A page that accepts payment for Eudora Pro and returns a registration code to the user.
Adware Reg Page: A page that allows users of Adware to submit their registration information to the software provider.
Lost Code Page: A page that helps users who have lost their registration codes. (May require human intervention.)
Update Page: A page generated for a user that lists possible upgrades and the latest version for which he/she is registered.
Archived Versions Page: A page from which users can download all versions of Eudora.
Profile Page: A web page where users can enter their profile information.

Nag Schedules: A "Nag Schedule" is a bracketed set of numbers. The numbers signify the number of days since the start of a trial period. Users will be nagged on the days indicated. The last number signifies what happens when the other numbers run out; the user will either not be nagged (0), or be nagged every so many days. For example, a schedule of [0,5,2] means the user will be nagged on the first day, the sixth day, and every other day thereafter.

As mentioned above, the "software" advantageously can be provided as a single binary file containing the software, e.g., the Eudora software, which can be installed (if required) and employed by all users. This binary file will operate in one of three major modes of operation: Payware; Freeware; and Adware. The installation and operation of various functions of the software program according to the present invention will now be described in greater detail while referring to several state flow diagrams, which illustrate the major user states and the transitions among them. In the state flow diagrams, the following conventions will be observed: raised grey squares are conceptual names for buttons in dialogs; a few paths are labeled with menu items, and these items can be used to bring up the window in question directly, without waiting for nags; in principle, any dialog or nag can be cancelled, leaving the user back in the initial state; and web pages cannot change user state or generate more dialogs, hence all web pages lead back to the user's initial state.
With the conventions noted above, the installation of the Eudora e-mail software will now be described while referring to Fig. 4A, which depicts state transitions when a version of the software is installed by one of a new user, an old user, and an EP4 user. It will be noted that the software provider doesn't give the user the options to pay for the full feature set or to accept the software with a reduced feature set in the intro dialog. While the software provider will explain those options, e.g., via a dialog box similar to that illustrated in Fig. 4B, as well as the fact that the user can obtain these alternative versions of the software feature set by going through the Help menu, the software defaults to the Adware version. The path taken by EP4 users and box purchasers illustrated in Fig. 4A merits some elaboration. The Code Generator referred to in Fig. 4A advantageously is instantiated by the installer module of the binary file, not by the Eudora e-mail program itself. If the user is using the software's 4.x -> 4.3 update function, the software searches for a copy of EP4 and, on finding a copy of the software, the Code Generator permits the user to generate a RegCode file. If the user is running the installer out of the box, the installer permits RegCode generation without looking for a copy of EP4 first. It should be mentioned that the RegCode file so generated is special in that it contains a line saying "Eudora-Needs-Registration: YES." The Eudora e-mail software will notice this line of text, put the user into the unregistered state, and then nag the user to register the software. Once the user registers, the same registration code will be retransmitted to the user, and the Eudora e-mail software will silently accept it (since it will be the same as the current code) and turn off the need-to-register flag in the e-mail software. Fig. 5A illustrates a state flow diagram of the process by which the Adware user becomes a registered Adware user. It will be appreciated that, in the illustrated exemplary case, the registration process necessitates interaction between client computer 100a and a registration server 301, which are connected to one another via network 200. In Fig. 5A, the Adware user indicated in Fig. 4A registers with the software provider through several alternative mechanisms. For example, the Ad user may wish to register, and simply activates the "HELP" pulldown menu, which is available from the tool bar illustrated at the top of Fig. 3A, and selects the Payment & Registration option, as depicted in Fig. 5B. Alternatively, the Adware user may receive a Nag box, i.e., a Nag dialog box, generated by the software at a predetermined time, as discussed more fully below. Finally, the Ad user may receive a registration via e-mail, i.e., a registration code generated by server 301 and transmitted to the client computer 100a by way of e-mail server 304. As shown in Fig. 5B, the Payment & Registration window provides several selection buttons, which allow the Ad user to register the Adware, pay for the software, list all versions available to the user, customize or modify the ad stream by providing demographic information, enter a received registration code, and downgrade to the reduced feature set offered to Freeware users. See Figs. 5C-5G. It should be mentioned that the user can enter a registration code to become one of a registered Adware user, a registered Freeware user, and a registered Payware user. See Fig. 5F.
It will be appreciated that the software operates in accordance with the same state flow diagram for Registered Adware Users, except that the Registered Adware User is not subjected to the Registration Nag. The software provider advantageously can use a registration scheme with a self-validating registration code, so that databases do not need to be used to validate registrations. The algorithm for verification is intended to satisfy several conflicting constraints, i.e., it needs to be secure, yet easy to implement and not unduly burdensome for the user. The Eudora e-mail software checks its registration code at startup for validity. If the registration code is invalid, the user should be considered unregistered. If the user is a paid-mode user, this will involve a switch to Sponsored mode, about which the user should be warned using a dialog box (not shown). This alert will be followed by an opportunity to reenter the code. The necessary inputs to generate the registration code are as follows:

RegName: The name the user wishes to register under. The software provider will imply, but not require, that this be the user's real name. The only thing this name will be used for is registration. Supplied by the user. When the software provider actually collects this name from the user, the software provider will ask for it in terms of first and last names, called RegFirstName and RegLastName, respectively. RegName is built by concatenating RegFirstName, a single space, and RegLastName. Each of the first and last names is limited to 20 significant characters; beyond that, characters will be ignored.

RegMonth: The date of the registration, expressed as the number of months since Jan 1, 1999, e.g., 8 bits (20 years). All 1's are reserved for "never expires" situations.

Product: A numeric code indicating what product the registration is for. The user will choose the product; the software provider will translate that choice into an 8-bit code.

It will be appreciated that a plurality of RegCode algorithms advantageously can be employed in generating a self-validating registration code. In brief, the software provider takes the inputs listed above, checksums them, mixes the inputs (including the RegName) and the checksum together according to any one of a variety of algorithms, and encodes the result as a 16-bit number string. It will also be appreciated that the encoding and bit-mixing can be reversed and then, together with the RegName, the checksum can be used to verify the validity of the registration code.
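Because the exact checksum and bit-mixing algorithms are deliberately left open ("any one of a variety of algorithms"), the following Python sketch is only a toy instance of the self-validating pattern described above: it checksums RegName, mixes the fields with an arbitrary constant, and verifies by reversing the encoding, with no database lookup. The CRC-32 checksum and XOR mixing are assumptions of this sketch, not the production scheme.

```python
# Toy self-validating registration code: an additive payload of RegMonth
# and Product, a CRC-32-derived checksum of RegName, XOR "mixing", and a
# hex encoding. Verification re-derives the checksum from RegName.
import zlib

MIX_KEY = 0x5A5A  # arbitrary assumed mixing constant

def make_regcode(reg_name, reg_month, product):
    payload = (reg_month & 0xFF) << 8 | (product & 0xFF)
    checksum = zlib.crc32(reg_name.encode()) & 0xFFFF
    return "%04X-%04X" % (payload ^ MIX_KEY, checksum ^ MIX_KEY)

def verify_regcode(reg_name, code):
    try:
        payload, checksum = (int(p, 16) ^ MIX_KEY for p in code.split("-"))
    except ValueError:
        return None                                   # malformed code
    if checksum != zlib.crc32(reg_name.encode()) & 0xFFFF:
        return None                                   # checksum mismatch
    return {"reg_month": payload >> 8, "product": payload & 0xFF}

code = make_regcode("Jane Doe", reg_month=5, product=2)
assert verify_regcode("Jane Doe", code) == {"reg_month": 5, "product": 2}
assert verify_regcode("John Doe", code) is None       # wrong RegName fails
```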
It should be noted that the software provider will store registration codes separately for Freeware (Eudora Light), Adware (Sponsored), and Payware (Eudora Pro) software modes. Acceptance of a registration code for one mode of operation does not imply that the registration codes for the other modes should be destroyed. Once the registration code has been generated, the user must somehow enter the valid RegCode into the Eudora e-mail client. This can be accomplished in one of three ways:
(1) Manually. Users can type or paste values into the Enter Code dialog box. See Fig. 5F.
(2) Windows Registry. At Eudora startup, the software will look for the RegCode in the Windows registry (e.g., Software\Qualcomm\Eudora\Check, FName, LName, RCode). The values should be copied into the preferences register or associated lookup table of the e-mail client, if these preferences are found and valid.
(3) RegCode File. At Eudora startup, the software will look for a file in the application software folder named "RegCode.dat," in an exemplary case. The values should be copied into the preferences register or associated lookup table of the e-mail client, if these preferences are found and valid.
It should also be mentioned that the software provider will allow a special-case MIME part to be mailed to the Eudora e-mail client. The user receiving this part will automatically be asked to verify and enter the information. He/she can also execute the attachment again later. However, he/she cannot forward the attachment to anyone else using the Eudora e-mail client, because a special Content-Type attribute ("regCode") is required to activate the part, and the Eudora e-mail client can't send those. The format of the MIME part (and the RegCode file) is that of a text file containing RFC822-header-style fields. It has a registered MIME type of application/vnd.eudora.data. The fields included in the part are:

Eudora-File-Type: This is always the first field, and describes what sort of information the rest of the file contains. Its value will be either "regFile" or "Profile."
Eudora-First-Name: The first (given) name of the registrant, in US-ASCII.
Eudora-Last-Name: The last (family) name of the registrant, in US-ASCII.
Eudora-Reg-Code: The registration code as produced by the registration system.
Profile: Profile information. This takes the form of a relatively short, e.g., 127-byte, ASCII string. A profile is generated for each user during the registration process.
Eudora-Needs-Registration: If this field contains "YES", then the user should be nagged to register their copy of Eudora. This is used by installers that generate RegCodes that the software provider otherwise would not have in its database.
Mailed-To: This is the address the information was mailed to.
If this field is present and does not match any of the user's personalities or "me" nickname, the information should not be acted on.
It should be noted that the Eudora-File-Type field must be present. The other fields listed above may or may not be present. It will be appreciated from the discussion above that RegCodes mailed to the user should be validated prior to use. In order to be used, a RegCode should meet the following tests:
(1) Validity. An invalid RegCode should be ignored.
(2) Directness. The Mailed-To field of the RegCode should contain an address for one of the user's personalities or be in the user's "me" nickname.
(3) Applicability. A new RegCode should not automatically override an existing valid RegCode. The only exceptions to this policy are that a Payware-mode RegCode should override a Freeware or Adware RegCode, and a Payware-mode RegCode that is the same as the user's existing Payware-mode RegCode can be used to disable the "Eudora-Needs-Registration" Nag.
Once the RegCode has been determined to meet the above tests, the user should be asked to accept the code. An exemplary acceptance dialog box is illustrated in Fig. 5F. As mentioned above, the registration code is self-validating, since one part is a function of the other. However, there is another sense of "validation" to be considered, i.e., whether or not the registration code is "valid" for use with a particular version of Eudora. This is accomplished by comparing the ExpMonth in the registration code with a BuildMonth field the software provider will put into the application (in a place that cannot be overwritten by plug-ins, settings, etc.). If the ExpMonth and the BuildMonth correspond, the registration is deemed valid by the e-mail client.
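Since the RegCode part is defined above as a text file of RFC822-header-style fields, it can be read with an ordinary header parser. The Python sketch below does so and applies the directness test; the helper names and the sample values are illustrative only.

```python
# Sketch of reading the RFC822-header-style RegCode part described above
# and applying the directness test, using the standard email package.
from email.parser import Parser

def read_regcode_part(text):
    """Parse a RegCode file/MIME part into its header fields."""
    headers = Parser().parsestr(text, headersonly=True)
    if headers.get("Eudora-File-Type") is None:
        raise ValueError("Eudora-File-Type field must be present")
    return dict(headers.items())

def is_direct(fields, user_addresses):
    """Directness: an absent Mailed-To is tolerated; a present one must
    match one of the user's own addresses, else the part is not acted on."""
    mailed_to = fields.get("Mailed-To")
    return mailed_to is None or mailed_to in user_addresses

sample = (
    "Eudora-File-Type: regFile\n"
    "Eudora-First-Name: Jane\n"
    "Eudora-Last-Name: Doe\n"
    "Eudora-Reg-Code: 1A2B-3C4D\n"
    "Mailed-To: jane@example.com\n"
)
fields = read_regcode_part(sample)
assert is_direct(fields, {"jane@example.com"})
assert not is_direct(fields, {"someone.else@example.org"})
```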
Fig. 6A illustrates a state flow diagram of the process by which a Freeware user can become a Registered Free User. It will be appreciated that the state flow diagrams of Figs. 5A and 6A are similar in many respects. However, the state flow diagram of Fig. 6A allows for an additional Nag dialog box, i.e., the so-called Feature Nag dialog box pictured in Fig. 6B, to remind both the Free User and the Registered Free User of the enhanced features available to Adware and Payware users. With respect to Freeware Users and Registered Freeware Users, it will be appreciated that the Registered Freeware Users will not receive the Registration Nag dialog box. It will be appreciated that the state flow diagram illustrated in Fig. 6A is very similar to that applicable to the Adware Users (Fig. 5A), with the exception that Freeware Users are given the option to try the full features rather than enter their demographic information. It should also be mentioned at this point that all users will receive an Update Nag dialog box (not shown) at a predetermined interval. Eudora checks the Update Page once per week during an e-mail session. If the Update Page has changed, the user is nagged to update the Eudora e-mail software. Even if the page hasn't changed, the user is nagged on a 30-day schedule to check for updates, to ensure that he/she has the latest software version. See the state flow diagram of Fig. 7A. The Update Nag presents the user with versions to which he/she is entitled to upgrade (if any). See Fig. 7B. The Nag itself is an HTML document with links to versions of the Eudora e-mail software for the user to download. Fig. 8 illustrates an exemplary state flow diagram of the process by which a Box user can become a Paid user, i.e., a Payware user. It will be appreciated that the only Nag the software provider presents specifically to the Box users is the Registration Nag. Once a Box user registers, the Box user is converted into a normal Paid user. It should be mentioned, however, that the payment date for the Box user is set to a specific value by the software provider, so that the software provider can control what versions of the software the Box user will receive, e.g., the period of time for which the user will receive updates from the software provider for free going forward. Having introduced the concept of nagging, this would be a convenient point to discuss various features of nagging implemented in the software according to the present invention. Two major issues are (1) how the software provider nags the user, and (2) when the software provider nags the user. Ideally, Nag Windows are modeless windows. The user can close them using close boxes, or dismiss them by taking one of their action items, or simply leave them open and let them drift wherever they will in the window list. Due to implementation constraints, Windows Nag Windows will be slightly different in behavior from MacOS Nag Windows, which are discussed below. The Nag Windows are floating windows; the software provider expects that the user will probably dismiss the Nag Window in fairly short order. It will be appreciated that the Nag Windows will not, however, stop background tasks from executing. It should be mentioned that there is at most one Nag Window of each variety open at a time; old windows of the same variety advantageously will be recycled. That is, if a given Nag Window is still open the next time the user is due to be nagged, that window will be reused and brought back to the top of the window stack. It should also be mentioned that all Nags applicable to the user should be available to the user by selection from the Help menu, so that the user who dismisses one of the Nag Windows inadvertently can deliberately nag him/herself if he/she wishes, although such manual Nag invocations do not reset the Nag's timer. Preferably, Nag Windows will be opened on top of all other windows, and no automatically opened windows, including, for example, "Tip of the Day" and other dialog boxes and excluding other Nag Windows, will ever be placed above them until the user has manually brought another, non-Nag window above them. Due to the implementation constraints in the Windows version of the Eudora e-mail software, the only windows that can obscure Nags would be other floating windows. It will be appreciated that this is chiefly due to the requirement that Multiple Document Interface (MDI) child windows be maximizable. It should be mentioned that MDI is a standard Windows interface used by many popular Windows applications and utilities, such as the Windows Program Manager and the Windows File Manager; the MDI interface is also part of the Common User Access (CUA) standard set by IBM. Each MDI-compliant application enables the user to open child windows for file-specific tasks such as editing text, managing a database, or working with a spreadsheet, to name but a few of the possible tasks. Fig. 10 illustrates a flow chart for Nag Window display in MacOS versions of the Eudora e-mail software according to an exemplary embodiment of the present invention. In Fig. 10, the software presents just the In mailbox, as denoted by the symbol (1), i.e., time (1).
The Eudora e-mail software then determines that it needs to nag the user, and places the Nag atop the mailbox, as denoted by the symbol (2). Some mail arrives in the "Fresh Meat" mailbox. Ordinarily, this would open on top. However, since there is a "new" Nag being displayed by the software, i.e., one the user has not manually sent behind anything, the "Fresh Meat" mailbox instead opens below the Nag, as denoted by symbol (3). The user manually brings Fresh Meat to the front, as denoted by symbol (4). After that, when mail arrives in More Meat, the Nag is no longer new, and More Meat can be opened on top in the normal manner, as denoted by the symbol (5). The placement of Nag Windows in any of the Windows environments is, in general, considerably simpler. Nag Windows simply float outside the MDI box, above other floating windows, until the user closes them. The exception to this rule is the Update Nag, which acts like a MacOS Nag Window, if the user assumes that the entire Macintosh diagram takes place inside an MDI box. Note particularly that this indicates that the Update Nag may be maximized in the Windows environment. Although the basic concept of Nag Schedules was introduced above, a more detailed discussion of Nag Schedules at this point would facilitate the understanding of certain aspects and features of the software according to an exemplary preferred embodiment of the present invention. In the Eudora e-mail software, each schedule is a set of numbers representing (save for the last) the number of days since a given date (the Nag base). The software provider further must keep track of the last time the user was nagged (the last Nag). Note that both the Nag base and the last Nag should be tracked separately for each type of Nag; the software provider must not mix values for Registration Nags and Update Nags, for example. The last number of the Nag Schedule is a repeat interval. Once the other Nags are all exhausted, the user is nagged each time this last number of days passes. The best way to understand a Nag Schedule is to view the schedule as a timeline, as illustrated in Fig. 11. This particular timeline is for a Nag Schedule of [0,4,9,12,3]. Note that the Nags which will occur at the 15 and 18 day points are there because of the final number, the repeat interval (of 3 days). Thus, in Fig. 11, the user is due to be nagged if there is a Nag day greater than the last Nag and less than or equal to the current day. If more than one Nag day has passed, the user is still nagged only once. It should be mentioned that once the Nag Window has been opened, the last Nag is reset to the current day. It should also be mentioned that a final Nag interval of 0 indicates that the Nag is not done any more after the defined period has expired. It will be appreciated that the Eudora e-mail software advantageously includes a software subroutine which determines whether any Nags are due at application startup and at the completion of each mail check. With respect to the latter case, the software checks the modification date on the Update Page once per week during a mail check. If the Update Page has been modified during the past week, the software provider will download update information during the mail check, and nag the user to update his/her software, e.g., the Eudora e-mail software. See Fig. 7B. Finally, it will be noted that when a user's state changes so that an open Nag is no longer relevant, that Nag is closed and no longer displayed.
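The due-nag rule just stated (nag if some Nag day is greater than the last Nag and less than or equal to the current day, with the final schedule number acting as a repeat interval) can be captured in a few lines. The following Python sketch is a minimal model of that rule; the function names are this sketch's own.

```python
# Minimal model of the Nag Schedule logic described above: the leading
# numbers are days since the Nag base; the final number is a repeat
# interval (0 = stop nagging once the listed days are exhausted).
def nag_days(schedule, through_day):
    """Yield every day on which a nag falls, up to and including through_day."""
    *fixed, repeat = schedule
    yield from (d for d in fixed if d <= through_day)
    if repeat and fixed:
        day = fixed[-1] + repeat
        while day <= through_day:
            yield day
            day += repeat

def nag_due(schedule, last_nag, today):
    """Nag if some Nag day is greater than last_nag and <= today."""
    return any(last_nag < d <= today for d in nag_days(schedule, today))

# With the [0, 4, 9, 12, 3] schedule of Fig. 11, nags fall on days
# 0, 4, 9, 12, 15, 18, ...
assert nag_due([0, 4, 9, 12, 3], last_nag=12, today=15)    # day-15 nag
assert not nag_due([0, 4, 9, 12, 3], last_nag=15, today=17)
```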
The preceding discussion also touched briefly on various issues with respect to ads; these issues will be developed more fully immediately below. More specifically, the major client issues involving ads are how the software displays the ads, when the software displays the ads, how the software obtains the ads, how the software provider obtains and transmits demographic information, and how the software provider verifies that ads are actually being displayed. Referring again to Fig. 3A, the main window of the Eudora e-mail software shows a squarish ad and three ad buttons in opposite corners of the main window. It should be mentioned that this particular squarish ad is 144 pixels high by 128 pixels wide; the software will accommodate ads as large as 144 pixels by 144 pixels. It will be appreciated that the area of the window usable by the mailboxes has been reduced approximately 38%; however, it will also be appreciated that the content area has been left untouched. Fig. 3B illustrates an alternative main window where a small graphic or placard is employed, e.g., in the lower right corner, to indicate that the main window is sponsored. It will be appreciated that the actual information that the software provider can accept from advertisers will be relatively simple. For standard ads, such as that depicted in the lower left-hand corner of Fig. 3A, the ad will consist of an image file, e.g., a GIF file, a PNG file, a JPEG file, etc., of not more than 15K, and not more than 144 pixels tall by 144 pixels wide. Preferably, this image file will employ the Web Safe Color Palette. This palette, which is sometimes referred to as the Browser-Safe Palette, contains only 216 colors out of a possible 256 colors definable by 8 bits. The remaining 40 colors vary on Macs and PCs. By eliminating the 40 variable colors, this palette is optimized for cross-platform use. Moreover, the image file advantageously will be associated with a single uniform resource name (URN) to which users who click on the ad will be directed. Each advertiser will also specify the desired scheduling information for the ad, as discussed in greater detail below. In order to facilitate the transmission of the ad to the software provider, e.g., QUALCOMM INCORPORATED, the advertiser may wrap the ad in HTML. The software provider advantageously can also employ HTML-wrapped ads, since this will allow the software provider to include ad parameters as META tags in the HTML page, specify the link address, etc. Moreover, the Toolbar icons will be requested in GIF format as well, but will actually be delivered to the client in a composite format and transformed into standard icons. In addition, placards for sponsors of the Freeware version illustrated in Fig. 3B should be no more than 31 pixels tall, and on the order of 88 pixels wide, though the precise width can be varied at runtime. It should be mentioned here that when the user clicks on an ad, the software provider will normally take the user to the software provider's click-through counter and then redirect the user's browser to the link listed with the ad. The click-through counter advantageously can be one of the software provider's servers, e.g., one of servers 302 and 303. It will be appreciated that this will require that the software provider compose a URN which includes a server name, some tracking information, and the ultimate destination URN, and then the server will redirect the user's browser to the destination URN.
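Since the click-through counter simply annotates the advertiser's URN with tracking information and then redirects, a brief sketch may be useful; the host name, path, and query-parameter names below are assumptions for illustration only.

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class ClickThrough {
        // Composes a URN that names the click-through counter server, carries
        // some tracking information (here, an ad id), and embeds the ultimate
        // destination URN; the counter logs the click and then redirects the
        // user's browser to the destination.
        static String composeClickThroughUrn(String adId, String destinationUrn) {
            String encodedDest =
                    URLEncoder.encode(destinationUrn, StandardCharsets.UTF_8);
            return "http://counter.example.com/click?adid=" + adId
                    + "&destination=" + encodedDest;
        }

        public static void main(String[] args) {
            System.out.println(composeClickThroughUrn(
                    "12345", "http://www.advertiser.example.com/promo"));
        }
    }

Limiting the advertiser's URN to 900 characters leaves room for this sort of annotation while staying within the suspected 1K limit on URN size, as noted below.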
One complication occurs if the user is offline at the time that the click-through is attempted. When the user is offline, several possible actions by the software are possible. For example, the software could initiate an online session. Alternatively, the software could simply flag the link using the link history facility. See Fig. 12, which depicts a window/menu that the software maintains, similar to the history lists maintained by most browsers. When the ad is clicked while the software is offline, the software advantageously adds the link to the link history window, and flags this link so that the user knows he/she had wanted, but was unable, to visit that site during a previous e-mail session. Moreover, the software advantageously may be constructed to permit the user's browser to respond to the click-through. It will be appreciated that some browsers have sophisticated features of their own for dealing with offline conditions, and the software provider shouldn't discount the idea that the user might wish to rely on them. Alternatively, the software may permit transmission of the link to the browser for subsequent handling by the browser when it is online, i.e., the software can allow the user to tell the software provider to send the link to the user's browser the next time he/she is online. In summary, the software provider will, in an exemplary and nonlimiting case, mandate the following standard for all advertisements submitted by advertisers:
- No larger than 144x144 pixels. Ads smaller than this will be centered in a 144x144 window and surrounded by the standard frame color.
- GIF or JPEG. The software provider advantageously can convert a GIF file to a Portable Network Graphics (PNG) file, but this is transparent to the advertiser. It should be noted that the software provider will not presently accept PNG ads directly, because of the gamma bugs in PhotoShop.
- No larger than 15K. This will reduce the bandwidth required to transmit the ad as well as the goodwill cost of user bandwidth.
- No animation. This is a cornerstone of the "unobtrusive" message to users aspect of exemplary embodiments of the present invention.
- A single URN of not more than 900 characters. There are suspected limits of 1K on URN size. Limiting the customer's URN to 900 characters will allow the software provider to annotate the URN and still stay within the 1K limit.
- A user-friendly title string of not more than 31 characters. This string will be displayed in the link history window, and should be something users will relate to.
- Use of the Web Safe Color Palette. This 216-color palette is optimized for users with 256-color systems, as mentioned above.
It should be mentioned that Toolbar buttons, i.e., the buttons in the upper right-hand corner of Fig. 3A, have the same requirements as standard ads, except for the following:
- Both 16x16 and 32x32 sizes required. These are the sizes the client supports; the software provider needs them both.
- GIF only. The software will not render JPEG images in the toolbar.
With respect to the co-brand spot ad illustrated in the lower right-hand corner of Fig. 3B, the spot has the same requirements as standard ads, except for the following:
- No larger than 95 pixels wide by 31 pixels high.
- GIF only.
One troublesome issue regarding the ad placement illustrated in Fig. 3A is the relative ease with which a user might be able to hide the ads from view by placing a small window directly over the ad. Advantageously, the software performs a check to determine that the ad is both onscreen and uncovered.
If the screen state does not satisfy both of these criteria, the software will either nag the user to uncover the ad or automatically re-order the windows so that the ad is uncovered. If the user persists in covering the ad for a predetermined period of time, the software will automatically devolve to freeware mode. Since one of the major reasons for providing an Adware version of software such as the Eudora e-mail program is to provide a mechanism by which advertisers can subsidize the cost of the software, the software provider is clearly motivated to ensure that all Eudora users are actually looking at the ads. Stated another way, displaying an ad on the screen of the client computer 100a, for example, while the user is in another room does not justify the expense of the ad for the advertiser. For that reason, the software includes functions which permit measurement of the actual time that the user is in front of the computer while the ad is present. Absent some sort of positive ocular fastening device, the best thing the software can do to measure user attention is to monitor for user input to the client computer 100a, thus verifying the user's presence in front of the display device 208. Given that the primary user input devices of the client computer 100a are the mouse 204 and the keyboard 203, the e-mail client will monitor for both mouse and keyboard operation by the user when the Adware version of the Eudora e-mail client is frontmost, and periodically report this activity back to, for example, the software provider. In other words, the user will be considered "present and accounted for" if the mouse moves significantly, if a mouse button's state changes, or if keys are pressed or released. Moreover, the software will consider a period before and after such an event as "face time" for the ad. In an exemplary case of the software according to the present invention, the software measures this period and refers to the total length of this period as kFaceInterval. There is no need to be overly precise about this value; e.g., a kFaceInterval of sixty (60) seconds, which begins with a user event, is employed in the exemplary, non-limiting case being discussed. Having discussed the format of the ads being displayed by the software, a detailed discussion of the methodology by which the ads are actually obtained for display will now be presented. The general methodology for obtaining ads for display is to connect to a QUALCOMM INCORPORATED site during a mail check, or at some other time when the software senses a live network connection, and download ads into a local cache. It will be appreciated that the act of downloading the ad can be the trigger for billing the advertiser, in order to avoid the necessity of collecting billing information from individual clients. In contrast, proprietary systems such as that provided by JUNO upload ad display data to the designated e-mail server whenever the user accesses his/her e-mail account for any reason. In order to make reasonable decisions about how to download ads, the software provider needs to have some idea of what impact the ad downloads will have on users.
In order to assess that impact, the software provider must make assumptions (or gather information) about what a typical Eudora user's habits are, and what the ads will be like in terms of transmission characteristics. Part of the Adware process is to add instrumentation to the software client so that the software provider can begin to answer these questions intelligently, rather than by guesswork. However, one must start with some basic assumptions. For example, Fig. 13A is a table listing the assumptions used in determining the impact of ad transmission on e-mail program operations; Fig. 13B is a table listing the bandwidth requirements in terms of the subscriber base versus the number of new ads to be downloaded each day to the subscribers. The implications of these calculations are as follows. Given that the goal is for the average turnover of an ad to be, for example, three days, the top line in the table illustrated in Fig. 13B would be the one used by the software provider. The worst-case, i.e., maximum bandwidth, scenario would be to turn over, for example, 25 ads a day. These values are highlighted in the table of Fig. 13B. In order to determine what ads are to be shown for a particular user class, as well as in order to transmit particular ad parameters, the software provider advantageously employs a PlayList. The PlayList is in its essence a list of URNs from which to fetch the actual ads, as well as a set of attribute-value pairs, on a per-ad basis. The exact format of the PlayList is discussed in greater detail shortly. PlayLists will specify the complete set of ads the client should have, along with parameters for displaying those ads, as discussed immediately below. It should be noted that ads may appear in a PlayList but not be scheduled for display for a long time (or even at all). The presence of such ads in the PlayList will cause the client to retrieve the ads for storage on the client for future display. The general requirements for the PlayList are as follows:
1) The request for a PlayList will contain information to help the PlayList server determine what ads a copy of Eudora is required to fetch.
2) The PlayList can also contain parameters for Eudora as a whole, including the ability to modify how often New PlayLists are checked for.
3) PlayLists are allowed to specify whether or not they should replace all older PlayLists or merely be merged with them. It should be mentioned that the merge function will allow a more web-like advertising model, e.g., a model employing a rotating ad pool, should the software provider choose to employ such a model.
The basic ad fetch process will now be described while referring to Fig. 14, which is a state flow diagram of an exemplary ad fetch process according to the present invention, and Fig. 1. First, the client software running on client computer 100a identifies itself to the PlayList server 302, e.g., ads.eudora.com. The client software, e.g., the Eudora software, provides to the PlayList server 302 basic client information and the ID of the PlayList the client software currently has installed. The ads.eudora.com server responds with either an indication that the current PlayList is still valid, a Hypertext Transfer Protocol (HTTP) redirect to send the client to a different PlayList server, e.g., another PlayList server 302', or a direct response with the New PlayList from PlayList server 302. See Fig. 14.
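A minimal sketch of this three-way exchange, assuming hypothetical helper names and omitting the XML parsing and error recovery, might look as follows; the redirect and the empty "still valid" response are handled explicitly, as described above.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class PlayListCheck {
        // Posts the XML PlayList Request and returns the New PlayList body,
        // or null if the current PlayList is still valid.
        static String checkForNewPlayList(String serverUrl, String xmlRequest)
                throws IOException {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(serverUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setInstanceFollowRedirects(false); // handle redirects ourselves
            try (OutputStream out = conn.getOutputStream()) {
                out.write(xmlRequest.getBytes(StandardCharsets.UTF_8));
            }
            int status = conn.getResponseCode();
            if (status == HttpURLConnection.HTTP_MOVED_TEMP
                    || status == HttpURLConnection.HTTP_MOVED_PERM) {
                // The server is directing the client to a different PlayList server.
                return checkForNewPlayList(conn.getHeaderField("Location"), xmlRequest);
            }
            try (InputStream in = conn.getInputStream()) {
                String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                // An empty response means the current PlayList is still valid.
                return body.isEmpty() ? null : body;
            }
        }
    }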
In the event that the New PlayList is received from PlayList server 302, the client software compares the New PlayList with its current set of ads, and begins fetching ads not resident in the e-mail client's ad cache from one or more ad servers, e.g., the ad server 303 illustrated in Fig. 1, according to URNs included in the PlayList. The client software also deletes ads not currently appearing in the PlayList. Advantageously, the client software performs a check for a New PlayList every three days. It should be mentioned that the 3-day interval between PlayList checks is arbitrary and applicable only to the exemplary preferred embodiments of the present invention being discussed. It should also be mentioned that the ads preferably will be fetched as needed to fill the PlayList, possibly over many mail checks. Moreover, the ad fetch process will be limited to one minute per mail check, irrespective of the tasking of either the e-mail client software or the client computer 100a. After one minute, the client software will disconnect from the ad server 303. This will often mean that the e-mail client software has not filled the PlayList when the ad fetch operation is terminated. This is acceptable. The software will utilize the available ads while the remaining ads are being downloaded. Furthermore, the software provider advantageously can provide for multiple servers on a peer with the ads.eudora.com server 303. It will be appreciated that these servers will provide extra ads for some Eudora user communities, e.g., all of the users at a company serviced by one ISP, etc. Stated another way, an ISP which provides additional services such as local and long distance telephone access may wish to cross-promote these services to its own customer base. Thus, the ISP advantageously can contract for such localized promotion. The PlayLists transmitted to the ISP's branded Adware e-mail clients would be linked to an ad server 303" maintained by the ISP in that instance. Given a set of available ads, the software still needs to choose which ad to display next. It will be appreciated that this is a matter of much excitement in the Web ad industry, where many choices are allegedly made to maximize the profit of the advertiser. In particular, ads that generate better user response are preferred because such ads generate extra revenue; such ads are frequently tied to the content of the Web page upon which they are displayed. However, it is unlikely that either the software provider or the client software will be able to derive a significant benefit from the ad scheduling algorithms currently run by ad services. This is in part due to the fact that the ads being displayed by the e-mail client software are divorced from the content being displayed, i.e., neither the software provider nor the client software is cognizant of the content of any particular ad that the user is looking at, and in part due to the fact that the e-mail client software will be requesting ads in a batch for later display, rather than requesting them in "real time". As mentioned above, the PlayLists provide certain global inputs to the ad scheduling algorithm, including the parameters listed in the table immediately following.
FaceTimeQuota: The amount of time per day that the e-mail client software is supposed to show the ad.

RerunInterval: The age beyond which ads should not be "rerun" after the "runout", i.e., maximum permissible, time is exhausted.

In addition, the per-ad inputs in the PlayList associated with ad scheduling are set forth in the following table.

ShowFor: This is the number of seconds the ad should be shown for at any given time. This number might be small, like a TV ad (e.g., 30), or large, more like a billboard (e.g., 3600 for one hour, uninterrupted).

ShowForMax: Maximum total amount of time to show this ad. The ad is exhausted after this time, and should be discarded once new ads arrive.

DayMax: Maximum number of times per day to show this particular ad.

BlackBefore: The amount of time the ad window should be blank before the ad is displayed.

BlackAfter: The amount of time the ad window should be blank after the ad is displayed. BlackAfter runs concurrently with the blackBefore of the next ad, so that the actual time between ads is max(blackAfter, blackBefore), not blackAfter + blackBefore.

StartDT: Date/time (time zone optional) before which the ad should not run.

EndDT: Date/time (time zone optional) after which the ad should not run.

There are some values the software provider computes that are also input to the scheduling algorithm. These global values are listed in the table which follows.
AdFaceTimeToday: The total amount of ad face time for the current day during which regular ads have been shown.

TotalFaceTimeToday: The total amount of face time for the current day.

The software also keeps track of, and reports to the software provider, these values for every ad:

NumberShownToday: The number of times an ad has been shown on the current day.

ThisShowTime: The amount of face time the current ad has received.

LastShownDate: The last date/time that the e-mail client software showed this ad.

Advantageously, the software provider implements three major states of the ad scheduler: the regularState, the runoutState, and the rerunState. In the regularState, the e-mail client software advantageously is showing regular ads and accounting for them. It will be appreciated that this is what actually generates charges for the bulk of the ads displayed on the e-mail client. In contrast, the runoutState is selected when the e-mail client software has shown enough regular ads to fill the assigned faceTimeQuota, and the ad cache includes one or more runout ads available for showing. In the rerunState, the e-mail client software has exhausted both its regular ad quota and the runout ads, i.e., the e-mail client software is now reshowing the regular ads, but the software provider is not charging for them. It should be mentioned here that the software provider advantageously can provide a custom installer to various ISPs, book publishers, etc., that will label or brand the copies of Eudora that they distribute. The software provider will then credit these distributors with a percentage of the ad revenue generated by the clients they distribute. It will be appreciated that these credits may be offset by cross-promotional activities associated with each branded version of the Adware e-mail client, for the reasons previously discussed. Given the discussion presented immediately above, a more detailed explanation of various aspects of the exemplary e-mail client software according to the present invention can now be provided. As previously noted, the PlayList is a way to control the fetching and display of ads in software, e.g., in the Eudora e-mail client. The primary benefits associated with the PlayList are the separation of ad parameters from ad images, insulation of the Eudora client from intimate knowledge of ad image servers, and centralized server intelligence in ad distribution, without requiring user registration or centralized user databases. Thus, it will be appreciated that PlayLists are extremely malleable objects. In an exemplary case, the PlayLists can exert varying degrees of control over how the Eudora client behaves, from specifying the exact set of ads Eudora runs to simply transmitting abstract URNs which will choose their own ads. If PlayLists are used to their fullest advantage, they will give the software provider a powerful tool for controlling ad display in software such as Eudora; if PlayLists are later deemed irrelevant, the PlayLists cost the software provider one extra, brief network connection per day.
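Returning briefly to the three scheduler states introduced above, the selection logic can be sketched as follows; the enum and parameter names are illustrative, while the inputs (the face time quota, the availability of a runout ad, and the rerun fallback) are taken from the preceding discussion.

    public class AdScheduler {

        enum State { REGULAR, RUNOUT, RERUN }

        // adFaceTimeToday:   seconds of regular-ad face time already shown today
        // faceTimeQuota:     seconds per day to devote to regular (charged) ads
        // runoutAdAvailable: whether the cache holds an unexhausted runout ad
        static State selectState(int adFaceTimeToday, int faceTimeQuota,
                                 boolean runoutAdAvailable) {
            if (adFaceTimeToday < faceTimeQuota) {
                return State.REGULAR; // charged ads, filling the quota
            }
            if (runoutAdAvailable) {
                return State.RUNOUT;  // quota filled; show the runout ad
            }
            // Quota and runout exhausted: reshow regular ads without charge,
            // subject to each ad's rerunInterval (not checked in this sketch).
            return State.RERUN;
        }
    }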
As discussed above with respect to Figs. 1 and 14, the client computer 100a connects to a PlayList server 302 (which may redirect to a different server 302') via a network 200. Then, the PlayList server 302 returns a PlayList to the client computer 100a via the network 200. Subsequently, the e-mail client software on the client computer fetches the ads specified in the PlayList. The PlayList Request, which is sent by the Eudora client to the PlayList server 302 in order to initiate the ad fetch process, is not a simple burst of binary code. The PlayList Request is a block of extensible markup language (XML) code employed to provide the server 302 with sufficient information to build or select the proper New PlayList for the user. The information in the PlayList Request is shown in the following table.

UserAgent: This is a string identifying the application requesting the PlayList, its version number, and the platform on which it is running.

PlayList(s): This identifies the PlayList(s) that the client is currently using. This may have multiple values if the client is working off more than one PlayList.

Entry: A list of the id's of the ads recently shown by this client. The entries are nested inside the PlayList to which they belong. Each entry can have zero or more of the following associated attributes or types (the number following the equal sign (=) indicates an exemplary value attached to the attribute which is used to achieve the description of the entry attributes provided below):
- Active="0": The ad is no longer being shown.
- IsRunout="1": The ad is a runout ad. This saves the server having to do a lookup on the ad.
- IsSponsor="1": The ad is a sponsorship ad, to be shown in place of the QUALCOMM logo. See Fig. 3B.
- IsButton="1": The ad is a toolbar button.
- Deleted="1": The ad has been hidden by the user. This is allowed only for toolbar ads.

FaceTime: This lists the amount of face time the user has used in the last seven calendar days. This allows the server to determine how many ads the client is likely to be able to display. The value for the current day is the greater of today's value (see FaceTimeUsedToday) and last week's value for today.

FaceTimeLeft: This is a total of the amount of face time requested by the ads still left in the client's ad cache.

FaceTimeUsedToday: This is the amount of face time the client has used toward displaying ads today. It can be used by the server to determine whether a date-critical ad can be shown today.

DistributorID: This id is used for the bounty system, so that the PlayList Server can identify and credit, commission or otherwise reward the ISP or other organization that distributed this copy of Eudora.

Pastry: This is a cookie the PlayList Server gave to the Eudora e-mail client in the past. It could contain any state information/settings the server wishes to save.

Profile: Profiling information originally entered on the software provider's web page and subsequently/concurrently stored with the e-mail client.

Screen.height: The height of the display on which the ads are shown, in pixels.

Screen.width: The width of the display on which the ads are shown, in pixels.

Screen.depth: The color depth of the display on which the ads are shown, in colors/bits per pixel.

PlayListVersion: The version number of the PlayList routine employed by this particular client.

It will be appreciated that not all of these parameters are likely to be actively used at the same time; some are present to support particular modes of operation (see below), and will not be used in other modes. It should be mentioned here that every PlayList Request is checksummed with MD5. See RFC 1321, "The MD5 Message-Digest Algorithm", at http://www.faqs.org/rfcs/rfc1321.html. The PlayList server 302 preferably ignores requests that fail checksum verification. After the client makes a PlayList Request, the server 302 replies with a PlayList Response. Preferably, the PlayList Response is divided into two major sections: the ClientInfo section, which updates general client behavior regarding ads, i.e., the speed with which the ads turn over, and the New PlayList itself, which describes the ads the client should fetch.
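Although the exact schema is not reproduced here, a PlayList Request built from the parameters tabulated above might look something like the following sketch; the tag names and nesting are assumptions for illustration only.

    <!-- Hypothetical PlayList Request; element names are illustrative only. -->
    <PlayListRequest>
      <UserAgent>Eudora/4.3 (Windows)</UserAgent>
      <Screen height="768" width="1024" depth="8"/>
      <FaceTimeUsedToday>1800</FaceTimeUsedToday>
      <DistributorID>ISP-0042</DistributorID>
      <PlayList id="8675309">
        <Entry adId="1138" active="0"/>
        <Entry adId="2001" isRunout="1"/>
      </PlayList>
    </PlayListRequest>

As noted above, the whole request would be checksummed with MD5 before transmission, and the server would ignore a request whose checksum fails verification.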
It should be mentioned that the PlayList Server, e.g., server 302, may also return an empty response, meaning that the e-mail client should continue on its course with the ads it already has. It should also be mentioned that every PlayList Response is checksummed with MD5, just as the PlayList Request is. The MD5 digest is encoded in hexadecimal and put in a "CheckSum" header in the PlayList Response. Advantageously, the e-mail clients ignore PlayLists that fail checksum verification. Before describing the sections of the PlayList Response, it should be mentioned that the e-mail client sometimes becomes, for lack of a better term, befuddled due to old client bugs, server bugs, etc. Sometimes the bad data inherited by even an updated client is too garbled for the system to function properly. While the client could be programmed to detect this condition, it is preferable to leave the task, i.e., error detection, to the server, which can be changed more easily. Thus, when the server detects that a client is "befuddled," the PlayList server 302 responds with just a single command: reset. No ClientInfo should follow, no PlayList should follow, just the reset command. On receiving the reset command, the client discards its accumulated ad databases and records, including PlayLists, faceTime history, ad history, ad caches, etc. Everything is reset to the pristine condition that the e-mail client software had before the Adware software was run for the very first time. It should be mentioned that the Link History is exempted from the reset command, both for reasons of practicality and because it is so user-visible. The only other item of ad data that reset does not affect is the ad failure counter, which should be retained across a reset. The client should then recognize that it has no PlayList, and make another request to the PlayList Server for the needed PlayList. The ClientInfo section updates various client parameters. The parameters are listed immediately below.

ReqInterval: This is the number of hours the client should wait before checking for a New PlayList. If ad turnover is high, this will be a small number. A sponsored freeware version might have a much higher number here, so that it checked for a New PlayList only once a week or once a month. Clients may also check for New PlayLists if they have ads with nonzero showForMax values, and the ads have used up much of their time.

HistInterval: This value is the number of days the client must remember that it showed a particular ad. It will report this to the PlayList server so that the server can, at its discretion, choose not to direct the showing of ads for competing services to that particular client; competing ads are separated from one another by the HistInterval value.

Pastry: The previously mentioned cookie. The server can store whatever state information it wishes in this cookie.

Flush: More command than parameter; if present, it causes the client to discard an old PlayList or ad. Flushed ads and PlayLists are removed completely, and no longer show up in ad histories.

Width: The width in pixels the client should make the ad window.

Height: The height in pixels of same.

FacetimeQuota: The number of seconds of facetime the client should devote to regular ads, before moving to the runout ad.

RerunInterval: The number of days an ad may be "rerun"; that is, shown for free after all other ads and the runout are exhausted. The time is measured from the last non-rerun showing of the ad.

From the discussion above, it will be appreciated that the ClientInfo section is a powerful feature of PlayLists. It allows the software provider to control the application in a global way, including segueing smoothly from one ad model to another. It will be appreciated that if this were the only benefit the software provider derived from PlayLists, it alone would make implementation of PlayLists worthwhile. As mentioned above, the PlayList Response is divided into two major sections: the ClientInfo section, which updates general client behaviors, and the New PlayList itself, which describes the ads the client should fetch. The New PlayList itself has one global value, PlayListID. This id is the id value that the client returns to the PlayList server the next time the client computer 100a connects to the PlayList server 302. It will be appreciated that this PlayListID advantageously can be included in the PlayList Request, or can be separately uploaded to the PlayList server in a myriad of forms, e.g., as a cookie. The remainder of the PlayList is a list of ads. Each ad is allowed to have many parameters, although it is likely that not all of them will be used with any single ad, and it is possible that some of them will never be used at all. The parameters include the scheduling parameters, which are described in detail above, and ad information, which includes the information listed immediately below.
AdID: A unique identifier for the ad in question. A 64-bit integer, the top 32 bits of which are a server authority id, the bottom 32 bits of which are an identifier unique to the server authority.

Title: A human-friendly string used to refer to the ad.

Src: A URN indicating where to get the actual ad to show. This might be highly specific (e.g., http://media48.doubleclick.net/eudora/coke/drinkcoke.gif) or it might be much more general (e.g., http://ads.doubleclick.net/eudora/ad;ord=136784421?). Another important PlayList feature is that the PlayList permits the client software to pull ads from many different servers. The software provider could, for example, run its own servers in parallel with those belonging to DoubleClick, and take ads from each server, or some of the servers, based on the PlayList. There can be a checksum attribute on the src tag. If present, its value is a hexadecimal-encoded MD5 digest of the ad data. The client may check this checksum against the ad data.

IsButton: Is this "ad" a toolbar button? If so, it will be scheduled separately from the main ads. The only scheduling parameters that are meaningful for toolbar buttons are startDT and endDT.

IsSponsor: Is this "ad" a sponsor placard? If so, it will be scheduled separately from the main ads.

IsRunout: Is this ad intended to be run after all other ads have exhausted their runs for a given day? There will only be one active isRunout ad in any client's collection of PlayLists.

URN: The Uniform Resource Name of the server (e.g., a Web site address) to which the user is directed when he/she clicks on the ad.

It should be mentioned that the term Uniform Resource Name (URN) indicates a generic set of all names/addresses that are short strings that refer to resources available via the Internet.
Thus, URN encompasses both a Uniform Resource Locator (URL), which is a subset of URN schemes that have explicit instructions on how to access a particular resource on the Internet, and a Uniform Resource Identifier (URI), which is another subset of URNs. It will be appreciated that the URL and URI subsets may overlap. It will also be appreciated that the terms URN, URL, and URI advantageously can be used interchangeably; whichever term is used is meant to address the named resource in its broadest possible sense. It has been mentioned in passing that not all parameters are likely to be used at one time. In fact, PlayLists are flexible enough to support many ad models. PlayLists are crucial to some ad models; to others they are helpful but not central; to still others they are marginally useful, but do not present significant impediments. The use of PlayLists does not predispose the software provider towards any specific ad model; the PlayLists advantageously can be used to support any ad models that the software provider chooses. Indeed, PlayLists permit the software provider to switch between ad models midstream, should the software provider decide to do so. In the discussion that follows, several ad models will be discussed with respect to Figs. 16A and 16B in an effort to illustrate how PlayLists would be used for each ad model. It will be appreciated that this will demonstrate the essential neutrality of the PlayList concept to the ad model. Fig. 16A illustrates the ad model associated with persistent ads while Fig. 16B depicts the parameters associated with a short-lived ad model. One thing to notice here is how few of the parameters from any of the sections appear in the chart; it will be appreciated that varying as few as five parameters advantageously causes the Adware to shift between these two distinct ad modes. That is because the remaining parameters are largely not relevant to the choice of ad model; they will either be used or not, irrespective of the ad model. For example, the software provider can implement blank space after an ad in any model, and the software provider can eschew blank space after an ad in any model. Most of the parameters fall into this it-just-doesn't-matter category. With respect to the short-lived ad model, it will be appreciated that the software provider accepts many ads, whether from many advertisers or only a few advertisers. Ads do not persist for many days; they are used up and discarded at a relatively rapid rate. In this model, PlayLists will be used additively. Each time the client runs low on ads, it will ask for another PlayList, which will describe a few more ads to mix with the client's existing ads. When ads exceed their allotted time, the ads are discarded. In this ad model, the PlayList server really only serves to transmit parameters for ads. However, that is acceptable, since the parameters have to be transmitted somehow, after all. Suppose the software provider wants to mix ad models, e.g., desires to provide a mix of long-running ads and short-lived ads. How this situation is handled depends on the stoichiometry. If the cache is or will be filled with mostly persistent ads and only a few short-lived ones, the software provider can merely increase the reqInterval and use PlayLists as in the Persistent Ad Model. In other words, the software provider merely picks a few random ads to go on each PlayList, and picks a few more random ads to go on the next PlayList, which the client will fetch the next day.
If, on the other hand, the cache will contain mostly short-lived ads and only a few persistent ads, the computer system 10 will use multiple PlayLists. One PlayList will list the persistent ads, as discussed above; the remaining facetime will be filled using PlayLists of short-lived ads. The above discussion illustrates how PlayLists can be used to support widely differing ad models. The reason PlayLists can do this is that they are really only an extra level of server control in between Eudora and its ads. Given the importance of ads to Adware e-mail software, one of the software provider's key concerns is "what happens if the Adware does not receive ads?" For example, users or ISPs may simply shut off the flow of ads to Eudora by using firewalls or other means. Alternatively, the user may simply delete ads or PlayLists (or both) from, for example, his/her computer on a random or periodic basis. If this happens, then users will have no ads to display, i.e., the users get the full-featured version of Eudora without either seeing ads or paying. This would defeat one significant aspect of the exemplary software according to the present invention. On the other hand, users may have hardware or software problems or other issues that keep them from fetching ads, or the software provider's ad servers might even be down for some reason. Users should not be punished for this. The software provider will distinguish between these two situations by asking a simple question, i.e., is the user sending or receiving mail? If the answer is yes, the software provider will assume that the blocking of ads is something the software provider needs to address. The way the software provider addresses this issue is with an escalating series of Ad Failure Nags. These will continue for two weeks or until the software receives ads. For every two days the software does receive ads, the software will decrement the Ad Failure Nag timer by one day. If the timer runs out, the software will display an apology to the user, revert to the Freeware version, and mark the user's software as owned by a Deadbeat User. Deadbeat Users will only be allowed to return to Adware if the ad server can be connected to at the time the user attempts to return to Adware. See Figs. 17A-17C. It should be noted that if the software provider should ever decide to retire Eudora and wish to let people use it without ads, the software provider can simply publish a permanent registration code. Alternatively, the e-mail client advantageously includes several more sophisticated functions for determining that an ad failure condition requires the employment of the Ad Failure Nag discussed above. For example, the client device can identify an ad download failure condition when a corresponding ad download function has failed to download ads during a predetermined period of time. In addition, the e-mail client device can identify an ad display failure condition when a corresponding ad display function has failed to display ads for a predetermined time period, e.g., the time(s) specified in the New PlayList received from the PlayList server and/or the current PlayList(s) stored for use by the e-mail client device. Either condition invokes the Ad Failure Nag function discussed above.
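The bookkeeping behind the escalating Ad Failure Nags can be sketched as follows; the field and method names are hypothetical, while the two-week limit and the rule that two days of received ads credit back one day of the timer come from the text above.

    public class AdFailureTracker {

        private static final int FAILURE_LIMIT_DAYS = 14; // two weeks
        private int failureTimerDays = 0;  // days accumulated without ads
        private int daysWithAdsStreak = 0; // consecutive days ads arrived
        private boolean deadbeat = false;

        // Called once per day; mailActivity distinguishes blocked ads from a
        // user who is simply offline and should not be punished.
        void endOfDay(boolean adsReceivedToday, boolean mailActivity) {
            if (!mailActivity) {
                return;
            }
            if (adsReceivedToday) {
                if (++daysWithAdsStreak == 2) {
                    failureTimerDays = Math.max(0, failureTimerDays - 1);
                    daysWithAdsStreak = 0;
                }
            } else {
                daysWithAdsStreak = 0;
                failureTimerDays++;
                showAdFailureNag(failureTimerDays); // escalating series
                if (failureTimerDays >= FAILURE_LIMIT_DAYS) {
                    // Apologize, revert to the Freeware version, and mark the
                    // software as owned by a Deadbeat User.
                    deadbeat = true;
                    revertToFreeware();
                }
            }
        }

        private void showAdFailureNag(int severity) { /* UI hook */ }
        private void revertToFreeware() { /* mode-switch hook */ }
    }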
One of the things the software provider will need to know is that the ads the software provider thinks are being displayed are actually being displayed, thus confirming that the ads are being displayed as frequently and for as long as the software provider thinks they are being displayed. It will be appreciated that this will be crucially important to maintaining credibility with advertisers.An exemplary audit scheme contains the following features: (Keep a rotating log of ad displays. This log will be rolled over once per week. The log will record ad-related events--when an ad was displayed, when it was removed, and when it was clicked on--in addition to other events, like cumulative face time in Eudora, cumulative run time, etc. At random, ask the user for permission to transmit the log. At a frequency of one out of every hundred users per month, ask for the user's permission to return the log to the software provider. If the permission is given, the log will be formatted in ASCII, placed in an outgoing message, and queued. The user will be given the opportunity to inspect and, if he/she desires, cancel the log collection. See Fig. 18A. (For selected users, deliver a pastry. In addition to the random send of the log, the software provider will also, at random, ask particular users for their permission to audit transactions in detail with the server. This will allow the software provider to correlate client and server behavior. Additional details on instrumentation applicable to the exemplaryEudora e-mail client software is provided in Figs. 18B-18E. The various state flow diagrams illustrated, for example, in Figs. 5A, 6A, 7A, 8 and 9, referred to a plurality of web pages, i. e., HTML pages that can be accessed and retrieved from one of the software provider's servers, e. g., registration server 301. See Fig. 1. The general purposes of these pages and theURNs which the software uses to access these pages will now be described in greater detail below. It will be appreciated that it will be helpful for the client to give the server information to help the server direct the user to the proper location or to assist the user by prefilling certain items on Web page based forms. That is the function of the query part of the URNs. The elements that might go in query parts are listed below. It will be noted that the query parts are divided into two groups. The first group includes items which are considered personal, and great care should be taken to transmit them only when appropriate; the second group includes items which are not considered to be privacy-sensitive. <tb>Realname <SEP> The <SEP> Real <SEP> Name <SEP> field <SEP> from <SEP> the <SEP> user's <SEP> Dominant <SEP> e<tb> mail <SEP> personality. 
<SEP> (EP4 <SEP> supports <SEP> multiple <SEP> e-mail<tb> personalities <SEP> for <SEP> IMAP4 <SEP> (both <SEP> POP3) <SEP> e-mail<tb> accounts.)<tb> Regfirst <SEP> The <SEP> first <SEP> name <SEP> under <SEP> which <SEP> the <SEP> user <SEP> registered <SEP> last<tb> time <SEP> (if <SEP> any).<tb>Reglast <SEP> The <SEP> last <SEP> name <SEP> under <SEP> which <SEP> the <SEP> user <SEP> registered <SEP> last<tb> time <SEP> (if <SEP> any).<tb>Regcode <SEP> The <SEP> user's <SEP> current <SEP> Eudora <SEP> registration <SEP> code <SEP> (if <SEP> any).<tb>OldReg <SEP> The <SEP> user's <SEP> old-form <SEP> RegCode.<tb> e-mail <SEP> The <SEP> e-mail <SEP> address <SEP> from <SEP> the <SEP> user's <SEP> Dominant<tb> personality.<tb>Profile <SEP> The <SEP> profile <SEP> information <SEP> the <SEP> user <SEP> has <SEP> entered.<tb> Destination <SEP> This <SEP> is <SEP> the <SEP> URN <SEP> which <SEP> the <SEP> user <SEP> wishes <SEP> to <SEP> visit.<tb> Adid <SEP> This <SEP> is <SEP> the <SEP> id <SEP> of <SEP> an <SEP> ad <SEP> on <SEP> which <SEP> the <SEP> user <SEP> clicked.<tb>Platform <SEP> MacOS, <SEP> Windows, <SEP> Palm, <SEP> Nintendo <SEP> 64, <SEP> etc.<tb> <tb>Product <SEP> The <SEP> software <SEP> provider's <SEP> code <SEP> name <SEP> for <SEP> the <SEP> product<tb> being <SEP> registered. <SEP> Eudora, <SEP> PDQMail, <SEP> etc.<tb>Version <SEP> The <SEP> version <SEP> number <SEP> of <SEP> the <SEP> product <SEP> being <SEP> used <SEP> to<tb> register. <SEP> This <SEP> should <SEP> be <SEP> of <SEP> the <SEP> form<tb> Major. <SEP> Minor. <SEP> Bugfix. <SEP> Build.<tb>DistributorID <SEP> This <SEP> will <SEP> be <SEP> a <SEP> code <SEP> which <SEP> sites <SEP> may <SEP> apply <SEP> for, <SEP> which<tb> will, <SEP> in <SEP> turn, <SEP> allow <SEP> the <SEP> site, <SEP> i. <SEP> e., <SEP> its <SEP> controlling<tb> entities, <SEP> to <SEP> receive <SEP> a <SEP> continuing <SEP> revenue <SEP> stream <SEP> in<tb> return <SEP> for <SEP> providing <SEP> users <SEP> with <SEP> this <SEP> custom<tb> branded <SEP> copy <SEP> of <SEP> Eudora.<tb>Action <SEP> What <SEP> it <SEP> is <SEP> the <SEP> user <SEP> has <SEP> requested <SEP> to <SEP> do; <SEP> register, <SEP> pay,<tb> lostcode, <SEP> etc.<tb>Mode <SEP> Either <SEP> Payware, <SEP> Adware, <SEP> or <SEP> Freeware.<tb>Topic <SEP> Used <SEP> for <SEP> support <SEP> items, <SEP> this <SEP> tells <SEP> the <SEP> server <SEP> what<tb> particular <SEP> kind <SEP> of <SEP> support <SEP> is <SEP> needed.<tb> Typically, all of the software provider's non-ad URNs begin with: http ://jump. eudora. com/jump. cgi ? action=whatever The"action"value determines what function the user wishes to perform.The software provider then appends various other query parts to the URN, suitably %-escaped, i. e., separated by a percentage (%) or ampersand ( & ) symbol (for example), according to the chart illustrated in Fig. 19. A brief discussion of each type of web page referenced in Fig. 19 is provided immediately below. <tb>PAYMENT <SEP> This <SEP> web <SEP> page <SEP> should <SEP> take <SEP> the <SEP> user's <SEP> credit <SEP> card<tb> WEB <SEP> PAGE <SEP> info, <SEP> name, <SEP> e-mail <SEP> address, <SEP> and <SEP> whatever <SEP> other<tb> information <SEP> the <SEP> software <SEP> provider <SEP> wants <SEP> to<tb> compile <SEP> about <SEP> its <SEP> users. <SEP> It <SEP> will <SEP> also <SEP> ask <SEP> them <SEP> for<tb> a <SEP> question <SEP> and <SEP> answer <SEP> for <SEP> use <SEP> if <SEP> they <SEP> ever <SEP> lose<tb> their <SEP> payment <SEP> code. 
It should return, e.g., display and also e-mail, their official registration name and registration code.

FREEWARE REGISTRATION WEB PAGE: This web page should take the same info as the Payment web page, minus the credit card information. It should send back (that is, display and also e-mail) their official registration name and registration code.

ADWARE REGISTRATION WEB PAGE: This web page should take the same info as the Payment web page, minus the credit card information. It should send back (that is, display and also e-mail) their official registration name and registration code.

BOX REGISTRATION WEB PAGE: This web page exists to accept registrations generated by Box or updater installers. It should simply accept the user's code, validate it, mail it back, and display a "thank you for registering" page or dialog box.

LOST CODE WEB PAGE: This web page helps users find their registration codes. When they register/pay, they'll be asked to provide their name, e-mail address, and a question and answer. When they come to the lost code page, they'll be asked for name and address, and if that matches, they'll be asked their question. If all that goes well, their RegCode will be mailed to them. If they can't receive mail, they'll have to call.

UPDATE WEB PAGE: This web page should list the updates that are available to the user. Ideally, it would list only those updates the user does not already have, and clearly indicate which updates are free and which updates the user needs to pay for. This web page will be downloaded to the user's system from time to time and displayed "off line" in Eudora, and so it should be kept small.

ARCHIVED VERSIONS WEB PAGE: This web page should list all versions of Eudora, so that users can download whatever they happen to need.

PROFILE WEB PAGE: The purpose of this web page is to collect demographic information so that ads delivered to the user can be more precisely targeted by advertisers. At this page, the user will be asked a series of questions about his/her personal preferences, habits, etc., e.g., buying habits, sleeping habits, preferences in clothing, etc. No information identifying the user is to be collected on this page! The information will be reduced to a cookie, mailed to Eudora and stored as part of the user's settings in the Eudora directory (folder). The procedure for accepting a profile is the same as the procedure for accepting a registration code, detailed below.

SUPPORT WEB PAGES: The software provider will need several web pages for resolving user problems. For these pages, the software provider will use the "topic" part of the query to direct users to situation-specific help as needed.

Having discussed the client side of the overall system illustrated in Fig. 1, it is now time to turn to the server side of the system. The network will not be discussed in detail, however, as it is well known in the art. In particular, the PlayList Server (PLS) or Servlet, i.e., the applet responding to the PlayList Request, shall now be described in detail. The PLS is a server-side program which services HTTP requests and returns HTTP responses. It will be appreciated that each request launches a different thread, and that the data format of communications between the client and the PLS is XML-encoded in the exemplary embodiment. The PLS advantageously can be instantiated using the following Java® packages.

XP: XP is an XML 1.0 parser written in Java. The parser checks a given XML document for well-formedness and validity. Additional information is available from http://www.jclark.com/xml/xp/. The PLS uses the XP parser for: (1) parsing the client request to ensure that it is valid; and (2) parsing the PlayList Response to ensure that it is valid.

SAX: SAX (Simple API for XML) is a standard interface for event-based XML parsing; the parser reads the XML document line by line and initiates events that contain information about the line that was just read. The PLS listens to particular events of interest and extracts the data from the XML document in that way. Additional information is available from http://www.megginson.com/SAX/. The PLS uses the SAX interface both in the XML request and the XML response. In the request, the PLS "looks" for specific tags to build the request object. In the response, the PLS sends events to generate the PlayList XML response.

MM.MySQL: MM.MySQL is a Java Database Connectivity (JDBC) Type-4 driver, i.e., an all-Java driver that issues requests directly to the PlayList server database. It will be appreciated that this is the most efficient method of accessing the database. The JDBC API is made up of classes and interfaces found in the java.sql and java.text packages. Additional information is available at http://www.worldserver.com/mm.mysql/. The PLS uses the JDBC methods to: (1) establish connection(s) to communicate with the database using JDBC (the PLS first establishes a connection through the appropriate JDBC driver; the connection object can be used to perform all operations on the given database; in an exemplary case, the PLS will create a pool of connection objects during the Servlet initialization); and (2) execute SQL statements and retrieve results (the PLS performs a SQL query to the database using both Statement and PreparedStatement objects).
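By way of a non-limiting illustration, the connection and query pattern summarized in the table above might be exercised as follows. This is a minimal sketch, assuming the historical MM.MySQL driver class name; the database URL, credentials, and the Ads table and its columns are hypothetical stand-ins, not part of the actual PLS schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PlayListDb {
        public static void main(String[] args) throws Exception {
            // Load the MM.MySQL Type-4 JDBC driver (class name assumed).
            Class.forName("org.gjt.mm.mysql.Driver");
            // Open a connection to a hypothetical PlayList database.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/playlist", "pls", "secret");
            // A PreparedStatement can be reused for repeated queries.
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT adId, faceTime FROM Ads WHERE active = ?");
            ps.setBoolean(1, true);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("adId") + " " + rs.getInt("faceTime"));
            }
            rs.close();
            ps.close();
            conn.close();
        }
    }

In an actual deployment the connection would come from the pool created at Servlet initialization rather than from DriverManager on every request, consistent with the table entry above.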
What follows is an explanation of task flow in the PLS when the Servlet doPost method is invoked. See Fig. 20. The PLS parses the XML request and builds objects that represent the client update request. It will be noted that data access is performed using SAX. When logging the client request, the PLS stores the client request information in a so-called ClientUpdate table (not shown). It will be appreciated that the PlayList Request can be received from a plurality of e-mail clients residing on the client computers generally denoted 100n through any given day. When issuing the same SQL statement repeatedly, it will be appreciated that it is more efficient to use a PreparedStatement rather than generating a new Statement in response to a query. In the logging operation, the software provider advantageously can employ the following semantic to avoid repetitive Statement generation:

    PreparedStatement ps = conn.prepareStatement("INSERT INTO ClientUpdate (date, userAgent, PlayListId, ...) VALUES (?, ?, ?, ?, ...)");

It should be mentioned that in generating a New PlayList, the Servlet advantageously can employ both SQL queries and programming filtering. It will also be appreciated that these processes are synchronized in order to prevent conflicts when accessing the database. Appropriate pseudo code for generating a PlayList is depicted in Figs. 21A and 21B. The first block of pseudo code in Fig. 21A generates an ad list. It will be appreciated that the ad list generated by the first block of pseudo code holds all the image ads that are active and can be delivered within a predetermined time frame. The second block of pseudo code listed in Fig. 21A calculates the time needed to deliver the ads. The third block of pseudo code, which is illustrated in Fig. 21B, determines additional ads which can be used to fill the available facetime. In other words, if the e-mail client software has remaining time to fill, the generated PlayList will automatically fill the available time with runout ads, i.e., find a runout ad which is not in the ads history and which also fits into the goal show time left.

When generating XML, it is often useful to generate comments, processing instructions, and so on. The package XP Writer provides a set of methods for creating specific kinds of nodes in the output XML code, i.e., file. The following is a short list of the kinds of methods the PLS employs in generating the XML output (see the sketch below):
- start element: starts an element (writes a start-tag);
- end element: ends an element (writes an end-tag) or closes the current start-tag as an empty element;
- attribute: adds attributes to a tag in name-value pair format;
- comment: writes a comment.
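By way of illustration only, a writer in the spirit of the methods listed above might be sketched as follows; the XP Writer API itself is not reproduced here, and all class and method names are illustrative.

    // Minimal sketch of an XML writer in the spirit of the methods above.
    public class XmlSketch {
        private final StringBuilder out = new StringBuilder();

        // Writes a start-tag, optionally with one name="value" attribute pair.
        void startElement(String name, String attrName, String attrValue) {
            out.append('<').append(name);
            if (attrName != null) {
                out.append(' ').append(attrName).append("=\"").append(attrValue).append('"');
            }
            out.append('>');
        }

        void endElement(String name) { out.append("</").append(name).append('>'); }

        void comment(String text) { out.append("<!-- ").append(text).append(" -->"); }

        public static void main(String[] args) {
            XmlSketch w = new XmlSketch();
            w.comment("PlayList response sketch");
            w.startElement("playlist", "id", "12");
            w.startElement("entry", null, null);
            w.endElement("entry");
            w.endElement("playlist");
            System.out.println(w.out);
        }
    }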
The PLS stores the information generated in response to a request in two tables: a PlayList general response table, which holds the client info section and PlayList general information, and a PlayList specific response table, which holds the entry section. It will be appreciated that the PLS advantageously can use the prepared statement API to optimize performance in response to a query. Referring again to Fig. 20, that figure illustrates a class diagram which advantageously describes the representation and rendering of the PlayList, as well as the PlayList Response. It will be appreciated that this class diagram includes repeated XML Write method calls; these method calls are employed by the PLS to generate the XML tags associated with the PlayList. Turning now to Fig. 22, that figure illustrates the major PlayList Servlet classes, which collectively define the PlayList Servlet. More specifically, the PlayListRequest class handles the request and subsequently maps the XML request to the clientUpdate object, while the PlayListResponse class handles the response and writes the clientUpdateResponse back to the client. In addition, the PlayListsGenerate class generates the PlayLists while the DBManager class handles the database connection pool. Additional details are readily apparent from Fig. 22. It will be appreciated from Fig. 23 that all of the storage operations employing the database advantageously can be threaded. As mentioned above, all actions with respect to the database are performed using the MM.MySQL package.

In summary, one exemplary embodiment of the present invention encompasses software for converting a general purpose computer into a specialized PlayList server for supplying a PlayList Response to a client device for exchanging information with an information server system over a communications network and storing ads. More specifically, the software instantiates a PlayList Response generation function for generating a PlayList Response identifying a plurality of selected ads to be presented by the client device, and a first communications function that completes a PlayList Response send communication link with the client device via the communications network over which the PlayList Response is transmitted to the client device, wherein the information server system and the PlayList server are independently controlled. It will be appreciated that, while the PlayList directs the presentation, e.g., display, of ads on the client device, e.g., an e-mail client, the ads advantageously may be delivered to or retrieved by the client device in any number of ways in this preferred embodiment. In this exemplary embodiment, the PlayList Request preferably includes ad identifiers and ad presentation instructions; corresponding uniform resource names (URNs) can be included but may be omitted.

According to another exemplary embodiment, the present invention encompasses software for converting a general purpose computer into a specialized PlayList server for supplying a PlayList Response to a client device exchanging information with an information server system and receiving ads from an ad server over a communications network. The software advantageously includes a PlayList Response generation function for generating a PlayList Response identifying a plurality of selected ads to be presented by the client device, and a first communications function that effects a PlayList Response send communication link with the client device via the communications network over which the PlayList Response is transmitted to the client device. Preferably, the information server system and the PlayList server are independently controlled. It will be appreciated that this exemplary and non-limiting embodiment of the present invention contemplates a specific communications channel between the client device and a dedicated ad server (system) for delivery of ads defined by the PlayList. It will also be appreciated that the PlayList Request employed by this exemplary embodiment includes both information dictating presentation of the ads and/or operation of the client device with respect to ad presentation functions, and the name and URN for ads included in a New PlayList.
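A skeletal Servlet entry point consistent with the task flow described above might look as follows. This is a hedged sketch: doPost is the standard javax.servlet API method, but the parsing, logging, and generation steps are summarized as comments, and the response body shown is a placeholder rather than the actual PlayList Response format.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PlayListServlet extends HttpServlet {
        // Entry point for the PlayList Request, per the flow of Fig. 20.
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // 1. Parse the XML body of the PlayList Request (the PLS uses SAX).
            // 2. Log the request and generate a New PlayList from the database.
            // 3. Render the PlayList Response as XML back to the client.
            resp.setContentType("text/xml");
            resp.getWriter().write("<PlayListResponse/>"); // placeholder body
        }
    }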
According to yet another exemplary embodiment, the present invention provides software for converting a general purpose computer into a specialized PlayList server for supplying a PlayList Response to a client device exchanging information with an information server system and receiving ads from an ad server over a communications network, including:
- a PlayList Response generation function for generating a PlayList Response identifying a plurality of selected ads to be presented by the client device;
- a PlayList Request parsing function for extracting selected information from the PlayList Request;
- a PlayList generation function receiving an output of the database driver function for generating a PlayList for inclusion in the PlayList Response which identifies a plurality of selected ads to be presented by the client device in response to receipt of a PlayList Request;
- a selected information supply function for supplying the selected information to the PlayList Response generation function to thereby initiate the PlayList generation function;
- a first communications function that effects a PlayList Response send communication link with the client device via the communications network over which the PlayList Response is transmitted to the client device; and
- a second communication function that effects a PlayList Request receive function with the client device via the communications network;
wherein the information server system and the PlayList server are independently controlled.

Preferably, the PlayList Request parsing function includes an extensible markup language (XML) parsing function for verifying the well-formedness of the PlayList Request, a PlayList analysis function receiving the PlayList Request after verification by the XML parsing function for generating an object, and a database driver function receiving the object for building a query from the object and applying the query to a PlayList server database. It should be noted that the PlayList Response generation function is initiated by receipt of a PlayList Request, which, in an exemplary case, includes the name of the current PlayList(s) employed by the client device providing the PlayList Request. While each of the numerous client devices connected to an information server generates a PlayList Request, the discussion of this specific aspect of the present invention, i.e., the PlayList server, can best be understood from the point of view of a system including only one client device; the actual implementation of the, for example, e-mail client device contemplates the use of thousands of client devices. The PlayList Request advantageously can include information regarding the currently running PlayList(s) on the client device, and user data fields that store data regarding the progress made by the client device in presenting, e.g., displaying, the ads stored by the client device.
An exemplary and non-limiting list of the information that can be provided to the PlayList server via the PlayList Request includes (a compact object-model sketch of these fields appears below):
- a first user data field identifying a current PlayList;
- a second user data field identifying user demographic data;
- a third user data field identifying user/client device behavior data;
- a fourth user data field identifying usage history of the client device;
- a fifth user data field identifying the respective software operating on the client device;
- a sixth user data field identifying the respective operating system of the client device;
- a seventh user data field identifying the amount of time the user has used the client device over a prescribed time interval;
- an eighth user data field identifying the total amount of display time required for the stored ads that remain to be presented by the client device;
- a ninth user data field identifying the total amount of times that ads were presented by the client device during the prescribed time interval;
- a tenth user data field identifying the dimensions of a display screen associated with the client device; and
- a list of the ad identifiers corresponding to advertisements that have been displayed in the prescribed most recent time interval.

Advantageously, the PlayList Request parsing function can extract selected information from the PlayList Request and employ the selected information and other information, e.g., information provided by the entity controlling the PlayList server, in generating the PlayList Response. It will be appreciated that the PlayList Request may include all or a subset of the information listed immediately above; the PlayList Request parsing function extracts information contained in at least one of the user data fields. In any event, the receipt of the PlayList Request by the PlayList server initiates generation of the PlayList Response. In response to the PlayList Request, the PlayList Response generation function generates one of an action command and the PlayList Response. With respect to the former, the PlayList Response generation function advantageously can generate the action command in response to receipt of a garbled PlayList Request. This can be generally thought of as an error code directing the client device to send a New PlayList Request. It will be appreciated that the action command can include an associated error message, which is presentable to the user by the client device. Alternatively, the action command may cause the client device to delete all of the ads received and/or stored by the client device responsive to a command issued to the PlayList server by an entity controlling the PlayList server. In other words, there are times when the software provider may wish to flush the existing ads; the entity controlling the PlayList server, e.g., the software provider, sends a command to the PlayList server, which command causes the PlayList server to respond with a flush command to either specific PlayList Requests, e.g., PlayList Requests generated by a particular software version, or all PlayList Requests. With respect to the latter, a detailed discussion follows. As discussed above, the PlayList Response advantageously includes both client information, i.e., information regarding how the client device, e.g., a PDA device, is to present, e.g., display, the selected ads, i.e., the ads to be presented during the time period following receipt of the PlayList Response by the client device, and a New PlayList.
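For illustration, the request fields enumerated in the list above might be modeled on the server roughly as follows; all field names are hypothetical stand-ins, not the actual PLS object model.

    import java.util.List;

    // Hypothetical shape of the parsed PlayList Request.
    public class ClientUpdateRequest {
        String currentPlayListId;   // first field: current PlayList
        String demographics;        // second: user demographic data
        String behaviorData;        // third: user/client behavior data
        String usageHistory;        // fourth: client usage history
        String clientVersion;       // fifth: software operating on the client
        String operatingSystem;     // sixth: client operating system
        long   faceTimeUsedSecs;    // seventh: time the user used the client
        long   adTimeRemainingSecs; // eighth: display time left in stored ads
        int    adsShownCount;       // ninth: times ads were shown in the interval
        String screenDimensions;    // tenth: client display dimensions
        List<String> recentAdIds;   // ads displayed in the most recent interval
    }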
For example, selected parameters included in the client information advantageously can switch the client device between a persistent presentation mode and a short-lived presentation mode of presenting the ads. The client information can, in an exemplary case:
- control the turnover rate of the ads presented by the client device;
- specify the periodicity at which the client device generates the PlayList Request;
- establish a minimum time separation between competing ones of the ads; and
- establish specifications directing the manner in which the client device is to present each of the ads.

For example, when the ads available to the client device include both current ads (paid ads) and expired ads (free ads), the client information includes a minimum time period during which the client device presents the current ads before the client device presents the expired ads. The client information may also establish a maximum time period during which the client device is permitted to present the expired ads (see the sketch following this passage). In any event, the PlayList Response advantageously may include commands or selected parameters which direct the client device to either concatenate the New PlayList to the current PlayList(s) or discard the current PlayList(s) in favor of the New PlayList. The command, or the selected parameters, controlling this facet of the client device operation is executed upon receipt of the PlayList Response by the client device over the effected communications link. The New PlayList included in the PlayList Response includes a name and a corresponding Uniform Resource Name (URN) for each of the selected ads. It will be appreciated that the URN can correspond to one of a storage location of the respective named ad on an ad server or a location on the ad server redirecting the client device to a location on another storage device for the respective named ad. Alternatively, the URN specifies a location on the ad server redirecting the client device to an ad storage location collocated on the ad server for the respective named ad. It should be mentioned at this point that, in addition to the name and URN of each of the selected ads, the New PlayList may also include information identifying an ad type, i.e., postage stamp ad, toolbar ad, or placard ad, for each one of the respective selected ads.

It should be noted that in at least one exemplary embodiment of the present invention, the PlayList server instantiated by software stored on the server computer 302 advantageously responds to a PlayList Request written, i.e., coded, in extensible markup language (XML). One of ordinary skill in the art of documents generated in XML will appreciate that these documents, e.g., the PlayList Request, advantageously can have an associated document type definition (DTD). In order to optimize system performance, the PlayList server should have the DTD available, i.e., available to the PlayList Request parsing function. There are several options for ensuring that the DTD is available to the PlayList server. First, the DTD for each of the different types of client devices, e.g., e-mail client device or PDA, can be stored by the PlayList server. In that case, the PlayList Request need only include a DTD tag, which identifies the particular DTD to be employed by the PlayList Request parsing function. Second, the DTD advantageously can be embedded in the PlayList Request. In either case, both the PlayList server and the client device implicitly use the same DTD.
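As a rough illustration of the current/expired ad timing parameters described above, consider the following sketch; the class, field names, and threshold values are assumptions, not values taken from the text.

    // Illustrative policy check: current (paid) ads must get a minimum
    // amount of face time before expired (free) ads may be shown, and
    // expired ads are capped at a maximum amount of face time.
    public class AdScheduler {
        long minCurrentAdMillis;   // minimum time to show current ads first
        long maxExpiredAdMillis;   // cap on time spent showing expired ads

        boolean mayShowExpired(long currentShownMillis, long expiredShownMillis) {
            return currentShownMillis >= minCurrentAdMillis
                    && expiredShownMillis < maxExpiredAdMillis;
        }

        public static void main(String[] args) {
            AdScheduler s = new AdScheduler();
            s.minCurrentAdMillis = 10 * 60 * 1000; // assumed: 10 minutes
            s.maxExpiredAdMillis = 5 * 60 * 1000;  // assumed: 5 minutes
            System.out.println(s.mayShowExpired(12 * 60 * 1000, 0)); // true
            System.out.println(s.mayShowExpired(3 * 60 * 1000, 0));  // false
        }
    }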
It should be mentioned that the software provider should make provisions with respect to ad security. There are really two security issues to consider. One is whether or not the client is getting valid ads (call this client security), and the second is whether or not a valid client is fetching ads (call this server security). Client security is of relatively small importance. If a given person manages to trick Eudora into displaying some ads other than those transmitted by the software provider, it probably doesn't matter a great deal. This is not to say that it could not become problematic if large numbers of clients at one or more sites began doing it; however, a carefully worded license agreement should make at least large sites avoid actions which would cause this particular problem. However, to avoid trivial attacks, PlayLists and ads advantageously can be checksummed with MD5 (or another mechanism), and the checksums recorded in the PlayList. Then the client can checksum the PlayList and ads using the same secret seed, and compare its checksums to those in the PlayList (see the sketch following this passage). If it fails to get the proper ads, this will be treated as a failure to get ads at all.

Server-side security is potentially a much bigger problem. The software provider intends to charge advertisers for ads, based on the understanding that the software provider's users will actually see the ads the software provider is charging for. To do this with confidence, the software provider should ascertain that it is actually Eudora that is downloading the ads, and not some rogue process written to fetch many ads. Why would someone bother to fetch ads? While the software provider can't discount the "because they can" motivation of the amateur hacker, the real issue is the ad revenue, i.e., ad bounty. Because every ad fetch can generate revenue for a third party, there is a very significant financial incentive for that third party to cause a lot of ad fetches. It thus becomes imperative that the software provider prevent (and/or detect) ad fetches not made by copies of Eudora. Given that such fetches may be in violation of the agreement the software provider signed with the distributor, these fetches could constitute a form of fraud. There are several different approaches to fraud detection which advantageously can be implemented in the software running, for example, on Ad server 303. Whatever method the software provider eventually uses to prevent fraud, it will be important also to detect fraud should it occur. There are two broad classes of fraud detection: authentication and statistical analysis. Authentication is easily understood; if the program fetching the ads fails to prove that it is a valid copy of Eudora, the software provider will be alerted to possible fraud. However, authentication provides challenges of its own, and may be impossible or impractical or simply unnecessary. Statistical analysis has some significant benefits, but also significant drawbacks. The benefits include minimal work in the client (and hence no vulnerability to disassembly, etc.), no run-time burdens on either the client or the server, i.e., everything can be done "after the fact" during accounting runs, easy changeability from the software provider's end, ability to be applied retroactively, etc. The drawbacks to statistical analysis include that statistical analysis will never be entirely certain, and that the software provider may not collect the proper statistics, etc.
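The client-side checksum comparison described at the beginning of this passage might be sketched as follows; the seed value, the inputs, and the exact bytes hashed are assumptions, since the text does not specify the precise mechanism.

    import java.math.BigInteger;
    import java.security.MessageDigest;

    public class AdChecksum {
        // Hash the shared secret seed together with the ad (or PlayList) bytes.
        static String checksum(byte[] secretSeed, byte[] adBytes) throws Exception {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            md5.update(secretSeed); // secret seed known to client and server
            md5.update(adBytes);    // fetched ad or PlayList content
            return new BigInteger(1, md5.digest()).toString(16);
        }

        public static void main(String[] args) throws Exception {
            byte[] seed = "example-seed".getBytes("UTF-8"); // assumed seed
            byte[] ad = "fake ad bytes".getBytes("UTF-8");  // stand-in content
            String computed = checksum(seed, ad);
            String fromPlayList = computed; // value that arrived in the PlayList
            // A mismatch is treated as a failure to get ads at all, per above.
            System.out.println(computed.equals(fromPlayList) ? "ok" : "reject ads");
        }
    }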
A listing of parameters or statistical measures that the software provider may gather or compute is presented immediately below.

ClientID: It's hard to see a way to avoid generating some sort of client id for use with fetching ads. The software provider might hope that such identifiers will be self-validating, but it is preferable that the software provider know what particular installation of Eudora is actually fetching ads. This can then be used in compiling statistics and performing computations. By "installation" the software provider means a single storage system directory (PC) or folder (Mac) with a Eudora mail structure in it, i.e., data interchanged between the e-mail client and at least one server and not necessarily the e-mail client itself, per se.

IpAddress: The software provider will likely want to log requests by the IP address of the originating e-mail client.

DistributorID: Of course a cornerstone of the referral payment system is the fact that the software provider will record the distributor ID for the client fetching ads. The software provider should collect this when users pay or even register the software.

NumPaidUsers: This statistic is the number of paid users with a given distributor ID.

NumClientIDs: This statistic is the number of client IDs with a given distributor ID.

NumAdsFetched: The number of ads fetched by a particular client ID.

Given the raw data available from monitoring the parameters listed above, the following is an exemplary and non-inclusive list of possible statistical measures which can be generated.

NumAdsFetched: A client ID with a very high number of ads fetched is suspicious.

NumClientIDs/NumPaidUsers: Paid users is a very hard number, because the software provider will have collected credit card information and charged against this card. Thus, it can serve as a useful measuring stick for how many clients the software provider can expect. A particular distributor with a very high ratio, or a ratio that suddenly goes higher, bears investigation.
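By way of illustration, the NumClientIDs/NumPaidUsers screen described in the table above might be computed as follows; the threshold is an assumed placeholder to be tuned against real data, not a value from the text.

    // Illustrative ratio check: flag distributors whose client-to-payer
    // ratio is anomalously high, per the statistical measure above.
    public class FraudCheck {
        static boolean suspicious(int numClientIds, int numPaidUsers) {
            if (numPaidUsers == 0) return numClientIds > 0; // no payers at all
            double ratio = (double) numClientIds / numPaidUsers;
            return ratio > 50.0; // assumed threshold; tune against real data
        }

        public static void main(String[] args) {
            System.out.println(suspicious(10000, 20)); // true: 500 clients/payer
            System.out.println(suspicious(300, 20));   // false: 15 clients/payer
        }
    }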
One of the issues which the software provider must be very cognizant of is the protection of the user's privacy, i.e., the user generally does not want to receive ads based on information that the user unknowingly submitted to the software provider. There is an extremely vocal and paranoid subset of the user community who object to practically all forms of information gathering, even the most benign. Even relatively innocent devices like serial numbers are considered something to be completely avoided. While the serial number of a software program may seem like a trivial matter to the software supplier, users who object to this type of "tagging" exist, and the software provider should be cognizant of such users. In order to avoid such concerns to the maximum extent possible, the software provider should adopt a Confidential Information Policy which includes the following provisions:
- Obtain Permission: Before the software provider gathers or transmits any data that might identify the user to the advertiser, the software provider should obtain the user's explicit (see Fig. 18A) or near-explicit permission. The term near-explicit is employed to denote that the software provider may, for example, put a special privacy warning in the web page where the user registers a software program such as Eudora. Here, the user is clearly taking an action to submit data to the software provider; as such, explicit permission shouldn't be needed. On the other hand, the software provider should go out of its way to identify areas where an unreasonable user might be able to claim that he/she didn't know he/she was giving information to the software provider, and ask for explicit permission there, even if it seems relatively obvious to the software provider.
- Data Separation: Insofar as possible, the software provider should maintain payment information separate from registration information, and both types of information should be maintained separate from demographic information, etc. While it may be very tempting to correlate databases, the software provider faces potential crucifixion if the databases are actually correlated. Moreover, since the software provider can still deliver very targeted advertising without database correlation, the software provider should maintain separate databases.
- User Verifiability: Insofar as possible, protections established by the software provider should be verifiable by end users with packet sniffers. The software provider may even encourage the practice of watching the software's, e.g., Eudora's, actions. It is one thing to say "The software provider does not give your personal data to advertisers;" it is quite another for the user to be able to verify that this is the case.
- Strong Public and Private Commitment: The software provider needs to be clear and public with its privacy policies, and the software provider needs to respect them internally. If the software provider merely views privacy as something the software provider must do to avoid adverse press coverage, the software provider will do it poorly and wind up in trouble.

In summary, the present invention encompasses a multi-moded software product, e.g., e-mail software, which includes three "self-contained" different versions (or "modes"), including a "first full feature set" version which is activated when the software product is paid for by the user (i.e., a "Payware" version), a "second full feature set" version which is activated when the user agrees (e.g., either by default or by explicit agreement) to accept advertisements delivered to the client device in order to subsidize the software product (i.e., an "Adware" version), and a "reduced feature set" version which is activated when the software product is not paid for (i.e., a "freeware" version) and the "second full feature set" version is not activated. The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices that have such multi-moded software installed thereon. It will be appreciated that the first and second full feature sets are identical with respect to e-mail support features; it will also be appreciated that the second full feature set includes PlayList and ad fetching and display features which are dormant in the first full feature set. Moreover, the present invention further encompasses multi-moded software as set forth above, wherein the multi-moded software includes a mode switching function which automatically switches from the "Adware" version to the "freeware" version upon detecting a prescribed condition (e.g., based upon monitored user activity level, and/or less than a prescribed number of ads having been downloaded, i.e., a "deadbeat user" criterion). The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices that have such multi-moded software installed thereon. It will be appreciated from the discussion above that the present invention further encompasses multi-moded software as set forth above, wherein the multi-moded software includes a mode switching function which automatically switches from the "Adware" version to the "freeware" version upon detecting occurrence of a prescribed "ad failure condition", e.g., less than a prescribed number of ads having been received and/or displayed by the client device within a prescribed time period, and an "Ad Failure Nag" function which monitors "time since last Nag" and which generates an "Ad Failure Nag" according to a "Nag Schedule" which is dynamically varied based on the monitored "time since last Nag" information and/or based on cumulative ad download/display statistics or information. The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this multi-moded software product installed thereon. In one exemplary embodiment, the present invention further encompasses multi-moded software as set forth above, wherein the multi-moded software includes a Nag function which generates different types of Nags dependent upon the current mode of the software product which is currently activated, and/or based upon time since the last Nag was generated, and/or based on cumulative ad download/display statistics or information, and/or based on other monitored conditions. For example, the different types of Nags could include a "Registration Nag", a "Payware Nag", an "Adware Nag", an "Update Nag", and an "Ad Failure Nag".
The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this multi-moded software product installed thereon. In another exemplary embodiment, the present invention encompasses a software product (e.g., e-mail software) that incorporates an automatic advertisement download function for automatically downloading advertisements to be displayed when the software is activated, and a control function for monitoring user activity levels and for controlling the display of downloaded advertisements at the client device based upon the monitored user activity levels (e.g., based upon "discrete" and/or "cumulative" ad display parameters). The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. The present invention also encompasses an e-mail software product that incorporates a control function for automatically downloading advertisements from a remote server system which is separate and independent from the e-mail server system, as well as the system and method for automatically distributing the advertisements to client devices which have this e-mail software product installed thereon. In particular, the system includes an ad server system that manages, administers, and controls the distribution of advertisements, and which is controlled by a control entity (e.g., one operated by the present assignee, QUALCOMM INCORPORATED) which is separate and independent from the control entity which controls the e-mail server system which provides e-mail services to any particular client device which has this e-mail software product installed thereon. Thus, in sharp contrast to the Juno Online Services system, in accordance with this aspect of the present invention, the ad server system and the e-mail server system are operated independently, i.e., under the control of separate and independent control entities. Advantageously, the present invention also encompasses a software product, e.g., e-mail software, which incorporates an automatic advertisement files download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and a control function for locally controlling the display of downloaded advertisements at the client device based upon ad parameters included in the downloaded advertisement files, e.g., including (for each ad) various combinations and sub-combinations of the following ad parameters, namely, the maximum ad display time, or face time, for any given display of that particular ad, the maximum total/cumulative ad display time, or face time, for that particular ad, the maximum number of times to display that particular ad per day, the date/time before which that particular ad should not run, and the date/time after which that particular ad should not run. The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon.

It will be appreciated that the present invention also encompasses a software product, e.g., e-mail software, which incorporates an automatic advertisement download function which fetches a PlayList from a remote server system (e.g., a PlayList server system) which specifies the advertisements to be fetched by the client device on which the software product is installed and the source addresses (e.g., URNs) of the ad servers on which the specified advertisements are stored, fetches the advertisements specified in the fetched PlayList, and stores the fetched advertisements on the client device. The present invention further encompasses a system and method for distributing advertisements to client devices which have this software product installed thereon, including a PlayList server (or PlayList server system) which, in response to a PlayList Request from a particular client device that includes a client PlayList identifier, compares a client PlayList identified by the client PlayList identifier with a current PlayList (which may optionally be customized to that particular client device) stored on the PlayList server, and then sends back to the client device a New PlayList which specifies the new advertisements to be fetched by the client device, and the source addresses of the ad servers on which the specified new advertisements are stored. Optionally, the above-described automatic advertisement download function of the software product installed on the client device can delete (discard) all or PlayList server-specified ones of the advertisements which are currently stored on the client device, e.g., those which are not specified in the current PlayList; and/or the above-described automatic advertisement download function of the software product installed on the client device can merge the New PlayList with the current client PlayList. The present invention also encompasses several variations and details of implementation of this novel PlayList/ad fetch process utilized in the Eudora Adware scheme. Moreover, the present invention encompasses a software product, e.g., e-mail software, which incorporates a custom installer which identifies the specific software product distributor that distributed that software product. The present invention further encompasses a software product, e.g., e-mail software, which incorporates an automatic advertisement download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and a custom installer which identifies the specific software product distributor which distributed that software product, for the purpose of facilitating apportionment of advertising revenue the software product vendor receives from advertisers to specific software product distributors. The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices which have this software product installed thereon, wherein the system includes a centralized control facility which receives software product distributor ID information from the client devices and uses this software product distributor ID information to facilitate apportionment of advertising revenue the software product vendor receives from advertisers to specific software product distributors. Alternatively, or additionally, a central database function which identifies (e.g., by means of cross-referencing and/or correlation tables) the software product distributor ID for each software product distributed by the software vendor, e.g., based on a serial number or reference code associated with each copy of the software product, can be utilized.
Furthermore, the present invention encompasses a software product, e.g., e-mail software, that incorporates an automatic advertisement download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and a control function which utilizes a built-in "deadman timer" to impose a time limit for each particular advertisement download session, e.g., the client device will be disconnected from the remote server system upon expiration of the time limit imposed by the "deadman timer". The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. It will also be appreciated that the present invention can be characterized as a software product, e.g., e-mail software, that incorporates an automatic advertisement download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and an instrumentation and auditing module having various novel features/functions, e.g., maintaining a rotating log of ad-related statistics and/or performing random and/or statistically-based ad effectiveness audits with user permission. The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon, wherein the system includes a centralized control facility for obtaining ad-related statistical information from selected client devices, in a random or statistical manner, e.g., for the purpose of monitoring the integrity and/or effectiveness of the advertisement distribution system. Moreover, the present invention encompasses a software product, e.g., e-mail software, that incorporates an automatic advertisement download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and a "link history" function which enables the user to review previously-viewed advertisements, e.g., by providing a graphical user interface (GUI) which includes a link history window that lists links the user has previously visited and ads that have been previously displayed to the user, along with some status information on each. Preferably, a mechanism will be provided to enable the user to select an ad listed in the link history window for display, e.g., by single-clicking the appropriate ad link, and to enable the user to visit the source Web site of any given ad listed in the link history window, e.g., by double-clicking the appropriate ad link. The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. Furthermore, the present invention encompasses a software product, e.g., e-mail software, which incorporates a "Nag" function that monitors "time since last Nag" and that "nags" the user according to a "Nag Schedule" which is dynamically varied based on the monitored "time since last Nag" information. Finally, the present invention encompasses a software product, e.g., e-mail software, that incorporates a download function that downloads separate file portions representing a single image during separate communication sessions with a remote server (e.g., separate file portions of an advertisement file, e.g., a GIF file). The present invention further encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. Although presently preferred embodiments of the present invention have been described in detail hereinabove, it should be clearly understood that many variations and/or modifications of the basic inventive concepts herein taught, which may appear to those skilled in the pertinent art, will still fall within the spirit and scope of the present invention, as defined in the appended claims.

What is claimed is:
A high dielectric constant memory cell capacitor and method for producing the same, wherein the memory cell capacitor utilizes relatively large surface area conductive structures, such as thin, spacer-width pillars, having edges whose sharp corners would otherwise lead to electric field breakdown of the high dielectric constant material. The combination of high dielectric constant material in a memory cell along with a relatively large surface area conductive structure is achieved through the use of a buffer material as caps on the thin edge surfaces of the relatively large surface area conductive structures to dampen or eliminate the intense electric field which would be generated at the corners of the structures during the operation of the memory cell capacitor had the caps not been present.
What is claimed is:

1. A method for fabricating a large surface area, high dielectric constant capacitor, comprising:
providing a first conductive material in electrical communication with a drain region on a semiconductor substrate;
removing portions of said first conductive material to form large surface area structures including top edge portions;
providing, following said removing, an insulative buffer material along said top edge portions of said large surface area structures;
applying a high dielectric constant material over said large surface area structures; and
applying a second conductive material over said high dielectric constant material.

2. The method of claim 1, wherein:
said providing said insulative buffer material comprises patterning a mask material comprising said insulative buffer material prior to said removing;
said removing said portions of said first conductive material to form said large surface area structures comprises etching said first conductive material through said mask material; and
a portion of said mask material remains after etching said first conductive material.

3. The method of claim 1, wherein applying said high dielectric constant material over said large surface area structures and said insulative buffer material comprises applying a high dielectric constant material selected from the group comprising BaxSr(z-x)TiO3, BaTiO3, SrTiO3, PbTiO3, Pb(Zr,Ti)O3, (Pb,La,Zr,Ti)O3, (Pb,La)TiO3, KNO3, and LiNbO3.

4. The method of claim 1, wherein said providing said insulative buffer material comprises providing silicon nitride.

5. The method of claim 1, wherein providing said first conductive material in electrical communication with said drain region on said semiconductor substrate comprises providing platinum in electrical communication with said drain region on said semiconductor substrate.

6. The method of claim 1, wherein applying said second conductive material over said high dielectric constant material comprises applying platinum over said high dielectric constant material.

7. The method of claim 1, wherein said removing said portions of said first conductive material to form said large surface area structures comprises patterning a mask material on said first conductive material and etching said first conductive material.

8. The method of claim 7, wherein said providing said insulative buffer material comprises allowing a portion of said mask material to remain after etching said first conductive material.

9. A method for fabricating a memory cell, comprising:
providing conductive material in electrical communication with a drain region on a semiconductor substrate;
removing portions of said conductive material to form large surface area structures;
providing, following said removing, an insulative buffer material over top edge portions of said large surface area structures;
applying a high dielectric constant material over said large surface area structures; and
applying a conductive material over said high dielectric constant material.

10. The method of claim 9, wherein:
said providing said insulative buffer material comprises patterning a mask material comprising said insulative buffer material prior to said removing;
said removing said portions of said conductive material to form said large surface area structures comprises etching said conductive material through said mask material; and
a portion of said mask material remains after etching said conductive material.

11. The method of claim 9, wherein said removing said portions of said conductive material to form said large surface area structures comprises patterning a mask material on said conductive material and etching said conductive material.

12. The method of claim 11, wherein said providing said insulative buffer material comprises allowing a portion of said mask material to remain after etching said conductive material.

13. The method of claim 9, wherein said providing said insulative buffer material comprises providing silicon nitride.

14. A method for fabricating a semiconductor DRAM, comprising:
providing conductive material in electrical communication with a drain region on a semiconductor substrate;
removing portions of said conductive material to form large surface area structures;
providing, following said removing, an insulative buffer material over top edge portions of said large surface area structures;
applying a high dielectric constant material over said large surface area structures; and
applying a conductive material over said high dielectric constant material.

15. The method of claim 14, wherein:
said providing said insulative buffer material comprises patterning a mask material comprising said insulative buffer material prior to said removing;
said removing said portions of said conductive material to form said large surface area structures comprises etching said conductive material through said mask material; and
a portion of said mask material remains after etching said conductive material.

16. The method of claim 14, wherein said removing said portions of said conductive material to form said large surface area structures comprises patterning a mask material on said conductive material and etching said conductive material.

17. The method of claim 16, wherein said providing said insulative buffer material comprises allowing a portion of said mask material to remain after etching said conductive material.

18. The method of claim 14, wherein said providing said insulative buffer material comprises providing silicon nitride.

19. A method for fabricating a large surface area, high dielectric constant capacitor, comprising:
providing a first conductive material in electrical communication with a drain region on a semiconductor substrate;
removing portions of said first conductive material to form large surface area structures including top edge portions;
providing an insulative buffer material comprising silicon nitride along said top edge portions of said large surface area structures;
applying a high dielectric constant material over said large surface area structures; and
applying a second conductive material over said high dielectric constant material.

20. A method for fabricating a memory cell, comprising:
providing conductive material in electrical communication with a drain region on a semiconductor substrate;
removing portions of said conductive material to form large surface area structures;
providing an insulative buffer material comprising silicon nitride over top edge portions of said large surface area structures;
applying a high dielectric constant material over said large surface area structures; and
applying a conductive material over said high dielectric constant material.

21. A method for fabricating a semiconductor DRAM, comprising:
providing conductive material in electrical communication with a drain region on a semiconductor substrate;
removing portions of said conductive material to form large surface area structures;
providing an insulative buffer material comprising silicon nitride over top edge portions of said large surface area structures;
applying a high dielectric constant material over said large surface area structures; and
applying a conductive material over said high dielectric constant material.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of application Ser. No. 09/234,282, filed Jan. 19, 1999, now U.S. Pat. No. 6,190,965, issued Feb. 20, 2001, which is a divisional of application Ser. No. 08/994,849, filed Dec. 19, 1997, now U.S. Pat. No. 6,150,691, issued Nov. 21, 2000.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a memory cell capacitor and method for producing the same. More particularly, the present invention relates to a method of forming high dielectric constant memory cell capacitors which utilize relatively large surface area structures without electric field breakdown of the high dielectric constant material on the relatively large surface area structures.

2. State of the Art

A widely-utilized DRAM (Dynamic Random Access Memory) manufacturing process utilizes CMOS (Complementary Metal Oxide Semiconductor) technology to produce DRAM circuits which comprise an array of unit memory cells, each including one capacitor and one transistor, such as a field effect transistor. In the most common circuit designs, one side of the transistor is connected to one side of the capacitor, the other side of the transistor and the transistor gate are connected to external circuit lines called the bit line and the word line, and the other side of the capacitor is connected to a reference voltage that is typically one-half the internal circuit voltage. In such memory cells, an electrical signal charge is stored in a storage node of the capacitor connected to the transistor that charges and discharges the circuit lines of the capacitor.

Higher performance, lower cost, increased miniaturization of components, and greater packaging density of integrated circuits are ongoing goals of the computer industry. The advantages of increased miniaturization of components include: reduced-bulk electronic equipment, improved reliability by reducing the number of solder or plug connections, lower assembly and packaging costs, and improved circuit performance. In pursuit of increased miniaturization, DRAM chips have been continually redesigned to achieve ever higher degrees of integration. However, as the dimensions of the DRAM chips are reduced while at the same time memory capacity is increased, the occupation area of each unit memory cell of the DRAM chips must be reduced. This reduction in occupied area necessarily results in a reduction of the dimensions of the cell capacitor, which, in turn, makes it difficult to ensure the required storage capacitance for transmitting a desired signal without malfunction. However, the ability to densely pack the unit memory cells, while maintaining required capacitance levels, is a crucial requirement of semiconductor manufacturing if future generations of ever higher memory capacity DRAM chips are to be successfully manufactured.

In order to minimize a decrease in storage capacitance caused by the reduced occupied area of the capacitor, the capacitor should have a relatively large surface area or a high dielectric constant dielectric layer in the capacitor. With regard to increasing capacitor surface area, there have been a variety of methods proposed for achieving this goal, including forming the capacitor such that various three-dimensional shapes extend therefrom.
These three-dimensional shapes may include fins, cylinders, and boxes, as well as forming rough surfaces on these shapes.

With regard to the use of a high dielectric constant capacitor layer, the dielectric constant is a value characteristic of a material which is proportional to the amount of charge that can be stored in the material when it is interposed between two electrodes. High dielectric constant materials which can be used include BaxSr(z-x)TiO3 [BST], BaTiO3, SrTiO3, PbTiO3, Pb(Zr,Ti)O3 [PZT], (Pb,La,Zr,Ti)O3 [PLZT], (Pb,La)TiO3 [PLT], KNO3, and LiNbO3. Unfortunately, most high dielectric constant materials are incompatible with existing processes and cannot be simply deposited on a polysilicon electrode as are presently utilized dielectric materials, such as Si3N4, SiO2/Si3N4, and Si3N4/SiO2 composite layers. The incompatibility is a result of the O2-rich ambient atmosphere present during high dielectric constant material deposition or during annealing steps. The O2 oxidizes portions of the material used for the storage node plate.

U.S. Pat. No. 5,381,302, issued Jan. 10, 1995 to Sandhu et al., teaches methods for fabricating capacitors compatible with high dielectric constant materials wherein a storage node electrode is provided with a barrier layer, such as titanium nitride, which prohibits diffusion of atoms. A recessed conductive plug of polysilicon is deposited in a via, wherein a titanium layer is deposited on the conductive plug. A rapid thermal anneal is then performed to form a titanium silicide layer. The unreacted titanium layer is removed and a barrier layer is formed on the titanium silicide layer. A platinum layer is then deposited and patterned over the barrier layer, followed by a high dielectric constant layer, which is followed by the deposition of a cell plate (preferably platinum) to form the capacitor. Although a high dielectric constant capacitor is formed, the capacitor has a low (i.e., relatively small) surface area. Furthermore, if the platinum layer is not properly patterned (i.e., misaligned) such that the barrier layer is exposed, oxidation of the barrier layer, the titanium silicide layer, and the conductive plug may occur.

Although the formation of high dielectric constant capacitors is known, forming such high dielectric constant capacitors with relatively large surface area structures, such as fins and cylinders, to further increase their storage capacitance is not feasible. This infeasibility may be attributed to an electric field which forms when the capacitor is in operation. If a thin structure, such as a fin, is formed in an effort to increase surface area, this electric field becomes particularly intense at the corners or edge of the thin structure. This intense electric field can break down the dielectric material, which breakdown can result in capacitor failure.

Therefore, it would be advantageous to develop a technique for forming a relatively large surface area, high dielectric constant capacitor, and RAM chips, memory cells, and capacitors employing same, while using inexpensive, commercially-available, widely-practiced semiconductor device fabrication techniques and equipment without requiring complex processing steps.

SUMMARY OF THE INVENTION

The present invention includes a novel memory cell capacitor and techniques for the formation of the memory cell capacitor which allow for, and promote, utilization of the advantages of relatively large surface area structures and high dielectric constant materials.
The present invention utilizes a buffer material as a cap on the edge surfaces of the relatively large surface area structures to dampen or eliminate the intense electric field which is generated at the edge surfaces of the relatively large surface area structures during the operation of the capacitor.

The method of the present invention is practiced after the formation of an intermediate structure comprising transistor gates on a silicon substrate which has been oxidized to form thick field oxide areas and which has been exposed to implantation processes to form drain and source regions. The intermediate structure further comprises at least one barrier layer which covers the transistor gates and the silicon substrate.

The method of the present invention comprises patterning a first resist layer on the barrier layer, which is then etched to expose the drain regions in the substrate, forming vias. The resist layer is then stripped and a layer of conductive polysilicon material is applied over the structure to fill the vias. The polysilicon material is etched such that it is recessed within the vias. If oxidation of the polysilicon material during subsequent processing steps is a problem, a shield material may be applied and spacer etched to form a shield layer between the polysilicon material and the gates, and the barrier layer.

A layer of metal is applied over the structure. The structure is then heated, which causes a silicide reaction wherever the metal layer contacts the polysilicon material to form a metal silicide layer within the vias. The unreacted portion of the metal layer is then selectively removed, leaving the metal silicide layer covering the polysilicon material.

A metal barrier layer is applied over the metal silicide layer and the barrier layer. The metal silicide layer and metal barrier layer prevent the out-diffusion of silicon from the polysilicon material (during subsequent heat steps) into a cell node which is to be formed above the metal barrier layer.

A resist layer is then applied over the metal barrier layer to substantially fill the vias. The resist layer is then etched such that plugs of the resist layer remain in the vias. The metal barrier layer is then etched to form a bottom contact adjacent the metal silicide layer. The resist plugs are then stripped away.

A layer of conductive material is deposited over the barrier layer and into the vias, thereby substantially filling the same, to contact the bottom contact. The conductive material is then patterned and etched to form electrically isolated, individual storage nodes which have relatively large surface area structures. These relatively large surface area structures can take the form of walls, columns, pins, annular circles, wedges, cones, or any such shape. A common element of each of these relatively large surface area structures is that each of them will have a relatively thin edge portion, surface, or shape edge where the conductive material is patterned. The material used to pattern the conductive material may be left on these edge portions, or a buffer material may be added to these edge portions by any known technique to form a cap on the edge portions.

One embodiment of etching the conductive material comprises depositing a layer of oxide material over the conductive material layer. A resist layer is patterned and the oxide material is etched to form an opening, preferably circular, and to expose portions of the conductive material layer.
Preferably, one edge of each opening is substantially centered over the center of the underlying polysilicon material.

The patterned resist layer is stripped and a mask material layer is deposited over the etched oxide material and the exposed conductive material layer. The mask material layer is then etched to form spacers. The etched oxide material is etched to leave the spacers free-standing. The pattern of the spacers is transferred down through the conductive material layer, and, preferably, into a portion of the barrier layer to form at least one relatively large surface area structure, such as a fin, in the conductive material layer. The transfer of the spacer pattern results in the conductive material layer forming electrically isolated, individual storage nodes with the spacers remaining on the edge portions of the relatively large surface area structure as the buffer material to form the cap.

After the relatively large surface area structures are formed with the cap on the edge portions thereof, a layer of high dielectric constant material is deposited over the etched structure. The capacitors are completed by depositing an upper cell plate, preferably platinum, over the high dielectric constant material.

BRIEF DESCRIPTION OF THE DRAWINGS

While the specification concludes with claims particularly pointing out and distinctly claiming that which is regarded as the present invention, the advantages of this invention can be more readily ascertained from the following description of the invention when read in conjunction with the accompanying drawings, in which:

FIGS. 1-27 illustrate cross-sectional and plan views of a method of fabricating a high dielectric constant capacitor for a memory cell according to the present invention;

FIG. 28 illustrates a cross-sectional view of a high dielectric constant capacitor for a memory cell having a buried bit line according to the present invention;

FIG. 29 illustrates a cross-sectional view of a thin structure on a high dielectric constant capacitor;

FIG. 30 illustrates a cross-sectional view of a thin structure on a high dielectric constant capacitor having a cap according to the present invention;

FIG. 31 illustrates a thin structure for a high dielectric constant capacitor having a cap according to the present invention with a certain amount of redeposition of a conductive material layer thereon; and

FIG. 32 illustrates the thin structure of FIG. 31 with the cap removed.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIGS. 1-27 illustrate a technique for forming a high dielectric constant cell capacitor for a memory cell. It should be understood that the figures presented in conjunction with this description are not meant to be illustrative of actual cross-sectional views of any particular portion of an actual semiconductor device, but are merely idealized representations which are employed to more clearly and fully depict the process of the invention than would otherwise be possible.

FIG. 1 illustrates a cross-sectional view of an in-process intermediate structure 100 in the production of the memory cell array (i.e., a DRAM). This intermediate structure 100 comprises a substrate 102, such as a lightly doped P-type crystalline silicon substrate, which has been oxidized to form thick field oxide areas 104 and exposed to implantation processes to form drain regions 106 and source regions 108 of N+ doping.
Transistor gate members 112 are formed on the surface of the substrate 102, including transistor gate members 112 residing on a substrate active area 110 spanning between the drain regions 106 and the source regions 108, and transistor gate members 112 residing on the thick field oxide areas 104. The transistor gate members 112 each comprise a lower buffer layer 114, preferably made of silicon dioxide, separating a gate conducting layer or wordline 116 of the transistor gate member 112 from the substrate 102. Transistor insulating spacer members 118, preferably made of silicon nitride, are formed on either side of each transistor gate member 112. A cap insulator 122, also preferably made of silicon nitride, is formed on the top of each transistor gate member 112. A first barrier layer 124, preferably made of tetraethyl orthosilicate (TEOS) or the like, is applied over the transistor gate members 112 and the substrate 102. A second barrier layer 126, preferably made of borophosphosilicate glass (BPSG), phosphosilicate glass (PSG), borosilicate glass (BSG), or the like, is deposited over the first barrier layer 124. The second barrier layer 126 may optionally be planarized, if necessary, preferably using an abrasive process, such as chemical mechanical planarization (CMP).

It is, of course, understood that a single barrier layer could be employed. However, a typical barrier configuration is a layer of TEOS over the transistor gate members 112 and the substrate 102, followed by a BPSG layer over the TEOS layer. The TEOS layer is applied to prevent dopant migration. The BPSG layer contains boron and phosphorus, which can migrate into the source and drain regions formed on the substrate during the heating steps inherent in device fabrication. This migration of boron and phosphorus can change the dopant concentrations in the source and drain regions, which can adversely affect transistor performance.

A first resist layer 128 is patterned, as shown in FIG. 2, and the second barrier layer 126 and the first barrier layer 124 are etched to expose the drain regions 106 in the substrate 102, forming vias 132, as shown in FIG. 3. The first resist layer 128 is then stripped, as shown in FIG. 4, and a layer of conductive polysilicon material 134 is applied over the structure to fill the vias 132, as shown in FIG. 5. The polysilicon material 134 is etched such that it is recessed within the vias 132, as shown in FIG. 6. This may be achieved with CMP, wet etch, dry etch, or a combination thereof. If oxidation of the polysilicon material 134 during subsequent processing steps (such as dielectric layer formation) is a problem, a shield material, such as a silicon nitride material, may be applied and spacer etched to form a shield layer 140 between the polysilicon material 134 and the gate members 112, the first barrier layer 124, and the second barrier layer 126, as shown in FIG. 7.

A layer of metal 136, preferably titanium, is applied over the structure, such as by chemical vapor deposition or by sputter deposition, as shown in FIG. 8. The structure is heated, which causes a silicide reaction wherever the metal layer 136 contacts the polysilicon material 134 to form a metal silicide layer 138, such as titanium silicide (TiSi2), as shown in FIG. 9. The unreacted metal is then selectively removed through the use of an etchant that does not attack the metal silicide layer 138 or the second barrier layer 126, preferably an ammonium hydroxide/peroxide strip.
This leaves the metal silicide layer 138 covering the polysilicon material 134, as shown in FIG. 10.

A metal barrier layer 142, preferably TiN, TiAlN, or the like, is applied over the metal silicide layer 138 and the second barrier layer 126, as shown in FIG. 11. The metal barrier layer 142 prevents the out-diffusion of silicon from the polysilicon material 134 (during subsequent heat steps) to a cell node which is to be formed above the metal barrier layer 142.

A resist layer 144 is then applied, preferably by spin deposition, over the metal barrier layer 142 to substantially fill the vias 132, as shown in FIG. 12. The resist layer 144 is then etched, preferably using an oxygen plasma dry etch, such that resist plugs 146 remain in the vias 132, as shown in FIG. 13. The metal barrier layer 142 is then etched, preferably by a wet etch using ammonium hydroxide/peroxide, sulphuric acid/peroxide, or the like, to form a bottom contact 148, as shown in FIG. 14. The resist plugs 146 are then stripped away, preferably with an oxygen dry etch, as shown in FIG. 15.

A layer of conductive material 152, preferably platinum, is deposited over the second barrier layer 126 and into the vias 132 to contact the bottom contact 148, as shown in FIG. 16. The conductive material layer 152 is preferably planarized, and a layer of oxide material 154, preferably TEOS, is deposited over the conductive material layer 152, as shown in FIG. 17. A second resist layer 156 is patterned and the oxide material 154 is etched to form openings 150, preferably circular openings offset from the vias 132, and to expose portions of the conductive material layer 152, as shown in FIG. 18. Preferably, one edge of each opening 150 is substantially centered over the center of the underlying polysilicon material 134. FIG. 19 illustrates a top plan view of the openings 150 along lines 19-19 of FIG. 18.

As shown in FIG. 20, the patterned second resist layer 156 is stripped and a mask material layer 158, preferably silicon nitride, is deposited over the etched oxide material 154 and the exposed conductive material layer 152. The mask material layer 158 is then etched, preferably by a spacer etch, to form insulative spacers 162, as shown in FIG. 21. The etched oxide material 154 is selectively etched (selective to the mask material layer) to leave the insulative spacers 162 free-standing, as shown in FIG. 22. The pattern of the insulative spacers 162 is transferred down through the conductive material layer 152, preferably by ion milling or dry etching, and, preferably, into a portion of the second barrier layer 126 to form relatively large surface area or thin structures, such as annular walls 163, in the conductive material layer 152, as shown in FIG. 23. The transfer of the spacer pattern results in the conductive material layer 152 forming electrically isolated, individual cell nodes 160, with the insulative spacers 162 remaining on the uppermost edges of thin portions 165 of the relatively large surface area structures, such as annular walls 163, to form caps of buffer material. FIG. 24 illustrates a top plan view of an annular structure 167 formed by the previously discussed method along lines 24-24 of FIG. 23.
FIG. 25 is a side plan view of the upper portion of the annular structure 167 along lines 25-25 of FIG. 24.

It is, of course, understood that the insulative spacers/caps 162 can be defined by patterning and etching the conductive material, by any known technique, to create the electrically isolated, individual cell nodes 160 including the relatively large surface area structures or annular walls 163. These relatively large surface area structures 163 will, of course, have thin portions or edge surfaces 165 (see FIG. 23) where the conductive material layer 152 is patterned. The material used to pattern the conductive material layer 152 may be left on these edge surfaces 165 as a cap, or a cap of buffer material may be added to the outer edges of these edge surfaces 165 by any known technique.

A layer of high dielectric constant material 164, preferably a BST (barium strontium titanate) material, is deposited over the etched structure, as shown in FIG. 26. The capacitors 168 are completed by depositing an upper cell plate 166, preferably platinum, over the high dielectric constant material 164, as shown in FIG. 27.

After the formation of the capacitors 168, bit lines, comprising a conductive material, may be formed to extend into and contact the source regions 108. Alternatively, the bit lines may be disposed within the second barrier layer 126 prior to the formation of the capacitors 168. This is accomplished by depositing a first portion 174 of the second barrier layer 126, forming a bit line 172 to contact the source region 108 by any known technique, and depositing a second portion 176 of the second barrier layer 126. This results in a final structure 170 with a buried bit line 172, as shown in FIG. 28.

The present invention provides a substantial improvement through the electric field damping effect of the insulative spacers 162 on the annular walls 163. When thin structures 180 are used in a high dielectric constant capacitor (see FIG. 29), an electric field 182 (represented by arrows) present in the capacitor structure during operation is particularly intense at the outer corners 184 of the thin annular wall 163 and the edge 165 of the thin annular wall 163 defined therebetween, due to the relatively small surface area toward which the field is formed. The electric field 182 thus breaks down the high dielectric constant material 164 at one or more portions of the outer edge 165 of the annular walls 163, which breakdown results in capacitor failure. The present invention substantially reduces or eliminates the effects of the intense electric field 182 at the corners 184 of the thin structure 180. As shown in FIG. 30, the presence of the insulative spacer 162 atop the annular wall 163 (i.e., the thin structure) acts as an insulator or dampening mechanism on the top of the annular wall 163. The insulative spacer 162 keeps the intense electric field from forming in the corners 184 of the annular wall 163 by providing a large dielectric barrier between the outer edges of the annular wall 163 and the upper cell plate 166 (not shown in FIG. 30). Thus, the electric field 182 is formed to extend only substantially perpendicular to a centerline 186 of the annular wall 163.

Yet another substantial improvement of the present invention is the isolation of the polysilicon material 134 from the high dielectric constant material 164.
The polysilicon material 134 is generally used to make electrical contact with the substrate 102, because the polysilicon material 134 will not contaminate the substrate 102. However, most high dielectric constant materials 164, such as BST, are formed in highly oxidative environments. If the polysilicon material 134 comes into contact with, or is proximate to, such an environment, the polysilicon material 134 will oxidize and become less conductive. Thus, the structure and method of formation of the high dielectric constant capacitor of the present invention isolate the polysilicon material 134 from such oxidation by recessing the polysilicon material 134 away from the high dielectric constant material layer 164, as shown in FIGS. 26 and 27.

Yet still another advantage of the present invention is the allowance for a certain amount of redeposition of the conductive material layer 152 on the insulative spacers 162. As shown in FIG. 31, some of the conductive material layer 152 may redeposit on the insulative spacer 162 during the formation of the individual cell nodes 160, as discussed and illustrated with reference to FIGS. 22 and 23. If the insulative spacer 162 were removed after such redeposition of the conductive material layer 152, sharp protrusions 188 might remain, as shown in FIG. 32. These sharp protrusions 188 may result in shorting, since subsequently deposited high dielectric constant material will be very thin over the sharp protrusions 188.

It is, of course, understood that the insulative spacers 162 need not be left on or subsequently formed to provide the caps. The tip portions of the annular walls 163 may instead be rendered non-conductive through physical and/or chemical processes, thereby providing the buffer material or non-conductive caps.

Having thus described in detail preferred embodiments of the present invention, it is to be understood that the invention defined by the appended claims is not to be limited by the particular details set forth in the above description, as many apparent variations thereof are possible without departing from the spirit or scope thereof.
Disclosed herein is an apparatus that includes a first semiconductor chip including a plurality of memory cell arrays and a plurality of first bonding electrodes electrically connected to the memory cell arrays, and a second semiconductor chip including a logic circuit and a plurality of second bonding electrodes electrically connected to the logic circuit. The first and second semiconductor chips are stacked on each other so that each of the first bonding electrodes is electrically connected to an associated one of the second bonding electrodes.
1. A device comprising:

a first semiconductor chip including a plurality of memory cell arrays and a plurality of first bonding electrodes electrically connected to the memory cell arrays; and

a second semiconductor chip including a logic circuit and a plurality of second bonding electrodes electrically connected to the logic circuit,

wherein the first and second semiconductor chips are stacked on each other such that each of the plurality of first bonding electrodes is electrically connected to an associated one of the plurality of second bonding electrodes,

wherein each of the memory cell arrays includes a plurality of first signal lines extending in a first direction, a plurality of second signal lines extending in a second direction different from the first direction, and a plurality of memory cells each arranged at an associated one of intersections of the plurality of first signal lines and the plurality of second signal lines,

wherein the plurality of memory cell arrays include first and second memory cell arrays adjacent to each other in the second direction, and

wherein the plurality of first bonding electrodes include:

a first group positioned at one end of the first memory cell array in the first direction and electrically connected to predetermined ones of the plurality of first signal lines in the first memory cell array;

a second group positioned at the other end of the first memory cell array in the first direction and electrically connected to the remaining ones of the plurality of first signal lines in the first memory cell array; and

a third group positioned at one end of the second memory cell array in the first direction and electrically connected to predetermined ones of the plurality of first signal lines in the second memory cell array, the position of the third group in the first direction being between the positions of the first and second groups in the first direction.

2. The device of claim 1, wherein the plurality of first bonding electrodes further comprise a fourth group positioned at the other end of the second memory cell array in the first direction and electrically connected to the remaining ones of the plurality of first signal lines in the second memory cell array, and wherein the position of the second group in the first direction is between the positions of the third and fourth groups in the first direction.

3. The device of claim 2, wherein the plurality of first bonding electrodes further comprise a fifth group positioned at one end of the first memory cell array in the second direction and electrically connected to predetermined ones of the plurality of second signal lines in the first memory cell array, and wherein the position of the fifth group in the first direction overlaps the position of the third group in the first direction.
4. The device of claim 3, wherein the plurality of first bonding electrodes further comprise a sixth group positioned at one end of the second memory cell array in the second direction and electrically connected to predetermined ones of the plurality of second signal lines in the second memory cell array, and wherein the position of the sixth group in the first direction overlaps the position of the second group in the first direction.

5. The device of claim 4, wherein the plurality of second bonding electrodes comprise seventh, eighth, ninth, tenth, eleventh, and twelfth groups electrically connected to the first, second, third, fourth, fifth, and sixth groups, respectively.

6. The device of claim 5, wherein the second semiconductor chip further comprises:

a first wiring layer having a plurality of first wirings extending in the first direction, one of the plurality of first wirings overlapping the plurality of second bonding electrodes; and

a second wiring layer having a plurality of second wirings extending in the second direction, one of the plurality of second wirings overlapping the plurality of second bonding electrodes.

7. The device of claim 6, wherein another one of the plurality of second wirings extends between the seventh group and the ninth group so as not to overlap the plurality of second bonding electrodes.

8. The device of claim 7, wherein the second semiconductor chip further includes a third wiring layer having a third wiring extending in the first and second directions so as not to overlap the plurality of second bonding electrodes.

9. A device comprising:

a plurality of memory cell arrays, each of which includes a plurality of first signal lines extending in a first direction, a plurality of second signal lines extending in a second direction different from the first direction, and a plurality of memory cells each arranged at an associated one of intersections of the plurality of first signal lines and the plurality of second signal lines; and

a plurality of first bonding electrodes, each of which is electrically connected to an associated one of the plurality of first signal lines,

wherein the plurality of memory cell arrays are arranged in a plurality of rows, each of which includes ones of the plurality of memory cell arrays arranged in the first direction, the plurality of rows being arranged in the second direction, and

wherein the positions of the memory cell arrays in adjacent rows among the plurality of rows are shifted by a half pitch in the first direction.

10. The device of claim 9, wherein the plurality of first bonding electrodes are grouped into a plurality of first groups, and each of the plurality of first groups is arranged along a boundary, extending in the first direction, between adjacent ones of the plurality of memory cell arrays.
11. The device of claim 10, further comprising a plurality of second bonding electrodes, each of the plurality of second bonding electrodes being electrically connected to an associated one of the plurality of second signal lines.

12. The device of claim 11, wherein the plurality of second bonding electrodes are grouped into a plurality of second groups, and each of the plurality of second groups is arranged along a boundary, extending in the second direction, between adjacent ones of the plurality of memory cell arrays.

13. The device of claim 12, wherein the plurality of first groups and the plurality of second groups are adjacent to each other in the second direction.

14. The device of claim 9, wherein the plurality of first signal lines are a plurality of bit lines, and the plurality of second signal lines are a plurality of sub-word lines.

15. The device of claim 14, wherein the plurality of memory cells are a plurality of DRAM cells.

16. A device comprising:

a plurality of unit areas, each of which includes a first logic circuit and a second logic circuit;

a plurality of first bonding electrodes electrically connected to the first logic circuit and arranged so as to overlap the first logic circuit; and

a plurality of second bonding electrodes electrically connected to the second logic circuit and arranged so as to overlap the second logic circuit,

wherein the plurality of unit areas are arranged in a plurality of rows, each of which includes ones of the plurality of unit areas arranged in a first direction, the plurality of rows being arranged in a second direction, and

wherein the positions of the unit areas in adjacent rows of the plurality of rows are shifted by a half pitch in the first direction.

17. The device of claim 16, wherein the first logic circuit includes a sense amplifier, and the second logic circuit includes a sub-word driver.

18. The device of claim 17, further comprising a first wiring layer having a plurality of first wirings extending in the first direction and a second wiring layer having a plurality of second wirings extending in the second direction.

19. The device of claim 18, further comprising a third wiring layer having a third wiring extending in the first and second directions so as not to overlap the plurality of first and second bonding electrodes.

20. The device of claim 19, wherein the third wiring transmits a predetermined signal asynchronous with operations of the sense amplifier and the sub-word driver.
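The positional relationship recited in claims 1 and 2 can be hard to visualize from the text alone. The following minimal sketch (illustrative only; the pitch value and all names are hypothetical, not taken from the claims) models two memory cell arrays staggered by a half pitch and checks the claimed ordering of the bonding electrode groups along the first (x) direction:

# Illustrative sketch of the claim 1/2 geometry; not part of the patent.
# Arrays are modeled as 1-D intervals along the first (x) direction; a bonding
# electrode group sits at each x-direction end of its array.

ARRAY_PITCH = 8.0  # hypothetical array pitch, arbitrary units

# First memory cell array and, in the adjacent row, the second memory cell
# array shifted by a half pitch in the x direction.
array1_left = 0.0
array2_left = array1_left + ARRAY_PITCH / 2  # half-pitch stagger

g1 = array1_left                  # first group: one end of the first array
g2 = array1_left + ARRAY_PITCH    # second group: the other end of the first array
g3 = array2_left                  # third group: one end of the second array
g4 = array2_left + ARRAY_PITCH    # fourth group: the other end of the second array

# Claim 1: the third group lies between the first and second groups in x.
assert g1 < g3 < g2
# Claim 2: the second group lies between the third and fourth groups in x.
assert g3 < g2 < g4
print("group x-positions:", g1, g2, g3, g4)

With a half-pitch stagger, both assertions hold for any positive array pitch, which is the geometric point of these two claims.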
Semiconductor memory device with multiple chips connected by hybrid bonding method

BACKGROUND

A method is known for use in a semiconductor device such as a DRAM (Dynamic Random Access Memory) in which a memory chip with a memory cell array and a logic chip with logic circuits, including sense amplifiers and word drivers, are manufactured on two different wafers, and the resulting two wafers are hybrid bonded. According to this method, the memory cell arrays and the logic circuits can be manufactured through separate processes, thereby allowing the individual process conditions to be optimized.

However, the arrangement pitch of the bit lines or sub-word lines formed on the memory chip is significantly smaller than the arrangement pitch of the bonding electrodes used in hybrid bonding. Therefore, in this type of semiconductor device, both the memory chip and the logic chip require wiring layers for pitch conversion. The details of this aspect are disclosed in US 2015/0287706 A1, filed by the inventor. The memory chip disclosed in US 2015/0287706 A1 has a configuration in which a plurality of memory cell arrays are laid out in a matrix (in both the bit line direction and the sub-word line direction), so that wiring that transfers signals asynchronous with the operation of the memory cell arrays passes through the sense amplifier area or the sub-word driver area, with the result that noise can easily be superimposed on the sub-word lines or bit lines.

SUMMARY

A semiconductor memory device including a plurality of chips connected by a hybrid bonding method is disclosed. An example device includes a first semiconductor chip and a second semiconductor chip. The first semiconductor chip includes a plurality of memory cell arrays and a plurality of first bonding electrodes electrically connected to the memory cell arrays. The second semiconductor chip includes a logic circuit and a plurality of second bonding electrodes electrically connected to the logic circuit. The first and second semiconductor chips are stacked on each other such that each of the plurality of first bonding electrodes is electrically connected to an associated one of the plurality of second bonding electrodes. Each of the memory cell arrays includes a plurality of first signal lines extending in a first direction, a plurality of second signal lines extending in a second direction different from the first direction, and a plurality of memory cells each arranged at an associated one of intersections of the plurality of first signal lines and the plurality of second signal lines. The plurality of memory cell arrays include first and second memory cell arrays adjacent to each other in the second direction. The plurality of first bonding electrodes include a first group, a second group, and a third group. The first group is positioned at one end of the first memory cell array in the first direction and is electrically connected to predetermined ones of the plurality of first signal lines in the first memory cell array. The second group is positioned at the other end of the first memory cell array in the first direction and is electrically connected to the remaining ones of the plurality of first signal lines in the first memory cell array. The third group is positioned at one end of the second memory cell array in the first direction and is electrically connected to predetermined ones of the plurality of first signal lines in the second memory cell array.
The position of the third group in the first direction is between the positions of the first and second groups in the first direction.

In another aspect of the invention, an example device includes a plurality of memory cell arrays and a plurality of first bonding electrodes. Each of the plurality of memory cell arrays includes a plurality of first signal lines extending in a first direction, a plurality of second signal lines extending in a second direction different from the first direction, and a plurality of memory cells each arranged at an associated one of intersections of the plurality of first signal lines and the plurality of second signal lines. Each of the plurality of first bonding electrodes is electrically connected to an associated one of the plurality of first signal lines. The plurality of memory cell arrays are arranged in a plurality of rows, each of which includes ones of the plurality of memory cell arrays arranged in the first direction, and the plurality of rows are arranged in the second direction. The positions of the memory cell arrays in adjacent rows of the plurality of rows are shifted by a half pitch in the first direction.

In another aspect of the invention, an example device includes a plurality of unit areas, a plurality of first bonding electrodes, and a plurality of second bonding electrodes. Each of the plurality of unit areas includes a first logic circuit and a second logic circuit. The plurality of first bonding electrodes are electrically connected to the first logic circuit and arranged so as to overlap the first logic circuit. The plurality of second bonding electrodes are electrically connected to the second logic circuit and arranged so as to overlap the second logic circuit. The plurality of unit areas are arranged in a plurality of rows, each of which includes ones of the plurality of unit areas arranged in the first direction, and the plurality of rows are arranged in the second direction. The positions of the unit areas in adjacent rows of the plurality of rows are shifted by a half pitch in the first direction.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram illustrating the appearance of the semiconductor device according to a first embodiment.

FIG. 2 is a schematic cross-sectional view of the area A shown in FIG. 1.

FIG. 3 is a schematic plan view of a memory chip according to an embodiment of the present invention.

FIG. 4 is a circuit diagram of a memory cell according to an embodiment of the present invention.

FIG. 5 is a schematic plan view of a logic chip according to an embodiment of the present invention.

FIG. 6 is an enlarged plan view of the area B shown in FIG. 5.

FIG. 7 is a schematic diagram illustrating the appearance of the semiconductor device according to a second embodiment.

FIG. 8 is a schematic cross-sectional view of a memory chip taken along a bit line according to an embodiment of the present invention.

FIG. 9 is a schematic cross-sectional view of a memory chip taken along a sub-word line according to an embodiment of the present invention.

DETAILED DESCRIPTION

Hereinafter, various embodiments of the present invention will be explained in detail with reference to the accompanying drawings. The following detailed description refers to the accompanying drawings, which illustrate specific aspects and embodiments in which the invention may be practiced.
These embodiments are described in sufficient detail to enable those skilled in the art to practice the present invention. Other embodiments can be utilized, and structural, logical, and electrical changes can be made without departing from the scope of the present invention. The various embodiments disclosed herein are not necessarily mutually exclusive, because some of the disclosed embodiments can be combined with one or more other disclosed embodiments to form new embodiments.

As illustrated in FIG. 1, the semiconductor device 2 according to the first embodiment of the present invention has a configuration in which a memory chip 10 and a logic chip 20 are laminated. A plurality of external terminal electrodes 4 are formed on the surface of the logic chip 20.

As illustrated in FIG. 2, the memory chip 10 has a memory cell array 11 including a plurality of memory cells, a bit line connection area 12, and a word line connection area 13. The bit line connection area 12 is connected to the bit lines in the memory cell array 11. The word line connection area 13 is connected to the sub-word lines in the memory cell array 11. The bit lines and sub-word lines in the memory cell array 11 cross each other. The memory cell array 11, the bit line connection area 12, and the word line connection area 13 are covered with an interlayer dielectric film 14. The bit line connection area 12 is connected to the overlying wiring layer 17 via conductors 15 penetrating the interlayer dielectric film 14. The word line connection area 13 is connected to the overlying wiring layer 17 via conductors 16 penetrating the interlayer dielectric film 14. The wiring layer 17 is covered with an interlayer dielectric film 18. The wiring layer 17 is a layer provided for pitch conversion. The wirings on the wiring layer 17 are connected to a plurality of bonding electrodes BE1 exposed at the upper surface of the interlayer dielectric film 18.

The logic chip 20 has a transistor region 21 including a plurality of transistors constituting sense amplifiers or word drivers, and interlayer dielectric films 22 to 25 covering the transistor region 21. Wiring layers M1, M2, and M3 are formed on the interlayer dielectric films 22, 23, and 24, respectively. The wiring layers M1, M2, and M3 are interconnected via conductors 26 penetrating the interlayer dielectric films 23 and 24. The wirings on the wiring layer M3 are connected to a plurality of bonding electrodes BE2 exposed at the upper surface of the interlayer dielectric film 25.

The memory chip 10 and the logic chip 20 are hybrid bonded such that the bonding electrodes BE1 and the bonding electrodes BE2 are properly aligned. Thus, the bit lines and the sub-word lines provided on the memory chip 10 are electrically connected to the transistor region 21 provided on the logic chip 20.

As illustrated in FIG. 3, the memory chip 10 has a plurality of memory cell arrays 30. The memory chip 10 according to the present embodiment has memory cell array rows R11, R12, R13, ..., in each of which a plurality of memory cell arrays 30 are arranged in the x direction, and the memory cell array rows R11, R12, R13, ... are arranged in the y direction. The x-direction positions of the memory cell arrays 30 constituting memory cell array rows adjacent to each other in the y direction are shifted by a half pitch, with the result that the memory cell arrays 30 are arranged in a staggered manner.
For example, the x-direction positions of the memory cell arrays 30 constituting the memory cell array row R11 and the x-direction positions of the memory cell arrays 30 constituting the memory cell array row R12 are shifted by a half pitch, and the x-direction positions of the memory cell arrays 30 constituting the memory cell array row R12 and the x-direction positions of the memory cell arrays 30 constituting the memory cell array row R13 are shifted by a half pitch. The x-direction positions of the memory cell arrays 30 constituting the memory cell array row R11 are aligned with the x-direction positions of the memory cell arrays 30 constituting the memory cell array row R13. However, the shift amount in the x direction between memory cell arrays 30 adjacent in the y direction need not be 1/2 pitch, but may be, for example, 1/3 pitch. In addition, not all memory cell arrays 30 adjacent in the y direction need to be shifted in the x direction, and the x-direction positions of some memory cell arrays 30 adjacent in the y direction can be aligned.

The memory cell array 30 is defined by the extension range of the bit lines BL extending in the x direction and the sub-word lines SWL extending in the y direction. Therefore, the size of the memory cell array 30 in the x direction is almost the same as the length of a bit line BL in the x direction, and the size of the memory cell array 30 in the y direction is almost the same as the length of a sub-word line SWL in the y direction. In FIG. 3, two bit lines BL1 and BL2 and one sub-word line SWL are illustrated, and memory cells MC1 and MC2 are arranged at the intersections between them. The ends of the bit lines BL1 and BL2 are connected to a sense amplifier provided on the logic chip 20 via the bit line connection area 12. The end of the sub-word line SWL is connected to a sub-word driver provided on the logic chip 20 via the word line connection area 13. The memory chip 10 according to the present embodiment has a so-called open bit line configuration, in which one of the two bit lines included in one memory cell array 30 (for example, bit line BL1) is connected to the bit line connection area 12 positioned at one end in the x direction, and the other (for example, bit line BL2) is connected to the bit line connection area 12 positioned at the other end in the x direction.

A bit line bonding area 31, in which the bonding electrodes BE1 corresponding to the bit lines BL are disposed, is provided at a portion overlapping the bit line connection area 12. In addition, a word line bonding area 32, in which the bonding electrodes BE1 corresponding to the sub-word lines SWL are disposed, is provided at a portion overlapping the word line connection area 13. In this embodiment, the bit line bonding areas 31 are arranged at both ends of each memory cell array 30 in the x direction, and the word line bonding areas 32 are arranged at both ends of each memory cell array 30 in the y direction. The plurality of bonding electrodes BE1 arranged in one bit line bonding area 31 or one word line bonding area 32 constitute a group.

For example, the bonding electrodes BE1 corresponding to the bit lines BL of the memory cell array 30A illustrated in FIG. 3 constitute a first group G1 connected to ones of the bit lines BL and a second group G2 connected to the other ones of the bit lines BL.
Similarly, the bonding electrodes BE1 corresponding to the bit lines BL of the memory cell array 30B constitute a third group G3 connected to ones of the bit lines BL and a fourth group G4 connected to the other ones of the bit lines BL. As described above, in this embodiment, the memory cell arrays 30 are arranged in a zigzag shape, so that the group G3 is positioned between the group G1 and the group G2 in the x direction, and the group G2 is positioned between the group G3 and the group G4 in the x direction. The bonding electrodes BE1 corresponding to the sub-word lines SWL of the memory cell array 30A constitute a fifth group G5. Similarly, the bonding electrodes BE1 corresponding to the sub-word lines SWL of the memory cell array 30B constitute a sixth group G6. The x-direction position of the fifth group G5 overlaps the x-direction position of the third group G3, and the x-direction position of the sixth group G6 overlaps the x-direction position of the second group G2.

As illustrated in FIG. 4, the memory cells MC1 and MC2 are each a DRAM cell composed of a cell transistor 41 and a cell capacitor 42 connected in series. The gate electrode of the cell transistor 41 is connected to its corresponding sub-word line SWL, one of the source and drain of the cell transistor 41 is connected to its corresponding bit line BL1 or BL2, and the other of the source and drain of the cell transistor 41 is connected to its corresponding cell capacitor 42. A plurality of memory cells thus configured, a plurality of bit lines BL, and a plurality of sub-word lines SWL are formed on the memory chip 10, while the sense amplifiers for amplifying the potentials of the bit lines BL and the sub-word drivers for driving the sub-word lines SWL are integrated on the logic chip 20.

As illustrated in FIG. 5, the logic chip 20 has a plurality of unit areas 50. The position, shape, size, and number of the plurality of unit areas 50 correspond to the position, shape, size, and number of the plurality of memory cell arrays 30 provided on the memory chip 10. That is, the logic chip 20 has unit rows R21, R22, R23, ..., in each of which a plurality of unit areas 50 are arranged in the x direction, and the unit rows R21, R22, R23, ... are arranged in the y direction. The x-direction positions of the unit areas 50 constituting unit rows adjacent to each other in the y direction are shifted by a half pitch. For example, the x-direction positions of the unit areas 50 constituting the unit row R21 and the x-direction positions of the unit areas 50 constituting the unit row R22 are shifted by a half pitch, and the x-direction positions of the unit areas 50 constituting the unit row R22 and the x-direction positions of the unit areas 50 constituting the unit row R23 are shifted by a half pitch. The x-direction positions of the unit areas 50 constituting the unit row R21 are aligned with the x-direction positions of the unit areas 50 constituting the unit row R23.

When the memory chip 10 and the logic chip 20 are hybrid bonded, the plurality of memory cell arrays 30 included in the memory chip 10 and the unit areas 50 included in the logic chip 20 completely overlap each other, as observed in the lamination direction (z direction).

In this embodiment, as illustrated in FIG. 5, the sense amplifier areas 51 are arranged at both ends of each unit area 50 in the x direction, and the sub-word driver areas 52 are arranged at both ends of each unit area 50 in the y direction.
The sense amplifier area 51 is an area in which the transistors constituting the sense amplifiers and the bonding electrodes BE2 connected to the bit lines BL are disposed, and it overlaps the bit line bonding area 31 of FIG. 3. The sub-word driver area 52 is an area in which the transistors constituting the sub-word drivers and the bonding electrodes BE2 connected to the sub-word lines SWL are disposed, and it overlaps the word line bonding area 32 of FIG. 3.

The width of the sense amplifier area 51 in the y direction is almost equal to the width of the unit area 50 in the y direction. Therefore, the edge of the unit area 50 in the x direction is completely covered by the sense amplifier area 51. On the other hand, the width of the sub-word driver area 52 in the x direction is smaller than the width of the unit area 50 in the x direction, and therefore the edge of the unit area 50 in the y direction is not completely covered by the sense amplifier area 51 and the sub-word driver area 52. The part of the unit area 50 that is not covered by the sense amplifier area 51 and the sub-word driver area 52 is used as a peripheral circuit area 53, in which various logic circuits other than the sense amplifiers and sub-word drivers, such as address latch circuits, command decoders, address decoders, FIFO circuits, mode registers, DLL circuits, and power supply circuits, are arranged. In this embodiment, the peripheral circuit area 53 has an H shape, as viewed in the lamination direction (z direction).

As illustrated in FIG. 6, the sense amplifier area 51 includes a plurality of signal wirings 61 arranged in the y direction at the same pitch as the pitch of the bit lines BL, and bonding electrodes 62 each connected to the x-direction end of the corresponding signal wiring 61. The x-direction distance between the ends of signal wirings 61 adjacent in the y direction is set to several times the arrangement pitch of the signal wirings 61. This expands the pitch of the signal wirings 61 to the pitch of the bonding electrodes 62. In addition, in the present embodiment, the bonding electrodes 62 are arranged in a zigzag shape, so that the distance between bonding electrodes 62 adjacent in the y direction is also increased. The sub-word driver area 52 includes a plurality of signal wirings 63 arranged in the x direction at the same pitch as the pitch of the sub-word lines SWL, and bonding electrodes 64 each connected to the y-direction end of the corresponding signal wiring 63. The y-direction distance between the ends of signal wirings 63 adjacent in the x direction is set to several times the arrangement pitch of the signal wirings 63. This expands the pitch of the signal wirings 63 to the pitch of the bonding electrodes 64. In addition, in the present embodiment, the bonding electrodes 64 are arranged in a zigzag shape, so that the distance between bonding electrodes 64 adjacent in the x direction is also increased.

In this embodiment, the plurality of memory cell arrays 30 included in the memory chip 10 are arranged in a zigzag shape, so that the plurality of unit areas 50 included in the logic chip 20 can also be arranged in a zigzag shape.
Therefore, as illustrated in FIG. 5, the positions of the sense amplifier areas 51 allocated to unit areas 50 adjacent in the y direction are shifted by a half pitch in the x direction, and similarly, the positions of the sub-word driver areas 52 allocated to unit areas 50 adjacent in the y direction are shifted by a half pitch in the x direction. Therefore, in the wiring layer M3, the areas where the bonding electrodes BE2 are formed are arranged in a zigzag shape, making it possible to freely route the wiring S3 formed on the wiring layer M3 in the x and y directions. The wiring S3 formed on the wiring layer M3 does not pass through the sense amplifier areas 51 and the sub-word driver areas 52, so that even when a signal asynchronous with the operation of the sense amplifiers or the sub-word drivers is transmitted, the signal does not become noise to the sense amplifiers or sub-word drivers.

The wiring layer M1 is a layer on which wirings extending in the x direction are arranged, and the wiring layer M2 is a layer on which wirings extending in the y direction are arranged. In this case, as illustrated in FIG. 5, the global bit line GBL extending in the x direction may be disposed on the wiring layer M1, and the main word line MWL extending in the y direction may be disposed on the wiring layer M2. In addition, on the wiring layer M2, another wiring S2 may be arranged so as to pass between the sense amplifier areas 51 adjacent in the y direction. The wiring S2 does not pass through the sense amplifier areas 51 and the sub-word driver areas 52, and the signal passing through the wiring S2 does not become noise to the sense amplifiers or the sub-word drivers. In addition, on the wiring layer M1, a wiring S1 connected to the transistors constituting the peripheral circuit area 53 may be disposed. The wiring S1 does not pass through the sense amplifier areas 51 and the sub-word driver areas 52, and the signal passing through the wiring S1 does not become noise to the sense amplifiers or the sub-word drivers.

As illustrated in FIG. 7, the semiconductor device 6 according to the second embodiment of the present invention has a configuration in which two memory chips 10A and 10B and a logic chip 20 are laminated. As illustrated in FIGS. 8 and 9, the memory chip 10A includes bonding electrodes BE3 and BE4 on both surfaces thereof. The memory chip 10B is a chip having the same configuration as that of the memory chip 10 in the first embodiment and includes bonding electrodes BE1 on one surface thereof. The bonding electrodes BE4 of the memory chip 10A are connected to the bonding electrodes BE1 of the memory chip 10B, respectively. The bonding electrodes BE3 of the memory chip 10A are connected to the bonding electrodes BE2 of the logic chip 20, respectively.

FIG. 8 illustrates the connection relationship between the bit lines BL and the bonding electrodes BE1, BE3, and BE4, and FIG. 9 illustrates the connection relationship between the sub-word lines SWL and the bonding electrodes BE1, BE3, and BE4. As illustrated in FIGS. 8 and 9, the bonding electrodes BE1, BE3, and BE4 whose positions are aligned in the lamination direction are shorted to each other. The bit lines BL are directly connected to their corresponding bonding electrodes BE1, BE3, and BE4, respectively, while the sub-word lines SWL are connected to their corresponding bonding electrodes BE1, BE3, and BE4 via switches SW1 to SW4, respectively.
The switches SW1 to SW4 are operated such that one of them is turned on based on a selection signal supplied via another bonding electrode (not illustrated), and the remaining switches are turned off. Therefore, when the sub-word signal supplied from the sub-word driver area 52 of the logic chip 20 is activated, one of the four sub-word lines SWL illustrated in FIG. 9 is activated, and the memory cells connected to the activated sub-word line SWL are accessed. The data read from the memory cells is supplied to the sense amplifier area 51 of the logic chip 20 via one of the bit lines BL illustrated in FIG. 8.

As described above, in the present embodiment, two memory chips 10A and 10B are laminated on the logic chip 20, making it possible to obtain a memory capacity twice that of the semiconductor device 2 according to the first embodiment.

Although the present invention has been disclosed in the context of certain preferred embodiments and examples, those skilled in the art should understand that the present invention extends beyond the explicitly disclosed embodiments to other alternative embodiments and/or uses of the present invention and obvious modifications and equivalents thereof. In addition, based on the present disclosure, those skilled in the art will readily appreciate other modifications within the scope of the present invention. It is also contemplated that various combinations or sub-combinations of the specific features and aspects of the embodiments may be made and still fall within the scope of the present invention. It should be understood that various features and aspects of the disclosed embodiments can be combined with, or substituted for, one another to form different modes of the disclosed invention. Therefore, it is intended that the scope of at least some of the inventions disclosed herein should not be limited by the specific embodiments disclosed above.

For example, although a DRAM is exemplified as the application target of the present invention, the present invention can be applied to other RAMs such as MRAMs.
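As an aid to reading the second embodiment, the one-of-four selection described above for the switches SW1 to SW4 can be modeled in a few lines (a minimal behavioral sketch with hypothetical names; the patent describes hardware switches, not software):

# Behavioral sketch of the SW1-SW4 selection (illustrative only).
# Exactly one switch is turned on by the selection signal, so the sub-word
# signal from the logic chip reaches exactly one stacked sub-word line.

def drive_sub_word_lines(selection, sub_word_signal, n_switches=4):
    """Return the logic level seen by each of the n stacked sub-word lines."""
    if not 0 <= selection < n_switches:
        raise ValueError("selection out of range")
    switches = [i == selection for i in range(n_switches)]  # one-hot: one on, rest off
    return [sub_word_signal and sw for sw in switches]

# Activating the sub-word signal with selection=2 activates only the third
# sub-word line SWL; the other three remain inactive.
assert drive_sub_word_lines(2, True) == [False, False, True, False]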
A library of cells for designing an integrated circuit, the library comprising continuous diffusion compatible (CDC) cells. A CDC cell includes a p-doped diffusion region electrically connected to a supply rail and continuous from the left edge to the right edge of the CDC cell; a first polysilicon gate disposed above the p-doped diffusion region and electrically connected to the p-doped diffusion region; an n-doped diffusion region electrically connected to a ground rail and continuous from the left edge to the right edge; a second polysilicon gate disposed above the n-doped diffusion region and electrically connected to the n-doped diffusion region; a left floating polysilicon gate disposed over the p-doped and n-doped diffusion regions and proximal to the left edge; and a right floating polysilicon gate disposed over the p-doped and n-doped diffusion regions and proximal to the right edge.
CLAIMS

WHAT IS CLAIMED IS:

1. An apparatus comprising an integrated circuit, the integrated circuit comprising:

a supply rail;

a ground rail; and

a first cell, the first cell having a left edge and a right edge, the first cell comprising:

a p-doped diffusion region contained in the first cell and continuous from the left edge to the right edge, the p-doped diffusion region electrically connected to the supply rail;

a first polysilicon gate disposed above the p-doped diffusion region and electrically connected to the supply rail;

an n-doped diffusion region contained in the first cell and continuous from the left edge to the right edge, the n-doped diffusion region electrically connected to the ground rail;

a second polysilicon gate disposed above the n-doped diffusion region and electrically connected to the ground rail;

a left floating polysilicon gate disposed over the p-doped and n-doped diffusion regions and proximal to the left edge; and

a right floating polysilicon gate disposed over the p-doped and n-doped diffusion regions and proximal to the right edge.

2. The apparatus of claim 1, the integrated circuit further comprising:

a second cell, the second cell having a left edge, the second cell comprising:

a p-doped diffusion region;

an n-doped diffusion region;

a p-doped bridge diffusion region connecting together the p-doped diffusion regions of the first and second cells; and

an n-doped bridge diffusion region connecting together the n-doped diffusion regions of the first and second cells.

3. The apparatus of claim 2, wherein the p-doped bridge diffusion region and the n-doped bridge diffusion region are each disposed below the right floating polysilicon gate.

4. The apparatus of claim 2, wherein the p-doped bridge diffusion region and the n-doped bridge diffusion region are each disposed above the right floating polysilicon gate.

5. The apparatus of claim 2, wherein the p-doped bridge diffusion region and the n-doped bridge diffusion region are each disposed below the left floating polysilicon gate.

6. The apparatus of claim 2, wherein the p-doped bridge diffusion region and the n-doped bridge diffusion region are each disposed above the left floating polysilicon gate.

7. The apparatus of claim 2, the second cell comprising a left floating polysilicon gate disposed over the p-doped region and the n-doped region of the second cell and proximal to the left edge of the second cell.

8. The apparatus of claim 7, the second cell having a right edge, wherein the p-doped region of the second cell is continuous from the left edge of the second cell to the right edge of the second cell, and wherein the n-doped region of the second cell is continuous from the left edge of the second cell to the right edge of the second cell.

9. The apparatus of claim 1, wherein the apparatus is selected from the group consisting of a phone, a tablet, a base station, and a computer system.

10. A method to bridge together cells in an integrated circuit, the method comprising:

adding a marker layer to each edge of a cell;

for each marker shape touching two diffusion edges, growing a shortest diffusion edge of the two diffusion edges touched by the each marker shape, wherein the growth is a width of the each marker shape;

applying a Boolean AND to grown diffusion edges and marker shapes to define new diffusion regions;

growing each new diffusion region to have a polysilicon pitch of the integrated circuit; and

growing each floating gate proximal to a grown new diffusion region.
11. The method of claim 10, wherein growing each floating gate proximal to a grown new diffusion region includes growing the each floating gate under the grown new diffusion region.

12. A method to bridge together cells in an integrated circuit, the method comprising:
during post placement of cells in a design, inserting continuous diffusion compatible (CDC) filler cells in the design;
grouping and ordering cells on a placement row to form a list of cells; and
traversing the list of cells in order, wherein
if a cell in the list of cells is a CDC cell and a neighboring cell of the cell is a CDC cell, the cell and its neighboring cell each comprising diffusion regions, then creating a bridge cell at an edge of the cell to overlap the cell and its neighboring cell so as to electrically connect together the diffusion regions of the cell and its neighboring cell.

13. The method of claim 12, further comprising:
if a second cell in the list of cells is a filler cell and a neighboring cell of the second cell is a CDC cell, the second cell and its neighboring cell each comprising diffusion regions, then replacing the filler cell with a filler cell having a diffusion profile that provides the neighboring cell of the second cell with improved performance and creating a bridge cell at an edge of the second cell to overlap the second cell and its neighboring cell so as to electrically connect together the diffusion regions of the second cell and its neighboring cell.

14. A method to generate filler cells in an integrated circuit, the method comprising:
adding a first set of placement constraints in a design, the first set of placement constraints to enforce minimum spacing between continuous diffusion compatible (CDC) cells and non-CDC cells;
selecting and inserting end cap cells to terminate voids between CDC and non-CDC cells in the design due to the minimum spacing; and
creating filler cells where there are placement voids between CDC cells on a placement row in the design.

15. The method of claim 14, further comprising:
adding a second set of placement constraints to provide placement constraints in addition to the first set of placement constraints, the second set of placement constraints to prevent a first type of cell in the design from abutting a second type of cell in the design; and
selecting and inserting dual end cap cells to terminate voids between incompatible CDC cells in the design.
METHOD AND APPARATUS FOR A DIFFUSION BRIDGED CELL LIBRARY

Claim of Priority under 35 U.S.C. §119

[0001] The present Application for Patent claims the benefit of U.S. Provisional Application No. 61/836,309, entitled "METHOD AND APPARATUS FOR A DIFFUSION BRIDGED CELL LIBRARY", filed June 18, 2013, assigned to the assignee hereof, and expressly incorporated herein by reference.

Field of Disclosure

[0002] Embodiments of the present invention relate to design of integrated circuits.

Background

[0003] In modern process technologies, transistor performance is highly dependent on the length of diffusion (LOD) past the transistor gate. This dependency may be caused by stress differences in the diffused region, depending on distance to shallow trench isolation (STI), and differences in localized heating between STI and diffused areas during flash annealing, to name just a couple of examples.

[0004] In many modern CMOS logic cell libraries, the diffusion is typically laid out in two rows: one row for p-type MOSFETs (pMOSFETs; Metal Oxide Semiconductor Field Effect Transistors) and another row for nMOSFETs. Typical logic cell libraries break the diffusion at each cell's edge (border) in order to electrically isolate the transistors inside the cell from neighboring cells. Additionally, there are restrictions on the polysilicon (or metal-gate) layers that enforce a fixed patterning for the layer. These polysilicon patterning rules are such that typical logic cell layouts have a dummy polysilicon feature at a cell edge.

[0005] In other types of cell libraries, known as gate-array, a uniform diffusion and polysilicon pattern is used. In some forms of gate-array, the diffused area is not broken at the cell edge; instead, MOSFETs that are turned OFF are used to electrically isolate logically non-equivalent nodes. One byproduct of these cell architectures is that the diffusion sizes (and thus MOSFET widths) are similar for all logic circuits that comprise the basic template of the gate array.

SUMMARY

[0006] Embodiments of the invention are directed to systems and methods for a diffusion bridged cell library.

[0007] Because transistor performance is highly influenced by breaks in the diffusion area, embodiments provide a cell architecture that is expected to maximize the benefit of the LOD effects so as to improve performance or reduce leakage power.

[0008] In an embodiment, an integrated circuit includes a cell having a p-doped diffusion region contained in the cell and continuous from its left edge to its right edge, where the p-doped diffusion region is electrically connected to a supply rail. The cell further includes a first polysilicon gate disposed above (i.e., created in a later fabrication step) the p-doped diffusion region and electrically connected to the supply rail; an n-doped diffusion region contained in the cell and continuous from the left edge to the right edge, and electrically connected to a ground rail; a second polysilicon gate disposed above the n-doped diffusion region and electrically connected to the ground rail; a left floating polysilicon gate disposed over the p-doped and n-doped diffusion regions and proximal to the left edge; and a right floating polysilicon gate disposed over (i.e., created in a later fabrication step) the p-doped and n-doped diffusion regions and proximal to the right edge.

[0009] In another embodiment, a method bridges together cells in an integrated circuit.
The method includes adding a marker layer to each edge of a cell; for each marker shape touching two diffusion edges, growing a shortest diffusion edge of the two diffusion edges touched by the each marker shape, wherein the growth is a width of the each marker shape; applying a Boolean AND to grown diffusion edges and marker shapes to define new diffusion regions; growing each new diffusion region to have a polysilicon pitch of the integrated circuit; and growing each floating gate proximal to a grown new diffusion region.

[0010] In another embodiment, a method bridges together cells in an integrated circuit. The method includes, during post placement of cells in a design, inserting continuous diffusion compatible (CDC) filler cells in the design; grouping and ordering cells on a placement row to form a list of cells; and traversing the list of cells in order, wherein if a cell in the list of cells is a CDC cell and a neighboring cell of the cell is a CDC cell, the cell and its neighboring cell each comprising diffusion regions, then creating a bridge cell at an edge of the cell to overlap the cell and its neighboring cell so as to electrically connect together the diffusion regions of the cell and its neighboring cell.

[0011] In another embodiment, a method generates filler cells in an integrated circuit. The method includes adding a first set of placement constraints in a design, the first set of placement constraints to enforce minimum spacing between continuous diffusion compatible (CDC) cells and non-CDC cells; selecting and inserting end cap cells to terminate voids between CDC and non-CDC cells in the design due to the minimum spacing; and creating filler cells where there are placement voids between CDC cells on a placement row in the design.

[0012] In another embodiment, a method de-tunes cells in an integrated circuit. The method includes generating a bridged timing model for a design comprising bridging cells or end cap cells; generating an un-bridged timing model for the design in which bridging cells or end cap cells are removed; performing a static timing analysis with the bridged timing model; and, for each cell in a fast-path in the design, determining the cells neighboring the each cell, and if both neighboring cells are not setup critical within some margin, then removing from the design any bridging cells neighboring the cell.

[0013] In another embodiment, an integrated circuit includes means for providing a supply voltage; means for providing a ground voltage; and a cell. The cell includes a p-doped diffusion region contained in the cell and continuous from its left edge to its right edge, and electrically connected to the means for providing a supply voltage; a first polysilicon gate disposed above the p-doped diffusion region and electrically connected to the means for providing a supply voltage; an n-doped diffusion region contained in the cell and continuous from the left edge to the right edge, and electrically connected to the means for providing a ground voltage; a second polysilicon gate disposed above the n-doped diffusion region and electrically connected to the means for providing a ground voltage; a left floating polysilicon gate disposed over the p-doped and n-doped diffusion regions and proximal to the left edge; and a right floating polysilicon gate disposed over the p-doped and n-doped diffusion regions and proximal to the right edge.

[0014] In another embodiment, a method bridges together cells in an integrated circuit.
The method includes adding a marker layer to each edge of a cell; means for growing, the means for growing to grow a shortest diffusion edge of two diffusion edges both touched by a marker shape, wherein the means for growing grows the shortest diffusion edge to have a width of the marker shape; means for defining new diffusion regions, the means for defining new diffusion regions to apply a Boolean AND to grown diffusion edges and marker shapes to define new diffusion regions; growing each new diffusion region to have a polysilicon pitch of the integrated circuit; and growing each floating gate proximal to a grown new diffusion region.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] The accompanying drawings are presented to aid in the description of embodiments of the invention and are provided solely for illustration of the embodiments and not limitation thereof.

[0016] Figure 1 illustrates a cell layout according to an embodiment.

[0017] Figure 2 illustrates a method for forming continuous diffusion regions according to an embodiment.

[0018] Figure 3 illustrates a method for forming bridging cells according to an embodiment.

[0019] Figure 4 illustrates a method for forming end cap and filler cells according to an embodiment.

[0020] Figure 5 illustrates a method for forming dual end cap cells according to an embodiment.

[0021] Figure 6 illustrates a method for de-tuning a fast path according to an embodiment.

[0022] Figure 7 illustrates a communication system in which embodiments may find application.

DETAILED DESCRIPTION

[0023] Embodiments are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternate embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.

[0024] The term "embodiments of the invention" does not require that all embodiments of the invention include the discussed feature, advantage or mode of operation.

[0025] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0026] Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that specific circuits (e.g., application specific integrated circuits (ASICs)), one or more processors executing program instructions, or a combination of both, may perform the various actions described herein. Additionally, the sequences of actions described herein can be considered to be embodied entirely within any form of a non-transitory, computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein.
Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, "logic configured to" perform the described action.

[0027] Embodiments may provide cell library architectures with some or all of the features described below. It is expected that such features improve performance for a given process technology that exhibits enhanced device performance for increased LOD.

[0028] Embodiments may provide a cell library architecture in which a cell has continuous diffusion regions. Furthermore, embodiments may provide a method for bridging together cells, where some or all such cells have continuous diffusion regions according to the cell library architecture.

[0029] Embodiments may provide a cell layout comprising one or more continuous diffusion regions across the cell, where each diffusion region is fully self-contained inside the cell. Embodiments also provide for a cell comprising one or more transistors tied in an OFF configuration, where the gate for a pMOSFET is at Vdd and the gate for an nMOSFET is at Vss. This feature helps to electrically isolate signal and supply nodes, while preserving a continuous diffusion profile for the cell. Such tied transistors are used instead of STI to isolate electrically separated diffusion regions.

[0030] Embodiments may provide a cell comprising a polysilicon contact on top of a transistor gate. This feature allows the transistor's gate to be tied in a manner that does not reduce the width of the transistors on or around the tied transistor. An embodiment cell also may have the feature whereby the left and right edge portions of the diffusion regions are electrically connected to the supply or ground rail. As will be discussed later, this feature facilitates bridging cells together without shorting a signal line to a supply or ground rail, or shorting a signal line to another signal line.

[0031] Embodiments may provide a cell comprising floating polysilicon stripes at the left or right edges for patterning purposes. The cell library architecture itself does not contain diffusion regions at a cell edge. After instantiation, a floating polysilicon stripe is converted to a floating transistor using a post-processing algorithm, which may be termed bridging.

[0032] Embodiments may provide a cell library with a limited set of diffusion edge profiles at the left and right boundaries. For example, in a particular embodiment, the pMOSFETs and nMOSFETs are each allowed one of two device sizes and a single offset. Thus, for an arbitrary single row of cells with a left edge and right edge, the number of diffusion edge profiles is 2 (for a pMOSFET or an nMOSFET) times 2 (two diffusion heights) times 2 (for either left or right), yielding 8 different profile types for a given cell.

[0033] Embodiments may provide a cell library that includes an encoding to represent the cell's diffusion edge profile, which is stored as a cell name or attribute on the cell.

[0034] Such a cell with one or more of the above features may be referred to as a continuous diffusion compatible cell.
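By way of illustration only, the following Python sketch shows one way such an edge-profile encoding might look. The class and token names (EdgeProfile, P/N, H1/H2, L/R) are hypothetical, since the disclosure only requires that the profile be stored as a cell name or attribute.

from dataclasses import dataclass
from itertools import product

DEVICE_TYPES = ("P", "N")   # pMOSFET row or nMOSFET row
HEIGHTS = ("H1", "H2")      # the two allowed diffusion heights
SIDES = ("L", "R")          # left or right cell boundary

@dataclass(frozen=True)
class EdgeProfile:
    device: str   # "P" or "N"
    height: str   # "H1" or "H2"
    side: str     # "L" or "R"

    def encode(self) -> str:
        """Encode as a suffix that could be appended to a cell name."""
        return f"{self.side}{self.device}{self.height}"

# 2 device types x 2 heights x 2 sides = 8 profile types, as counted in [0032].
ALL_PROFILES = [EdgeProfile(d, h, s)
                for d, h, s in product(DEVICE_TYPES, HEIGHTS, SIDES)]
assert len(ALL_PROFILES) == 8
print(sorted(p.encode() for p in ALL_PROFILES))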
With these features, the floating polysilicon at a cell edge may be converted to a floating-gate MOSFET of a particular device width and length.

[0035] Because the cell library includes continuous diffusion regions internal to the cell, but is constructed to be continuous diffusion compatible (CDC), a technique referred to as bridging may be used to enable continuous diffusion between cells. The purpose of bridging is to connect two cells that are CDC. This is important in achieving the expected performance.

[0036] Figure 1 illustrates the layout of a cell 100 according to an embodiment. The p-doped diffusion region is labeled 102 and the n-doped diffusion region is labeled 104. The diffusion regions 102 and 104 are continuous across the width of the cell. In practice, the p-doped diffusion region 102 is formed in an n-well, but for simplicity the n-well is not shown.

[0037] The polysilicon layers for forming the gates are each labeled 106. A left polysilicon layer, labeled 108, and a right polysilicon layer, labeled 110, are each floating for the purpose of bridging together cells. For ease of illustration, dielectric layers underneath the polysilicon layers are not shown. As discussed later in this description, diffusion is grown to bridge cells together; in Figure 1, p-doped and n-doped grown diffusion regions for bridging are labeled 126 and 128, respectively, and the other cell bridged to the cell 100 is labeled 101. The cell 101 may be an end cap cell or bridging cell, which is discussed later.

[0038] Local interconnect layers 112 are formed in the p-doped region 102 so that the p-doped region is connected to the supply voltage Vdd. The local interconnect layers 114 are formed in the n-doped region 104 so that the n-doped region 104 is connected to the substrate (or ground) voltage Vss. If electrical isolation is desired for portions of the p-doped diffusion region 102, a polysilicon layer, labeled 116, above the p-doped diffusion layer 102, is connected to one of the local interconnect layers 112 by way of an interconnect 118 to provide a pMOSFET turned OFF. The polysilicon layer 116 serves as the gate of the pMOSFET, and is formed above a dielectric layer. The dielectric layer is formed above the p-doped region 102, but is not shown for ease of illustration.

[0039] Similarly, if electrical isolation is desired for portions of the n-doped diffusion region 104, a polysilicon layer, labeled 120, above the n-doped diffusion layer 104, is connected to one of the local interconnect layers 114 by way of an interconnect 122 to provide an nMOSFET turned OFF. The polysilicon layer 120 serves as the gate of the nMOSFET, and is formed above a dielectric layer. The dielectric layer is formed above the n-doped region 104, but is not shown for ease of illustration.

[0040] Note that the profile of a diffusion region may vary along a cell. For example, the numerical label 124 indicates a transition region in the profile (height) for the n-doped diffusion region 104.

[0041] Embodiments may include a bridging step to create new diffusion regions on a floating polysilicon layer at a cell edge that is dependent on the width of diffusion regions in neighboring cells. In the design or layout process, these new diffusion regions may be over or under their respective floating polysilicon layers. Before fabrication, each layer is merged and processed independently and sequenced appropriately in a fabrication line.
Depending on the actual fabrication process, e.g., gate-first or gate-last, such a newly created diffusion region during a bridging step may be processed before or after its respective floating polysilicon layer. Accordingly, it is immaterial whether an embodiment treats a new diffusion region in a bridging step as over or under its respective floating polysilicon layer, and referring to a newly created diffusion region as "on" its respective floating polysilicon layer is to be interpreted to mean that it may be under or over the floating polysilicon layer during the design or layout process.

[0042] Adding such new diffusion regions may create a floating gate parasitic device at a cell edge. If two neighboring device polysilicon features are not equivalent, additional polysilicon shapes may be added to reduce polysilicon feature variation, or perhaps may be added to satisfy layout design rules. A marker layer may be annotated on a floating gate device to prevent the device from being subject to LVS (layout versus schematic) checking.

[0043] The bridging method may be handled in various ways. One approach may be termed a geometric bridging method. In this approach, a marker layer is added at the left or right (or both) edges of all CDC cells in a layout. In post placement, CDC filler cells are inserted where needed. A geometric (shapes-based) processing routine may be used to bridge from cell to cell in a design rule checking (DRC) correct manner. In particular, new diffusion regions may be created, and the length of the floating gate at the cell edge may be grown where new diffusion regions are created.

[0044] An example is illustrated in the flow diagram of Figure 2. A marker layer is added to each floating gate at a cell edge (202). For each marker shape touching two diffusion edges, grow the shortest diffusion edge by the width of its corresponding marker shape (204). New diffusion regions are defined by the Boolean (logical) operator AND applied to each pair of a grown diffusion edge and its corresponding marker shape (206). Each new diffusion is grown at the polysilicon pitch (208). Grow each floating gate under a grown new diffusion based upon the widths of polysilicon lines touching the grown new diffusion (210).

[0045] Another embodiment, which may be termed a row-based bridging method, is illustrated in Figure 3. During post placement, CDC filler cells are inserted (302). Cells on the same placement row are grouped together and ordered, for example from left to right (304). Traversing the list of cells in order: 1) if the current cell is a CDC cell and its neighboring cell is a CDC cell, then a bridge cell is created at the cell edge, overlapping the two cells in order to connect the diffusion regions and to correct design rule errors on base layers that may arise from differences in the diffusion edge profiles; or 2) if the current cell is a filler cell, and the neighboring cell is a CDC cell, then the filler cell is replaced with a filler cell having a diffusion profile that provides the neighboring CDC cell with improved or optimal performance, and also a bridge cell is created at the cell edge as described above.
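As an illustration only, the following Python sketch walks the row-based traversal of Figure 3. The Cell model and the pick_best_filler helper are hypothetical stand-ins; shape-level DRC correction on the base layers is omitted.

from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    is_cdc: bool = False      # continuous diffusion compatible
    is_filler: bool = False

def pick_best_filler(neighbor: Cell) -> Cell:
    """Assumed helper: return a filler whose diffusion profile best suits the
    neighboring CDC cell (e.g., chosen from a pre-computed LOD table)."""
    return Cell(name=f"FILL_FOR_{neighbor.name}", is_cdc=True, is_filler=True)

def bridge_row(row: list[Cell]) -> list[str]:
    """Traverse one placement row left to right and record bridge cells."""
    bridges = []
    for i, cell in enumerate(row[:-1]):
        right = row[i + 1]
        if cell.is_cdc and right.is_cdc:
            # Overlap a bridge cell across the shared edge so the p- and
            # n-doped diffusion regions of the two cells are connected.
            bridges.append(f"BRIDGE({cell.name},{right.name})")
        elif cell.is_filler and right.is_cdc:
            # Swap in a filler with a diffusion profile that improves the CDC
            # neighbor, then bridge across the shared edge as above.
            row[i] = pick_best_filler(right)
            bridges.append(f"BRIDGE({row[i].name},{right.name})")
    return bridges

row = [Cell("INV_X1", is_cdc=True), Cell("FILL1", is_filler=True),
       Cell("NAND2_X1", is_cdc=True)]
print(bridge_row(row))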
[0046] Embodiments also provide a method for intermixing continuous and non-continuous diffusion library cells, or heterogeneous continuous-diffusion library cells. Although continuous diffusion across a cell may increase performance, there may be an area penalty or circuit restrictions (due to the library architecture required for continuous diffusion) that make traditional-architecture library cells useful for certain circuits. In order to have maximum flexibility in a logic netlist, a combination of continuous diffusion and non-continuous diffusion cells is often desired.

[0047] Although at first thought it may appear that using CDC and non-CDC cells together is not optimum, we note that it is important to bridge together CDC cells so that the LOD effects provide improved or optimal performance. Mixing in non-CDC cells will necessitate breaks in the diffusion, which are undesirable, as they will impact the LOD of neighboring CDC cells. In order to mitigate performance degradation of bridged CDC cells, it is suggested to terminate the continuous diffusion regions to ensure appropriate performance of these bridged CDC cells. When terminating groups of CDC cells, a terminating (end-cap) dummy cell should be used to provide an LOD that guarantees appropriate performance for the bridged CDC cells in the group.

[0048] Figure 4 illustrates a method for mixing CDC library cells with non-CDC library cells. The method adds a placement constraint between two cell types to enforce a minimum spacing between CDC and non-CDC library cells (402). End cap cells are selected and inserted to terminate the void between CDC and non-CDC cells caused by the above minimum spacing constraint (404). The method creates filler cells where there are placement voids between CDC cells on a placement row (406).

[0049] Furthermore, the selection and insertion of these cells according to the embodiment of Figure 4 may be chosen to maximize performance of neighboring cells (parametric frequency improvement) by choosing filler and end-cap cells that provide the optimal LOD for the neighboring cell based on a pre-computed table of known good LOD conditions.

[0050] Figure 5 illustrates a method for mixing incompatible continuous-diffusion library cells. Step 502 indicates that additional placement proximity constraints are utilized to prevent some types of CDC cells from adjoining other types of CDC cells. This may occur when the diffusion edge profiles for two continuous diffusion cells are incompatible due to design rule restrictions. As an example, suppose there are three diffusion edge profiles denoted as A, B, and C, with design rules: A may touch A or B; B may touch A, B, or C; and C may touch B or C. For this particular set of design rules, a placement restriction should be defined to enforce a minimum spacing between A and C.

[0051] Dual end cap cells are selected and inserted to terminate the void between incompatible continuous diffusion cells (504). A dual end cap cell is a cell that terminates two different types of incompatible CDC cells. Referring to the above example and method, cell types A and C are placed with a separation constraint of four tracks, where a track is a set of discretized placement positions on a standard cell row, usually having a pitch corresponding to the poly pitch. A dual end cap cell of type "A-C" that terminates diffusion edge profiles for the A and C cell types is inserted between the cells A and C.

[0052] Alternatively, two individual end-cap cells may be inserted instead of a dual-end-cap cell. The encoded diffusion edge profile may be stored as a cell name or attribute to select the appropriate filler cell, end-cap cell, or dual-end-cap cell for optimal cell performance.
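The following Python sketch, offered only as an illustration, combines the Figure 4 mixing flow with the A/B/C compatibility example above. The cell records, action tuples, and helper names are hypothetical stand-ins for a placer's constraint API, not an interface defined by this disclosure.

# Hypothetical sketch: Figure 4 mixing flow (steps 402-406) plus the A/B/C
# edge-profile compatibility rules (steps 502-504).
COMPAT = {
    "A": {"A", "B"},       # A may touch A or B
    "B": {"A", "B", "C"},  # B may touch A, B, or C
    "C": {"B", "C"},       # C may touch B or C
}

def mixing_actions(row, min_spacing_tracks=4):
    """Yield placement actions for one row, left to right. Each cell is a
    dict such as {"name": "INV_X1", "cdc": True, "profile": "A"}."""
    for left, right in zip(row, row[1:]):
        if left["cdc"] != right["cdc"]:
            # Step 402: minimum spacing between CDC and non-CDC cells, then
            # (step 404) an end cap terminating the resulting void.
            yield ("SPACING", left["name"], right["name"], min_spacing_tracks)
            yield ("END_CAP", (left if left["cdc"] else right)["name"])
        elif left["cdc"] and right["cdc"]:
            lp, rp = left["profile"], right["profile"]
            if rp in COMPAT[lp]:
                # Step 406: a filler fills any placement void between
                # compatible CDC cells on the row.
                yield ("FILLER", left["name"], right["name"])
            else:
                # Incompatible profiles (e.g., A next to C): enforce spacing
                # and terminate with a dual end cap of type "A-C" (step 504).
                yield ("SPACING", left["name"], right["name"], min_spacing_tracks)
                yield ("DUAL_END_CAP", f"{lp}-{rp}")

row = [
    {"name": "INV_CDC", "cdc": True, "profile": "A"},
    {"name": "NAND_CDC", "cdc": True, "profile": "C"},   # A-C: incompatible
    {"name": "LEGACY_BUF", "cdc": False, "profile": None},
]
print(list(mixing_actions(row)))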
[0053] One drawback to applying performance-enhancing LOD approaches is that they may result in increased off-current for the transistors. This leads to increased leakage power, as compared to non-continuous diffusion approaches. Ideally, one would prefer to employ continuous diffusion where required for performance, and non-continuous diffusion where leakage power is more important. Accordingly, other embodiments provide a method for reducing leakage power by selectively converting continuous diffusion cells to non-continuous diffusion cells.

[0054] During the synthesis and placement portion of a typical design flow, a technique of mixing may be employed to select between continuous and non-continuous diffusion cells. However, at the end of the design flow it may be preferable to make in-place cell swaps to de-tune non-critical paths in order to reduce leakage power. Additionally, there may be fast-paths in a design that may cause race-conditions. In these cases it also may be desirable to convert continuous diffusion cells to non-continuous diffusion cells in order to prevent races and increase hold-time margin.

[0055] When using a CDC cell architecture, an embodiment methodology may be used to de-tune devices in-place to reduce leakage or reduce race-through by selective removal of bridging cells. Note that deciding which bridges to remove is not straightforward, as removal of a bridging cell may negatively affect the cells on both the left and right of the bridging cell.

[0056] Figure 6 illustrates a flow diagram for the above-described method. In step 602, a timing model is generated based on characterizing the CDC library, assuming the LOD obtained when the library is bridged to neighboring CDC cells or filler and/or end cap cells. This timing model may be termed a bridged or normal timing model. In step 604, a second timing model is generated based on characterizing the CDC library with a different LOD condition than assumed in step 602, such that the bridges are removed. This timing model may be termed an un-bridged timing model. It should have generally lower leakage and be slower than the original characterization of step 602. In step 606, a static timing analysis is performed using the bridged timing model.

[0057] In step 608, for each cell in a fast-path in the design, the cells neighboring that cell (which may be termed a fast-path cell) are determined. In step 610, if both neighboring cells are not setup-critical within some margin, then the bridging cells neighboring the fast-path cell are removed, and the un-bridged timing model is used for the fast-path cell and both neighboring cells. Steps 608 and 610 are repeated for cells that significantly contribute to leakage power but are not setup-critical (step 612).

[0058] For the case of CDC cells that abut filler cells or end-cap cells on the left or right side, the filler and end-cap cells do not need to be adjusted for timing, and thus the above steps may be simplified.

[0059] To achieve additional accuracy and increase the potential for cell swaps, the above method may be extended to provide for four characterizations: a bridged (normal) timing model; a right un-bridged timing model; a left un-bridged timing model; and a both un-bridged timing model.
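Purely as an illustration, the following Python sketch walks the Figure 6 loop extended with the four characterizations. The design and sta objects and their methods are hypothetical stand-ins for a real placement database and static-timing engine, not an interface defined by this disclosure.

# Hypothetical sketch of the Figure 6 de-tuning flow with the four timing
# characterizations; `design` and `sta` are assumed interfaces.
MODELS = ("both-unbridged", "left-unbridged", "right-unbridged", "bridged")

def detune(design, sta, margin_ps=20.0):
    sta.run(model="bridged")                     # step 606: STA, bridged model
    for cell in sta.fast_path_cells():           # step 608: visit fast-path cells
        neighbors = [n for n in design.neighbors(cell) if n is not None]
        # Step 610: de-tune only if no neighbor is setup-critical within margin.
        if any(sta.is_setup_critical(n, margin_ps) for n in neighbors):
            continue
        # Four-characterization extension: attempt the most aggressive bridge
        # removal first, falling back until setup time (plus margin) is met.
        for model in MODELS:
            design.apply_bridging(cell, model)   # remove left/right/both bridges
            sta.set_timing_model(cell, model)
            if not sta.violates_setup(cell, margin_ps):
                break                            # keep this characterization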
These four characterizations allow more granularity, in that a cell that is bridged on both sides may be de-tuned by removing its left bridging cell only, its right bridging cell only, or both left and right bridging cells. For example, the previously described method may be amended to attempt removal of both bridging cells; if timing of either the left or right side would violate setup time (plus margin), then the left or right un-bridged timing model should be selected to attempt simulation of removing only the left or right bridging cell.

[0060] Figure 7 illustrates a communication system in which embodiments may find application. Figure 7 illustrates a wireless communication network 702 comprising base stations 704A, 704B, and 704C. Figure 7 shows a communication device, labeled 706, which may be a mobile cellular communication device such as a so-called smart phone, a tablet, or some other kind of communication device suitable for a cellular phone network, such as a computer or computer system. The communication device 706 need not be mobile. In the particular example of Figure 7, the communication device 706 is located within the cell associated with the base station 704C. Arrows 708 and 710 pictorially represent the uplink channel and the downlink channel, respectively, by which the communication device 706 communicates with the base station 704C.

[0061] Embodiments may be used in data processing systems associated with the communication device 706, or with the base station 704C, or both, for example. Figure 7 illustrates only one application among many in which the embodiments described herein may be employed.

[0062] Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0063] Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

[0064] The methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

[0065] Accordingly, an embodiment of the invention can include a non-transitory computer readable medium embodying a method for a diffusion bridged cell library. Accordingly, the invention is not limited to the illustrated examples, and any means for performing the functionality described herein are included in embodiments of the invention.

[0066] While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
A non-transitory computer readable storage medium is shown having stored thereon instructions executable by one or more processors to perform operations including: receiving a plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a list of frequencies; responsive to receiving the plurality of input parameters, retrieving calibration data from a calibration database; generating a power estimate based on the plurality of input parameters and the calibration data; and providing the power estimate to a resource manager. Alternatively, the input parameters may include (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) an amount of available power, wherein the estimator may provide an estimate of the frequency at which the nodes should operate to utilize as much of the available power as possible without exceeding it.
1. A method comprising:
generating a power estimate based on:
a job characteristic to run on one or more processor cores;
a list of the one or more processor cores;
frequencies associated with the one or more processor cores; and
pre-determined data associated with energy consumed by the one or more processor cores; and
providing the power estimate to a resource manager.

2. The method of claim 1, wherein the pre-determined data is stored in memory.

3. The method of claim 1, wherein the job characteristic is associated with workload power.

4. The method of claim 1, wherein the pre-determined data is to model energy consumed by the one or more processor cores.

5. The method of claim 1, wherein the pre-determined data is scaled.

6. A computing system comprising:
an SOC with multiple processor cores; and
one or more memories to store instructions to cause the SOC to perform a power estimation scheme including:
generating a power estimate based on:
a job type to run on one or more cores of the multiple processor cores;
a list of the multiple processor cores;
frequencies associated with the multiple processor cores; and
pre-determined data to model energy cost associated with at least a portion of the SOC; and
providing the power estimate to a resource manager.

7. The computing system of claim 6, wherein the job type is associated with workload power.

8. The computing system of claim 6, wherein the pre-determined data is stored in memory.

9. The computing system of claim 6, wherein the one or more memories include one of: non-persistent memory, persistent memory, power-backed random access memory, flash memory, phase-change memory, solid-state drive, hard disk drive, optical disc drive, or portable memory device.

10. An apparatus comprising:
means for generating a power estimate based on:
a job characteristic to run on one or more processor cores;
a list of the one or more processor cores;
frequencies associated with the one or more processor cores; and
pre-determined data associated with energy consumed by the one or more processor cores; and
means for providing the power estimate to a resource manager.

11. The apparatus of claim 10, wherein the pre-determined data is stored in memory.

12. The apparatus of claim 10, wherein the job characteristic is associated with workload power.

13. The apparatus of claim 10, wherein the pre-determined data is to model energy consumed by the one or more processor cores.

14. The apparatus of claim 10, wherein the pre-determined data is scaled.

15. A machine-readable storage medium having machine-readable instructions that, when executed, cause one or more machines to perform a method according to any one of claims 1 to 5.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims the benefit of prior U.S. Provisional Patent Application No. 62/040,576, entitled "SIMPLE POWER-AWARE SCHEDULER TO LIMIT POWER CONSUMPTION BY HPC SYSTEM WITHIN A BUDGET", filed on August 22, 2014, which is hereby incorporated by reference in its entirety.

The present application also claims the benefit of prior:
U.S. Patent Application No. 14/582,783 (Attorney Docket No. 42P74562) entitled "METHOD AND APPARATUS TO GENERATE AND USE POWER, THERMAL AND PERFORMANCE CHARACTERISTICS OF NODES TO IMPROVE ENERGY EFFICIENCY AND REDUCING WAIT TIME FOR JOBS IN THE QUEUE", filed December 24, 2014;
U.S. Patent Application No. 14/582,979 (Attorney Docket No. 42P74563) entitled "ADJUSTMENT OF EXECUTION OF TASKS", filed December 24, 2014;
U.S. Patent Application No. 14/582,985 (Attorney Docket No. 42P74564) entitled "CONTROL OF POWER CONSUMPTION", filed December 24, 2014;
U.S. Patent Application No. 14/582,988 (Attorney Docket No. 42P74565) entitled "FORECAST FOR DEMAND OF ENERGY", filed December 24, 2014;
U.S. Patent Application No. 14/582,772 (Attorney Docket No. 42P74566) entitled "METHODS AND APPARATUS TO MANAGE JOBS THAT CAN AND CANNOT BE SUSPENDED WHEN THERE IS A CHANGE IN POWER ALLOCATION TO A DISTRIBUTED COMPUTER SYSTEM", filed December 24, 2014;
U.S. Patent Application No. 14/582,743 (Attorney Docket No. 42P74567) entitled "MANAGING POWER PERFORMANCE OF DISTRIBUTED COMPUTING SYSTEMS", filed December 24, 2014;
U.S. Patent Application No. 14/582,756 (Attorney Docket No. 42P74568) entitled "PROFILING A JOB POWER AND ENERGY CONSUMPTION FOR A DATA PROCESSING SYSTEM", filed December 24, 2014; and
U.S. Patent Application No. 14/582,764 (Attorney Docket No. 42P74569) entitled "A POWER AWARE JOB SCHEDULER AND MANAGER FOR A DATA PROCESSING SYSTEM", filed December 24, 2014.

FIELD

Embodiments of the disclosure generally relate to the field of power conservation in distributed computer systems. More specifically, one embodiment of the disclosure relates to estimating the power performance of a job to be run on multiple nodes within a distributed computer system, to improve job scheduling and monitoring of the jobs processed by the distributed computer system.

GENERAL BACKGROUND

A distributed computer system may perform parallel computing by the simultaneous use of multiple nodes to execute a computational assignment referred to as a job. Each node may include one or more processors, memory, an operating system, and one or more input/output (I/O) components. The nodes may communicate with each other through a high speed network fabric, e.g., Ethernet, Omni-Path, InfiniBand, or another network, and may use shared file systems or storage. The job may be divided into thousands of parallel tasks distributed over thousands of nodes. These nodes may synchronize with each other hundreds of times a second.

Future distributed computer systems are projected to require tens of megawatts of power, making their power management a foremost concern in the industry. These distributed computer systems will be expected to deliver exascale performance with limited power and energy budgets. Current distributed computer systems may apply power capping to adhere to the limited power and energy budgets.
However, current approaches to power capping negatively impact the performance of the distributed computer systems due to typically inaccurate power capping.

Current approaches estimate the power needed by one or more nodes of a distributed computer system to run a job based upon the thermal dissipation power (TDP) value of the one or more components comprising each node. As it is rare that a job actually uses the TDP value of each node on which the job is run, estimation using the TDP values results in an overestimate. By over-estimating the power needed to start up and run a job, current approaches delay the start of the job and reduce the efficiency of the distributed computer system by preventing other jobs from running.

The start of a job is delayed because the over-estimation of the power necessary to start the job causes the distributed computer system to wait until the overestimated startup power is available. A more accurate estimation of the startup power would avoid such a delay. In addition, the over-estimation of the power required to run the job results in an over-allocation of power for the job. The over-allocation takes away from power that could be allocated to other jobs requesting to be run by the distributed computer system.

In addition, the TDP is not the maximum power that may be consumed by a node. For example, TDP does not accurately measure the electrical power consumption when every component of the node is being used, but rather measures the thermal dissipation. Therefore, it is possible that a job may consume more power than the TDP estimate, which may lead to the distributed computer system attempting to consume more power than it has been allocated by a utility facility.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

FIG. 1 is an exemplary block diagram of a HPC system receiving various inputs.
FIG. 2 is an exemplary block diagram of logic comprising the HPC system 100.
FIG. 3 is an exemplary embodiment of a user interface screen for designating a user policy while requesting a job be processed using the HPC system 100.
FIG. 4A is a table illustrating exemplary parameters used in determining resources necessary to run a job when power monitoring is not implemented.
FIG. 4B is a table illustrating exemplary parameters used in determining resources necessary to run a job when power monitoring is implemented.
FIG. 5 is a flowchart illustrating an exemplary method for generating an estimate of the startup power.
FIG. 6 is a flowchart illustrating an exemplary method for generating an estimate of the minimum required power for a job.
FIG. 7 is a flowchart illustrating an exemplary method for generating an estimate of the allocated power required for a job.
FIG. 8 is a flowchart illustrating an exemplary method for generating an estimate of an operational frequency based on the available power for a job.

DETAILED DESCRIPTION

Various embodiments of the disclosure relate to estimating the power performance of a job that is to be run on a distributed computer system.
An estimation of the power performance of a job may be determined based, at least in part, on whether the owner of the job permits the job to be subject to a power limit, the job power policy limiting the power supplied to the job, whether the owner of the job permits the job to be suspended, and/or calibration data of the one or more nodes of the distributed computer system on which the job is to run. The distributed computer system may be, for example, a High Performance Computing (HPC) system. In some embodiments of the disclosure, a job may not be subjected to a power policy that limits the power supplied to the job, as set forth by the owner of the job; however, a HPC system may, and likely will, have an overall limited power budget that cannot be exceeded by the combination of jobs processed by the HPC system.

Embodiments of the disclosure relate to estimating the startup power and/or minimum power required to run a job based on the actual power measurement for each node on which the job will run, which takes into consideration the part-to-part variation between nodes. Other embodiments of the disclosure relate to estimating the startup power and/or minimum power required to run a job based on measurements taken while running the job (e.g., a sample portion of the job and/or the full job). Still other embodiments of the disclosure relate to estimating the startup power and/or minimum power required to run a job based on a fixed frequency at which the one or more nodes that will run the job will operate.

The HPC system may estimate the power that should be allocated to a job based on a predetermined frequency at which the nodes selected to run the job will operate. The estimate may be based on, at least, the job type (e.g., workload type), a list of nodes selected to run the job, and optionally a minimum power to be supplied to the selected nodes or a frequency at which the selected nodes will operate while running the job. The estimation may provide the HPC system with, at least: a power level for each frequency for each node (e.g., a platform maximum power (PMP), a workload maximum power and/or a workload average power); a thermal estimate that allows the HPC system to manage a cooling system; and/or a performance estimate (e.g., a performance metric) for one or more frequencies of the selected nodes, which allows a user (e.g., the owner of the job) to adjust the job request based on the estimated performance metric (e.g., the time until completion), the estimated power level and the estimated total energy consumption of the job. The workload maximum power of a node may be defined as the maximum observed power sampled while the node was being calibrated (e.g., running a miniature application ("mini-app") and/or a portion of a job). The workload average power of a node may be defined as the average power of all of the power measurements sampled while the node was being calibrated. In at least some embodiments, to start a job, the power needed for the job is estimated using one of the power estimation techniques described herein and one of the power calibration techniques described in related U.S. Patent Application No. 14/582,783 (Attorney Docket No. 42P74562) entitled "Methods and apparatus to generate and use power, thermal and performance characteristics of nodes to improve energy efficiency and reducing wait time for jobs in the queue."

The workload type may be used to determine the portion of calibration data used to generate an estimation as described above. For example, if the workload type (e.g., the type of job) is similar to a mini-app that has been used to calibrate the nodes of the HPC system, the estimator will retrieve the calibration data associated with the calibration of the nodes using the mini-app (e.g., stored in a calibration database).
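For illustration only, the following Python sketch shows how an estimator might combine workload type, selected nodes, and calibration data; it also shows the inverse, frequency-from-power direction discussed further below. The CALIBRATION table schema, values, and function names are hypothetical, not the disclosure's actual data model.

# Hypothetical sketch of the two estimation directions.
# CALIBRATION maps (node, workload_type, frequency_GHz) -> measured watts.
CALIBRATION = {
    ("node-0", "mini-app-A", 2.0): {"max": 180.0, "avg": 150.0},
    ("node-0", "mini-app-A", 2.4): {"max": 220.0, "avg": 185.0},
    ("node-1", "mini-app-A", 2.0): {"max": 175.0, "avg": 148.0},
    ("node-1", "mini-app-A", 2.4): {"max": 215.0, "avg": 182.0},
}

def estimate_power(workload_type, nodes, freq, stat="max"):
    """Estimate job power at a given frequency from per-node calibration data
    (summing per-node measurements accounts for part-to-part variation)."""
    return sum(CALIBRATION[(n, workload_type, freq)][stat] for n in nodes)

def pick_frequency(workload_type, nodes, available_power, freqs, stat="max"):
    """Inverse direction: highest frequency whose estimated power uses as
    much of the available power as possible without exceeding it."""
    fitting = [f for f in sorted(freqs)
               if estimate_power(workload_type, nodes, f, stat) <= available_power]
    return fitting[-1] if fitting else None

nodes = ["node-0", "node-1"]
print(estimate_power("mini-app-A", nodes, 2.4))                 # 435.0 W
print(pick_frequency("mini-app-A", nodes, 400.0, (2.0, 2.4)))   # 2.0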
Alternatively, the workload type may be a small portion of the actual job requested by a user. In such an example, the user may have submitted a small portion (e.g., calculations totaling, for example, 4-5 hours until completion) of the desired job for use in calibrating the one or more nodes that will process the job request. Therefore, the estimator will retrieve the calibration data of the one or more nodes that will process the job associated with the small portion of the desired job.

In yet another embodiment, sampling of various parameters of the nodes used to process a job (e.g., inter alia, temperature and/or power consumption) may be done during execution of the job. If the job is requested to be processed again (e.g., with slightly varying input parameters), the estimator may retrieve the calibration data associated with the job during its previous run-time and use that calibration data in generating the estimation.

Alternatively, or in addition, the HPC system may estimate the frequency at which a job should be run when the HPC system is aware of the power allocated for the job. The estimate may be based on, for example, the available power for the job (e.g., PMP, workload maximum power or workload average power), the job, and the list of the selected nodes to run the job. The estimation may provide, for example, the frequency at which the selected nodes should operate, the expected thermal dissipation, the expected performance, and, optionally, the power required for and the expected thermal dissipation of running the job at a higher frequency and/or a lower frequency.

In at least some embodiments, a job power, a system power, a job's completion and a job suspension status are monitored using one or more monitoring techniques, as described in related U.S. Patent Application No. 14/582,756 (Attorney Docket No. 42P74568) entitled "Methods and apparatus to profile power and energy consumption by a job running in multiple nodes and uses shared resources of a distributed computer system (HPC)."

Referring to FIG. 1, an exemplary block diagram of a HPC system receiving various inputs is shown. The HPC system 100 includes one or more operating system (OS) nodes 101 (also referred to as head nodes), one or more compute nodes 102, one or more input/output (I/O) nodes 103 and a storage 104. A high-speed fabric communicatively connects the OS node 101, the compute nodes 102 and the I/O nodes 103. The high-speed fabric may be a network topology of nodes interconnected via one or more switches. In one embodiment, as illustrated in FIG. 1, the I/O nodes 103 are communicatively connected to the storage 104. The storage 104 may be non-persistent storage such as volatile memory (e.g., any type of random access memory "RAM"); persistent storage such as non-volatile memory (e.g., read-only memory "ROM", power-backed RAM, flash memory, phase-change memory, etc.); a solid-state drive; a hard disk drive; an optical disc drive; or a portable memory device.

The OS node 101 may provide a gateway to accessing the compute nodes 102. For example, prior to submitting a job for processing on the compute nodes 102, a user may be required to log in to the HPC system 100, which may be through the OS node 101.
In embodiments of the disclosure, the OS node 101 may accept jobs submitted by users and assist in the launching and managing of jobs being processed by the compute nodes 102. In one embodiment, the OS node 101 comprises a power monitor (not shown); a power estimator (not shown) described herein; and a power calibrator (not shown).

In one embodiment, the compute nodes 102 provide the bulk of the processing and computational power. The I/O nodes 103 may provide an interface between the compute nodes 102 and external devices (e.g., separate computers) that may provide input to the HPC system 100 or receive output from the HPC system 100.

The system power allocation (Psys) may be provided to the HPC system 100 by, for example, a utility management facility (e.g., as determined by a system administrator or management software such as a datacenter manager). Typically, the Psys will be a limited amount of power allocated to the HPC system 100 that the HPC system 100 will use to run one or more of the jobs 120. The jobs 120 comprise one or more jobs requested to be run on the HPC system 100 by one or more users. Each job includes a "power policy," which will be discussed in depth below. The power policy will assist the HPC system 100 in allocating power for the job and aid in the management of the one or more jobs 120 being run by the HPC system 100.

In addition, the administrative policies 130 will guide the management of running the jobs 120 by providing an over-arching policy that defines the operation of the HPC system 100. Examples of policies that may be included in the administrative policies 130 include, but are not limited or restricted to: (1) maximize utilization of all hardware and software resources (e.g., instead of running fewer jobs at high power and leaving resources unused, run as many jobs as possible to use as much of the resources as possible); (2) a job with no power limit is given the highest priority among all running jobs; and/or (3) suspended jobs are at higher priority for resumption. Such administrative policies govern the way the HPC system 100 may schedule, launch, suspend and re-launch one or more jobs.

I. TERMINOLOGY

In the following description, certain terminology is used to describe features of the invention. For example, in certain situations, both terms "logic" and "engine" are representative of hardware, firmware and/or software that is configured to perform one or more functions. As hardware, logic (or an engine) may include circuitry having data processing or storage functionality. Examples of such circuitry may include, but are not limited or restricted to, a microprocessor, one or more processor cores, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic.

Logic (or an engine) may be software in the form of one or more software modules, such as executable code in the form of an executable application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, object code, a shared library/dynamic load library, or one or more instructions. These software modules may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals).
Examples of a non-transitory storage medium may include, but are not limited or restricted to, a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory "RAM"); persistent storage such as non-volatile memory (e.g., read-only memory "ROM", power-backed RAM, flash memory, phase-change memory, etc.); a solid-state drive; a hard disk drive; an optical disc drive; or a portable memory device. As firmware, the executable code is stored in persistent storage.

The term "power monitoring" should be interpreted as dynamically measuring power consumption by one or more of the components comprising the HPC system. The measurements taken may be used to calculate power consumption by, for example, an individual job and/or a group of jobs, as well as to provide statistics on the overall power consumption of the HPC system.

The term "power policy" should be interpreted as an input (e.g., one or more parameters) provided to the HPC system that provides guidance on allocation and management of power for a given job. The input may be provided as part of a job request submission and/or may be provided as a separate input (e.g., via a user interface screen or a configuration file). For example, the input may indicate, among other things, (i) whether a job may be subjected to a power limit, (ii) the maximum and/or minimum power at which a job may run, and/or (iii) a minimum and/or maximum frequency at which the one or more nodes processing the job may operate.

The term "node" should be construed as one or more processors optionally grouped with, at least, a system memory and/or one or more input/output (I/O) components. The one or more processors, the system memory and the one or more I/O components may be referred to as the "components" of a node throughout the specification and claims. Throughout the specification and claims, the terms "processor," "computer processing unit (CPU)," and "core" will be used interchangeably.

The term "job" should be interpreted as predetermined calculations performed on the HPC system. For example, a user (e.g., the owner of the job) may request that a job be run by the HPC system, which means the user is requesting to have one or more compute nodes perform calculations according to input parameters and/or data provided by the user. The job request may specify the one or more calculations (e.g., an application) that are to be used for the processing of the job.

The term "system power (Psys)" should be interpreted as the amount of power provided to the HPC system by, for example, a facility or datacenter manager. The Psys is the total amount of power the HPC system has to allocate to one or more jobs at any given time.

The term "guard band" should be interpreted as a mechanism to assist in the management of a power budget of a HPC system. In one embodiment, the guard band may be an extra power allocation, which may be a predetermined percentage of the power allocated to the job. For example, if a HPC system has 3 MW of power to allocate to a job, the HPC system may allocate only 2.8 MW and maintain 0.2 MW as the guard band, to prevent a spike in calculations from causing the power consumption of the job to exceed 3 MW. One purpose of the guard band is to maintain consistent power consumption by a job.
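As a purely illustrative aside, the power policy and guard band terms above might be represented as follows; the field names and the guard-band fraction are hypothetical choices mirroring the definitions, not structures defined by this disclosure.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PowerPolicy:
    allow_power_limit: bool                 # (i) may the job be power-limited?
    min_power_w: Optional[float] = None     # (ii) minimum power for the job
    max_power_w: Optional[float] = None     # (ii) maximum power for the job
    min_freq_ghz: Optional[float] = None    # (iii) minimum node frequency
    max_freq_ghz: Optional[float] = None    # (iii) maximum node frequency

def allocate_with_guard_band(budget_w: float, guard_fraction: float):
    """Reserve a predetermined fraction of the budget as a guard band."""
    guard = budget_w * guard_fraction
    return budget_w - guard, guard

# Matches the example in the text: of a 3 MW budget, allocate 2.8 MW and
# hold back 0.2 MW as the guard band.
alloc, guard = allocate_with_guard_band(3_000_000, guard_fraction=0.2 / 3)
print(round(alloc), round(guard))  # 2800000 200000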
The term "platform max power (PMP)" should be interpreted as the power level measured for a node when the node is running a "power-virus." The power-virus is a workload, which may be an artificial workload created solely for calibration, that attempts to run each component of the node as much as possible while the power-virus is being run. Therefore, the PMP is the highest possible level of power a node may consume.

Lastly, the terms "or" and "and/or" as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, "A, B or C" or "A, B and/or C" mean "any of the following: A; B; C; A and B; A and C; B and C; A, B and C." An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.

The invention may be utilized for power management of a distributed computer system, such as a High Performance Computing (HPC) system. In particular, embodiments of the disclosure relate to managing power allocation to one or more jobs run in a HPC system based on estimates of the power consumption for each job as a result of calibration of the nodes within the HPC system. As this invention is susceptible to embodiments in many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.

II. POWER AWARE DISTRIBUTED COMPUTER SYSTEM

Referring to FIG. 2, an exemplary block diagram of logic comprising the HPC system 100 is shown. The logic of the HPC system 100 illustrated in FIG. 2 provides the bulk of the power management for the HPC system 100 and includes a resource manager 210 including a power aware job scheduler 211 and a power aware job launcher 212, a calibrator 220, an estimator 230, one or more job managers 240 (each job will have its own job manager), a job 250, the user policies 205 and the administrative policies 130. In one embodiment, the resource manager 210 and the job manager 240 are configured to collect job power data.

The calibrator 220 calibrates the power, thermal dissipation and performance of each node within the HPC system 100. The calibrator 220 may provide a plurality of methods for calibrating the nodes within the HPC system 100. In one embodiment, the calibrator 220 may provide a first method of calibration in which every node within the HPC system 100 runs a sample workload (e.g., a mini-application, a portion of an application and/or a test script) so the calibrator 220 may sample various parameters (e.g., power consumed) at predetermined time intervals in order to determine, inter alia, (1) the average power, (2) the maximum power, and (3) the minimum power for each node. In addition, the sample workload may be run on each node at every operating frequency of the node. In another embodiment, the calibrator 220 may provide a second method of calibration in which calibration of one or more nodes occurs during the run-time of a job. In such a situation, the calibrator 220 may sample the one or more nodes on which a job is running (e.g., processing). In the second method, the calibrator 220 obtains power measurements of each node during actual run-time.
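The first method of calibration described above, in which a sample workload is run on every node at every operating frequency while power is sampled periodically, can be sketched as follows. This is a minimal sketch, not the calibrator 220 itself; the function names and the dictionary layout of the calibration database are assumptions for illustration.

```python
import statistics
from typing import Callable, Dict, Iterable, Tuple

def calibrate_nodes(nodes: Iterable[str],
                    frequencies_ghz: Iterable[float],
                    sample_power_w: Callable[[str, float], Iterable[float]]
                    ) -> Dict[Tuple[str, float], Dict[str, float]]:
    """Run a sample workload on every node at every operating frequency and
    record the average, maximum and minimum power, keyed by (node, frequency).
    sample_power_w stands in for the periodic power measurements taken while
    the workload runs; it is a placeholder, not a real telemetry API."""
    database = {}
    for node in nodes:
        for freq in frequencies_ghz:
            samples = list(sample_power_w(node, freq))
            database[(node, freq)] = {
                "avg_power": statistics.mean(samples),
                "max_power": max(samples),
                "min_power": min(samples),
            }
    return database

# Toy measurement source standing in for real per-node telemetry.
fake_samples = lambda node, f: [180.0 + 40.0 * f, 200.0 + 45.0 * f, 190.0 + 42.0 * f]
db = calibrate_nodes(["n0", "n1"], [1.2, 2.3], fake_samples)
print(db[("n0", 2.3)])
```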
The estimator 230 provides the resource manager 210 with estimates of power consumption for each job, enabling the resource manager 210 to efficiently schedule and monitor each job requested by one or more job owners (e.g., users). The estimator 230 may provide a power consumption estimate based on, for example, maximum and average power values stored in a calibration database, wherein the calibration database is populated by the processing of the calibrator 220. In addition, the minimum power required for each job may be considered. Other factors that may be used by the estimator 230 to create a power consumption estimate include, but are not limited or restricted to, whether the owner of the job permits the job to be subject to a power limit, the job power policy limiting the power supplied to the job (e.g., a predetermined fixed frequency at which the job will run, a minimum power required for the job, or varying frequencies and/or power supplied determined by the resource manager 210), the startup power for the job, the frequency at which the job will run, the available power to the HPC system 100 and/or the allocated power to the HPC system 100.

Each job requested by a user (e.g., the owner of the job) is accompanied by a user policy 205 (also illustrated in FIG. 1). The user policy includes at least a decision on whether the job 250 may be subjected to a power limit, the manner in which the power is to be limited if a power limit is permitted by the policy (e.g., fixed frequency, minimum power required, or varying frequency and/or power determined by the resource manager 210), and whether the job 250 may be suspended. The user policy will be discussed in-depth below with FIG. 3.

In one embodiment, a power aware job scheduler 211 is configured to receive a selection of a mode for a job (e.g., included within the user policies 205), to determine an available power for the job based on the mode and to allocate a power for the job based on the available power. In one embodiment, the power aware job scheduler 211 is configured to determine a uniform frequency for the job based on the available power. In one embodiment, the power aware job scheduler 211 is configured to determine the available power for the job based on at least one of a monitored power, an estimated power, and a calibrated power. The power aware job scheduler 211 and the resource manager 210 are configured to receive information regarding power consumption, to distribute the power budget to each job, and to implement a uniform frequency mechanism to limit power, as described in further detail below.

The resource manager 210 uses the power aware job scheduler 211 and the power aware job launcher 212 to schedule and launch a job based on the received power inputs, e.g., the user policies 205 and the administrative policies 130. In one embodiment, the resource manager 210 is a software object that is responsible for allocation of compute and I/O resources for interactive and batch jobs that one or more users want to run. Typically, the resource manager 210 is also responsible for scheduling the jobs out of the job queue and launching the jobs to run as scheduled. A job manager 240 is configured to control a job to stay within an allocated power budget for the job, as described in further detail below. In one embodiment, the job manager 240 is responsible for operating a job within the constraints of one or more power policies after the job has been launched. In one embodiment, the job manager 240 is used to control power performance of all components (e.g., nodes, or other components) involved in execution of a job as per policies specified by at least one of the user and/or administrator. The power aware job scheduler 211 and the job manager 240 are described in the US Patent Application No.
14/582,764 (Attorney docket number 42P74569) entitled "Methods and apparatus for a power aware job scheduler and manager to operate a distributed computing (HPC) within given power limits with high energy efficiency."

A. EXEMPLARY POWER POLICY SELECTION USER INTERFACE

Referring to FIG. 3, an exemplary embodiment of a user interface screen for designating a user policy while requesting a job be processed using the HPC system 100 is shown. The user interface screen 300 includes the display areas 310, 320 and 330. The display area 310 allows a user to designate whether the job, e.g., the job 250, is permitted to be subjected to a power limit (e.g., selecting "NO" results in the "No Power Limit" policy, as seen in FIGs. 4A and 4B below).

The display area 320 pertains to the selection of one of a plurality of predetermined power-limiting policies when the user permits the job to be subjected to power-limiting. In the embodiment shown in FIG. 3, the display area 320 provides four additional predetermined power-limiting policies 321-324. The power-limiting policy 321 is a fixed frequency policy ("Fixed-Frequency") in which the user designates a particular frequency at which the one or more nodes on which the job will run should operate. The power-limiting policy 322 is a minimum job power policy ("Minimum Job Power") in which the user designates a minimum power to be supplied to the one or more nodes on which the job 250 will run. The power-limiting policy 323 is an automatic mode ("Auto-mode") in which the resource manager 210 may vary the frequency at which the one or more nodes operate and/or the power supplied to the one or more nodes on which the job 250 is running. The power-limiting policy 324 is a maximum job power policy ("Maximum Job Power") in which the user designates a maximum power to be supplied to the one or more nodes on which the job 250 will run. The display area 330 pertains to the selection of whether the job 250 may be suspended during processing.
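One possible in-memory representation of the selections made in display areas 310-330 is sketched below; the class and field names are hypothetical and chosen only to mirror the five policies described above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class PowerPolicy(Enum):
    NO_POWER_LIMIT = "No Power Limit"
    FIXED_FREQUENCY = "Fixed-Frequency"
    MINIMUM_JOB_POWER = "Minimum Job Power"
    AUTO_MODE = "Auto-mode"
    MAXIMUM_JOB_POWER = "Maximum Job Power"

@dataclass
class UserPolicy:
    """Per-job power policy as selected in display areas 310, 320 and 330."""
    policy: PowerPolicy
    may_be_suspended: bool                       # display area 330
    fixed_frequency_ghz: Optional[float] = None  # used with FIXED_FREQUENCY
    min_power_w: Optional[float] = None          # used with MINIMUM_JOB_POWER
    max_power_w: Optional[float] = None          # used with MAXIMUM_JOB_POWER

# A job submitted in Auto-mode that the resource manager may suspend.
job_policy = UserPolicy(PowerPolicy.AUTO_MODE, may_be_suspended=True)
print(job_policy)
```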
A user interface screen is not the only method for a user to provide the HPC system 100 with input parameters such as, for example, a power policy, a minimum required frequency, a minimum required power and/or whether the job may be suspended. Alternatively, such parameters may be provided to the HPC system 100 as part of the job submission and/or as a configuration file (e.g., a text file). In yet another embodiment, such parameters may be set by a system administrator, a facility manager/administrator and/or predetermined as part of a user's account with the HPC system 100.

B. EXEMPLARY PARAMETERS FOR GENERATING POWER AND FREQUENCY ESTIMATIONS

Referring to FIG. 4A, a table illustrating exemplary parameters used in determining resources necessary to run a job when power monitoring is not implemented is shown. The table 401 includes the column 421 that includes the parameters provided in an estimate to run a job and a first row 430 that sets forth the various power policies a user may select. The first power policy, "No Power Limit," is set forth in the column 422. A second power policy, "Fixed-Frequency," is set forth in the column 423. A third power policy, "Minimum Power," is set forth in the column 424, a fourth power policy, "Auto-mode," is set forth in the column 425 and a fifth power policy, "Maximum Power," is set forth in the column 426.

According to one embodiment, the estimator 230 does not have knowledge as to the power policy selected by the user. For example, the resource manager 210 (e.g., the job scheduler 211 and/or the job launcher 212) may provide the estimator 230 with a request for an estimation of the startup power required for a job and specifically request the PMP at a specified frequency (which would implicitly mean power monitoring is not implemented, as illustrated in FIG. 4A).

As is illustrated in the column 422, when a user selects the power policy of "No Power Limit," the resource manager 210 will request the following parameters from the estimator 230: the startup power required for a job as the PMP for the selected nodes; the maximum frequency at which the selected nodes should be run (e.g., all compute nodes 102 may have the same maximum frequency); the minimum power required to run the job as the PMP; and the power to be allocated for the job as the PMP. Therefore, the estimator 230 will consult a calibrator database to determine, and output, the PMP and the maximum frequency at which the selected nodes may operate.

When a user selects the power policy of "Fixed-Frequency," the resource manager 210 will request the following parameters from the estimator 230, wherein the frequency selected by the user is represented by Fs: the startup power required for a job as the PMP at Fs for the selected nodes; the frequency at which the selected nodes should be run as Fs; the minimum power required to run the job as either (i) zero when the job may be suspended, or (ii) the PMP at Fs; and the power to be allocated for the job as the PMP at Fs. Therefore, the estimator 230 will consult a calibrator database to determine, and output, the PMP at Fs.

When a user selects the power policy of "Minimum Power," the resource manager 210 will request the following parameters from the estimator 230, wherein the minimum power established by the user is represented by Pmin and the lowest frequency of the selected nodes is represented by Fmin: the startup power required for a job as Pmin for the selected nodes; the frequency at which the selected nodes should be run as a first operational frequency, Fo_1, the maximum frequency for which the PMP is less than or equal to the available power and the PMP at Fo_1 is equal to or greater than Pmin; the minimum power required to run the job as either (i) zero when the job may be suspended, or (ii) the greater of the PMP at Fmin and Pmin; and the power to be allocated for the job as the greater of the minimum required power and the PMP at Fo_1. Therefore, the estimator 230 will consult a calibrator database to determine, and output, Fo_1, and, when the job cannot be suspended, the greater of the PMP at Fmin and Pmin.

When a user selects the power policy of "Auto-mode," the resource manager 210 will request the following parameters from the estimator 230: the startup power required for a job as the PMP at Fmin; the frequency at which the selected nodes should be run as Fo_1; the minimum power required to run the job as either (i) zero when the job may be suspended, or (ii) the PMP at Fmin; and the power to be allocated for the job as the greater of the minimum required power and the PMP at Fo_1.
Therefore, the estimator 230 will consult a calibrator database to determine, and output, Fo_1; the greater of the minimum required power and the PMP at Fmin; and, when the job cannot be suspended, the PMP at Fmin.

When a user selects the power policy of "Maximum Power," the resource manager 210 will request the following parameters from the estimator 230, wherein the maximum power established by the user is represented by Pmax: the startup power required for a job as the PMP at Fmin when the PMP at Fmin is less than Pmax for the selected nodes; the frequency at which the selected nodes should be run as a second operational frequency, Fo_2, the maximum frequency for which the PMP is less than or equal to the lesser of the available power and the maximum power; the minimum power required to run the job as either (i) zero when the job may be suspended, or (ii) the PMP at Fmin; and the power to be allocated for the job as the lesser of Pmax and the PMP at Fo_2. Therefore, the estimator 230 will consult a calibrator database to determine, and output, the PMP at Fmin when the PMP at Fmin is less than Pmax for the selected nodes; Fo_2; the PMP at Fmin when the job cannot be suspended; and the lesser of Pmax and the PMP at Fo_2.
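A condensed, illustrative reading of table 401 follows. The function below assumes a pmp_at lookup that returns the PMP of the selected nodes at a given frequency, and it assumes that Fo_1 and Fo_2 have already been found by a separate search (such as the method 800 of FIG. 8, described below); the fallback to Pmax for the "Maximum Power" startup value mirrors the monitored table of FIG. 4B and is an assumption here.

```python
def estimate_without_monitoring(policy, pmp_at, f_max, f_min,
                                fs=None, p_min=None, p_max=None,
                                suspendable=False, f_o1=None, f_o2=None):
    """Illustrative reading of table 401 (FIG. 4A): with no live power
    measurements, every estimate is built from platform max power (PMP).
    pmp_at maps a frequency to the PMP of the selected nodes at that
    frequency; f_o1 and f_o2 are assumed to be pre-computed elsewhere."""
    if policy == "No Power Limit":
        p = pmp_at(f_max)
        return dict(startup=p, freq=f_max, minimum=p, allocated=p)
    if policy == "Fixed-Frequency":
        return dict(startup=pmp_at(fs), freq=fs,
                    minimum=0.0 if suspendable else pmp_at(fs),
                    allocated=pmp_at(fs))
    if policy == "Minimum Power":
        return dict(startup=p_min, freq=f_o1,
                    minimum=0.0 if suspendable else max(pmp_at(f_min), p_min),
                    allocated=max(p_min, pmp_at(f_o1)))
    if policy == "Auto-mode":
        # "Minimum required power" is read here as the PMP at Fmin.
        return dict(startup=pmp_at(f_min), freq=f_o1,
                    minimum=0.0 if suspendable else pmp_at(f_min),
                    allocated=max(pmp_at(f_min), pmp_at(f_o1)))
    if policy == "Maximum Power":
        # Falling back to Pmax when PMP at Fmin exceeds it mirrors FIG. 4B.
        return dict(startup=min(pmp_at(f_min), p_max), freq=f_o2,
                    minimum=0.0 if suspendable else pmp_at(f_min),
                    allocated=min(p_max, pmp_at(f_o2)))
    raise ValueError(f"unknown policy: {policy}")

pmp = {1.2: 300.0, 2.3: 480.0, 3.0: 600.0}  # toy PMP values in watts
print(estimate_without_monitoring("Auto-mode", pmp.__getitem__,
                                  f_max=3.0, f_min=1.2, f_o1=2.3))
```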
Referring to FIG. 4B, a table illustrating exemplary parameters used in determining resources necessary to run a job when power monitoring is implemented is shown. The table 402 includes the column 441 that includes the parameters provided in an estimate to run a job and a first row 450 that sets forth the various power policies a user may select. The first power policy, "No Power Limit," is set forth in the column 442. The second power policy, "Fixed-Frequency," is set forth in the column 443. The third power policy, "Minimum Power," is set forth in the column 444, the fourth power policy, "Auto-mode," is set forth in the column 445 and the fifth power policy, "Maximum Power," is set forth in the column 446.

When power monitoring is implemented, the HPC system 100 (in particular, the job manager 240) is constantly aware of the power being consumed by the job 250, as the power being consumed is being dynamically measured. In contrast, when power monitoring is not implemented, the HPC system 100 determines the available power based on the Psys and the power allocated to the job 250, which is a static value.

As is illustrated in the column 442, when a user selects the power policy of "No Power Limit," the resource manager 210 will request the following parameters from the estimator 230: the startup power required for a job as the workload maximum power plus a guard band for the selected nodes; the frequency at which the selected nodes should be run as the maximum frequency of the nodes; the minimum power required to run the job as the workload maximum power plus the guard band; and the power to be allocated for the job as the workload maximum power plus the guard band. Therefore, the estimator 230 will consult a calibrator database to determine, and output, the workload maximum power plus a guard band for the selected nodes; and the maximum frequency of the selected nodes.

When a user selects the power policy of "Fixed-Frequency," the resource manager 210 will request the following parameters from the estimator 230: the startup power required for a job as the workload maximum power at Fs for the selected nodes; the frequency at which the selected nodes should be run as Fs; the minimum power required to run the job as either (i) zero when the job may be suspended, or (ii) the workload maximum power at Fs; and the power to be allocated for the job as the workload maximum power at Fs. Therefore, the estimator 230 will consult a calibrator database to determine, and output, the workload maximum power at Fs for the selected nodes; and the workload maximum power at Fs when the job cannot be suspended.

When a user selects the power policy of "Minimum Power," the resource manager 210 will request the following parameters from the estimator 230: the startup power required for a job as Pmin for the selected nodes; the frequency at which the selected nodes should be run as a third operational frequency, Fo_3, the maximum frequency for which the workload average power is less than or equal to the available power and the workload average power at Fo_3 is greater than Pmin; the minimum power required to run the job as either (i) zero when the job may be suspended, or (ii) the greater of the workload maximum power at Fmin and Pmin; and the power to be allocated for the job as the greater of the minimum required power and the workload average power at Fo_3. Therefore, the estimator 230 will consult a calibrator database to determine, and output, Fo_3; the greater of the workload maximum power at Fmin and Pmin when the job cannot be suspended; and the greater of the minimum required power and the workload average power at Fo_3.

When a user selects the power policy of "Auto-mode," the resource manager 210 will request the following parameters from the estimator 230: the startup power required for a job as the workload average power at Fmin; the frequency at which the selected nodes should be run as Fo_3; the minimum power required to run the job as either (i) zero when the job may be suspended, or (ii) the workload maximum power at Fmin; and the power to be allocated for the job as the greater of the minimum required power and the workload average power at Fo_3. Therefore, the estimator 230 will consult a calibrator database to determine, and output, the workload average power at Fmin for the selected nodes; Fo_3; the workload maximum power at Fmin when the job cannot be suspended; and the greater of the minimum required power and the workload average power at Fo_3.

When a user selects the power policy of "Maximum Power," the estimator 230 will provide the resource manager 210 with the following parameters: the startup power required for a job is the workload average power at Fmin when the workload average power at Fmin is less than Pmax for the selected nodes, or else Pmax; the selected nodes should be run at a fourth operational frequency, Fo_4, the maximum frequency for which the workload maximum power is less than or equal to the lesser of the available power and the maximum power Pmax; the minimum power required to run the job is either (i) zero when the job may be suspended, or (ii) the workload maximum power at Fmin; and the power to be allocated for the job is the lesser of the workload maximum power at Fo_4 and Pmax. Therefore, the estimator 230 will consult a calibrator database to determine, and output, the workload average power at Fmin when the workload average power at Fmin is less than Pmax for the selected nodes; Fo_4; the workload maximum power at Fmin when the job cannot be suspended; and the lesser of the workload maximum power at Fo_4 and Pmax.
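For contrast with FIG. 4A, two representative columns of table 402 are sketched below using measured workload power rather than PMP. The lookups wl_max_at and wl_avg_at, and the guard-band fraction, are illustrative assumptions; the remaining columns follow the same pattern.

```python
def estimate_with_monitoring(policy, wl_max_at, wl_avg_at, f_max, f_min,
                             f_o3=None, suspendable=False,
                             guard_fraction=0.0667):
    """Two representative columns of table 402 (FIG. 4B), driven by
    measured workload power instead of PMP. The guard-band fraction is a
    stand-in value, not taken from the disclosure."""
    if policy == "No Power Limit":
        budget = wl_max_at(f_max) * (1.0 + guard_fraction)  # max + guard band
        return dict(startup=budget, freq=f_max, minimum=budget, allocated=budget)
    if policy == "Auto-mode":
        # "Minimum required power" is read here as the workload maximum at Fmin.
        return dict(startup=wl_avg_at(f_min), freq=f_o3,
                    minimum=0.0 if suspendable else wl_max_at(f_min),
                    allocated=max(wl_max_at(f_min), wl_avg_at(f_o3)))
    raise NotImplementedError("remaining columns follow the same pattern")

wl_max = {1.2: 260.0, 2.3: 420.0, 3.0: 520.0}  # toy measurements in watts
wl_avg = {1.2: 210.0, 2.3: 350.0, 3.0: 430.0}
print(estimate_with_monitoring("Auto-mode", wl_max.__getitem__,
                               wl_avg.__getitem__, f_max=3.0, f_min=1.2,
                               f_o3=2.3))
```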
C. EXEMPLARY METHODOLOGIES OF ESTIMATING POWER PERFORMANCE

Referring to FIG. 5, a flowchart illustrating an exemplary method for generating an estimate of the startup power is shown. Each block illustrated in FIG. 5 represents an operation performed in the method 500 of generating an estimation of the startup power for a job required to be available prior to launching the job. In operation 501, the estimator 230 receives, as input, a unique job identification (ID), a list of nodes on which the job is to run, and optionally, a specified frequency at which the nodes are to operate while running the job. When a specified frequency is provided (yes at operation 502), the startup frequency, Fs, is set as the specified frequency (operation 503). When no specified frequency is provided (no at operation 502), the minimum frequency, Fmin, and the startup frequency, Fs, are set to the lowest frequency for each node within the list of selected nodes (operation 504).

In operation 505, the estimator 230 determines whether the job type corresponding to the unique job ID is present in the calibrator database (e.g., the nodes on the list of selected nodes have been calibrated with a workload satisfying a threshold of similarity with the job type of the unique job ID). When the job type is found in the calibrator database (yes at operation 505), the startup power for each node, Ps[NX], with NX representing one of the one or more nodes on the selected list of nodes, is set to the average workload power for each node at Fs obtained from the calibrator database (operation 506). When the job type is not found in the calibrator database (no at operation 505), the startup power for each node, Ps[NX], is set to the average PMP for each node at Fs obtained from the calibrator database (operation 507).

At operation 508, the Ps[NX] is set to the average workload power for each node N at Fs when the job runs at scale (Ps_scaled[NX]). The terms "Pa_scaled[Nx]" and "Pmax_scaled[Nx]" refer to the average and maximum node power, respectively, needed to run the job on compute node Nx when the job is processed on a specified number of compute nodes (as one example, X may be equal to 10,000). The scaling takes into account the fact that the power consumed per node may vary when the job is scaled to run on the specified number of nodes due to reduced power consumption per compute node Nx while the processes operating on each compute node Nx are waiting for communication (e.g., among compute nodes and/or with the OS node 101). According to one embodiment, the wait time to communicate is longer for a larger number of compute nodes.

For example, calibration may be run on 100 nodes at one time for a mini-app whereas an actual job request may request the use of 10,000 nodes. In some situations, the average power consumption and maximum power consumption may be less per node when the job runs on 10,000 nodes, as communication among 10,000 nodes takes more time than communication among 100 nodes, and while a node waits for communication to take place, less power is consumed than when the node is processing calculations. Therefore, the estimator 230 may perform a scaling process on the calibration data to scale the measurements (e.g., maximum temperature, average temperature, maximum power, average power, etc.) based on the number of nodes used during calibration and the number of nodes to be used in the actual processing of the job.
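The disclosure does not fix a particular scaling formula, so the sketch below simply illustrates the idea: as the node count grows, a larger share of each node's time is assumed to be spent waiting on communication, during which the node draws less power. The logarithmic growth model and both constants are assumptions for this example only.

```python
import math

def scale_node_power(calibrated_power_w: float,
                     calibration_nodes: int,
                     job_nodes: int,
                     wait_power_fraction: float = 0.6,
                     comm_growth: float = 0.05) -> float:
    """Derate a per-node calibration measurement for a larger node count.

    Assumes the share of time a node spends waiting on communication grows
    with the logarithm of the node-count ratio, and that a waiting node
    draws only wait_power_fraction of its busy power. Both constants are
    illustrative, not values from the disclosure."""
    if job_nodes <= calibration_nodes:
        return calibrated_power_w
    wait_share = min(0.9, comm_growth * math.log10(job_nodes / calibration_nodes))
    busy_share = 1.0 - wait_share
    return calibrated_power_w * (busy_share + wait_share * wait_power_fraction)

# Calibrated on 100 nodes, run on 10,000: the per-node estimate drops
# from 400 W to 384 W under these assumed constants.
print(scale_node_power(400.0, 100, 10_000))
```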
At operation 509, the overall startup power for the job, Ps, is set to the sum of the Ps_scaled[NX] for all nodes on the list of selected nodes. At operation 510, the estimates for shared nodes are added. A shared node is a node that performs processing for more than one job. An example of a shared node is an I/O node, wherein the I/O node performs control of a storage device shared among multiple nodes and/or performs control over network interfacing (e.g., with a second HPC system and/or user devices). The estimate for shared nodes includes an estimate of the power the one or more shared nodes will consume based on the functions the one or more shared nodes will perform. Similar calibration techniques may be used to determine the power consumption of the one or more shared nodes. In operation 511, the estimation of the Ps and Fs for the unique job ID is output to the resource manager 210.

Referring now to FIG. 6, a flowchart illustrating an exemplary method for generating an estimate of the minimum required power for a job is shown. Each block illustrated in FIG. 6 represents an operation performed in the method 600 of generating an estimation of the minimum power required to run a job (the minimum power required to run a job is typically less than or equal to the startup power as illustrated in FIG. 5). In operation 601, the estimator 230 receives, as input, at least, a unique job ID and a list of nodes on which the job is to run. At operation 602, Fmin[N] is set to the lowest frequency for each node.

In operation 603, the estimator 230 determines whether the job type corresponding to the unique job ID is present in the calibrator database. When the job type is found in the calibrator database (yes at operation 603), the minimum power for each node, Pmin[NX], is set to the average workload power for each node at Fmin obtained from the calibrator database (operation 604). When the job type is not found in the calibrator database (no at operation 603), the minimum power for each node, Pmin[NX], is set to the average PMP for each node at Fmin obtained from the calibrator database (operation 605).

At operation 606, the Pmin[NX] is set to the average workload power for each node N at Fmin when the job runs at scale (Pmin_scaled[NX]). At operation 607, the overall minimum power for the job, Pmin, is set to the sum of the Pmin_scaled[NX] for all nodes on the list of selected nodes. At operation 608, the estimates for shared nodes are added. In operation 609, the estimation of the Pmin and Fmin for the unique job ID is output to the resource manager 210.
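Methods 500 and 600 share a common shape: look up a per-node power (workload data when the job type has been calibrated, otherwise PMP), scale it for the job's node count, sum over the node list, and add the shared-node estimates. A minimal sketch of that shape, with assumed database layouts, follows.

```python
def estimate_job_power(job_type, nodes, freq, calibration_db, pmp_db,
                       scale, shared_node_power_w=0.0):
    """Common shape of methods 500 and 600: per-node power at the chosen
    frequency (workload data if this job type was calibrated, otherwise
    PMP), scaled for the job's node count, summed over the node list, plus
    the shared-node estimate (e.g., I/O nodes). The database layouts and
    names are assumptions for illustration."""
    total = 0.0
    for node in nodes:
        key = (job_type, node, freq)
        if key in calibration_db:             # operations 505/603, "yes" branch
            p = calibration_db[key]["avg_power"]
        else:                                 # operations 507/605, "no" branch
            p = pmp_db[(node, freq)]
        total += scale(p, len(nodes))         # operations 508/606
    return total + shared_node_power_w        # operations 509-510/607-608

# Toy data: n0 was calibrated for this job type, n1 falls back to PMP.
cal = {("solver", "n0", 2.3): {"avg_power": 350.0}}
pmp = {("n0", 2.3): 480.0, ("n1", 2.3): 500.0}
print(estimate_job_power("solver", ["n0", "n1"], 2.3, cal, pmp,
                         scale=lambda p, n: p * 0.96, shared_node_power_w=50.0))
```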
Referring now to FIG. 7, a flowchart illustrating an exemplary method for generating an estimate of the allocated power required for a job is shown. Each block illustrated in FIG. 7 represents an operation performed in the method 700 of generating an estimation of the allocated power required to run a job. In operation 701, the estimator 230 receives, as input, at least, a unique job identification (ID), a list of nodes on which the job is to run, and an allocated frequency, Fa, at which the nodes are to operate while running the job.

In operation 702, the estimator 230 determines whether the job type corresponding to the unique job ID is present in the calibrator database. When the job type is found in the calibrator database (yes at operation 702), the allocated power for each node, Pa[NX], is set to the average workload power for each node at Fa obtained from the calibrator database (operation 703). When the job type is not found in the calibrator database (no at operation 702), the allocated power for each node, Pa[NX], is set to the average PMP for each node at Fa obtained from the calibrator database (operation 704).

At operation 705, the Pa[NX] is set to the average workload power for each node N at Fa when the job runs at scale (Pa_scaled[NX]) and the Pmax[Nx] is set to the maximum workload power for each node N at Fa when the job runs at scale (Pmax_scaled[Nx]). At operation 706, the overall allocated power required for the job, Pa, is set to the sum of the Pa_scaled[NX] for all nodes on the list of selected nodes. At operation 707, the estimates for shared nodes are added. In operation 708, the estimator 230 outputs Pa and Pmax as the estimates for the allocated power and maximum power for the job, respectively.
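Method 700 follows the same pattern at the allocated frequency Fa, but returns a pair of values, the allocated power and the maximum power. A sketch under the same assumed database layout:

```python
def estimate_allocated_power(job_type, nodes, f_a, calibration_db, pmp_db,
                             scale, shared_node_power_w=0.0):
    """Sketch of method 700: the same lookup-scale-sum pattern at the
    allocated frequency Fa, returning both the allocated power (from
    average workload power) and the maximum power (from maximum workload
    power). Database layouts are carried over from the sketch above."""
    p_a = p_max = 0.0
    for node in nodes:
        entry = calibration_db.get((job_type, node, f_a))   # operations 702-704
        avg = entry["avg_power"] if entry else pmp_db[(node, f_a)]
        peak = entry["max_power"] if entry else pmp_db[(node, f_a)]
        p_a += scale(avg, len(nodes))                       # operations 705-706
        p_max += scale(peak, len(nodes))
    # Operations 707-708: add shared-node estimates, output both values.
    return p_a + shared_node_power_w, p_max + shared_node_power_w

cal = {("solver", "n0", 2.3): {"avg_power": 350.0, "max_power": 410.0}}
pmp = {("n0", 2.3): 480.0, ("n1", 2.3): 500.0}
print(estimate_allocated_power("solver", ["n0", "n1"], 2.3, cal, pmp,
                               scale=lambda p, n: p * 0.96))
```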
Referring to FIG. 8, a flowchart illustrating an exemplary method for generating an estimate of an operational frequency based on the available power for a job is shown. Each block illustrated in FIG. 8 represents an operation performed in the method 800 of generating an estimation of the operational frequency based on the available power to run a job. In operation 801, the estimator 230 receives, as input, at least, a job type, a power type, the power available for the job (Pavail), and a list of nodes on which the job is to run.

At operation 802, the operational frequency (Fo) and the allocated power (Pa) are set to an "undefined" value. The Fo, as output by the estimator 230 at operation 811, is the estimate of the frequency at which the nodes of the list of nodes provided to the estimator 230 should operate based on the Pavail. The Fo that is output at operation 811 represents the highest frequency at which the nodes on the list of nodes may operate such that the nodes will not consume more power than Pavail. At operation 803, the variable, Fo_next, is set to the lowest frequency for each node (e.g., as provided in the calibrator database).

For example, a user may submit a job, "Job_A," to be run in Auto-mode. When the job is ready to run, the job launcher 212 determines there is, for example, 1.2 MW of power available to be allocated to Job_A. Subsequently, the resource manager 210 may request from the estimator 230 an estimate of, inter alia, the frequency at which the nodes should operate to run the job while consuming less than or equal to 1.2 MW of power. The estimate, based on, at least, a job type and a list of nodes, provides the resource manager 210 with an output of a maximum frequency, say 2.3 GHz, at which Job_A may run while consuming less than or equal to 1.2 MW of power. The estimator 230 may also include in the estimate an estimate of the average power and an estimate of the maximum power Job_A may consume while operating at 2.3 GHz.

At operation 804, the estimator 230 determines whether calibration data for a workload of the job type is present in the calibrator database. When calibration data for a workload of the job type is found in the calibrator database (yes at operation 804), the variable power for each node, Pa_next[NX], is set to a workload power for each node at Fo_next obtained from the calibrator database (operation 805). The workload power is based on the power type parameter received as an input at operation 801. The power type may be, for example, PMP, average workload power or maximum workload power. The power type may be determined from user input (e.g., a power policy selection) and/or whether power monitoring is used (e.g., when power monitoring is not used, the power type may be PMP). When calibration data for a workload of the job type is not found in the calibrator database (no at operation 804), the variable power for each node, Pa_next[NX], is set to the average PMP for each node at Fo_next obtained from the calibrator database (operation 806).

At operation 807, the Pa_next[NX] is set to the workload power, as described regarding operation 804, for each node N at Fo_next when the job runs at scale (Pa_next_scaled[NX]) and the Pmax_next[Nx] is set to the maximum workload power for each node N at Fo_next when the job runs at scale (Pmax_next_scaled[Nx]). At operation 808, the variable representing the overall power required for the job, Pa_next, is set to the sum of the Pa_next_scaled[NX] for all nodes on the list of selected nodes. At operation 809, the estimates for shared nodes are added.

At operation 810, the estimator 230 determines whether the variable power, Pa_next, is less than the available power, Pavail (or whether Pmax_next is less than Pavail). When Pa_next is not less than Pavail (no at operation 810), the estimator 230 outputs an estimate including (i) Fo as the estimate for the operational frequency for the job, (ii) Pa as the estimated workload power when the job operates at Fo, (iii) Pmax as the maximum workload power at Fo, (iv) Fo_prev as the operating frequency just lower than Fo on the list of operating frequencies maintained in the calibration database, (v) Pa_prev as the estimated power when the job operates at Fo_prev, (vi) Pmax_prev as the maximum workload power at Fo_prev, (vii) Fo_next as the operating frequency just higher than Fo on the list of operating frequencies maintained in the calibration database, (viii) Pa_next as the estimated power when the job operates at Fo_next, and (ix) Pmax_next as the maximum workload power at Fo_next (operation 811). When the power at the lowest frequency for each node (see operation 803) is not less than the available power, operation 811 will return Fo and Pa as an "undefined" value. Based on Fo and Pa being set to "undefined," the HPC system 100 will determine that the job requested by the user cannot be run with the current available power at the power type desired.

When Pa_next is less than Pavail (yes at operation 810), the estimator 230 sets Fo equal to Fo_next and Pa equal to Pa_next (operation 812). At operation 813, the estimator 230 determines whether the nodes on the list of selected nodes may operate at a higher frequency than Fo_next.
When the nodes on the list of selected nodes cannot operate at a higher frequency than Fo_next (no at operation 813), the estimator 230 outputs an estimate including (i) Fo as the estimate for the operational frequency for the job, (ii) Pa as the estimated workload power when the job operates at Fo, (iii) Pmax as the maximum workload power at Fo, (iv) Fo_prev as the operating frequency just lower than Fo on the list of operating frequencies maintained in the calibration database, (v) Pa_prev as the estimated power when the job operates at Fo_prev, (vi) Pmax_prev as the maximum workload power at Fo_prev, (vii) Fo_next as the operating frequency just higher than Fo on the list of operating frequencies maintained in the calibration database, (viii) Pa_next as the estimated job power when the job operates at Fo_next, and (ix) Pmax_next as the maximum workload power at Fo_next (operation 811). When the nodes on the list of selected nodes can operate at a higher frequency than Fo (yes at operation 813), Fo_next is set to the next higher frequency as listed in the calibrator database (operation 814). When Fo_next is set to the next higher frequency as listed in the calibrator database (operation 814), the method 800 returns to operation 804 as discussed above.
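The iterative search of method 800 (operations 802 through 814) amounts to walking the calibrated frequencies from lowest to highest and keeping the last frequency whose estimated job power stays below Pavail. A minimal sketch follows, in which power_at stands in for the lookup-scale-sum estimate built in operations 804 through 809.

```python
def pick_operational_frequency(frequencies, power_at, p_avail):
    """Sketch of method 800: walk the calibrated frequencies from lowest
    to highest and keep the last one whose estimated job power stays below
    the available power. Returns (None, None), the "undefined" case, when
    even the lowest frequency needs more power than is available."""
    f_o, p_a = None, None                    # operation 802
    for f_next in sorted(frequencies):       # operations 803 and 814
        p_next = power_at(f_next)            # operations 804-809 (lookup, scale, sum)
        if p_next >= p_avail:                # operation 810, "no" branch
            break                            # operation 811 outputs the last good pair
        f_o, p_a = f_next, p_next            # operation 812
    return f_o, p_a

# Toy estimate in which job power grows with frequency; 1.2 MW available.
# As in the Job_A example above, the search settles on 2.3 GHz.
freqs = [1.2, 1.8, 2.3, 2.9]
print(pick_operational_frequency(freqs, lambda f: 500_000.0 * f, 1_200_000.0))
```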
list of frequencies, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a power estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the power estimate to a resource manager, wherein the power estimate includes one or more of a maximum power consumed by each node on the list of selected nodes while running a workload of the workload type at each frequency of the list of frequencies, an average power consumed by each node on the list of selected nodes while running the workload at each frequency of the list of frequencies, a maximum temperature of each node on the list of selected nodes while running the workload at each frequency of the list of frequencies, an average power of each node on the list of selected nodes while running the workload at each frequency of the list of frequencies, a performance metric for the workload type while running the workload at each frequency of the list of frequencies, or a minimum required power for the nodes on the list of selected nodes to perform the workload at each frequency of the list of frequencies.A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a list of frequencies, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a power estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the power estimate to a resource manager, wherein the workload type includes a type of one of a small application, a portion of an application or a test script, wherein the small application, the portion of an application and the test script are used in a calibration of the nodes on the list of selected nodes.A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a list of frequencies, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a power estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the power estimate to a resource manager, wherein generating the power estimate includes scaling the calibration data, wherein the scaling adjusts the power consumed per node on the list of selected nodes when the distributed computer system performs a job of the workload type to consider a size of the list of selected nodes.A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one 
A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency.

A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the calibration data includes one or more of a maximum power, an average power, a maximum temperature, an average temperature, or a performance metric.
A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the frequency estimate further includes the selected frequency and one or more of a maximum power consumed by each node on the list of selected nodes while running a workload of the workload type at the selected frequency, an average power consumed by each node on the list of selected nodes while running the workload at the selected frequency, a maximum temperature of each node on the list of selected nodes while running the workload at the selected frequency, an average temperature of each node on the list of selected nodes while running the workload at the selected frequency, a performance metric for the workload type while running the workload at the selected frequency, or a minimum required power for the nodes on the list of selected nodes to perform the workload at the selected frequency.
A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the frequency estimate further includes the selected frequency and one or more of a maximum power consumed by each node on the list of selected nodes while running a workload of the workload type at the selected frequency, an average power consumed by each node on the list of selected nodes while running the workload at the selected frequency, a maximum temperature of each node on the list of selected nodes while running the workload at the selected frequency, an average temperature of each node on the list of selected nodes while running the workload at the selected frequency, a performance metric for the workload type while running the workload at the selected frequency, or a minimum required power for the nodes on the list of selected nodes to perform the workload at the selected frequency, wherein the selected frequency is a highest frequency at which the nodes on the list of selected nodes may operate such that a total power consumed by the nodes on the list of selected nodes does not exceed the available power while running the workload.

A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the frequency estimate further includes the selected frequency and one or more of a maximum power consumed by each node on the list of selected nodes while running a workload of the workload type at the selected frequency, an average power consumed by each node on the list of selected nodes while running the workload at the selected frequency, a maximum temperature of each node on the list of selected nodes while running the workload at the selected frequency, an average temperature of each node on the list of selected nodes while running the workload at the selected frequency, a performance metric for the workload type while running the workload at the selected frequency, or a minimum required power for the nodes on the list of selected nodes to perform the workload at the selected frequency, wherein the frequency estimate further includes an average power consumed by each node on the list of selected nodes while running the workload at a second frequency, a maximum power consumed by each node on the list of selected nodes while running the workload at the second frequency, an average power consumed by each node on the list of selected nodes while running the workload at a third frequency, and a maximum power consumed by each node on the list of selected nodes while running the workload at the third frequency, wherein the second frequency is a next higher frequency than the selected frequency at which the nodes on the list of selected nodes were calibrated and the third frequency is a next lower frequency at which the nodes on the list of selected nodes were calibrated.

A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the workload type includes a type of one of a small application, a portion of an application or a test script, wherein the small application, the portion of an application and the test script are used in a calibration of the nodes on the list of selected nodes.
A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the input parameters further include an option for a selected frequency as one of a frequency resulting in a fastest performance metric or a frequency resulting in a most energy efficient metric.

A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the input parameters further include an option for a selected frequency as one of a frequency resulting in a fastest performance metric or a frequency resulting in a most energy efficient metric, wherein the frequency resulting in the fastest performance metric is a frequency at which the nodes on the list of selected nodes operate to complete a workload of the workload type in a fastest time.
A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the input parameters further include an option for a selected frequency as one of a frequency resulting in a fastest performance metric or a frequency resulting in a most energy efficient metric, wherein the frequency resulting in the most energy efficient metric is a frequency at which the nodes on the list of selected nodes operate to complete a workload of the workload type with a lowest aggregate power consumption among the nodes on the list of selected nodes.

A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system, responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database, generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency, wherein the frequency estimate further includes the selected frequency and one or more of a maximum power consumed by each node on the list of selected nodes while running a workload of the workload type at the selected frequency, an average power consumed by each node on the list of selected nodes while running the workload at the selected frequency, a maximum temperature of each node on the list of selected nodes while running the workload at the selected frequency, an average temperature of each node on the list of selected nodes while running the workload at the selected frequency, a performance metric for the workload type while running the workload at the selected frequency, or a minimum required power for the nodes on the list of selected nodes to perform the workload at the selected frequency, wherein generating the frequency estimate includes scaling the calibration data, wherein the scaling adjusts the power consumed per node on the list of selected nodes when the distributed computer system performs a job of the workload type to consider a size of the list of selected nodes.

A system for generating a power estimate for a distributed computer system comprising one or more processors and a storage module communicatively coupled to the one or more processors, the storage module comprises an estimator module to receive a plurality of input parameters, the plurality of input parameters including (i) a workload type, and (ii) a list of selected nodes belonging to the distributed computer system, determine a lowest frequency for each node on the list of selected nodes, generate the power estimate by (i) determining an average power consumption for each node on the list of selected nodes at the lowest frequency and (ii) scaling the average power consumption for each node on the list of selected nodes and provide the power estimate to a resource manager, wherein the power estimate includes a minimum required power to start processing of a job of the workload type.
and (ii) a list of selected nodes belonging to a distributed computer system, determine a lowest frequency for each node on the list of selected nodes, generate a power estimate by (i) determining an average power consumption for each node on the list of selected nodes at the lowest frequency and (ii) scaling the average power consumption for each node on the list of selected nodes and provide the power estimate to a resource manager, wherein the power estimate includes a minimum required power to start processing of a job of the workload type, wherein when the plurality of input parameters further includes a specified frequency, the lowest frequency is set to the specified frequency.A system comprising one or more processors and a storage module communicatively coupled to the one or more processors, the storage module comprises an estimator module to receive a plurality of input parameters, the plurality of input parameters including (i) a workload type, and (ii) a list of selected nodes belonging to a distributed computer system, determine a lowest frequency for each node on the list of selected nodes, generate a power estimate by (i) determining an average power consumption for each node on the list of selected nodes at the lowest frequency and (ii) scaling the average power consumption for each node on the list of selected nodes and provide the power estimate to a resource manager, wherein the power estimate includes a minimum required power to start processing of a job of the workload type, wherein when the plurality of input parameters does not include a specified frequency, the lowest frequency is set to a lowest frequency for each node on the list of selected nodes that is associated with calibration data stored within a calibration database of the distributed computer system.A system comprising one or more processors and a storage module communicatively coupled to the one or more processors, the storage module comprises an estimator module to receive a plurality of input parameters, the plurality of input parameters including (i) a workload type, and (ii) a list of selected nodes belonging to a distributed computer system, determine a lowest frequency for each node on the list of selected nodes, generate a power estimate by (i) determining an average power consumption for each node on the list of selected nodes at the lowest frequency and (ii) scaling the average power consumption for each node on the list of selected nodes and provide the power estimate to a resource manager, wherein the power estimate includes a minimum required power to start processing of a job of the workload type, wherein when data associated with calibration of the nodes on the selected list of nodes for the workload type is determined to be present in a calibration database included in the distributed computer system, (i) an average workload power is determined for each node on the selected list of nodes based on calibration data associated with the workload type stored in the calibration database, and (ii) the average workload power for each node on the selected list of nodes is scaled and summated, wherein the summation is provided in the power estimate as a startup power.A system comprising one or more processors and a storage module communicatively coupled to the one or more processors, the storage module comprises an estimator module to receive a plurality of input parameters, the plurality of input parameters including (i) a workload type, and (ii) a list of selected nodes belonging to a distributed 
computer system, determine a lowest frequency for each node on the list of selected nodes, generate a power estimate by (i) determining an average power consumption for each node on the list of selected nodes at the lowest frequency and (ii) scaling the average power consumption for each node on the list of selected nodes and provide the power estimate to a resource manager, wherein the power estimate includes a minimum required power to start processing of a job of the workload type, wherein when data associated with calibration of the nodes on the selected list of nodes for the workload type is determined to not be present in a calibration database included in the distributed computer system, (i) an average maximum power is determined for each node on the selected list of nodes based on calibration data associated with a power-virus stored in the calibration database, and (ii) the average maximum power for each node on the selected list of nodes is scaled and summated, wherein the summation is provided in the power estimate as a startup power.In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended clauses.The following section of the description consists of numbered paragraphs simply providing statements of the invention already described herein. The numbered paragraphs in this section are not claims. The claims are set forth below in the later section headed "claims".Clause 1. A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including: receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a list of frequencies; responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database; generating, by the estimator module, a power estimate based on the plurality of workload parameters and the calibration data; and providing, by the estimator module, the power estimate to a resource manager.Clause 2. The non-transitory computer readable storage medium of clause 1 , wherein the calibration data includes one or more of a maximum power, an average power, a maximum temperature, an average power, a performance metric, or a minimum required power.Clause 3. 
Clause 3. The non-transitory computer readable storage medium of clause 1, wherein the power estimate includes one or more of a maximum power consumed by each node on the list of selected nodes while running a workload of the workload type at each frequency of the list of frequencies, an average power consumed by each node on the list of selected nodes while running the workload at each frequency of the list of frequencies, a maximum temperature of each node on the list of selected nodes while running the workload at each frequency of the list of frequencies, an average temperature of each node on the list of selected nodes while running the workload at each frequency of the list of frequencies, a performance metric for the workload type while running the workload at each frequency of the list of frequencies, or a minimum required power for the nodes on the list of selected nodes to perform the workload at each frequency of the list of frequencies.

Clause 4. The non-transitory computer readable storage medium of clause 1, wherein the workload type includes a type of one of a small application, a portion of an application or a test script, wherein the small application, the portion of an application and the test script are used in a calibration of the nodes on the list of selected nodes.

Clause 5. The non-transitory computer readable storage medium of clause 1, wherein generating the power estimate includes scaling the calibration data, wherein the scaling adjusts the power consumed per node on the list of selected nodes when the distributed computer system performs a job of the workload type to consider a size of the list of selected nodes.

Clause 6. A non-transitory computer readable storage medium having stored thereon instructions, the instructions being executable by one or more processors to perform operations including: receiving, by an estimator module executed by the one or more processors, a plurality of input parameters, the plurality of input parameters including (i) a workload type, (ii) a list of selected nodes belonging to a distributed computer system, and (iii) a power value available to the distributed computer system; responsive to receiving the plurality of workload parameters, retrieving, by the estimator module, calibration data from a calibration database; generating, by the estimator module, a frequency estimate based on the plurality of workload parameters and the calibration data; and providing, by the estimator module, the frequency estimate to a resource manager, wherein the frequency estimate includes a selected frequency at which the nodes should operate while processing a workload and a corresponding power the processing of the workload will consume at the frequency.

Clause 7. The non-transitory computer readable storage medium of clause 6, wherein the calibration data includes one or more of a maximum power, an average power, a maximum temperature, an average temperature, or a performance metric.
Clause 8. The non-transitory computer readable storage medium of clause 6, wherein the frequency estimate further includes the selected frequency and one or more of a maximum power consumed by each node on the list of selected nodes while running a workload of the workload type at the selected frequency, an average power consumed by each node on the list of selected nodes while running the workload at the selected frequency, a maximum temperature of each node on the list of selected nodes while running the workload at the selected frequency, an average temperature of each node on the list of selected nodes while running the workload at the selected frequency, a performance metric for the workload type while running the workload at the selected frequency, or a minimum required power for the nodes on the list of selected nodes to perform the workload at the selected frequency.

Clause 9. The non-transitory computer readable storage medium of clause 8, wherein the selected frequency is a highest frequency at which the nodes on the list of selected nodes may operate such that a total power consumed by the nodes on the list of selected nodes does not exceed the available power while running the workload.

Clause 10. The non-transitory computer readable storage medium of clause 8, wherein the frequency estimate further includes an average power consumed by each node on the list of selected nodes while running the workload at a second frequency, a maximum power consumed by each node on the list of selected nodes while running the workload at the second frequency, an average power consumed by each node on the list of selected nodes while running the workload at a third frequency, and a maximum power consumed by each node on the list of selected nodes while running the workload at the third frequency, wherein the second frequency is a next higher frequency than the selected frequency at which the nodes on the list of selected nodes were calibrated and the third frequency is a next lower frequency at which the nodes on the list of selected nodes were calibrated.

Clause 11. The non-transitory computer readable storage medium of clause 6, wherein the workload type includes a type of one of a small application, a portion of an application or a test script, wherein the small application, the portion of an application and the test script are used in a calibration of the nodes on the list of selected nodes.

Clause 12. The non-transitory computer readable storage medium of clause 6, wherein the input parameters further include an option for a selected frequency as one of a frequency resulting in a fastest performance metric or a frequency resulting in a most energy efficient metric.

Clause 13. The non-transitory computer readable storage medium of clause 12, wherein the frequency resulting in the fastest performance metric is a frequency at which the nodes on the list of selected nodes operate to complete a workload of the workload type in a fastest time.

Clause 14. The non-transitory computer readable storage medium of clause 12, wherein the frequency resulting in the most energy efficient metric is a frequency at which the nodes on the list of selected nodes operate to complete a workload of the workload type with a lowest aggregate power consumption among the nodes on the list of selected nodes.
Clause 15. The non-transitory computer readable storage medium of clause 8, wherein generating the frequency estimate includes scaling the calibration data, wherein the scaling adjusts the power consumed per node on the list of selected nodes when the distributed computer system performs a job of the workload type to consider a size of the list of selected nodes.

Clause 16. A system for generating a power estimate for a distributed computer system, comprising: one or more processors; and a storage module communicatively coupled to the one or more processors, the storage module comprising an estimator module to: receive a plurality of input parameters, the plurality of input parameters including (i) a workload type, and (ii) a list of selected nodes belonging to the distributed computer system; determine a lowest frequency for each node on the list of selected nodes; generate the power estimate by (i) determining an average power consumption for each node on the list of selected nodes at the lowest frequency and (ii) scaling the average power consumption for each node on the list of selected nodes; and provide the power estimate to a resource manager, wherein the power estimate includes a minimum required power to start processing of a job of the workload type.

Clause 17. The system of clause 16, wherein when the plurality of input parameters further includes a specified frequency, the lowest frequency is set to the specified frequency.

Clause 18. The system of clause 16, wherein when the plurality of input parameters does not include a specified frequency, the lowest frequency is set to a lowest frequency for each node on the list of selected nodes that is associated with calibration data stored within a calibration database of the distributed computer system.

Clause 19. The system of clause 16, wherein when data associated with calibration of the nodes on the selected list of nodes for the workload type is determined to be present in a calibration database included in the distributed computer system, (i) an average workload power is determined for each node on the selected list of nodes based on calibration data associated with the workload type stored in the calibration database, and (ii) the average workload power for each node on the selected list of nodes is scaled and summated, wherein the summation is provided in the power estimate as a startup power.

Clause 20. The system of clause 16, wherein when data associated with calibration of the nodes on the selected list of nodes for the workload type is determined to not be present in a calibration database included in the distributed computer system, (i) an average maximum power is determined for each node on the selected list of nodes based on calibration data associated with a power-virus stored in the calibration database, and (ii) the average maximum power for each node on the selected list of nodes is scaled and summated, wherein the summation is provided in the power estimate as a startup power.
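To make the startup-power computation of clauses 16-20 concrete, the following is a minimal C++ sketch. All type, field, and function names are hypothetical (the clauses do not define an API), the scaling function is a placeholder because the clauses say only that per-node power is adjusted for the size of the node list, and error handling for missing calibration records is omitted.

#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical calibration record: average power (watts) per calibrated
// frequency (MHz) for one node and one workload type.
using PowerByFreq = std::map<int, double>;

struct CalibrationDb {
    // workload type -> node name -> calibration record
    std::map<std::string, std::map<std::string, PowerByFreq>> records;
    static constexpr const char* kPowerVirus = "power-virus";
};

// Placeholder for the scaling of clauses 5 and 19: the clauses state only
// that per-node power is adjusted "to consider a size of the list of
// selected nodes" without giving a formula.
double scaleForJobSize(double perNodePower, std::size_t /*nodeCount*/) {
    return perNodePower;
}

// Startup power per clauses 16-20: at each node's lowest calibrated
// frequency (or a caller-specified one, clauses 17-18), take the average
// workload power if the workload was calibrated (clause 19), otherwise the
// average maximum power from the power-virus calibration (clause 20);
// scale each per-node value and sum.
double startupPowerEstimate(const CalibrationDb& db,
                            const std::string& workloadType,
                            const std::vector<std::string>& nodes,
                            int specifiedFreq = -1) {
    double total = 0.0;
    const auto workloadIt = db.records.find(workloadType);
    const auto& virusByNode = db.records.at(CalibrationDb::kPowerVirus);
    for (const auto& node : nodes) {
        const bool calibrated = workloadIt != db.records.end() &&
                                workloadIt->second.count(node) != 0;
        const PowerByFreq& cal =
            calibrated ? workloadIt->second.at(node) : virusByNode.at(node);
        // std::map is ordered, so begin() is the lowest calibrated frequency.
        double perNode = (specifiedFreq >= 0) ? cal.at(specifiedFreq)
                                              : cal.begin()->second;
        total += scaleForJobSize(perNode, nodes.size());
    }
    return total;  // the summation reported as the startup power
}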
In one embodiment, the present invention includes a method for receiving a rounding instruction and an immediate value in a processor, determining if a rounding mode override indicator of the immediate value is active, and if so executing a rounding operation on a source operand in a floating point unit of the processor responsive to the rounding instruction and according to a rounding mode set forth in the immediate operand. Other embodiments are described and claimed.
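As a toy illustration of the method this abstract summarizes (check the immediate's override indicator, then round according to either the immediate's mode or the default), consider the following C++ sketch. The names are ours, not the patent's, and only the four IEEE Standard 754 rounding modes are modeled.

#include <cmath>

// Hypothetical mode set; a real immediate field would encode these as bits.
enum class Mode { NearestEven, TowardNegInf, TowardPosInf, TowardZero };

double applyRounding(Mode m, double x) {
    switch (m) {
        case Mode::TowardNegInf: return std::floor(x);
        case Mode::TowardPosInf: return std::ceil(x);
        case Mode::TowardZero:   return std::trunc(x);
        default:                 // nearest even under the default FP environment
            return std::nearbyint(x);
    }
}

// If the immediate's override indicator is active, round per the immediate's
// mode; otherwise fall back to the processor's default mode.
double executeRoundInstruction(double src, bool overrideActive,
                               Mode immediateMode, Mode defaultMode) {
    return applyRounding(overrideActive ? immediateMode : defaultMode, src);
}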
1. A processor comprising: a plurality of registers including a first register and a second register; a control register having a first field to indicate a current floating point rounding mode and a second field to indicate whether denormal values are to be converted to zero; a status register having a third field to store a value indicating whether an inexact exception has occurred; a control unit to receive a rounding instruction and to decode a field of the rounding instruction, the rounding instruction to identify the first register, the first register to store a source operand having a plurality of packed double precision floating point values, the rounding instruction to indicate that the current floating point rounding mode is to be used, and the rounding instruction to indicate suppression of a change in the value of the third field of the status register; and an execution unit coupled to the control unit and coupled to the plurality of registers, the execution unit, responsive to the rounding instruction, to: convert denormal values of the source operand to zero when the second field indicates that denormal values are to be converted to zero; perform a rounding operation according to the current floating point rounding mode to generate packed double precision floating point values having integer values corresponding to the packed double precision floating point values of the source operand; and store the integer-valued packed double precision floating point values in the second register.

2. The processor of claim 1, wherein a bit of the rounding instruction has a value of one when the rounding instruction indicates that a change in the value of the third field of the status register is to be suppressed.

3. The processor of claim 1, wherein the current floating point rounding mode is one of: rounding toward negative infinity; rounding toward positive infinity; rounding toward zero; and rounding to the nearest even number.

4. The processor of claim 1, wherein the rounding instruction is included in an instruction set architecture (ISA) having a second rounding instruction to indicate that a packed data register stores a scalar value, and wherein the second rounding instruction causes the processor to perform a rounding operation on the scalar value to generate an integer-valued floating point value.

5. A system comprising: a memory controller; and a processor core coupled to the memory controller, the processor core comprising: a plurality of registers including a first register and a second register; a control register having a first field to indicate a current floating point rounding mode and a second field to indicate whether denormal values are to be converted to zero; a status register having a third field to store a value indicating whether an inexact exception has occurred; a control unit to receive a rounding instruction and to decode a field of the rounding instruction, the rounding instruction to identify the first register, the first register to store a source operand having a plurality of packed double precision floating point values, the rounding instruction to indicate that the current floating point rounding mode is to be used, and the rounding instruction to indicate suppression of a change in the value of the third field of the status register; and an execution unit coupled to the control unit and coupled to the plurality of registers, the execution unit, responsive to the rounding instruction, to: convert denormal values of the source operand to zero when the second field indicates that denormal values are to be converted to zero; perform a rounding operation according to the current floating point rounding mode to generate packed double precision floating point values having integer values corresponding to the packed double precision floating point values of the source operand; and store the integer-valued packed double precision floating point values in the second register.

6. The system of claim 5, wherein a bit of the rounding instruction has a value of one when the rounding instruction indicates that a change in the value of the third field of the status register is to be suppressed.

7. The system of claim 5, wherein the current floating point rounding mode is one of: rounding toward negative infinity; rounding toward positive infinity; rounding toward zero; and rounding to the nearest even number.

8. The system of claim 5, wherein the rounding instruction is included in an instruction set architecture (ISA) having a second rounding instruction to indicate that a packed data register stores a scalar value, and wherein the second rounding instruction causes the processor to perform a rounding operation on the scalar value to generate an integer-valued floating point value.

9. The system of claim 5, further comprising a communication device coupled to the processor core.

10. The system of claim 5, further comprising an I/O device coupled to the processor core.

11. The system of claim 5, further comprising a graphics engine coupled to the processor core.

12. The system of claim 5, further comprising a Peripheral Component Interconnect (PCI) Express bus coupled to the processor core.

13. The system of claim 5, further comprising audio I/O coupled to the processor core.

14. A system comprising: a memory; and a processor coupled to the memory, the processor comprising: a plurality of registers including a first register and a second register; a control register having a first field to indicate a current floating point rounding mode and a second field to indicate whether denormal values are to be converted to zero; a status register having a third field to store a value indicating whether an inexact exception has occurred; a control unit to receive a rounding instruction and to decode a field of the rounding instruction, the rounding instruction to identify the first register, the first register to store a source operand having a plurality of packed double precision floating point values, the rounding instruction to indicate that the current floating point rounding mode is to be used, and the rounding instruction to indicate suppression of a change in the value of the third field of the status register; and an execution unit coupled to the control unit and coupled to the plurality of registers, the execution unit, responsive to the rounding instruction, to: convert denormal values of the source operand to zero when the second field indicates that denormal values are to be converted to zero; perform a rounding operation according to the current floating point rounding mode to generate packed double precision floating point values having integer values corresponding to the packed double precision floating point values of the source operand; and store the integer-valued packed double precision floating point values in the second register.

15. The system of claim 14, wherein a bit of the rounding instruction has a value of one when the rounding instruction indicates that a change in the value of the third field of the status register is to be suppressed.

16. The system of claim 14, wherein the current floating point rounding mode is one of: rounding toward negative infinity; rounding toward positive infinity; rounding toward zero; and rounding to the nearest even number.

17. The system of claim 14, wherein the rounding instruction is included in an instruction set architecture (ISA) having a second rounding instruction to indicate that a packed data register stores a scalar value, and wherein the second rounding instruction causes the processor to perform a rounding operation on the scalar value to generate an integer-valued floating point value.

18. The system of claim 14, further comprising an audio I/O device coupled to the processor.

19. The system of claim 14, further comprising a communication device coupled to the processor.

20. The system of claim 14, further comprising an I/O device coupled to the processor.

21. The system of claim 14, further comprising a mass storage device coupled to the processor to store a multimedia application.

22. The system of claim 14, further comprising a Peripheral Component Interconnect (PCI) Express bus coupled to the processor.

23. The system of claim 14, further comprising a disk drive coupled to the processor to store a multimedia application.

24. The system of claim 14, further comprising a graphics engine coupled to the processor.

25. A processor comprising: a source vector register to store a plurality of packed double precision floating point values associated with a source operand of a rounding instruction; and an execution circuit to round the plurality of double precision floating point values to generate a plurality of rounded integer-valued packed double precision floating point values to be stored in a destination vector register, the execution circuit to perform the rounding based on an immediate having a plurality of bits, the plurality of bits including a first set of one or more bits to specify a rounding mode to be used by the execution circuit and a second set of one or more bits to indicate whether a precision exception is to be suppressed.

26. The processor of claim 25, further comprising: circuitry to decode the immediate to determine the rounding mode to be used by the execution circuit and to determine whether the precision exception is to be suppressed.

27. The processor of claim 25, wherein the execution circuit is to select a round-to-nearest mode to generate a rounded result corresponding to the nearest integer value in response to the first set of one or more bits having a first value.

28. The processor of claim 27, wherein the execution circuit is to select a truncation rounding mode to generate a truncated result in response to the first set of one or more bits having a second value.

29. The processor of claim 27, wherein the execution circuit is to select a round-toward-negative-infinity mode or a round-toward-positive-infinity mode, respectively, in response to the first set of one or more bits having a third value or a fourth value.

30. The processor of claim 25, wherein the execution circuit is a floating point execution circuit.

31. The processor of claim 25, further comprising: multiple cores, wherein the execution circuit is integrated into one of the multiple cores.

32. The processor of claim 25, further comprising: a communication interconnect to couple the processor to one or more devices.

33. The processor of claim 32, wherein the communication interconnect comprises a Peripheral Component Interconnect (PCI) Express interconnect.
34. A method comprising: storing a plurality of packed double precision floating point values associated with a source operand of a rounding instruction in a source vector register; and rounding the plurality of double precision floating point values to generate a plurality of rounded integer-valued packed double precision floating point values to be stored in a destination vector register, wherein the rounding is performed according to an immediate having a plurality of bits, the plurality of bits including a first set of one or more bits to specify a rounding mode and a second set of one or more bits to indicate whether a precision exception is to be suppressed.

35. The method of claim 34, further comprising: decoding the immediate to determine the rounding mode to be used and to determine whether the precision exception is to be suppressed.

36. The method of claim 34, wherein in response to the first set of one or more bits having a first value, a round-to-nearest mode is selected to generate a rounded result corresponding to the nearest integer value.

37. The method of claim 36, wherein a truncation rounding mode is selected to generate a truncated result in response to the first set of one or more bits having a second value.

38. The method of claim 37, wherein in response to the first set of one or more bits having a third value or a fourth value, a round-toward-negative-infinity mode or a round-toward-positive-infinity mode is selected, respectively.

39. A machine readable medium having program code stored thereon, the program code, when executed by a machine, causing the machine to perform the following operations: storing a plurality of packed double precision floating point values associated with a source operand of a rounding instruction in a source vector register; and rounding the plurality of double precision floating point values to generate a plurality of rounded integer-valued packed double precision floating point values to be stored in a destination vector register, wherein the rounding is performed according to an immediate having a plurality of bits, the plurality of bits including a first set of one or more bits to specify a rounding mode and a second set of one or more bits to indicate whether a precision exception is to be suppressed.

40. The machine readable medium of claim 39, further comprising program code to cause the machine to: decode the immediate to determine the rounding mode to be used and to determine whether the precision exception is to be suppressed.

41. The machine readable medium of claim 39, further comprising program code to cause the machine to: select a round-to-nearest mode to generate a rounded result corresponding to the nearest integer value in response to the first set of one or more bits having a first value.

42. The machine readable medium of claim 41, further comprising program code to cause the machine to: select a truncation rounding mode to generate a truncated result in response to the first set of one or more bits having a second value.

43. The machine readable medium of claim 42, further comprising program code to cause the machine to: select a round-toward-negative-infinity mode or a round-toward-positive-infinity mode, respectively, in response to the first set of one or more bits having a third value or a fourth value.

44. A processor comprising: a first register; a second register; a control register to store an indicator of a default floating point rounding mode and a denormals-are-zero indicator; a decoder to receive a rounding instruction, the rounding instruction to identify the first register, the first register storing a source operand having a plurality of packed double precision floating point values, the rounding instruction having a rounding mode override indicator to indicate that the default floating point rounding mode is to be overridden with one of a plurality of possible override floating point rounding modes, the rounding instruction to identify the override floating point rounding mode; and an execution unit coupled to the decoder and coupled to the first and second registers, the execution unit, responsive to the rounding instruction, to: convert denormal packed double precision floating point values of the source operand to zero; perform a rounding operation on the packed double precision floating point values according to the identified override floating point rounding mode to generate integer-valued double precision floating point values; and store the integer-valued double precision floating point values in the second register.

45. The processor of claim 44, wherein the override floating point rounding mode is rounding toward negative infinity.

46. The processor of claim 44, wherein the override floating point rounding mode is rounding toward positive infinity.

47. The processor of claim 44, wherein the override floating point rounding mode is rounding toward zero.

48. The processor of claim 44, wherein the override floating point rounding mode is rounding to the nearest even number.

49. The processor of claim 44, wherein the plurality of possible override floating point rounding modes comprises a round-away-from-zero mode.

50. The processor of claim 44, wherein the rounding mode override indicator is a single bit, and wherein the single bit is zero to indicate that the default floating point rounding mode is to be overridden.

51. The processor of claim 44, wherein the rounding instruction is included in an instruction set architecture (ISA) having a second rounding instruction to indicate that a packed data register stores a scalar value, and wherein the second rounding instruction causes the processor to perform a rounding operation on the scalar value to generate an integer-valued floating point value.

52. The processor of claim 44, further comprising a status register including an inexact exception field, and wherein the execution unit, responsive to the rounding instruction, does not update the inexact exception field if an inexact exception occurs during execution of the rounding instruction.

53. The processor of claim 44, wherein the rounding instruction has a rounding mode control bit to identify the override floating point rounding mode.
54. A system comprising: a memory controller; and a processor core coupled to the memory controller, the processor core comprising: a first register; a second register; a control register to store an indicator of a default floating point rounding mode and a denormals-are-zero indicator; a decoder to receive a rounding instruction, the rounding instruction to identify the first register, the first register storing a source operand having a plurality of packed double precision floating point values, the rounding instruction having a rounding mode override indicator to indicate that the default floating point rounding mode is to be overridden with one of a plurality of possible override floating point rounding modes, the rounding instruction to identify the override floating point rounding mode; and an execution unit coupled to the decoder and coupled to the first and second registers, the execution unit, responsive to the rounding instruction, to: convert denormal packed double precision floating point values of the source operand to zero; perform a rounding operation on the packed double precision floating point values according to the identified override floating point rounding mode to generate integer-valued double precision floating point values; and store the integer-valued double precision floating point values in the second register.

55. The system of claim 54, wherein the override floating point rounding mode is rounding toward negative infinity.

56. The system of claim 54, wherein the override floating point rounding mode is rounding toward positive infinity.

57. The system of claim 54, wherein the override floating point rounding mode is rounding toward zero.

58. The system of claim 54, wherein the override floating point rounding mode is rounding to the nearest even number.

59. The system of claim 54, wherein the plurality of possible override floating point rounding modes comprises a round-away-from-zero mode.

60. The system of claim 54, wherein the rounding mode override indicator is a single bit, and wherein the single bit is zero to indicate that the default floating point rounding mode is to be overridden.

61. The system of claim 54, wherein the processor core performs the rounding operation in accordance with a round-away-from-zero mode.

62. The system of claim 54, wherein the rounding instruction is included in an instruction set architecture (ISA) having a second rounding instruction to indicate that a packed data register stores a scalar value, and wherein the second rounding instruction causes the processor to perform a rounding operation on the scalar value to generate an integer-valued floating point value.

63. The system of claim 54, further comprising a status register including an inexact exception field, and wherein the execution unit, responsive to the rounding instruction, does not update the inexact exception field if an inexact exception occurs during execution of the rounding instruction.

64. The system of claim 54, wherein the rounding instruction has a rounding mode control bit to identify the override floating point rounding mode.

65. The system of claim 54, further comprising a communication device coupled to the processor core.

66. The system of claim 54, further comprising an I/O device coupled to the processor core.

67. The system of claim 54, further comprising a graphics engine coupled to the processor core.

68. The system of claim 54, further comprising a Peripheral Component Interconnect (PCI) Express bus coupled to the processor core.

69. The system of claim 54, further comprising a disk drive coupled to the processor core.

70. The system of claim 54, further comprising a mass storage device coupled to the processor core.

71. The system of claim 54, further comprising audio I/O coupled to the processor core.

72. A system comprising: a memory; and a processor coupled to the memory, the processor comprising: a first register; a second register; a control register to store an indicator of a default floating point rounding mode and a denormals-are-zero indicator; a decoder to receive a rounding instruction, the rounding instruction to identify the first register, the first register storing a source operand having a plurality of packed double precision floating point values, the rounding instruction having a rounding mode override indicator to indicate that the default floating point rounding mode is to be overridden with one of a plurality of possible override floating point rounding modes, the rounding instruction to identify the override floating point rounding mode; and an execution unit coupled to the decoder and coupled to the first and second registers, the execution unit, responsive to the rounding instruction, to: convert denormal packed double precision floating point values of the source operand to zero; perform a rounding operation on the packed double precision floating point values according to the identified override floating point rounding mode to generate integer-valued double precision floating point values; and store the integer-valued double precision floating point values in the second register.

73. The system of claim 72, wherein the override floating point rounding mode is rounding toward negative infinity.

74. The system of claim 72, wherein the override floating point rounding mode is rounding toward positive infinity.

75. The system of claim 72, wherein the override floating point rounding mode is rounding toward zero.

76. The system of claim 72, wherein the override floating point rounding mode is rounding to the nearest even number.

77. The system of claim 72, wherein the plurality of possible override floating point rounding modes comprises a round-away-from-zero mode.

78. The system of claim 72, wherein the rounding mode override indicator is a single bit, and wherein the single bit is zero to indicate that the default floating point rounding mode is to be overridden.

79. The system of claim 72, wherein the processor performs the rounding operation in accordance with a round-away-from-zero mode.

80. The system of claim 72, wherein the rounding instruction is included in an instruction set architecture (ISA) having a second rounding instruction to indicate that a packed data register stores a scalar value, and wherein the second rounding instruction causes the processor to perform a rounding operation on the scalar value to generate an integer-valued floating point value.

81. The system of claim 72, further comprising a status register including an inexact exception field, and wherein the execution unit, responsive to the rounding instruction, does not update the inexact exception field if an inexact exception occurs during execution of the rounding instruction.

82. The system of claim 72, wherein the rounding instruction has a rounding mode control bit to identify the override floating point rounding mode.

83. The system of claim 72, further comprising audio I/O coupled to the processor.

84. The system of claim 72, further comprising a communication device coupled to the processor.

85. The system of claim 72, further comprising an I/O device coupled to the processor.

86. The system of claim 72, further comprising a mass storage device coupled to the processor.

87. The system of claim 72, further comprising a Peripheral Component Interconnect (PCI) Express bus coupled to the processor.

88. The system of claim 72, further comprising a disk drive coupled to the processor.

89. The system of claim 72, further comprising a graphics engine coupled to the processor.
Performing Rounding Operations in Response to an Instruction

Technical Field

The present invention relates generally to mathematical operations performed by a processor on data, and more particularly to rounding operations.

Background

A processor performs a variety of mathematical operations on data. Data may belong to different types, including, for example, integer values and floating point (FP) values with different intrinsic precision. Mathematical operations such as multiplication or addition are likely to produce results that need to be converted to a lower precision format. Accordingly, a rounding operation can be performed to round the FP result.

Although such rounding can be performed as part of different mathematical operations, in some processor architectures performing a rounding operation on data elements as an independent operation is limited or impossible without multiple complex steps. For example, the processor can be configured to perform rounding of FP values to integer values according to a default rounding mode. However, for various reasons, it may be necessary to round a given source operand according to a different mode. To perform such an operation, complex steps of saving the current configuration state of the processor, loading a new configuration state including information about the required rounding mode, performing the rounding operation, and restoring the original processor state may occur. These operations can be time consuming, increase complexity, and consume excessive processing cycles. In addition, although it is desirable to support other rounding modes as new programming languages develop, the rounding operations performed in a processor usually still follow a limited number of rounding modes, i.e., the rounding modes described in the Institute of Electrical and Electronics Engineers (IEEE) Standard 754-1985 for binary floating point arithmetic (published in 1985).

Summary of the Invention

According to a first aspect of the invention, a method is provided comprising: receiving a rounding instruction and an immediate value in a processor; determining whether a rounding mode override indicator of the immediate value is active; and if so, performing a rounding operation on a source operand in a floating point unit of the processor in response to the rounding instruction and according to a rounding mode specified in the immediate value.

According to a second aspect of the invention, there is provided an apparatus comprising: a controller to receive a rounding instruction and an immediate data element associated with the rounding instruction, wherein the controller determines whether to override a default rounding mode based on an override indicator of the immediate data element; and an execution unit coupled to the controller to perform a rounding operation in response to the rounding instruction, wherein if the default rounding mode is overridden, the execution unit performs the rounding operation based on a rounding mode of the immediate data element.

According to a third aspect of the invention, a system is provided comprising: an execution unit that, if an override indicator is present in a control field associated with a rounding instruction, performs the rounding instruction on a first operand to obtain a rounded result according to a rounding mode specified in the control field; and a dynamic random access memory (DRAM) coupled to the execution unit.
According to a fourth aspect of the invention, there is provided a machine readable medium having stored thereon instructions which, if executed by a machine, cause the machine to perform a method comprising: performing a rounding operation according to a mode specified by an instruction; and storing the result of the rounding operation in a first storage area.

Drawings

FIG. 1 is a flow chart of a method in accordance with one embodiment of the present invention.
FIG. 2 is a block diagram of a portion of a processor in accordance with one embodiment of the present invention.
FIG. 3 is a block diagram of an immediate data element used in conjunction with an instruction, in accordance with one embodiment of the present invention.
FIG. 4 is a flow chart of a method for performing a rounding operation, in accordance with an embodiment of the present invention.
FIG. 5 is a block diagram of a system in accordance with one embodiment of the present invention.

Detailed Description

In various embodiments, multiple rounding instructions of an instruction set architecture (ISA) may be used in a processor, such as in a floating point unit (FPU) of the processor, to efficiently perform rounding operations. In addition to the rounding modes set forth in Institute of Electrical and Electronics Engineers (IEEE) Standard 754-1985 for binary floating point arithmetic (published in 1985, and referred to herein as IEEE Standard 754), embodiments can perform rounding operations according to other rounding modes. For example, as described below, in some embodiments the instructions may provide support for round-half-away-from-zero and round-away-from-zero operations. In addition, these rounding operations can be used with many data types. In some implementations, rounding operations can be performed on single instruction multiple data (SIMD) data types, so that instructions can be executed on extended data types such as packed data elements, in which multiple data elements are packed into a single location, such as an extended register of a processor.

To provide flexibility and efficient instruction execution, embodiments can provide ISA-based instructions that can be executed on source operands. These ISA-based instructions may be different implementations of rounding operations for rounding a source operand to the nearest integer value. Such source operands may already be in a finite precision format (i.e., not the result of an arithmetic operation, but data read from a register or memory). Such instructions can be used in different applications, including multimedia applications, gaming applications, and the like. Furthermore, embodiments can serve as a basis for compiler-based implementations of rounding operations for different programming languages. Note that in various embodiments, a rounding instruction can take a floating point number as the source operand, round it to the nearest integer value, and store the result as a floating point value having an integer value.

In various embodiments, control of the execution may be handled based at least in part on information received with the instruction, such as immediate data received with the instruction. In different implementations, such immediate data can override the default rounding mode currently used by the processor. In such an override case, the immediate data may further provide control of the rounding mode.
In addition, the immediate data can provide for suppression of precision exceptions (i.e., precision suppression). This allows the immediate data to provide non-sticky control of a particular rounding operation, so that the operation can be performed in a minimum of cycles. This may be the case because, when the immediate data received in conjunction with the instruction includes rounding control information, it may not be necessary to update such information present in a configuration register, such as an extended control and status register (CSR), for example the multimedia extension CSR (MXCSR) that exists in processors based on an Intel® architecture (e.g., the IA-32 architecture). However, it is to be understood that embodiments can be used in different processor types, and the scope of the invention is not limited in this respect.

Referring now to FIG. 1, shown is a flow chart of a method in accordance with one embodiment of the present invention. As shown in FIG. 1, method 100 begins by receiving, in a processor, a rounding instruction and associated immediate data (step 110). For example, in many implementations a user-level instruction, such as an ISA instruction, can be received in the processor. In addition to the instruction, immediate data is also received. As will be further described below, such immediate data can include multiple fields to control various aspects of the operation.

Still referring to FIG. 1, control passes from step 110 to decision step 115. At decision step 115, it may be determined whether the immediate data overrides the rounding mode of the configuration register. That is, a field of the immediate data may include an override indicator indicating whether to override the default rounding mode. In various embodiments, such a default rounding mode may exist in a configuration register, such as a CSR, for example an MXCSR field, although the scope of the invention is not limited in this respect. If the immediate data includes an override indicator, then control passes to step 120. At step 120, the source operand identified by the instruction may be dispatched to a floating point unit (FPU) of the processor, for example. In addition, the source operand can be dispatched with information to control the rounding mode of the rounding operation. The control information can be obtained from the immediate data, i.e., as specified in the rounding mode field of the immediate data. As will be described further below, in some implementations a control unit, such as a control selection unit of a processor, can receive the instruction and immediate data and decode the immediate data to determine whether to override the default rounding mode and, if so, to obtain the rounding mode specified in the immediate data.

Still referring to FIG. 1, if it is determined in decision step 115 that the immediate data does not include an override indicator, then control passes to step 125. At step 125, the source operand can be dispatched for execution in the FPU. In this case, the rounding operation can be performed on the basis of, for example, the default rounding mode specified in the configuration register.

In either event, control passes from steps 120 and 125 to step 130, where the rounding operation can be performed. The rounding operation removes the fractional precision of the input (i.e., the source operand) according to the rounding mode. In different embodiments, different ways of performing the rounding operation can be implemented.
For example, in many implementations an FPU can include an adder and a rounding unit to perform a rounding operation. To perform rounding in accordance with an IEEE Standard 754 mode, the adder can take the source operand as a first operand and a constant value, such as zero, as a second operand. The output of the adder can then be fed to a rounding unit that rounds the result according to the selected mode of operation. The rounding unit can thus round its input value into an integer-valued floating point result.

In other embodiments, rounding modes beyond the IEEE Standard 754 modes may be performed. In such an implementation, the source operand and a particular data value, chosen based on the value of the source operand and the rounding mode, can be fed to the FPU adder as its two operands, as further described below. A rounding operation, which can be an IEEE Standard 754 operation, can then be performed on the result. In other extended rounding mode implementations, the source operand and a zero value may be provided to the inputs of the FPU adder, and the resulting value may then be rounded according to control information sent to the rounding unit.

After execution, the result of the rounding operation can be stored in the destination operand (step 140). In various embodiments, the destination operand may be an extended storage register of the processor, although the scope of the invention is not limited in this respect. Furthermore, it can be determined whether a precision exception occurred during the rounding operation (decision step 145). That is, it can be determined whether the rounding operation produced an inexact result that can cause an exception. If not, method 100 can end.

If a precision exception is generated, control may proceed to decision step 150. At decision step 150, it may be determined whether the immediate data includes a field for suppressing the precision exception. That is, in some implementations the immediate data can include a suppression field, the value of which indicates whether the associated rounding instruction should suppress the precision exception if one is generated. If the precision suppression indicator is present, no further action is taken even if a precision exception occurs, and method 100 can end. If the immediate data does not include an indicator for suppressing the precision exception, then control may proceed to step 160. At step 160, a flag for the precision exception can be set in a status register. For example, in some implementations the status register may correspond to an MXCSR, although the scope of the invention is not limited in this respect. Based on the state of this flag in the status register, a precision exception may be raised (for example, if the flag is unmasked). If so, appropriate processing can be performed, for example via a software handler, to handle the exception. If the flag is masked, no action is taken with respect to the set flag, even though the precision exception occurred and is marked in the status register. Although described with this specific implementation in the embodiment of FIG. 1, it is to be understood that the scope of the present invention is not limited in this respect.
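As a rough software analogue of the two-step scheme just described for the extended modes (bias the source operand with a sign-dependent value, then apply a standard IEEE 754 mode), consider the following C++ sketch. It is purely illustrative: the function names are ours, not the patent's, and it ignores inputs within half a unit in the last place of an integer boundary, NaNs, and infinities.

#include <cmath>

// Round half away from zero: bias by +/-0.5, then truncate
// (truncation is the IEEE 754 round-toward-zero mode).
double roundHalfAwayFromZero(double x) {
    return std::trunc(x + std::copysign(0.5, x));
}

// Round away from zero: select the directed IEEE 754 mode that
// moves away from zero for the sign of the input.
double roundAwayFromZero(double x) {
    return std::signbit(x) ? std::floor(x) : std::ceil(x);
}

A production implementation could simply use std::round for the half-away-from-zero case; the two-step form is shown only because it mirrors the adder-plus-rounder datapath described above.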
Referring now to FIG. 2, shown is a block diagram of a portion of a processor in accordance with one embodiment of the present invention. As shown in FIG. 2, processor 200 can include a control selection unit 210 coupled to receive instruction information, such as an incoming micro-operation (μop) and immediate data associated therewith, from a register 205 (which may be a general purpose processor register). The μop can be generated in response to a single ISA instruction for a given rounding operation. In various embodiments, control selection unit 210, which may be implemented in hardware, software, firmware, or a combination thereof, may decode the immediate data. On the basis of the immediate data, it may be determined whether to override the current rounding mode of the processor, represented, for example, in a control or configuration register in which a current rounding control state 220 is stored. If so, control selection unit 210 can decode the mode field of the immediate data, i.e., the rounding mode field, to determine the appropriate rounding mode.

Control selection unit 210 may be coupled to a floating point unit (FPU) 240 to provide control instructions thereto based on the input information. As further shown in FIG. 2, an extended register file, such as so-called extended (XMM) registers 230, may be present within processor 200 and may include the registers identified in the instruction as source and destination operands for rounding operations. Thus, XMM registers 230 can be coupled to FPU 240 to provide source operands thereto and receive destination operands therefrom.

In various embodiments, FPU 240 may include various circuits to perform operations on data. In the embodiment of FIG. 2, FPU 240 includes an FPU adder 242. Specifically, as shown in FIG. 2, FPU adder 242 can be coupled to receive input operands, such as first and second source operands (i.e., operands S1 and S2). FPU 240 may also include an FPU rounder 244 that is coupled to the output of FPU adder 242. In various embodiments, FPU adder 242 can produce infinitely precise results of operations. However, given memory and other constraints, the results may be rounded to provide the final result in a desired format, such as a single or double precision floating point element. Accordingly, FPU rounder 244 can receive the infinitely precise results from FPU adder 242 and perform a rounding operation, for example as indicated by the current rounding mode of processor 200, or based on control from the immediate data obtained with the instruction, i.e., via control selection unit 210. Note that although FPU rounder 244 can generally receive infinitely precise results produced by mathematical operations in FPU adder 242, in different implementations the source operands of rounding instructions may already be in a finite precision format. In these cases, FPU rounder 244 can receive its input value (e.g., the source operand corresponding to a given rounding instruction) and produce a rounded result, for example corresponding to the closest integer value.

Thus, based on a given rounding instruction, FPU 240 can perform a rounding operation on a given source operand, such as from one of XMM registers 230, controlled by information from control selection unit 210. When the rounding operation is complete, the result can be stored, for example, in a different register of XMM registers 230. If a precision exception occurs during the operation, a flag can usually be set in FP status register 225 to indicate it.
Although this specific implementation has been described with regard to the embodiment of FIG. 2, it will be understood that the scope of the present invention is not limited in this respect. For example, in some embodiments, control and status state, such as represented by rounding control state 220 and FP status register 225, may be stored in a single CSR, such as an MXCSR. Note that the immediate data can be provided to the control selection unit 210 in different forms. For example, in some implementations, the immediate data may be in the form of a single-byte data element, although the scope of the invention is not limited in this respect. In addition, different ways of encoding control information can be implemented within the immediate data element. Referring now to FIG. 3, shown is a block diagram of an immediate data element in accordance with one embodiment of the present invention. As shown in FIG. 3, immediate data element 300 can be an 8-bit word that includes a replacement indicator 310, a mode control field 320, a precision suppression indicator 330, and a reserved field 340. Although this particular implementation is shown in the embodiment of FIG. 3, the scope of the invention is not limited in this manner. In the embodiment of FIG. 3, the replacement indicator 310 can be used to determine a replacement status of the rounding instruction associated with the immediate data element 300. As shown in Table 1 below, the replacement indicator 310 can be set to a logic low level to indicate replacement of the default rounding mode (e.g., represented in a configuration register such as the MXCSR), while a logic high value indicates use of the default mode.

Table 1 - Rounding mode replacement indicator
0: Use bits 3:1 of the immediate value
1: Use the default rounding mode

If the replacement indicator 310 indicates that the default rounding mode is to be replaced, the rounding mode field 320 can be decoded to determine the rounding mode associated with the rounding instruction. As shown in Table 2 below, in some implementations six rounding modes can be supported, including the four rounding modes specified by IEEE Standard 754 and two extended rounding modes, which are discussed further below.

Table 2 - Rounding mode field
000: Round to nearest even
001: Round toward -∞
010: Round toward +∞
011: Truncate (round toward zero)
100: Round half away from zero
101: Round away from zero

The immediate data element 300 also includes a precision suppression indicator 330 that can be set to indicate tolerance of an inexact result, such that an inexact result occurring during operation of the associated instruction does not cause an exception flag to be set in the status register. Specifically, as shown in Table 3 below, the precision suppression indicator 330 may take the following form:

Table 3 - Precision suppression indicator
1: The inexact (precision) field is not updated
0: Normal behavior

Note that the precision suppression indicator 330 can be used in conjunction with user-level instructions of different languages, such as C99, Fortran, and Java. Finally, in some embodiments, the reserved field 340 can be reserved for additional information.
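Putting Tables 1-3 together, the following sketch decodes a hypothetical immediate data element per the bit layout just described (bit 0: replacement indicator; bits 3:1: rounding mode field; bit 4: precision suppression indicator). The dictionary keys are descriptive names, not standard terminology.

```python
def decode_imm8(imm8: int) -> dict:
    return {
        "use_default_mode":   bool(imm8 & 0b00001),  # Table 1 (bit 0)
        "mode_field":         (imm8 >> 1) & 0b111,   # Table 2 (bits 3:1)
        "suppress_precision": bool(imm8 & 0b10000),  # Table 3 (bit 4)
    }

# Mode field 011 (truncate), replace the default mode, suppress precision:
print(decode_imm8(0b10110))
# {'use_default_mode': False, 'mode_field': 3, 'suppress_precision': True}
```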
It is also noted that the specific values specified in Tables 1-3, as well as the specific locations and sizes of the indicators and fields, are not limiting, and various changes, modifications, and extensions are within the scope of the present invention. As described above, in many implementations rounding operations can be performed in response to a single instruction of the ISA. In this way, user-level support is provided and rounding operations can be performed efficiently. In a given ISA, several such rounding instructions can be present and used to handle specific rounding operations, such as rounding of double and single precision floating-point values and of packed and scalar values. These rounding instructions can also be used to round off the fractional part of a floating-point data element. In addition to the presence of ISA-level instructions, the immediate data or other control field information may allow efficient local control of the rounding mode (along with other attributes) without having to modify the current default state of the processor. As shown in Table 4 below, different flavors of rounding instructions can be present in an ISA to implement efficient rounding operations on different types of data elements.

Table 4 - Rounding instructions
ROUNDPD xmm1, xmm2/m128, imm8: Round the packed double-precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
ROUNDPS xmm1, xmm2/m128, imm8: Round the packed single-precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
ROUNDSD xmm1, xmm2/m64, imm8: Round the low double-precision floating-point value in xmm2/m64 and place the result in xmm1. The rounding mode is determined by imm8.
ROUNDSS xmm1, xmm2/m32, imm8: Round the low single-precision floating-point value in xmm2/m32 and place the result in xmm1. The rounding mode is determined by imm8.

As an example of how these ISA instructions operate, the ROUNDPD instruction can be used to round the two packed double-precision floating-point values in the source operand (that is, the second operand, from an XMM register or memory) according to the rounding mode specified in the immediate element (i.e., imm8), and to place the result in the destination operand (that is, the first operand, an XMM register). The immediate element can specify control fields for the rounding operation. Referring again to Tables 1-3, bit 4 of the immediate data (i.e., indicator 330 of FIG. 3) can control the processor's precision exception behavior, bit 0 (i.e., indicator 310 of FIG. 3) can select the source of rounding mode control, and bits 3:1 (i.e., field 320 of FIG. 3) can specify a non-sticky rounding mode value. Note that in some embodiments, if any source operand is a signaling NaN (SNaN), it will be converted to a quiet NaN (QNaN). If the denormals-are-zero (DAZ) bit is set in the configuration register, denormal operands can be converted to zero before rounding. If the flush-to-zero (FTZ) bit is set in the configuration register, denormal results can be converted to zero after rounding. As another example of how these ISA instructions operate, the ROUNDPS instruction can be used to round the four packed single-precision floating-point values in the source operand and place the result in the destination operand.
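As a behavioral illustration of such packed rounding (covering the IEEE modes only; the extended modes are treated with FIG. 4 below), the sketch below emulates a ROUNDPS-style operation over four lanes, reusing `decode_imm8` and `IEEE_MODES` from the earlier sketches. It models semantics only; real code would simply execute the instruction, for example through a compiler intrinsic such as `_mm_round_ps` in C.

```python
def roundps(src: list[float], imm8: int, mxcsr_mode: int = 0b000) -> list[float]:
    ctl = decode_imm8(imm8)
    mode = mxcsr_mode if ctl["use_default_mode"] else ctl["mode_field"]
    return [IEEE_MODES[mode](x) for x in src]   # one rounding per lane

# imm8 = 0b0010: replace the default mode, round toward -infinity.
print(roundps([1.25, -1.25, 2.5, -3.75], 0b0010))
# [1.0, -2.0, 2.0, -4.0]
```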
For purposes of illustration, a specific rounding instruction may take the form: ROUNDPS xmm0, xmm1, imm8 (round toward the nearest integer). This instruction takes the packed single-precision values in the first register, xmm1, rounds each value to the nearest integer value as specified by the rounding mode of the immediate data (i.e., imm8), and stores the result in the second register, i.e., xmm0. Table 5 below shows typical values present in the source operand (i.e., xmm1), each corresponding to a finite precision floating-point value, and the floating-point numbers stored in the destination operand (i.e., xmm0), each corresponding to an integer value, i.e., the rounded value of the integer closest to the original source value.

Table 5

Note that in further implementations, the rounding operation may be responsive to instructions that generate integer values from source FP values (i.e., as opposed to integer-valued FP values). Other embodiments may implement rounding to lower precision floating-point representations. Such an embodiment may provide the significand of the rounded source value according to a standard rounding mode or a special rounding mode, where the special rounding mode is performed according to either the default rounding mode in the configuration register or the local rounding mode specified in the immediate data control associated with the instruction. In various embodiments, the immediate data may provide control information to perform rounding modes that differ from the IEEE Standard 754 rounding operations. These rounding modes may include rounding half away from zero and rounding away from zero. Referring now to FIG. 4, shown is a flow diagram of a method of performing a rounding operation in accordance with one embodiment of the present invention. As shown in FIG. 4, method 400 can be used to perform these extended rounding modes. Method 400 can begin by determining whether the source operand is greater than or equal to zero (decision step 410). If so, control may pass to step 420, where a predetermined value may be combined with the source operand so as to increase its magnitude (step 420). For example, the FP adder can add a given value, selected based on the particular rounding mode, to the source operand; equivalently, this can be performed by subtracting the negative of the predetermined value. The selected rounding operation can then be performed on the result of the FP addition (step 430). In some implementations, an IEEE Standard 754 rounding operation, such as truncation (also known as rounding toward zero), can be performed on the result to obtain the extended rounding mode result. If it is determined at decision step 410 that the source operand is less than zero, control passes to step 440. At step 440, the negative of the predetermined value may be added to the source operand in the FP adder, again increasing the operand's magnitude. Then, at step 450, the selected rounding operation can be performed on the result to obtain the rounded value thus produced. Although the scope of the present invention is not limited in this respect, a round-half-away-from-zero operation may use a value of 0.5 as its predetermined value, and a round-away-from-zero operation may use 1⁻, which corresponds to the closest representable FP value that is less than, but not equal to, one.
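The following sketch shows the two extended modes built exactly this way: increase the operand's magnitude by the predetermined value, then truncate toward zero. It assumes Python 3.9+ for math.nextafter, and the function names are illustrative.

```python
import math

ONE_MINUS = math.nextafter(1.0, 0.0)   # 1⁻: the largest double below 1.0

def _extended_round(x: float, value: float) -> float:
    adjusted = x + value if x >= 0 else x - value   # steps 420/440
    return float(math.trunc(adjusted))              # steps 430/450: truncate

def round_half_away_from_zero(x: float) -> float:   # mode 100, value 0.5
    return _extended_round(x, 0.5)

def round_away_from_zero(x: float) -> float:        # mode 101, value 1⁻
    return _extended_round(x, ONE_MINUS)

assert round_half_away_from_zero(2.5) == 3.0
assert round_half_away_from_zero(-2.5) == -3.0
assert round_away_from_zero(2.1) == 3.0
assert round_away_from_zero(-2.0) == -2.0   # exact integers are unchanged
```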
For single- and double-precision FP values, 0.5 may correspond to 0x3f000000 and 0x3fe0000000000000, respectively. For single- and double-precision FP values, -0.5 may correspond to 0xbf000000 and 0xbfe0000000000000, respectively. For single- and double-precision FP values, 1⁻ may correspond to 0x3f7fffff and 0x3fefffffffffffff, respectively. For single- and double-precision FP values, -1⁻ may correspond to 0xbf7fffff and 0xbfefffffffffffff, respectively. Table 6 below shows examples of source code for performing these operations.

In these examples, the operation ROUND_TOWARD_ZERO is the truncation operation of IEEE Standard 754, which is performed on the result of the addition/subtraction operation. Note that in operations performing these extended rounding modes, the predetermined value may be provided as the second source operand to the FP adder (e.g., as S2 in the embodiment of FIG. 2). Alternatively, in some embodiments, as with the other rounding operations, the second source operand can be zero and control signals can be sent to the rounding unit to implement the selected extended rounding mode operation.
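The hexadecimal encodings quoted above can be checked with the standard struct module, decoding the big-endian bit patterns back into floats; this is merely a verification aid, not part of the described hardware.

```python
import struct

def f32(hexstr: str) -> float:
    """Decode a big-endian single-precision bit pattern."""
    return struct.unpack(">f", bytes.fromhex(hexstr))[0]

def f64(hexstr: str) -> float:
    """Decode a big-endian double-precision bit pattern."""
    return struct.unpack(">d", bytes.fromhex(hexstr))[0]

print(f32("3f000000"), f64("3fe0000000000000"))  # 0.5 0.5
print(f32("3f7fffff"))           # 0.9999999403953552: single-precision 1⁻
print(f64("3fefffffffffffff"))   # 0.9999999999999999: double-precision 1⁻
```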
Thus, in different embodiments, enhancements to the performance of rounding can be achieved. These enhancements can avoid the need to perform various operations, such as storing the state of a control register, performing a null FP operation, and resetting the state, or even the approximate shortcut of converting numbers to integers and back to floating point. By suppressing inexact-result precision exceptions, support for rounding in different languages can be simplified, and implementations can also follow the standard rounding behavior of certain rounding functions, such as in the C99 language. Embodiments can be implemented in many different system types. Referring now to FIG. 5, shown is a block diagram of a system in accordance with one embodiment of the present invention. As shown in FIG. 5, multiprocessor system 500 is a point-to-point interconnect system and includes a first processor 570 and a second processor 580 coupled via a point-to-point interconnect 550. As shown in FIG. 5, each of processors 570 and 580 can be a multi-core processor, including first and second processor cores (i.e., processor cores 574a and 574b and processor cores 584a and 584b). Note that each core can perform a rounding operation in response to an ISA-level instruction in accordance with one embodiment of the present invention. First processor 570 additionally includes point-to-point (P-P) interfaces 576 and 578. Similarly, second processor 580 includes P-P interfaces 586 and 588. As shown in FIG. 5, memory controller hubs (MCHs) 572 and 582 couple the processors to respective memories, namely memory 532 and memory 534, which may be portions of main memory locally coupled to the respective processors. First processor 570 and second processor 580 can be coupled to a chipset 590 via P-P interconnects 552 and 554, respectively. As shown in FIG. 5, chipset 590 includes P-P interfaces 594 and 598. In addition, chipset 590 includes an interface 592 to couple chipset 590 with a high performance graphics engine 538. In one embodiment, an Accelerated Graphics Port (AGP) bus 539 can be used to couple graphics engine 538 to chipset 590. The AGP bus 539 may comply with the Accelerated Graphics Port Interface Specification, Revision 2.0, published May 4, 1998 by Intel Corporation of Santa Clara, California. Alternatively, a point-to-point interconnect 539 can couple these components. Chipset 590 can then be coupled to a first bus 516 via an interface 596. In one embodiment, the first bus 516 may be a Peripheral Component Interconnect (PCI) bus, as defined by the PCI Local Bus Specification, Production Version, Revision 2.1, dated June 1995, or a bus such as the PCI Express™ bus or another third generation input/output (I/O) interconnect bus, although the scope of the present invention is not limited thereto. As shown in FIG. 5, various I/O devices 514 can be coupled to the first bus 516, along with a bus bridge 518 that couples the first bus 516 to a second bus 520. In one embodiment, the second bus 520 can be a low pin count (LPC) bus. Various devices may be coupled to the second bus 520, including, for example, a keyboard/mouse 522, communication devices 526, and a data storage unit 528, such as a disk drive or other mass storage device that may include code 530. Further, an audio I/O 524 can be coupled to the second bus 520. Note that other architectures are also possible. For example, instead of the point-to-point architecture of FIG. 5, a system can implement a multi-drop bus or another such architecture. Embodiments may be implemented in code and may be stored on a storage medium having stored thereon instructions that can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of medium suitable for storing electronic instructions. Although the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations.
Various systems and methods for enhancing a distributed computing environment with multiple edge hosts and user devices, including in multi-access edge computing (MEC) network platforms and settings, are described herein. A device of a lifecycle management (LCM) proxy apparatus obtains, from a device application, a request for an application multiple context of an application. The application multiple context for the application is determined, and the request from the device application for the application multiple context is authorized. A device application identifier based on the request is added to the application multiple context. A response created for the device application, based on the authorization of the request, is transmitted to the device application; the response includes an identifier of the application multiple context.
1. A lifecycle management (LCM) proxy apparatus, comprising: processing circuitry; and a memory device including instructions stored thereon, wherein the instructions, when executed by the processing circuitry, configure the processing circuitry to perform operations to: obtain, from a device application, a request for an application multiple context of an application; determine the application multiple context of the application; authorize the request from the device application for the application multiple context of the application; add a device application identifier based on the request to the application multiple context; and transmit, to the device application, a response created for the device application based on the authorization of the request, wherein the response includes an identifier of the application multiple context.
2. The LCM proxy apparatus of claim 1, wherein to determine the application multiple context of the application, the processing circuitry is configured to perform operations to: create the application multiple context of the application, wherein the application multiple context includes references to the application and the device application.
3. The LCM proxy apparatus of claim 1, wherein to determine the application multiple context of the application, the processing circuitry is configured to perform operations to: determine that an existing application multiple context of the application exists, wherein the identifier identifies the existing application multiple context of the application.
4. The LCM proxy apparatus of claim 1, wherein the processing circuitry is further configured to perform operations to: obtain, from the device application, a request to delete the application multiple context of the application; authorize the request from the device application to delete the application multiple context; transmit, to a multi-access edge computing (MEC) orchestrator, a request to delete the application multiple context; and transmit a delete response for the device application based on the authorization of the request, wherein the response includes the identifier of the application multiple context.
5. The LCM proxy apparatus of claim 1, wherein the processing circuitry is further configured to perform operations to: obtain, from the device application, a request to update the application multiple context of the application, the request including a multiple context identifier and modified data; determine the application multiple context of the application based on the multiple context identifier; update the application multiple context based on the modified data; and encode an update response for the device application based on the authorization of the request, wherein the response includes the identifier of the application multiple context.
6. The LCM proxy apparatus of claim 5, wherein the modified data is an updated callback reference.
7. The LCM proxy apparatus of any of claims 1 to 6, wherein the processing circuitry is further configured to perform operations to: obtain a publish message from a MEC host, the publish message including the application multiple context of the application; and transmit the application multiple context of the application to the MEC host.
8. A method performed by a lifecycle management (LCM) proxy, comprising: obtaining, from a device application, a request for an application multiple context of an application; determining the application multiple context of the application; authorizing the request from the device application for the application multiple context of the application; adding a device application identifier based on the request to the application multiple context; and transmitting, to the device application, a response created for the device application based on the authorization of the request, wherein the response includes an identifier of the application multiple context.
9. The method of claim 8, wherein determining the application multiple context of the application comprises creating the application multiple context of the application, wherein the application multiple context includes references to the application and the device application.
10. The method of claim 8, wherein determining the application multiple context of the application comprises determining that an existing application multiple context of the application exists, wherein the identifier identifies the existing application multiple context of the application.
11. The method of claim 8, further comprising: obtaining, from the device application, a request to delete the application multiple context of the application; authorizing the request from the device application to delete the application multiple context; transmitting, to a multi-access edge computing (MEC) orchestrator, a request to delete the application multiple context; and encoding a delete response for the device application based on the authorization of the request, wherein the response includes the identifier of the application multiple context.
12. The method of claim 8, further comprising: obtaining, from the device application, a request to update the application multiple context of the application, the request including a multiple context identifier and modified data; determining the application multiple context of the application based on the multiple context identifier; updating the application multiple context based on the modified data; and transmitting, to the device application, an update response based on the authorization of the request, wherein the response includes the identifier of the application multiple context.
13. The method of claim 12, wherein the modified data is an updated callback reference.
14. The method of any of claims 8 to 13, further comprising: obtaining a publish message from a MEC host, the publish message including the application multiple context of the application; and transmitting the application multiple context of the application to the MEC host.
15. At least one machine-readable storage medium comprising instructions stored thereon that, when executed by processing circuitry of a computing device, cause the processing circuitry to: obtain, from a device application, a request for an application multiple context of an application; determine the application multiple context of the application; authorize the request from the device application for the application multiple context of the application; add a device application identifier based on the request to the application multiple context; and transmit, to the device application, a response created for the device application based on the authorization of the request, wherein the response includes an identifier of the application multiple context.
16. The at least one machine-readable storage medium of claim 15, wherein to determine the application multiple context of the application, the processing circuitry is configured to create the application multiple context of the application, wherein the application multiple context includes references to the application and the device application.
17. The at least one machine-readable storage medium of claim 15, wherein to determine the application multiple context of the application, the processing circuitry is configured to determine that an existing application multiple context of the application exists, wherein the identifier identifies the existing application multiple context of the application.
18. The at least one machine-readable storage medium of claim 15, wherein the processing circuitry is further configured to: obtain, from the device application, a request to delete the application multiple context of the application; authorize the request from the device application to delete the application multiple context; transmit, to a multi-access edge computing (MEC) orchestrator, a request to delete the application multiple context; and transmit, to the device application, a delete response based on the authorization of the request, wherein the response includes the identifier of the application multiple context.
19. The at least one machine-readable storage medium of claim 15, wherein the processing circuitry is further configured to: obtain, from the device application, a request to update the application multiple context of the application, the request including a multiple context identifier and modified data; determine the application multiple context of the application based on the multiple context identifier; update the application multiple context based on the modified data; and transmit, to the device application, an update response based on the authorization of the request, wherein the response includes the identifier of the application multiple context.
20. The at least one machine-readable storage medium of claim 19, wherein the modified data is an updated callback reference.
21. The at least one machine-readable storage medium of any of claims 15 to 20, wherein the processing circuitry is further configured to: obtain a publish message from a MEC host, the publish message including the application multiple context of the application; and transmit the application multiple context of the application to the MEC host.
22. A proxy apparatus, comprising: means for obtaining, from a device application, a request for an application multiple context of an application; means for determining the application multiple context of the application; means for authorizing the request from the device application for the application multiple context; means for adding a device application identifier based on the request to the application multiple context; and means for transmitting a response created for the device application based on the authorization of the request, wherein the response includes an identifier of the application multiple context.
23. The proxy apparatus of claim 22, wherein the means for determining the application multiple context of the application comprises means for creating the application multiple context of the application, wherein the application multiple context includes references to the application and the device application.
24. The proxy apparatus of claim 22, wherein the means for determining the application multiple context of the application comprises means for determining that an existing application multiple context of the application exists, wherein the identifier identifies the existing application multiple context of the application.
25. The proxy apparatus of any of claims 22 to 24, further comprising: means for obtaining, from the device application, a request to delete the application multiple context of the application; means for authorizing the request from the device application to delete the application multiple context; means for transmitting, to a multi-access edge computing (MEC) orchestrator, a request to delete the application multiple context; and means for transmitting, to the device application, a delete response based on the authorization of the request, wherein the response includes the identifier of the application multiple context.
MEC-based distributed computing environment with multiple edge hosts and user equipment

Priority claim

This application claims priority to U.S. Application Serial No. 16/235,685, filed December 28, 2018, which claims priority to U.S. Provisional Patent Application Serial No. 62/738,964, filed September 28, 2018; both applications are hereby incorporated by reference in their entirety.

Technical field

The embodiments described herein generally relate to edge computing and related distributed computing environments, and in particular to security, authentication, and management techniques usable with services that operate on edge computing platforms.

Background

At a general level, edge computing refers to the movement of compute and storage resources closer to endpoint devices (for example, consumer computing devices, user equipment, etc.) in order to optimize total cost of ownership, reduce application latency, improve service capabilities, and improve compliance with security or data privacy requirements. In some scenarios, edge computing can provide a cloud-like distributed service that offers applications orchestration and management among many types of storage and computing resources. As a result, some implementations of edge computing are referred to as the "edge cloud" or the "fog", because powerful computing resources previously available only in large remote data centers are moved closer to endpoints and made available for use by consumers at the "edge" of the network.

As endpoint devices and gateways attempt to access network resources and applications that are moved closer to the "edge" of the network, edge computing can be further integrated with use cases and technologies developed for the Internet of Things (IoT) and fog networking. For example, edge computing use cases have been devised for integration with multi-access edge computing (MEC), also known as "mobile edge computing", which is being developed for mobile network settings. MEC approaches are designed to allow application developers and content providers access to computing capabilities and an IT service environment in dynamic mobile network settings at the edge of the network. The European Telecommunications Standards Institute (ETSI) Industry Specification Group (ISG) has developed limited standards in an attempt to define common interfaces for the operation of MEC systems, platforms, hosts, services, and applications.

Edge computing, MEC, and related technologies attempt to provide computing with reduced latency, increased responsiveness, and greater availability than is offered by traditional cloud network services and wide area network connections. However, the integration of mobility and dynamically launched services into some mobile use and device processing use cases leads to limitations in orchestration, coordination, and resource management, especially in complex mobility settings involving many parties (devices, hosts, service providers, operators). As a result, many proposed architectures do not realize all of the benefits that edge computing is designed to provide.

Description of the drawings

In the drawings (which are not necessarily drawn to scale), like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components.
Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which:

Figure 1 shows devices and network entities in a dynamic communication environment, according to an example;
Figure 2 shows an operational arrangement of a network and mobile user equipment, according to an example;
Figure 3 includes a block diagram of generalized (and hierarchical) sub-cloud instantiation by a MEC orchestrator, according to an example;
Figure 4 shows a process for creating an application multiple context, according to an example;
Figure 5 shows an application multiple context deletion process, according to an example;
Figure 6 shows an application multiple context update process, according to an example;
Figure 7 shows a publish/subscribe process between MEC hosts for an application multiple context, according to an example;
Figure 8 shows an example resource URI structure of the ETSI MEC Mx2 API, according to an example;
Figure 9 shows a flowchart for a lifecycle management (LCM) proxy, according to an example;
Figure 10 shows a MEC and fog network topology, according to an example;
Figure 11 shows processing and storage layers in a MEC and fog network, according to an example;
Figure 12 shows a block diagram of a MEC system architecture, according to an example;
Figure 13 shows a domain topology for respective device networks (for example, Internet of Things (IoT) device networks) coupled to respective gateways through links, according to an example;
Figure 14 shows a cloud computing network, according to an example, in communication with a mesh network of IoT/endpoint devices operating as fog devices at the edge of the cloud computing network;
Figure 15 shows a block diagram of a network, according to an example, illustrating communications among a number of IoT/endpoint devices; and
Figure 16 shows a block diagram of an example device architecture, according to an example, upon which any one or more of the techniques (e.g., operations, processes, methods, and methodologies) discussed herein may be performed.

Detailed description

In the following description, methods, configurations, and related apparatuses are disclosed for enhancing a distributed computing environment with multiple edge hosts and user equipment, including in multi-access edge computing (MEC) network platforms and settings. With the disclosed distributed MEC application/service framework, customers (including consumers with low-end UEs, as well as companies deploying IoT devices and other types of UEs) can delegate tasks into the edge cloud without over-provisioning radio/compute resources in the device itself; such over-provisioning is generally not feasible by design, due to strict requirements in terms of power consumption and/or long battery life constraints.

Various examples describe MEC-based enhancements that introduce data structures and procedures (for example, the application multiple context) to support the creation, addition, deletion, and update of sub-clouds of application instances. A sub-cloud includes a set of device applications running on user equipment and a set of MEC applications running on MEC hosts, and allows information to be shared between the device applications and the MEC applications.

Various examples address complex and/or compute-intensive requests from device applications, via the Mx2 reference point in the MEC system, in order to instantiate distributed cloud resources. Such a request may involve software instances/services running across multiple MEC hosts and multiple UEs.
These cloud resources can be either replicated or shared, and identified as unique functions. In order to resolve such service consumption requests, a protocol can be used that discovers the information parts and collects telemetry information from each node that owns these information parts. Various examples describe flexible definition/replacement/update of the mentioned sub-clouds via a publish and subscribe protocol. The publish/subscribe method between MEC hosts in different locations allows expansion or replacement of sub-cloud nodes, and this expansion or replacement can be triggered by a MEC host for various reasons (such as UE application mobility, service migration, service backup, load balancing, energy consumption reduction, etc.).

The present techniques support various edge computing installations by enabling verified services to be provided to, and inspected by, application endpoints, which provides improvements in security and operability. These techniques can also expand the capabilities of the edge environment and of each entity, to improve the performance of computing and network resources and to obtain reliable edge services with low latency or high bandwidth.

The following systems and techniques can be implemented in, or enhance, a variety of distributed, virtualized, or managed edge computing systems and networking environments. These include environments in which network services are implemented using MEC platforms, network function virtualization (NFV), or fully virtualized 4G/5G network configurations. Therefore, various references are made to defined types of telecommunication equipment and architectures. In addition, the present disclosure refers to LTE, 5G, eNB, gNB, and similar radio access network concepts, but it is intended that the present techniques can be utilized with variations or substitutions of the types of networks deployed (for example, all solutions described with reference to LTE can also be applied to New Radio (NR)/5G or similar next-generation systems).

Figure 1 shows devices and network entities in a multi-access communication environment. Figure 1 specifically shows the different layers of communication occurring within the environment, starting with endpoint sensors or things 110 (e.g., operating in an IoT network topology); increasing in sophistication to gateways or intermediate nodes 120 (e.g., in vehicles), which facilitate the collection and processing of data from the endpoints 110; increasing in processing and connectivity sophistication to access or edge nodes 130 (for example, roadside units operating as edge computing nodes), which may be embodied as base stations (eNBs), roadside access points (RAPs) or roadside units (RSUs), nodes, or servers; and increasing in connectivity and processing sophistication to a core network or cloud setting 140. Indeed, processing at the core network or cloud setting 140 can be enhanced by network services executed by a remote application server 150 or other cloud services.
As shown in the scenario of FIG. 1, the endpoints 110 communicate various types of information to the gateways or intermediate nodes 120; however, due to the mobility of the gateways or intermediate nodes 120 (such as in a vehicle or mobile computing device), this results in multiple access points or types of access points for network access, multiple distinct services and servers for computing operations, multiple distinct applications and data available for processing, and multiple distinct network operations offered as the characteristics and capabilities of the available network services and network paths change. Specifically, the environment may involve aspects of vehicle-to-everything (V2X), vehicle-to-vehicle (V2V), and vehicle-to-infrastructure (V2I) services from vehicle user equipment (UE) or human-operated portable UEs (e.g., mobile smartphones and computing devices), which introduces significant complexity for computing services and network usage.

Figure 2 shows an operational arrangement 200 of network and vehicle user equipment, in which various embodiments can be practiced. In the arrangement 200, vehicle user equipment (vUE) 210, 220 may operate with a defined communication system (e.g., using an LTE C-V2X WWAN, or a DSRC/ETSI ITS-G5 (WLAN) communication network, etc.). In an embodiment, a roadside unit (RSU) 232 may provide a processing service 240 by which the vUEs 210 and 220 may communicate with one another (or with other services), execute services individually and with each other, or access similar aspects of coordinated or device-specific edge computing services. In an embodiment, the processing service 240 may be provided by a MEC host (for example, an ETSI MEC host), a MEC platform, or other MEC entity implemented in or by hardware of the RSU 232. In this example, the RSU 232 may be a stationary RSU, such as an eNB-type RSU or other similar infrastructure. In other embodiments, the RSU 232 may be a mobile RSU or a UE-type RSU, which may be implemented by a vehicle (for example, a truck), pedestrian, or some other device with such capabilities. In these cases, mobility issues can be managed in order to ensure proper radio coverage of the applicable services. For example, mobility may be managed as the respective vUEs 210, 220 transition from, and to, other network nodes (such as RSU 234, RSU 236, and other network nodes not shown).

FIG. 3 depicts a block diagram of generalized (and hierarchical) sub-cloud instantiation by a MEC orchestrator 300, according to an example. The figure shows the general definition of a distributed computing system through the logical introduction of sub-cloud 302 and sub-cloud 304 (for example, the equivalent of a "poker table" in a multi-player online poker game). In this example, the areas (and sub-clouds) can be defined "logically" based on, for example, the end-to-end latency level required to instantiate a MEC application or to migrate a service from the service registry of the MEC host 310 of sub-cloud 302 to another entity in the same sub-cloud (for example, another MEC host 312). Such areas and sub-clouds, however, are not necessarily determined by a "physical" common location.

In the example, a sub-cloud is defined as including: a set of device applications (e.g., each running on a device (UE), such as UE 320); and a set of MEC applications (e.g., running on different MEC hosts, such as MEC hosts 310 and 312).
Such a sub-cloud is identified by an appropriate application multiple context, such as may be used in combination with, or added to, the ETSI standard GS MEC-016, "Multi-access Edge Computing (MEC); UE Application Interface".

In various contexts, different RATs (radio access technologies), or even non-3GPP networks, can be considered for use in sub-clouds. For example, user equipment may be associated with different MEC applications in parallel (for example, one connected via a cellular network and another connected via a Wi-Fi network). The present techniques allow these and other combinations to exploit radio diversity as a key asset for converging different systems at the application layer.

In another example, task/application migration can be provided in a multi-node environment. This configuration can enable use cases in scenarios such as multi-device/multi-user interaction and multi-point AR/VR applications (including, but not limited to, multi-player gaming, advanced video conferencing, NB-IoT sensors, large-scale data management, industrial IoT, and the like).

In the example, the MEC-based enhancements include the use of appropriate data structures and procedures to support the creation, addition, deletion, and update of sub-clouds of application instances (hereinafter also referred to as "application multiple contexts"). The purpose of these enhancements is to use the Mx2 reference point in the MEC system to address complex and/or compute-intensive requests from device applications to instantiate distributed cloud resources (including software instances/services running across multiple MEC hosts and multiple UEs). These cloud resources can be either replicated or shared, and unique functions can be identified.

In order to resolve the related service consumption requests, the following defines a protocol for discovering the information parts and collecting telemetry information from each node that owns the information parts. In order to flexibly define/replace/update the mentioned sub-clouds, the following includes a publish/subscribe method between MEC hosts in different locations, allowing expansion or replacement of sub-cloud nodes, which can be triggered by a MEC host for various reasons (such as UE application mobility, service migration, service backup and load balancing, energy consumption reduction, etc.).

Further, the proposed distributed MEC application/service framework enables computing hardware to be extended to the edge of the network, including not only MEC servers but also terminals and different types of equipment. In addition, customers (including consumers with low-end UEs, as well as companies deploying IoT devices and other types of UEs) will be able to delegate tasks more easily into the edge cloud without over-provisioning radio/compute resources in the device itself; such over-provisioning is generally not feasible by design, due to stringent requirements in terms of power consumption and/or long battery life constraints (for example, for NB-IoT or other types of sensors).

In an example, sub-clouds can be expanded, contracted, or replaced according to environmental conditions (for example, the presence or absence of device/MEC applications, i.e., consumers, that must use a service), and/or sub-clouds can be formed on demand, as entities exposing their applications drive the sub-cloud according to business needs.
For example, more UE applications can join an application multiple context that has already been established (and thereby also join the sub-cloud), and other applications can be contributed to the sub-cloud (for example, once those applications need to consume services that have already been registered by such applications). Conversely, a device application may withdraw its participation in a sub-cloud when, for example, being a contributor to the given sub-cloud is not economically feasible, or is not contextually relevant to the locally running application.

At the same time, the mobility of the UE device may require a (source) MEC host to transparently migrate a service to another (target) MEC host. In this case, the physical entity (UE) on which the device application (i.e., one of the sub-cloud applications) runs may have moved to a different physical location (for example, within the radio coverage of a radio access point (RAP) co-located with the (target) MEC host); however, the device application is still part of the same sub-cloud. For example, this may happen in a situation where this particular device application exposes a service that is essential for the execution of another application contributing to the same sub-cloud.

As a result, a mechanism for transparently managing sub-clouds or application multiple contexts can be used among the MEC hosts of a given MEC system (or of different MEC systems), while taking into account multiple criteria related to the performance requirements of the service consumers (such as end-to-end latency). In an example, the mechanism can be provided through a publish/subscribe method or protocol between MEC hosts, to perform these actions transparently to the UE applications.

In the example, a new data structure, referred to as AppMultipleContext, is used to identify sub-clouds and the related application instances. Using this data structure, a device application can request (through the Mx2 interface): instantiation (creation) of such an application multiple context, or simply joining an existing application multiple context; or, again, deletion or update of the application multiple context. The proposed new set of procedures enhances the Mx2 interface between the device application and the user application LCM proxy.

According to the current implementation of the ETSI MEC standard, an application context is already foreseen and, where present, includes one MEC application instance and one or more device applications (running on UEs) associated with that context. However, the current implementation does not clearly define whether these UE instances can communicate with each other, and it is not possible to create a multiple application context with many MEC applications (which may also run on different MEC hosts). As a result, the following procedures and data flows describe the operations used to manage application multiple contexts, as well as the appropriate data structures to support these procedures.

Figure 4 provides an illustration of the application multiple context creation process, according to an example. Application multiple context creation provides a process for requesting either to join an available user application or to instantiate a new user application. As shown in step 402, using this process, the device application 410 submits a request (such as a POST request) to the user application lifecycle management (LCM) proxy 420. The message body contains the data structure of the application multiple context to be created.
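The attribute table for AppMultipleContext does not survive in this text, so the following Python dataclass is only a hypothetical sketch of what such a structure could carry, inferred from the fields the procedures below actually use (a context identifier, a callback reference, and the identifiers of the participating device and MEC applications); the normative attributes would be defined by the extended Mx2 API.

```python
from dataclasses import dataclass, field

@dataclass
class AppMultipleContext:
    # Hypothetical fields inferred from the procedures in this description.
    context_id: str                  # the "multipleContextId" used in updates
    callback_reference: str          # the only field a UE application may update
    device_app_ids: set[str] = field(default_factory=set)  # UE-side members
    mec_app_ids: set[str] = field(default_factory=set)     # MEC-host-side members

    def add_device_app(self, device_app_id: str) -> None:
        """Record a device application joining the sub-cloud."""
        self.device_app_ids.add(device_app_id)
```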
The user application lifecycle management proxy 420 authorizes the request from the device application 410. The request can be forwarded to the operations support system (OSS), which can make the decision on granting the multiple context creation request. The multi-access edge orchestrator then triggers the creation of the application multiple context in the MEC system. As shown in 404, the user application lifecycle management proxy returns a "201 Created" response to the device application, with the message body containing the data structure of the created application multiple context.

Figure 5 provides an illustration of the application multiple context deletion process, according to an example. Application multiple context deletion provides a process by which a UE application requests the deletion of an application multiple context. As shown in 502, the UE application 410 submits a DELETE request for the resource to be deleted to the user application lifecycle management (LCM) proxy 420. The user application lifecycle management proxy 420 authorizes the request from the UE application. The request can be forwarded to the OSS, which can make the decision to approve the deletion. The multi-access edge orchestrator then triggers the deletion of the application multiple context. As shown in 504, the user application lifecycle management proxy returns a "204 No Content" response.

Figure 6 provides an illustration of an application multiple context update, according to an example. The application multiple context update provides a process by which the user application lifecycle management proxy 420 receives an update of the ueAppMultipleContext (UE application multiple context). As shown in 602, the UE application 410 updates the ueAppMultipleContext. The request includes a MultipleContextId (multiple context identifier) together with a modified MultipleAppContext data structure, where only the callback reference is allowed to be updated by the UE application. In some examples, process and hardware isolation techniques, such as containers, OS processes, virtual machines, SGX, FPGAs, virtual memory, hardware partitions, and the like, can be used to protect the AppMultipleContext structure. Using such techniques, the user agent can be isolated from other tenants of the MEC system, and access to the appMultipleContext may follow MEC's general multi-tenant isolation and access control mechanisms. As shown in 604, the user application lifecycle management proxy 420 returns a "204 No Content" response.
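The three flows of FIGs. 4-6 map naturally onto REST calls over the Mx2 reference point. The sketch below shows a hypothetical device-application client for them; the base URI and the /app_multiple_contexts resource path are illustrative only (modeled on, but not defined by, the /app_contexts resource of ETSI GS MEC 016), and the third-party requests package is assumed.

```python
import requests

MX2_BASE = "https://lcm-proxy.example.com/dev_app/v1"  # illustrative base URI

def create_context(body: dict) -> dict:
    """FIG. 4: POST the AppMultipleContext to be created; expect 201 Created."""
    resp = requests.post(f"{MX2_BASE}/app_multiple_contexts", json=body)
    resp.raise_for_status()
    return resp.json()          # the created context, including its identifier

def delete_context(context_id: str) -> None:
    """FIG. 5: DELETE the resource; expect 204 No Content."""
    resp = requests.delete(f"{MX2_BASE}/app_multiple_contexts/{context_id}")
    resp.raise_for_status()

def update_callback(context_id: str, new_callback: str) -> None:
    """FIG. 6: send the context identifier with the modified data; only the
    callback reference may be changed by the UE application."""
    resp = requests.put(
        f"{MX2_BASE}/app_multiple_contexts/{context_id}",
        json={"multipleContextId": context_id, "callbackReference": new_callback},
    )
    resp.raise_for_status()     # expect 204 No Content
```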
Figure 7 provides an illustration of a publish/subscribe process between MEC hosts for the application multiple context, according to an example. The MEC host 710 publish/subscribe provides a process that allows MEC hosts, such as MEC host 720, to communicate with each other in the context of an application multiple context. There may be various scenarios in which the MEC host currently hosting the sub-cloud must perform publish/subscribe-type communication with other MEC hosts. Some examples include: 1) UEs that are part of the sub-cloud/application multiple context are mobile, and the MEC host identifying this information may need to transfer the service to another MEC host; 2) the MEC host may need to start a backup MEC application, or a supplementary application, required to support the application multiple context; 3) a participating MEC host may need to use more resources (such as storage) from a neighboring low-latency MEC host; 4) context data (such as configuration files, best practices, etc.) can be provided to new UE and MEC applications that have joined the sub-cloud.

In contrast to the procedures depicted in Figures 4-6, the procedure depicted in Figure 7 does not refer to one specific action, but includes the actions taken between MEC hosts to support the application multiple context. Accordingly, the publish/subscribe method may include any of the above example actions (create, update, delete) as a sub-method.

In a further example, a data structure called "AppMultipleContext" can be introduced to support the above process. This data structure type can represent information about the application multiple contexts created by the MEC system.

In a further example, the resource URI structure of the Mx2 API can be defined to allow the use of the foregoing attributes and resources. Figure 8 depicts an example resource URI structure for the ETSI MEC Mx2 API, according to an example. Notably, the current resource URI structure of the UE application interface API (for example, as specified by ETSI GS MEC 016) can be utilized, but with content enhanced for the different attributes and resources. In the example, existing resources and methods can be modified for use with this resource URI structure, and the above procedures and other changes to the interface follow from this example.

Figure 9 shows a flowchart for a lifecycle management (LCM) proxy, according to an example. At 902, a request for an application multiple context of an application is obtained from a device application. At 904, the application multiple context of the application is determined. In an example, no application multiple context exists prior to the request, and the application multiple context is created; the application multiple context can include references to the application and the device application. In other examples, the application multiple context already existed before the request. In those examples, the LCM proxy can determine that an existing application multiple context of the application exists, and the identifier can be used to identify the existing application multiple context of the application. At 906, the LCM proxy may authorize the request from the device application for the application multiple context of the application. At 908, the device application identifier based on the request is added to the application multiple context. At 910, the LCM proxy sends a response created for the device application to the device application based on the authorization of the request, where the response includes the identifier of the application multiple context.
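A behavioral sketch of this flow inside the LCM proxy follows, reusing the AppMultipleContext sketch from above. The in-memory store, the authorize placeholder (which in practice would forward the decision to the OSS), and all names are illustrative, assuming Python 3.9+ and the typing module.

```python
import uuid
from typing import Optional

contexts: dict[str, AppMultipleContext] = {}   # in-memory store, sketch only

def authorize(device_app_id: str) -> None:
    # Placeholder for 906: a real proxy would forward the request to the
    # OSS, which decides whether to grant it.
    pass

def handle_create_request(device_app_id: str, callback: str,
                          existing_id: Optional[str] = None) -> dict:
    if existing_id is not None and existing_id in contexts:
        ctx = contexts[existing_id]                          # 904: existing context
    else:
        ctx = AppMultipleContext(context_id=str(uuid.uuid4()),
                                 callback_reference=callback)
        contexts[ctx.context_id] = ctx                       # 904: create a new one
    authorize(device_app_id)                                 # 906
    ctx.add_device_app(device_app_id)                        # 908
    return {"contextId": ctx.context_id}                     # 910: created response
```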
In other examples, the LCM proxy obtains, from the device application, a request to delete the application multiple context of the application. The request from the device application to delete the application multiple context can be authorized by the LCM proxy, and a request to delete the application multiple context can be sent to the multi-access edge computing (MEC) orchestrator. A delete response can then be sent to the device application based on the authorization of the request, and the response may include the identifier of the application multiple context.

In another example, the LCM proxy may obtain, from the device application, a request to update the application multiple context. The request can include a multiple context identifier and modified data. The application multiple context of the application is determined based on the multiple context identifier, and the application multiple context is updated based on the modified data. The LCM proxy may then send an update response for the device application to the device application based on the authorization of the request; the response may include the identifier of the application multiple context. In some examples, the modified data may include an updated callback reference.

In another example, the LCM proxy may obtain a publish message from a MEC host. The publish message can include the application multiple context of the application. The application multiple context of the application can then be transferred to the MEC host.

As noted in the above discussion, the techniques discussed herein can be adapted for use with MEC and similar fog architectures, including those defined by the ETSI MEC specifications and similar standards bodies. MEC technology permits flexible and rapid deployment of innovative applications and services towards mobile subscribers, enterprises, or vertical segments. For example, in the automotive sector, applications such as V2X (for example, IEEE 802.11p or 3GPP LTE C-V2X) exchange data, provide data to aggregation points, or access data in databases in order to ascertain an overview of the local situation derived from a multitude of sensors (for example, from various cars, roadside units, etc.). It will be understood that the presently described verification architecture and reputation service are suitable for integration within a variety of uses of MEC-based or fog-based systems or facilities implemented with hardware and software resources.

Figure 10 illustrates a MEC and fog network topology, according to an example. This network topology, which includes a number of conventional networking layers, can be extended through the use of the tags and objects discussed herein. Specifically, the relationships between endpoints (at the endpoints/things network layer 1050), gateways (at the gateway layer 1040), access or edge computing nodes (for example, at the neighborhood nodes layer 1030), and core network or routers (for example, at the regional or central office layer 1020) can be represented through the use of linked objects and tag properties.

A fog network (for example, established at the gateway layer 1040) may represent a dense geographical distribution of near-user edge devices (for example, fog nodes) equipped with storage capabilities (for example, to avoid the need to store data in cloud data centers), communication capabilities (for example, rather than routing over the Internet backbone), control capabilities, configuration capabilities, and measurement and management capabilities (rather than being controlled primarily by network gateways such as those in an LTE core network), among others. In this context, FIG. 10 illustrates a general architecture that integrates a number of MEC and fog nodes, categorized in different layers (based on their position, connectivity, processing power, etc.).
Fog nodes may be categorized depending on the topology and the layer in which the fog node is located. Conversely, from a MEC standards perspective, each fog node may be considered as a mobile edge (ME) host, or as a simple entity hosting a ME app and a light-weight ME platform. In an example, a MEC or fog node may be defined as an application instance connected to, or running on, a device (ME host) that hosts a ME platform. Here, the application consumes MEC services and is associated with a ME host in the system. The nodes may be migrated, associated with different ME hosts, or consume MEC services from other (e.g., local or remote) ME platforms.

In contrast to this approach, traditional client, V2V, and other network applications are reliant on remote cloud data storage and processing to exchange and coordinate information. A cloud data arrangement allows for long-term data collection and storage, but is not optimal for highly time-varying data (such as a collision, a traffic light change, etc.), and may fail in attempting to meet latency challenges (such as stopping a vehicle when a child runs into the street). The data message translation techniques discussed herein enable direct communication to occur among devices (e.g., vehicles) in a low-latency manner, using features in existing MEC services that provide minimal overhead.

Depending on the real-time requirements of the applicable communication context, a hierarchical structure of data processing and storage nodes may be defined, for example, including local ultra-low-latency processing, regional storage and processing, and remote cloud data-center based storage and processing. SLAs, KPIs, and the other measures described herein may be used to identify where data is best transferred and where it is processed or stored. This typically depends on the Open Systems Interconnection (OSI) layer dependency of the data. For example, lower-layer (PHY, MAC, routing, etc.) data typically changes quickly and is better handled locally in order to meet latency requirements. Higher-layer data, such as application-layer data, is typically less time critical and may be stored and processed in a remote cloud data center.

Figure 11 shows processing and storage layers in a MEC and fog network according to an example. The illustrated data storage or processing hierarchy 1110, relative to the cloud and fog/edge networks, allows dynamic reconfiguration of elements to meet latency and data processing parameters.

The lowest hierarchy level is at the vehicle level. This level stores data on past observations, or data obtained from other vehicles. The second hierarchy level is distributed storage across a number of vehicles. Depending on the proximity of the vehicles to each other or to a target location (e.g., near an accident), this distributed storage may change over short timescales. The third hierarchy level is in local anchor points, such as MEC components carried by the vehicles, which coordinate the vehicles in a pool. The fourth level of the hierarchy is storage shared across MEC components; for example, data shared between distinct pools of vehicles that are within range of each other.

The fifth level of the hierarchy is fixed infrastructure storage, such as in roadside units (RSUs). This level may aggregate data from entities in hierarchy levels one through four. The sixth level of the hierarchy is storage on fixed infrastructure, for example, located in the core network of a telecommunications network, or in an enterprise cloud. Other types of layers and layer processing may follow from this example.
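A minimal sketch of how such a storage/processing hierarchy might be encoded for placement decisions is shown below. The level names, the latency figures, and the selection rule are illustrative assumptions made for this example only; they are not values from the description above or from any specification.

from enum import IntEnum

class StorageLevel(IntEnum):
    """Illustrative encoding of the six hierarchy levels of Figure 11."""
    VEHICLE = 1                # on-vehicle storage of past observations
    DISTRIBUTED_VEHICLES = 2   # short-lived storage across nearby vehicles
    LOCAL_ANCHOR = 3           # MEC component coordinating a vehicle pool
    SHARED_MEC = 4             # storage shared across MEC components
    FIXED_EDGE = 5             # fixed infrastructure, e.g., an RSU
    CORE_OR_CLOUD = 6          # telecom core network or enterprise cloud

# Hypothetical round-trip latency budget (in milliseconds) per level;
# purely assumed numbers for illustration.
LATENCY_BUDGET_MS = {
    StorageLevel.VEHICLE: 1,
    StorageLevel.DISTRIBUTED_VEHICLES: 5,
    StorageLevel.LOCAL_ANCHOR: 10,
    StorageLevel.SHARED_MEC: 20,
    StorageLevel.FIXED_EDGE: 50,
    StorageLevel.CORE_OR_CLOUD: 200,
}

def place(data_latency_ms: float) -> StorageLevel:
    """Pick the highest (most aggregated) level whose latency budget
    still satisfies the data's requirement: lower-layer (PHY/MAC) data
    lands locally, application-layer data may go to the cloud."""
    for level in sorted(StorageLevel, reverse=True):
        if LATENCY_BUDGET_MS[level] <= data_latency_ms:
            return level
    return StorageLevel.VEHICLE

assert place(3.0) == StorageLevel.VEHICLE        # hard real-time stays local
assert place(500.0) == StorageLevel.CORE_OR_CLOUD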
Figure 12 depicts a block diagram of an example MEC system architecture in which any one or more of the techniques (e.g., operations, processes, methods, and methodologies) discussed herein may be performed. In an example, the MEC system architecture may be defined according to a specification, standard, or other definition (e.g., according to the ETSI GS MEC-003 specification). In this figure, Mp reference points refer to MEC platform functionality; Mm reference points refer to management; and Mx reference points refer to connections with external entities. The services, applications, orchestrators, and other entities discussed herein may be implemented at any number of the entities of the MEC system architecture shown in FIG. 12, and the communications to perform network operations may be implemented at any number of the interfaces of the MEC system architecture shown in FIG. 12.

Any of the radio links described herein may operate according to any one or more of the following radio communication technologies and/or standards, including but not limited to: a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology, for example Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), 3GPP Long Term Evolution (LTE), 3GPP Long Term Evolution Advanced (LTE Advanced), Code Division Multiple Access 2000 (CDMA2000), Cellular Digital Packet Data (CDPD), Mobitex, Third Generation (3G), Circuit Switched Data (CSD), High-Speed Circuit-Switched Data (HSCSD), Universal Mobile Telecommunications System (Third Generation) (UMTS (3G)), Wideband Code Division Multiple Access (Universal Mobile Telecommunications System) (W-CDMA (UMTS)), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), High Speed Packet Access Plus (HSPA+), Universal Mobile Telecommunications System-Time-Division Duplex (UMTS-TDD), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), 3rd Generation Partnership Project Release 8 (Pre-4th Generation) (3GPP Rel. 8 (Pre-4G)), 3GPP Rel. 9 (3rd Generation Partnership Project Release 9), 3GPP Rel. 10 (3rd Generation Partnership Project Release 10), 3GPP Rel. 11 (3rd Generation Partnership Project Release 11), 3GPP Rel. 12 (3rd Generation Partnership Project Release 12),
3GPP Rel. 13 (3rd Generation Partnership Project Release 13), 3GPP Rel. 14 (3rd Generation Partnership Project Release 14), 3GPP Rel. 15 (3rd Generation Partnership Project Release 15), 3GPP Rel. 16 (3rd Generation Partnership Project Release 16), 3GPP Rel. 17 (3rd Generation Partnership Project Release 17) and subsequent releases (such as Rel. 18, Rel. 19, etc.), 3GPP 5G, 3GPP LTE Extra, LTE-Advanced Pro, LTE Licensed-Assisted Access (LAA), MuLTEfire, UMTS Terrestrial Radio Access (UTRA), Evolved UMTS Terrestrial Radio Access (E-UTRA), Long Term Evolution Advanced (4th Generation) (LTE Advanced (4G)), cdmaOne (2G), Code Division Multiple Access 2000 (Third Generation) (CDMA2000 (3G)), Evolution-Data Optimized or Evolution-Data Only (EV-DO), Advanced Mobile Phone System (1st Generation) (AMPS (1G)), Total Access Communication System/Extended Total Access Communication System (TACS/ETACS), Digital AMPS (2nd Generation) (D-AMPS (2G)), Push-to-talk (PTT), Mobile Telephone System (MTS), Improved Mobile Telephone System (IMTS), Advanced Mobile Telephone System (AMTS), OLT (Norwegian for Offentlig Landmobil Telefoni, Public Land Mobile Telephony), MTD (Swedish abbreviation for Mobiltelefonisystem D, or Mobile Telephony System D), Public Automated Land Mobile (Autotel/PALM), ARP (Finnish for Autoradiopuhelin, "car radio telephone"), NMT (Nordic Mobile Telephony), High-capacity version of NTT (Nippon Telegraph and Telephone) (Hicap), Cellular Digital Packet Data (CDPD), Mobitex, DataTAC, Integrated Digital Enhanced Network (iDEN), Personal Digital Cellular (PDC), Circuit Switched Data (CSD), Personal Handy-phone System (PHS), Wideband Integrated Digital Enhanced Network (WiDEN), iBurst, Unlicensed Mobile Access (UMA, also referred to as the 3GPP Generic Access Network, or GAN standard), Zigbee, Bluetooth®, Wireless Gigabit Alliance (WiGig) standards, mmWave standards in general (wireless systems operating at 10-300 GHz and above, such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), technologies operating above 300 GHz and in THz bands, (3GPP/LTE based or IEEE 802.11p based and other) vehicle-to-vehicle (V2V), vehicle-to-X (V2X), vehicle-to-infrastructure (V2I), and infrastructure-to-vehicle (I2V) communication technologies, 3GPP cellular V2X, DSRC (Dedicated Short Range Communications) communication systems such as intelligent transport systems and others (typically operating in 5850 MHz to 5925 MHz), the European ITS-G5 system (i.e., the European flavor of IEEE 802.11p based DSRC, including ITS-G5A (i.e., operation of ITS-G5 in European ITS frequency bands dedicated to ITS for safety-related applications in the frequency range 5875 MHz to 5905 MHz), ITS-G5B (i.e., operation in European ITS frequency bands dedicated to ITS non-safety applications in the frequency range 5855 MHz to 5875 MHz), and ITS-G5C (i.e., operation of ITS applications in the frequency range 5470 MHz to 5725 MHz)), and DSRC in Japan in the 700 MHz band (including 715 MHz to 725 MHz), among others. The aspects described herein can be used in the context of any spectrum management scheme, including dedicated licensed spectrum, unlicensed spectrum, and (licensed) shared spectrum (such as Licensed Shared Access (LSA) in 2.3-2.4 GHz, 3.4-3.6 GHz, 3.6-3.8 GHz, and further frequencies, and Spectrum Access System (SAS)/Citizens Broadband Radio Service (CBRS) in 3.55-3.7 GHz and further frequencies).
Applicable spectrum bands include IMT (International Mobile Telecommunications) spectrum as well as other types of spectrum/bands, such as bands with national allocations (including 450 MHz-470 MHz, 902 MHz-928 MHz (note: allocated, for example, in the US (FCC Part 15)), 863 MHz-868.6 MHz (note: allocated, for example, in the European Union (ETSI EN 300 220)), 915.9 MHz-929.7 MHz (note: allocated, for example, in Japan), 917 MHz-923.5 MHz (note: allocated, for example, in South Korea), 755 MHz-779 MHz and 779 MHz-787 MHz (note: allocated, for example, in China), 790 MHz-960 MHz, 1710 MHz-2025 MHz, 2110 MHz-2200 MHz, 2300 MHz-2400 MHz, 2.4 GHz-2.4835 GHz (note: a globally available ISM band, used by the Wi-Fi technology family (11b/g/n/ax) and also by Bluetooth), 2500 MHz-2690 MHz, 698 MHz-790 MHz, 610 MHz-790 MHz, 3400 MHz-3600 MHz, 3400 MHz-3800 MHz, 3.55 GHz-3.7 GHz (note: allocated, for example, in the US for Citizens Broadband Radio Service), the 5.15 GHz-5.25 GHz, 5.25 GHz-5.35 GHz, 5.47 GHz-5.725 GHz, and 5.725 GHz-5.85 GHz bands (note: allocated, for example, in the US (FCC Part 15), consisting of four U-NII bands in total 500 MHz of spectrum), 5.725 GHz-5.875 GHz (note: allocated, for example, in the European Union (ETSI EN 301 893)), 5.47 GHz-5.65 GHz (note: allocated, for example, in South Korea), 5925 MHz-7125 MHz and 5925 MHz-6425 MHz (note: under consideration in the US and the EU, respectively), IMT-advanced spectrum, IMT-2020 spectrum (expected to include 3600 MHz-3800 MHz, the 3.5 GHz band, the 700 MHz band, and bands within the 24.25 GHz-86 GHz range, etc.), spectrum made available under the FCC's "Spectrum Frontier" 5G initiative (including 27.5 GHz-28.35 GHz, 29.1 GHz-29.25 GHz, 31 GHz-31.3 GHz, 37 GHz-38.6 GHz, 38.6 GHz-40 GHz, 42 GHz-42.5 GHz, 57 GHz-64 GHz, 71 GHz-76 GHz, 81 GHz-86 GHz, 92 GHz-94 GHz, etc.), the ITS (Intelligent Transport Systems) bands of 5.9 GHz (typically 5.85 GHz-5.925 GHz) and 63 GHz-64 GHz, bands currently allocated to WiGig such as WiGig Band 1 (57.24 GHz-59.40 GHz), WiGig Band 2 (59.40 GHz-61.56 GHz), WiGig Band 3 (61.56 GHz-63.72 GHz), and WiGig Band 4 (63.72 GHz-65.88 GHz), 57 GHz-64/66 GHz (note: this band has a near-global designation for Multi-Gigabit Wireless Systems (MGWS)/WiGig; in the US (FCC Part 15) a total of 14 GHz of spectrum is allocated, while the EU (ETSI EN 302 567 and ETSI EN 301 217-2 for fixed P2P) allocates a total of 9 GHz of spectrum), the 70.2 GHz-71 GHz band, any band between 65.88 GHz and 71 GHz, bands currently allocated to automotive radar applications such as 76 GHz-81 GHz, and future bands including 94 GHz-300 GHz and above. Furthermore, the scheme can be used on a secondary basis on bands such as the TV white space bands (typically below 790 MHz), where in particular the 400 MHz and 700 MHz bands are promising candidates. Besides cellular applications, specific applications for vertical markets may be addressed, such as PMSE (Program Making and Special Events), medical, health, surgery, automotive, low-latency, drone, and other applications.

The aspects described herein can also implement a hierarchical application of the scheme, for example, by introducing a hierarchical prioritization of usage for different types of users (e.g., low/medium/high priority, etc.), based on prioritized access to the spectrum, e.g., with the highest priority given to tier-1 users, followed by tier-2 users, then tier-3 users, and so forth.
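By way of illustration only, such tiered spectrum access could be modeled as a simple priority rule. The tier values and the grant/preemption logic below are assumptions made for this sketch; they do not reflect any particular regulatory scheme.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SpectrumRequest:
    user_id: str
    tier: int  # 1 = highest priority; larger numbers = lower priority

def grant(channel_holder: Optional[SpectrumRequest],
          candidate: SpectrumRequest) -> bool:
    """Grant access if the channel is idle, or if the candidate has a
    strictly higher priority (lower tier number) than the current
    holder, in which case the holder is preempted."""
    if channel_holder is None:
        return True
    return candidate.tier < channel_holder.tier

# Tier-1 users preempt tier-2 holders; tier-3 users must wait.
holder = SpectrumRequest("general-authorized", tier=2)
assert grant(holder, SpectrumRequest("incumbent", tier=1)) is True
assert grant(holder, SpectrumRequest("best-effort", tier=3)) is False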
The aspects described herein can also be applied to different single-carrier or OFDM flavors (CP-OFDM, SC-FDMA, SC-OFDM, filter-bank-based multi-carrier (FBMC), OFDMA, etc.), and in particular to 3GPP NR (New Radio), by allocating the OFDM carrier data bit vectors to the corresponding symbol resources. Some of the features in this document are defined for the network side, such as for access points, eNodeBs, New Radio (NR) or next-generation Node Bs (gNodeB or gNB), for example, in the context of a 3GPP fifth-generation (5G) communication system. Still, a user equipment (UE) may take this role as well and act as an access point, eNodeB, gNodeB, etc. That is, some or all of the features defined for network equipment may be implemented by a UE or a mobile computing device.

In further examples, the preceding examples of network communications and operations (e.g., as deployed with edge devices) may be integrated with IoT and like device-based network architectures. Figure 13 illustrates an example domain topology for respective internet-of-things (IoT) networks coupled through links to respective gateways. The IoT is a concept in which a large number of computing devices are interconnected with each other and with the Internet to provide functionality and data acquisition at very low levels. Thus, as used herein, an IoT device may include a semi-autonomous device performing a function (such as sensing or control, among others) in communication with other IoT devices and a wider network, such as the Internet.

MEC use cases, and other edge computing use cases, have been envisioned to be integrated into many network and application settings, including those to support network arrangements of IoT deployments. IoT devices are physical or virtualized objects that may communicate on a network (typically at the edge or endpoint of a network), and may include sensors, actuators, and other input/output components, such as to collect data from or perform actions within a real-world environment. For example, IoT devices may include low-powered devices that are embedded or attached to everyday things, such as buildings, vehicles, packages, etc., to provide sensing, data, or processing functionality. IoT devices have recently become more popular, and applications and use cases using these devices have accordingly proliferated.

Various standards have been proposed to more efficiently interconnect and operate IoT devices and IoT network use cases, including those with MEC and mobile network architectures. Some of the relevant communication and network architecture standards include those distributed by groups such as ETSI, the 3rd Generation Partnership Project (3GPP), and the Institute of Electrical and Electronics Engineers (IEEE), in addition to the specialized IoT application interaction architecture and configuration standards distributed by working groups such as the Open Connectivity Foundation (OCF).

Often, IoT devices are limited in memory, size, or functionality, allowing larger numbers of devices to be deployed for a similar cost to a smaller number of larger devices. However, an IoT device may be a smartphone, laptop, tablet, PC, or other larger device. Further, an IoT device may be a virtual device, such as an application on a smartphone or other computing device.
IoT devices may include IoT gateways, used to couple IoT devices to other IoT devices and to cloud applications, for data storage, process control, and the like.

Networks of IoT devices may include commercial and home automation devices, such as water distribution systems, electric power distribution systems, pipeline control systems, plant control systems, light switches, thermostats, locks, cameras, alarms, motion sensors, and so forth. The IoT devices may be accessible through remote computers, servers, and other systems, for example, to control systems or to access data.

The future growth of the Internet and like networks may involve very large numbers of IoT devices. Accordingly, in the context of the techniques discussed herein, a number of innovations for such future networking will address the need for all these layers to grow unhindered, to discover and make accessible connected resources, and to support the ability to hide and compartmentalize connected resources. Any number of network protocols and communications standards may be used, with each protocol and standard designed to address specific objectives. Further, the protocols are part of the fabric supporting human-accessible services that operate regardless of location, time, or space. The innovations include service delivery and associated infrastructure, such as hardware and software; security enhancements; and the provision of services based on quality-of-service (QoS) terms specified in SLAs and service delivery agreements. As will be understood, the use of IoT devices and networks presents a number of new challenges in a heterogeneous network of connectivity comprising a combination of wired and wireless technologies.

FIG. 13 specifically provides a simplified drawing of a domain topology that may be used for a number of IoT networks comprising IoT devices 1304, with the IoT networks 1356, 1358, 1360, 1362 coupled through backbone links 1302 to respective gateways 1354. For example, a number of IoT devices 1304 may communicate with a gateway 1354, and with each other through the gateway 1354. To simplify the drawing, not every IoT device 1304, or communications link (e.g., link 1316, 1322, 1328, or 1332), is labeled. The backbone links 1302 may include any number of wired or wireless technologies, including optical networks, and may be part of a local area network (LAN), a wide area network (WAN), or the Internet. Additionally, such communication links facilitate optical signal paths among both IoT devices 1304 and gateways 1354, including the use of MUXing/deMUXing components that facilitate the interconnection of the various devices.

The network topology may include any number of types of IoT networks, such as a mesh network provided with the network 1356 using Bluetooth low energy (BLE) links 1322. Other types of IoT networks that may be present include a wireless local area network (WLAN) network 1358 used to communicate with IoT devices 1304 through IEEE 802.11 links 1328, a cellular network 1360 used to communicate with IoT devices 1304 through an LTE/LTE-A (4G) or 5G cellular network, and a low-power wide area (LPWA) network 1362, for example, an LPWA network compatible with the LoRaWan specification promulgated by the LoRa Alliance, or an IPv6 over Low Power Wide-Area Network (LPWAN) network compatible with specifications promulgated by the Internet Engineering Task Force (IETF).
Further, the respective IoT networks may communicate with an outside network provider (e.g., a tier 2 or tier 3 provider) using any number of communications links, such as an LTE cellular link, an LPWA link, or a link based on the IEEE 802.15.4 standard. The respective IoT networks may also operate with the use of a variety of network and internet application protocols, such as the Constrained Application Protocol (CoAP). The respective IoT networks may also be integrated with coordinator devices that provide a chain of links forming a cluster tree of linked devices and networks.

Each of these IoT networks may provide opportunities for new technical features, such as those described herein. The improved technologies and networks may enable the exponential growth of devices and networks, including the use of IoT networks into fog devices or systems. As the use of such improved technologies grows, the IoT networks may be developed for self-management, functional evolution, and collaboration, without needing direct human intervention. The improved technologies may even enable IoT networks to function without centralized controlled systems. Accordingly, the improved technologies described herein may be used to automate and enhance network management and operation functions far beyond current implementations.

In an example, communications between IoT devices 1304, such as over the backbone links 1302, may be protected by a decentralized system for authentication, authorization, and accounting (AAA). In a decentralized AAA system, distributed payment, credit, audit, authorization, and authentication systems may be implemented across an interconnected heterogeneous network infrastructure. This allows systems and networks to move towards autonomous operations. In these types of autonomous operations, machines may even contract for human resources and negotiate partnerships with other machine networks. This may permit the achievement of mutual objectives and balanced service delivery against outlined, planned service level agreements, as well as achieve solutions that provide metering, measurements, traceability, and trackability. The creation of new supply chain structures and methods may enable a multitude of services to be created, mined for value, and collapsed without any human involvement.

Such IoT networks may be further enhanced by the integration of sensing technologies, such as sound, light, electronic traffic, facial and pattern recognition, smell, and vibration, into the autonomous organizations among the IoT devices. The integration of sensory systems may allow systematic and autonomous communication and coordination of service delivery against contractual service objectives, orchestration- and QoS-based clustering, and the fusion of resources. Some of the individual examples of network-based resource processing include the following.

The mesh network 1356 may be enhanced by systems that perform inline data-to-information transforms. For example, self-forming chains of processing resources comprising a multi-link network may distribute the transformation of raw data to information in an efficient manner, together with the ability to differentiate between assets and resources and the associated management of each.
Further, the proper components of infrastructure- and resource-based trust and service indices may be inserted to improve the data integrity, quality, and assurance, and to deliver metrics of data confidence.

The WLAN network 1358 may use systems that perform standards conversion to provide multi-standard connectivity, enabling IoT devices 1304 using different protocols to communicate. Further systems may provide seamless interconnectivity across a multi-standard infrastructure comprising visible Internet resources and hidden Internet resources.

Communications in the cellular network 1360 may be enhanced by systems that offload data, systems that extend communications to more remote devices, or both. The LPWA network 1362 may include systems that perform non-Internet Protocol (IP) to IP interconnections, addressing, and routing. Further, each of the IoT devices 1304 may include the appropriate transceiver for wide-area communications with that device. Further, each IoT device 1304 may include other transceivers for communications using additional protocols and frequencies. This is discussed further with respect to the communications environment and hardware of an IoT processing device depicted in FIG. 15 and FIG. 16.

Finally, clusters of IoT devices may be equipped to communicate with other IoT devices as well as with a cloud network. This may allow the IoT devices to form an ad-hoc network between the devices, allowing them to function as a single device, which may be termed a fog device, fog platform, or fog network. This configuration is discussed further with respect to FIG. 14 below.

FIG. 14 illustrates a cloud computing network in communication with a mesh network of IoT devices (devices 1402) operating as a fog platform in a networked scenario. The mesh network of IoT devices may be termed a fog network 1420, established from a network of devices operating at the edge of the cloud 1400. To simplify the diagram, not every IoT device 1402 is labeled.

The fog network 1420 may be considered to be a massively interconnected network in which a number of IoT devices 1402 are in communication with each other, for example, by radio links 1422. The fog network 1420 may establish a horizontal, physical, or virtual resource platform that can be considered to reside between IoT edge devices and a cloud or data center. A fog network, in some examples, may support vertically-isolated, latency-sensitive applications through layered, federated, or distributed computing, storage, and network connectivity operations. However, a fog network may also be used to distribute resources and services at and among the edge and the cloud. Thus, references in the present document to the "edge", "fog", and "cloud" are not necessarily discrete or exclusive of one another.

As an example, the fog network 1420 may be facilitated using an interconnect specification released by the Open Connectivity Foundation™ (OCF). This standard allows devices to discover each other and establish communications for interconnects. Other interconnection protocols may also be used, including, for example, the optimized link state routing (OLSR) protocol, the better approach to mobile ad-hoc networking (B.A.T.M.A.N.) routing protocol, or the OMA Lightweight M2M (LWM2M) protocol, among others.
Although three types of IoT devices 1402 are shown in this example, a gateway 1404, data aggregators 1426, and sensors 1428, any combinations of IoT devices 1402 and functionality may be used. The gateways 1404 may be edge devices that provide communications between the cloud 1400 and the fog network 1420, and may also provide backend process functions for data obtained from the sensors 1428, such as motion data, flow data, temperature data, and the like. The data aggregators 1426 may collect data from any number of the sensors 1428 and perform the backend processing function for the analysis. The results, raw data, or both may be passed along to the cloud 1400 through the gateways 1404. The sensors 1428 may be full IoT devices 1402, for example, capable of both collecting data and processing the data. In some cases, the sensors 1428 may be more limited in functionality, for example, collecting the data and allowing the data aggregators 1426 or gateways 1404 to process the data.

Communications from any IoT device 1402 may be passed along a convenient path (e.g., a most convenient path) between any of the IoT devices 1402 to reach the gateways 1404. In these networks, the number of interconnections provides substantial redundancy, allowing communications to be maintained even with the loss of a number of IoT devices 1402. Further, the use of a mesh network may allow IoT devices 1402 that are very low power, or located at a distance from infrastructure, to be used, as the range to connect to another IoT device 1402 may be much less than the range to connect to the gateways 1404.

The fog network 1420 provided from these IoT devices 1402 may be presented to devices in the cloud 1400, such as a server 1406, as a single device located at the edge of the cloud 1400, e.g., a fog platform device. In this example, the alerts coming from the fog platform may be sent without being identified as coming from a specific IoT device 1402 within the fog network 1420. In this fashion, the fog network 1420 may be considered a distributed platform that provides computing and storage resources to perform processing or data-intensive tasks such as data analytics, data aggregation, and machine learning, among others.

In some examples, the IoT devices 1402 may be configured using an imperative programming style, e.g., with each IoT device 1402 having a specific function and communication partners. However, the IoT devices 1402 forming the fog platform may be configured in a declarative programming style, allowing the IoT devices 1402 to reconfigure their operations and communications, such as to determine needed resources in response to conditions, queries, and device failures. As an example, a query from a user located at a server 1406 about the operations of a subset of equipment monitored by the IoT devices 1402 may result in the fog network 1420 selecting the IoT devices 1402, such as particular sensors 1428, needed to answer the query. The data from these sensors 1428 may then be aggregated and analyzed by any combination of the sensors 1428, data aggregators 1426, or gateways 1404, before being sent on by the fog network 1420 to the server 1406 to answer the query. In this example, IoT devices 1402 in the fog network 1420 may select the sensors 1428 used based on the query, such as adding data from flow sensors or temperature sensors. Further, if some of the IoT devices 1402 are not operational, other IoT devices 1402 in the fog network 1420 may provide analogous data, if available.
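The following is a minimal sketch of the declarative query flow just described: the query names the quantity of interest rather than specific devices, and the fog layer selects, aggregates, and anonymizes. The sensor names, readings, and the averaging step are invented for illustration only.

from statistics import mean
from typing import Dict, List

# Hypothetical fog-resident registry: sensor kind -> latest readings
# reported by (unnamed) IoT devices in the fog network.
readings: Dict[str, List[float]] = {
    "flow": [12.1, 11.8, 12.4],
    "temperature": [21.3, 21.9],
}

def answer_query(kinds: List[str]) -> Dict[str, float]:
    """Select only the sensor kinds the query needs, aggregate locally,
    and return a single result attributable to the fog as a whole,
    not to any specific IoT device."""
    return {k: mean(readings[k]) for k in kinds if k in readings}

# A server-side query for flow and temperature; devices stay anonymous.
print(answer_query(["flow", "temperature"]))
# {'flow': 12.1, 'temperature': 21.6} (approximately)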
In other examples, the operations and functionality described herein may be embodied by an IoT device machine in the example form of an electronic processing system, within which a sequence or set of instructions may be executed to cause the electronic processing system to perform any one of the methodologies discussed herein, according to an example. The machine may be an IoT device or an IoT gateway, including a machine embodied by aspects of a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile telephone or smartphone, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.

Further, these and like examples to a processor-based system shall be taken to include any set of one or more machines that are controlled by or operated by processing circuitry (e.g., a processor, a set of processors, or other processing circuitry in a machine in the form of a computer, UE, MEC processing device, IoT processing device, etc.) to individually or jointly execute instructions to perform any one or more of the methodologies discussed herein. Accordingly, in various examples, applicable means for processing (e.g., processing, controlling, generating, evaluating, etc.) may be embodied by such processing circuitry.

Figure 15 illustrates a drawing of a cloud computing network, or cloud 1500, in communication with a number of IoT devices. The cloud 1500 may represent the Internet, or may be a local area network (LAN), or a wide area network (WAN), such as a proprietary network for a company. The IoT devices may include any number of different types of devices, grouped in various combinations. For example, a traffic control group 1506 may include IoT devices along streets in a city. These IoT devices may include stoplights, traffic flow monitors, cameras, weather sensors, and the like. The traffic control group 1506, or other subgroups, may be in communication with the cloud 1500 through wired or wireless links 1508, such as LPWA links, optical links, and the like. Further, a wired or wireless sub-network 1512 may allow the IoT devices to communicate with each other, such as through a local area network, a wireless local area network, and the like. The IoT devices may use another device, such as a gateway 1510 or 1528, to communicate with remote locations such as the cloud 1500; the IoT devices may also use one or more servers 1530 to facilitate communication with the cloud 1500 or with the gateway 1510. For example, the one or more servers 1530 may operate as an intermediate network node to support a local edge cloud or fog implementation among a local area network. Further, the gateway 1528 that is depicted may operate in a cloud-to-gateway-to-many edge devices configuration, such as with the various IoT devices 1514, 1520, 1524 being constrained or dynamic to an assignment and use of resources in the cloud 1500.

Other example groups of IoT devices may include remote weather stations 1514, local information terminals 1516, alarm systems 1518, automated teller machines 1520, alarm panels 1522, or moving vehicles, such as emergency vehicles 1524 or other vehicles 1526, among many others. Each of these IoT devices may be in communication with other IoT devices, with the server 1504, with another IoT fog platform or system, or a combination thereof.
These groups of IoT devices may be deployed in various residential, commercial, and industrial settings, including in both private and public environments.

As may be seen from FIG. 15, a large number of IoT devices may be communicating through the cloud 1500. This may allow different IoT devices to request or provide information to other devices autonomously. For example, a group of IoT devices (e.g., the traffic control group 1506) may request a current weather forecast from a group of remote weather stations 1514, which may provide the forecast without human intervention. Further, an emergency vehicle 1524 may be alerted by an automated teller machine 1520 that a theft is in progress. As the emergency vehicle 1524 proceeds towards the automated teller machine 1520, it may access the traffic control group 1506 to request clearance to the location, for example, by lights turning red to block cross traffic at an intersection in sufficient time for the emergency vehicle 1524 to have unimpeded access to the intersection.

Clusters of IoT devices, such as the remote weather stations 1514 or the traffic control group 1506, may be equipped to communicate with other IoT devices as well as with the cloud 1500. This may allow the IoT devices to form an ad-hoc network among a number of devices, allowing them to function as a single device, which may be termed a fog platform or system (e.g., as described above with reference to FIG. 14).

Figure 16 is a block diagram of an example of components that may be present in an edge processing device 1650 (e.g., a computer, IoT device, edge server, etc.) for implementing any of the techniques described herein. The device 1650 may include any combinations of the components shown in the examples above or referenced in the disclosure above. The components may be implemented as ICs, portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in the device 1650, or as components otherwise incorporated within a chassis of a larger system. Additionally, the block diagram of FIG. 16 is intended to depict a high-level view of components of the device 1650. However, some of the components shown may be omitted, additional components may be present, and a different arrangement of the components shown may occur in other implementations.

The device 1650 may include processing circuitry in the form of a processor 1652, which may be a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, or other known processing elements. The processor 1652 may be a part of a system on a chip (SoC) in which the processor 1652 and other components are formed into a single integrated circuit, or a single package, such as an Edison™ or Galileo™ SoC board from Intel. As an example, the processor 1652 may include an Intel® Architecture Core™ based processor, such as a Quark™, an Atom™, an i3, an i5, an i7, or an MCU-class processor, or another such processor available from Intel® Corporation of Santa Clara, California. However, any number of other processors may be used, such as processors available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, California, MIPS-based designs from MIPS Technologies, Inc. of Sunnyvale, California,
ARM-based designs licensed from ARM Holdings, Ltd., or processors obtained from customers, licensees, or adopters of the aforementioned companies. The processors may include units such as an A5-A10 processor from Apple® Inc., a Snapdragon™ processor from Qualcomm® Technologies, Inc., or an OMAP™ processor from Texas Instruments, Inc.

The processor 1652 may communicate with a system memory 1654 over an interconnect 1656 (e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory. As examples, the memory may be random access memory (RAM) in accordance with a Joint Electron Devices Engineering Council (JEDEC) design, such as the DDR or mobile DDR standards (e.g., LPDDR, LPDDR2, LPDDR3, or LPDDR4). In various implementations, the individual memory devices may be of any number of different package types, such as single die package (SDP), dual die package (DDP), or quad die package (Q17P). These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties, including but not limited to microDIMMs or MiniDIMMs.

To provide for persistent storage of information, such as data, applications, operating systems, and so forth, a storage 1658 may also couple to the processor 1652 via the interconnect 1656. In an example, the storage 1658 may be implemented via a solid-state disk drive (SSDD). Other devices that may be used for the storage 1658 include flash memory cards, such as SD cards, microSD cards, and xD picture cards, and USB flash drives. In low power implementations, the storage 1658 may be on-die memory or registers associated with the processor 1652. However, in some examples, the storage 1658 may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage 1658 in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others.

The components may communicate over the interconnect 1656. The interconnect 1656 may include any number of technologies, including industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The interconnect 1656 may be a proprietary bus, for example, used in an SoC-based system. Other bus systems may be included, such as an I2C interface, an SPI interface, point-to-point interfaces, and a power bus, among others.

The interconnect 1656 may couple the processor 1652 to a mesh transceiver 1662, for communications with other mesh devices 1664. The mesh transceiver 1662 may use any number of frequencies and protocols, such as 2.4 gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard as defined by the Bluetooth® Special Interest Group, or other standards, among others. Any number of radios, configured for a particular wireless communication protocol, may be used for the connections to the mesh devices 1664.
For example, a WLAN unit may be used to implement Wi-Fi™ communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a WWAN unit.

The mesh transceiver 1662 may communicate using multiple standards or radios for communications at different ranges. For example, the device 1650 may communicate with close devices, e.g., within about 10 meters, using a local transceiver based on BLE, or another low power radio, to save power. More distant mesh devices 1664, e.g., within about 50 meters, may be reached over ZigBee or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels, or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee.

A wireless network transceiver 1666 may be included to communicate with devices or services in the cloud 1600 via local or wide area network protocols. The wireless network transceiver 1666 may be an LPWA transceiver that follows the IEEE 802.15.4 or IEEE 802.15.4g standards, among others. The device 1650 may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance. The techniques described herein are not limited to these technologies, but may be used with any number of other cloud transceivers that implement long range, low bandwidth communications, such as Sigfox and other technologies. Further, other communications techniques, such as time-slotted channel hopping, described in the IEEE 802.15.4e specification, may be used.

Any number of other radio communications and protocols may be used in addition to the systems mentioned for the mesh transceiver 1662 and the wireless network transceiver 1666, as described herein. For example, the radio transceivers 1662 and 1666 may include an LTE or other cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high-speed communications. Further, any number of other protocols may be used, such as Wi-Fi® networks for medium speed communications and provision of network communications.

The radio transceivers 1662 and 1666 may include radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, notably Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), and Long Term Evolution-Advanced Pro (LTE-A Pro). It may be noted that radios compatible with any number of other fixed, mobile, or satellite communication technologies and standards may be selected. These may include, for example, any cellular wide area radio communication technology, which may include, e.g., a 5th generation (5G) communication system, a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, or a UMTS (Universal Mobile Telecommunications System) communication technology. In addition to the standards listed above, any number of satellite uplink technologies may be used for the wireless network transceiver 1666, including, for example, radios compliant with standards issued by the ITU (International Telecommunication Union) or the ETSI (European Telecommunications Standards Institute), among others.
The examples provided herein are thus understood as being applicable to various existing and yet-to-be-developed communication technologies.

A network interface controller (NIC) 1668 may be included to provide a wired communication to the cloud 1600 or to other devices, such as the mesh devices 1664. The wired communication may provide an Ethernet connection, or may be based on other types of networks, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others. An additional NIC 1668 may be included to allow connection to a second network, for example, a NIC 1668 providing communications to the cloud over Ethernet, and a second NIC 1668 providing communications to other devices over another type of network.

Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of the components 1662, 1666, 1668, or 1670. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry.

The interconnect 1656 may couple the processor 1652 to an external interface 1670 that is used to connect external devices or subsystems. The external devices may include sensors 1672, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global positioning system (GPS) sensors, pressure sensors, barometric pressure sensors, and the like. The external interface 1670 further may be used to connect the device 1650 to actuators 1674, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.

In some optional examples, various input/output (I/O) devices may be present within, or connected to, the device 1650. For example, a display or other output device 1684 may be included to show information, such as sensor readings or actuator position. An input device 1686, such as a touch screen or keypad, may be included to accept input. An output device 1684 may include any number of forms of audio or visual display, including simple visual outputs such as binary status indicators (e.g., LEDs), multi-character visual outputs, or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the device 1650.

A battery 1676 may power the device 1650, although, in examples in which the device 1650 is mounted in a fixed location, it may have a power supply coupled to an electrical grid. The battery 1676 may be a lithium ion battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, or a lithium-air battery, among others.

A battery monitor/charger 1678 may be included in the device 1650 to track the state of charge (SoCh) of the battery 1676. The battery monitor/charger 1678 may be used to monitor other parameters of the battery 1676 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 1676.
The battery monitor/charger 1678 may include a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix, Arizona, or an IC from the UCD90xxx family from Texas Instruments of Dallas, Texas. The battery monitor/charger 1678 may communicate the information on the battery 1676 to the processor 1652 over the interconnect 1656. The battery monitor/charger 1678 may also include an analog-to-digital converter (ADC) that enables the processor 1652 to directly monitor the voltage of the battery 1676 or the current flow from the battery 1676. The battery parameters may be used to determine actions that the device 1650 may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like.

A power block 1680, or other power supply coupled to a grid, may be coupled with the battery monitor/charger 1678 to charge the battery 1676. In some examples, the power block 1680 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the device 1650. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, California, among others, may be included in the battery monitor/charger 1678. The specific charging circuits chosen depend on the size of the battery 1676, and thus on the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others.

The storage 1658 may include instructions 1682 in the form of software, firmware, or hardware commands to implement the techniques described herein. Although such instructions 1682 are shown as code blocks included in the memory 1654 and the storage 1658, it may be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an application-specific integrated circuit (ASIC).

In an example, the instructions 1682 provided via the memory 1654, the storage 1658, or the processor 1652 may be embodied as a non-transitory, machine-readable medium 1660 including code to direct the processor 1652 to perform electronic operations in the device 1650. The processor 1652 may access the non-transitory, machine-readable medium 1660 over the interconnect 1656. For instance, the non-transitory, machine-readable medium 1660 may be embodied by devices as described for the storage 1658, or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices. The non-transitory, machine-readable medium 1660 may include instructions to direct the processor 1652 to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality described above.

In further examples, a machine-readable medium also includes any tangible medium that is capable of storing, encoding, or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions. A "machine-readable medium" thus may include, but is not limited to, solid-state memories, and optical and magnetic media.
Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., HTTP).

A machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format. In an example, information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as the instructions themselves or a format from which the instructions may be derived. This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like. The information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions from the information (e.g., processing by the processing circuitry) may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions.

In an example, the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine-readable medium. The information, when provided in multiple parts, may be combined, unpacked, and modified to create the instructions. For example, the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers. The source code packages may be encrypted when in transit over a network and may be decrypted, uncompressed, assembled (e.g., linked) if necessary, and compiled or interpreted (e.g., into a library, stand-alone executable, etc.) at a local machine, and executed by the local machine.

It should be understood that the functional units or capabilities described in this specification may have been referred to or labeled as components or modules, in order to more particularly emphasize their implementation independence. Such components may be embodied by any number of software or hardware forms. For example, a component or module may be implemented as a hardware circuit comprising custom very-large-scale integration (VLSI) circuits or gate arrays, or off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A component or module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. Components or modules may also be implemented in software for execution by various types of processors.
An identified component or module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, a procedure, or a function. Nevertheless, the executables of an identified component or module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the component or module and achieve the stated purpose for the component or module.

Indeed, a component or module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices or processing systems. In particular, some aspects of the described process (such as code rewriting and code analysis) may take place on a different processing system (e.g., in a computer in a data center) than that in which the code is deployed (e.g., in a computer embedded in a sensor or robot). Similarly, operational data may be identified and illustrated herein within components or modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations, including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. The components or modules may be passive or active, including agents operable to perform desired functions.

Additional examples of the presently described method, system, and device embodiments include the following, non-limiting configurations.
Each of the following non-limiting examples may stand on its own, or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.

Example 1 is a life cycle management (LCM) agent apparatus, comprising: processing circuitry; and a memory device including instructions stored thereon, wherein the instructions, when executed by the processing circuitry, configure the processing circuitry to perform operations to: obtain, from a device application, a request for application multiple contexts of an application; determine the application multiple contexts of the application; authorize the request from the device application for the application multiple contexts of the application; add a device application identifier based on the request to the application multiple contexts; and send a create response for the device application to the device application based on the authorization of the request, wherein the response includes an identifier of the application multiple contexts.

In Example 2, the subject matter of Example 1 includes, wherein, to determine the application multiple contexts of the application, the processing circuitry is configured to perform operations to: create the application multiple contexts of the application, wherein the application multiple contexts include references to the application and to the device application.

In Example 3, the subject matter of Examples 1-2 includes, wherein, to determine the application multiple contexts of the application, the processing circuitry is configured to perform operations to: determine that existing application multiple contexts of the application exist, wherein the identifier identifies the existing application multiple contexts of the application.

In Example 4, the subject matter of Examples 1-3 includes, wherein the processing circuitry is further configured to perform operations to: obtain, from the device application, a request to delete the application multiple contexts of the application; authorize the request from the device application to delete the application multiple contexts of the application; send a request to delete the application multiple contexts to a multi-access edge computing (MEC) orchestrator; and send a delete response for the device application based on the authorization of the request, wherein the response includes the identifier of the application multiple contexts.

In Example 5, the subject matter of Examples 1-4 includes, wherein the processing circuitry is further configured to perform operations to: obtain, from the device application, a request to update the application multiple contexts of the application, the request including a multiple contexts identifier and modified data; determine the application multiple contexts of the application based on the multiple contexts identifier; update the application multiple contexts based on the modified data; and send an update response for the device application based on the authorization of the request, wherein the response includes the identifier of the application multiple contexts.

In Example 6, the subject matter of Example 5 includes, wherein the modified data is an updated callback reference.

In Example 7, the subject matter of Examples 1-6 includes, wherein the processing circuitry is further configured to perform operations to: obtain a publish message from a MEC host,
where the publishing message includes multiple contexts of the application; and The multiple application contexts of the application are transmitted to the MEC host.Example 8 is a method of life cycle management (LCM) agent, including: obtaining a request for multiple contexts of the application from the device application; determining the multiple contexts of the application; authorizing multiple applications from the device application to the application A request for a context; adding the device application identifier based on the request to multiple contexts of the application; and sending a response created for the device application to the device application based on the authorization of the request, where the response includes the identifiers of the multiple contexts of the application.In Example 9, the subject of Example 8 includes where determining the application multiple contexts of the application includes creating the application multiple contexts of the application, wherein the application multiple contexts include references to applications and device applications.In Example 10, the subject matter of Examples 8-9 includes where determining the multiple contexts of the application of the application includes determining that multiple contexts of the existing application of the application exist, and wherein the identifier identifies the multiple contexts of the existing application of the application.In Example 11, the topics of Examples 8-10 include: obtaining a request to delete multiple contexts of an application from a device application; authorizing multiple contexts of an application from a device application to delete multiple contexts of an application request; The MEC coordinator sends a request to delete the multiple contexts of the device application; and encodes the delete response of the device application based on the authorization of the request, where the response includes the identifier of the multiple contexts of the application.In Example 12, the subject matter of Examples 8-11 includes obtaining a request from a device application to update multiple application contexts of the application, the request including multiple context identifiers and modified data; and determining the application context based on the multiple context identifiers. 
Apply multiple contexts; update the application multiple contexts based on the modified data; and send an updated response to the device application to the device application based on the requested authorization, where the response includes the identifier of the application multiple contexts.In Example 13, the subject matter of Example 12 includes where the modified data is an updated callback reference.In Example 14, the topics of Examples 8-13 include obtaining a publication message from the MEC host, where the publication message includes multiple application contexts of the application; and transmitting multiple application contexts of the application to the MEC host.Example 15 is at least one type of machine-readable storage device including instructions stored thereon, which when executed by the processing circuitry of the computing device causes the processing circuitry to: obtain requests for multiple contexts of the application from the device application Determine the application multiple contexts of the application; authorize requests from the device application for the application multiple contexts of the application; add the request-based device application identifier to the application multiple contexts; and send the target to the device application based on the requested authorization A response created by a device application, where the response includes identifiers of multiple contexts of the application.In Example 16, the subject of Example 15 includes, in order to determine the application multiple contexts of the application, the processing circuit system is configured to create the application multiple contexts of the application, wherein the multiple application contexts include references to applications and device applications .In Example 17, the subject matter of Examples 15-16 includes, in order to determine the multiple contexts of the application of the application, the processing circuit system is configured to determine that there are multiple contexts of the existing application of the application, wherein the identifier identifies the existing multiple contexts of the application Apply multiple contexts.In Example 18, the subject matter of Examples 15-17 includes, wherein the processing circuit system is further configured to: obtain from the device application a request to delete the application multiple contexts of the application; and authorize the multiple contexts of the application from the device application Request to delete multiple contexts of the application; send a request to delete multiple contexts of the device to the multi-access edge computing (MEC) coordinator; and send a delete response of the device application based on the requested authorization, where the response includes the application of multiple contexts The identifier.In Example 19, the subject matter of Examples 15-18 includes, wherein the processing circuitry is further configured to: obtain a request from the device application to update a plurality of application contexts of the application, the request including a plurality of context identifiers and modified data Determine the application multiple contexts of the application based on multiple context identifiers; update multiple contexts of the application based on the modified data; and send an updated response to the device application to the device application based on the requested authorization, where the response Include identifiers for multiple contexts of the application.In Example 20, the subject matter of Example 
19 includes where the modified data is an updated callback reference.In Example 21, the subject matter of Examples 15-20 includes, wherein the processing circuitry is further configured to: obtain a published message from the MEC host, where the published message includes multiple contexts of the application; and multiple application contexts of the application Transfer to the MEC host.Example 22 is a life cycle management (LCM) proxy device, including: a device for obtaining a request for multiple contexts of the application received from a device application; a device for determining multiple contexts of the application of the application; Means for authorizing requests for multiple contexts of the application from a device application; means for adding a request-based device application identifier to multiple contexts of the application; and means for sending a device application based on the authorization of the request A means of creating a response, where the response includes the identifier of the application multiple contexts.In Example 23, the subject matter of Example 22 includes, wherein the means for determining the multiple contexts of the application of the application includes an operation for creating multiple contexts of the application of the application, wherein the multiple contexts of the application include references to applications and device applications.In Example 24, the subject matter of Examples 22-23 includes, wherein the means for determining the multiple contexts of the application of the application includes an operation for determining the multiple contexts of the existing application of the application, wherein the identifier identifies the existing application of the application Multiple contexts.In Example 25, the subject matter of Examples 22-24 includes: a device for obtaining a request to delete an application's multiple contexts from a device application; a device for authorizing multiple contexts of the application to delete an application multiple contexts from the device application Means for sending a request to delete multiple contexts of the device application to the Multi-Access Edge Computing (MEC) coordinator; and means for sending a delete response of the device application to the device application based on the requested authorization , Where the response includes identifiers for multiple contexts of the application.Example 26 is at least one machine-readable medium that includes instructions that, when executed by the processing circuitry, cause the processing circuitry to perform operations to implement any one of Examples 1-25.Example 27 is at least one type of machine-readable storage medium including information representing instructions, which when executed by the processing circuitry, causes the processing circuitry to perform the operations of any one of Examples 1-25.Example 28 may include one or more non-transitory computer-readable media, including instructions that, when executed by one or more processors of an electronic device, cause the electronic device to perform any one of Examples 1-25. 
One or more elements of the method described or related to any of Examples 1-25, or other methods or processes described herein.Example 29 may include a device for performing the method described in any one of Examples 1-25 or related to any one of Examples 1-25, or other methods described herein, or The logic, module, or circuit system of one or more elements of a process.Example 30 may include the method, technique, or process as described in any one of Examples 1-25 or related to any one of Examples 1-25, or include parts or fragments thereof.Example 31 may include a device that includes: one or more processors; and one or more computer-readable media including instructions that, when executed by the one or more processors, cause the one or more The processor executes the method, technique, or process as described in any one of Examples 1-25 or related to any one of Examples 1-25, or executes a part thereof.Example 32 may include the signal as described in any one of Examples 1-25 or related to any one of Examples 1-25, or include parts or fragments thereof.Example 33 may include a signal in a wireless network as described in or related to any of Examples 1-25 or otherwise shown and described herein.Example 34 may include as described in any of Examples 1-25 or related to any of Examples 1-25 or otherwise shown and described herein for coordinating wireless networks Method of communication.Example 35 may include a device for processing communications as described in any one of Examples 1-25 or related to any one of Examples 1-25 or otherwise shown and described herein .Example 36 is a network that includes various devices and device communication media for performing any of the operations of any of Examples 1-25 or otherwise shown and described herein.Example 37 is an edge cloud computing device implementation that includes processing nodes and computing units suitable for performing any of the operations in any of Examples 1-25 or otherwise shown and described herein.Example 38 is an apparatus including means for implementing any one of Examples 1-37.Example 39 is a system for implementing any of Examples 1-37.Example 40 is a method for implementing any of Examples 1-37.In the above specific embodiments, various features can be combined to make the present disclosure smooth. However, the claims may not state every feature disclosed herein, because an embodiment may characterize a subset of the features. Further, embodiments may include fewer features than those disclosed in specific examples. Therefore, the appended claims are hereby incorporated into the detailed description, and one of the claims independently becomes a separate embodiment.
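For readers who want a concrete picture of the request/response flow recited in the examples above, the following minimal sketch models an LCM proxy in Python. It is a sketch under stated assumptions, not the disclosure's implementation: the names (LcmProxy, AppContext, create_context, and so on), the UUID-based identifiers, the dictionary-shaped responses, and the placeholder authorize() check are all illustrative and are not part of any standardized MEC API.

```python
import uuid

class AppContext:
    """Illustrative application context; field names are assumptions."""
    def __init__(self, app_name: str, callback_reference: str):
        self.context_id = str(uuid.uuid4())   # identifier of the application context
        self.app_name = app_name
        self.callback_reference = callback_reference
        self.device_app_ids = set()           # device applications added to the context

class LcmProxy:
    def __init__(self):
        self.contexts = {}                    # context_id -> AppContext

    def authorize(self, device_app_id) -> bool:
        # Placeholder for the "authorize the request" operation of Example 1.
        return device_app_id is not None

    def create_context(self, device_app_id, app_name, callback_reference):
        # Example 1/8: obtain the request, determine the context, authorize,
        # add the device application identifier, and respond with the context id.
        if not self.authorize(device_app_id):
            return {"error": "unauthorized"}
        # Determine the context: reuse an existing one (Example 3) or create it (Example 2).
        ctx = next((c for c in self.contexts.values() if c.app_name == app_name), None)
        if ctx is None:
            ctx = AppContext(app_name, callback_reference)
            self.contexts[ctx.context_id] = ctx
        ctx.device_app_ids.add(device_app_id)
        return {"contextId": ctx.context_id, "appName": ctx.app_name}

    def update_context(self, device_app_id, context_id, modified_callback_reference):
        # Examples 5-6: look up the context by identifier and apply the
        # modified data (here, an updated callback reference).
        ctx = self.contexts.get(context_id)
        if ctx is None or not self.authorize(device_app_id):
            return {"error": "not found or unauthorized"}
        ctx.callback_reference = modified_callback_reference
        return {"contextId": ctx.context_id}

    def delete_context(self, device_app_id, context_id):
        # Example 4: authorize, forward the deletion toward the MEC
        # orchestrator (not modeled here), and respond with the identifier.
        if context_id in self.contexts and self.authorize(device_app_id):
            del self.contexts[context_id]
            return {"contextId": context_id, "deleted": True}
        return {"error": "not found or unauthorized"}
```

For instance, LcmProxy().create_context("devApp-1", "demoApp", "https://device.example/cb") would return a response carrying the identifier of the newly created application context, mirroring the final operation of Example 1.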
Apparatus and methods of testing and assembling fine ball grid array (FBGA) packages having circuit-bearing interconnect components. In one embodiment, a circuit-bearing interconnect component includes a substrate having a plurality of first conductive members disposed therethrough, a plurality of conductive traces coupled to the first conductive members and extending away from the first conductive members to a distal portion of the substrate, and a plurality of second conductive members disposed on the distal portion and coupled to the conductive traces. The substrate may be rigid or flexible. The first conductive members are located within an engagement area adapted to engage a semiconductor component having a plurality of conductive bumps, wherein each conductive bump engages one of the first conductive members. The first conductive members may include conductively plated vias or conductive pins. In an alternate embodiment, an apparatus further includes a semiconductor component having a plurality of conductive bumps disposed thereon. The circuit-bearing interconnect component may permit efficient, accurate, and reliable testing of the semiconductor component when the semiconductor component is attached to a semiconductor device, such as a printed circuit board.
What is claimed is:

1. A method of testing a semiconductor component having a plurality of conductive bumps, comprising: engaging a circuit-bearing interconnect component having a plurality of first conductive elements disposed therethrough with the plurality of conductive bumps, each first conductive element being electrically coupled with a first end of a conductive trace, the conductive trace having a second end extending away from the conductive bump to a distal portion substantially beyond an edge of the semiconductor component; transmitting a test signal through at least one second end to the corresponding conductive bump of the semiconductor component; and receiving a feedback signal from the semiconductor component indicative of a performance characteristic of the semiconductor component.

2. The method of claim 1, further comprising engaging a test probe with at least one of the second ends.

3. The method of claim 1 wherein engaging a circuit-bearing interconnect component having a plurality of first conductive elements disposed therethrough with the plurality of conductive bumps comprises attaching a plurality of conductive vias to the plurality of conductive bumps.

4. The method of claim 1 wherein engaging a circuit-bearing interconnect component having a plurality of first conductive elements disposed therethrough with the plurality of conductive bumps comprises attaching a plurality of conductive pins to the plurality of conductive bumps.

5. The method of claim 1, further comprising bending at least the distal portion of the circuit-bearing interconnect component.

6. The method of claim 5 wherein bending at least the distal portion of the circuit-bearing interconnect component comprises bending the distal portion through an approximately 180 degree arc so that the second ends of the conductive traces are proximate a backside surface of the semiconductor component.

7. The method of claim 1 wherein receiving a feedback signal from the semiconductor component comprises receiving a feedback signal through at least one of the conductive traces.

8. The method of claim 1, further comprising engaging a semiconductor device having a plurality of contact pads with the circuit-bearing interconnect component, each contact pad being electrically coupled with one of the first conductive elements.

9. The method of claim 1 wherein the distal portion includes a frangible section, further comprising removing the distal portion of the circuit-bearing interconnect component.

10. A method of testing a semiconductor component having a plurality of conductive bumps, comprising: engaging a circuit-bearing interconnect component having a plurality of first conductive elements disposed therethrough with the plurality of conductive bumps, each first conductive element having a first end electrically coupled with one of the conductive bumps and with a conductive trace, each conductive trace having a second end that extends away from the first conductive element to a distal portion substantially beyond an edge of the semiconductor component; engaging a semiconductor device having a plurality of contact pads with the circuit-bearing interconnect component, each contact pad being electrically coupled with one of the first conductive elements; transmitting a test signal from the semiconductor device through at least one first conductive element to the corresponding conductive bump of the semiconductor component; and receiving a feedback signal from the semiconductor component indicative of a performance characteristic of the semiconductor component.

11. The method of claim 10, further comprising engaging a test probe with at least one of the second ends.

12. The method of claim 10 wherein engaging a circuit-bearing interconnect component having a plurality of first conductive elements disposed therethrough with the plurality of conductive bumps comprises attaching a plurality of conductive vias to the plurality of conductive bumps.

13. The method of claim 10 wherein engaging a circuit-bearing interconnect component having a plurality of first conductive elements disposed therethrough with the plurality of conductive bumps comprises attaching a plurality of conductive pins to the plurality of conductive bumps.

14. The method of claim 10, further comprising bending at least the distal portion of the circuit-bearing interconnect component.

15. The method of claim 14 wherein bending at least the distal portion of the circuit-bearing interconnect component comprises bending the distal portion through an approximately 180 degree arc so that the second ends of the conductive traces are proximate a backside surface of the semiconductor component.

16. The method of claim 10 wherein the distal portion includes a frangible section, further comprising removing the distal portion of the circuit-bearing interconnect component.
TECHNICAL FIELD

The present invention relates to apparatus and methods of testing and assembling semiconductor packages and, more specifically, to testing and assembling fine ball grid array (FBGA) packages having circuit-bearing interconnect components.

BACKGROUND OF THE INVENTION

As the trend toward decreasing the size of microelectronic packages continues, challenges associated with packaging and testing semiconductor devices are continually encountered. Fine ball grid array (FBGA) semiconductor packages, for example, offer reduced package volumes and desirable performance characteristics. Testing of FBGA semiconductor packages, however, may be difficult, and the difficulty may increase as the size of the FBGA package decreases.

FIG. 1 is a cross-sectional elevational view of a typical FBGA package 10. The FBGA package 10 includes a bumped die 12 having a plurality of bond pads 14 formed thereon. An electrically conductive ball or bump 16 (typically composed of solder or a gold alloy) is formed on each bond pad 14 and is attached to an associated contact pad 18 formed on a substrate 20, such as a test carrier or a printed circuit board. A conductive trace 22 is formed on the surface of the substrate 20 and is attached to one of the contact pads 18. The conductive traces 22 typically fan out from the bumped die 12 and may be connected to other electronic components or to test equipment. FBGA packages of the type shown in FIG. 1 are more fully described, for example, in U.S. Pat. Nos. 5,663,106 and 5,777,379 to Karavakis et al., and in U.S. Pat. No. 5,821,608 to DiStefano et al., which patents are incorporated herein by reference.

As mentioned above, for testing of the FBGA package 10, the substrate 20 may be a test carrier that temporarily engages the conductive bumps 16 of the bumped die 12. Suitable test carriers for testing the unpackaged die 12 are described, for example, in U.S. Pat. No. 5,519,332 to Wood et al., incorporated herein by reference. Generally, such carriers are suitable for use with automated equipment and assembling procedures utilized in high-volume semiconductor manufacturing. Design considerations for such test carriers include the carrier's ability to transmit and receive electrical signals over a wide temperature range, thermal management characteristics, power and signal distribution characteristics, cost, and reusability.

Testing of the bumped die 12 generally includes four levels of testing. A first or "standard probe" level includes the standard tests for gross functionality of die circuitry. A second or "speed probe" level includes testing the speed performance of the die for the fastest speed grades. A third or "burn-in die" level involves thermal cycling tests intended to drive contaminants into the active circuitry and to detect early failures. A fourth or "known good die (KGD)" level includes testing to provide a level of reliability suitable for final products.

To ensure proper transmission of the test signals and output signals, the conductive bumps 16 may be temporarily connected with the contact pads 18 of the substrate 20 by reflowing the bumps, thereby soldering the bumps to the contact pads. After the testing is complete, the conductive bumps 16 may be reflowed to disconnect the bumps from the contact pads 18. After testing, the bumped die 12 is usually placed in operation by attaching the bumped die 12 to a printed circuit board or another semiconductor component.
Again, the conductive bumps 16 are placed in contact with the contact pads 18 of the printed circuit board and are reflowed to bond the conductive bumps 16 to the contact pads 18, thereby attaching the bumped die 12 to the printed circuit board.

Connecting and disconnecting the conductive bumps 16 from the contact pads 18, however, involves time-consuming processes and may damage the conductive bumps 16 or the contact pads 18. Also, even though a bumped die 12 tests successfully using the test carrier, the final connection between the conductive bumps 16 of the bumped die 12 and the contact pads 18 of the printed circuit board may not always be good. Therefore, it is usually desirable to conduct additional testing of the bumped die 12 after it has been assembled with the printed circuit board in the final FBGA package 10.

Currently, conductive bumps 16 may range in height h (FIG. 1) from 0.75 mm down to 0.15 mm, depending upon the die or semiconductor component. Typical "pitch" or spacing between adjacent conductive bumps 16 may range from 0.65 mm down to 0.13 mm (130 microns) or less. Furthermore, a semiconductor die or memory chip may have several hundred conductive bumps. Due to the extremely small sizes of the conductive bumps, the spacing between bumps, and the large number of bumps in typical FBGA packages, testing of such packages presents extreme challenges. For example, when the conductive bumps 16 of the bumped die 12 are attached to the contact pads 18 of the substrate 20, it is often not possible to measure a test signal at (or near) each of the conductive bumps 16 to determine the performance of a particular circuit or connection within the bumped die 12. This is especially true for those conductive bumps 16 that are not along the edges of the FBGA package 10. As the trend toward reducing the size of FBGA packages continues, the difficulties associated with testing such packages will continue to increase.

SUMMARY OF THE INVENTION

The present invention is directed to apparatus and methods of testing and assembling fine ball grid array (FBGA) packages having circuit-bearing interconnect components. In one aspect, a circuit-bearing interconnect component includes a substrate having a plurality of first conductive members disposed therethrough, a plurality of conductive traces coupled to the first conductive members and extending away from the first conductive members to a distal portion of the substrate, and a plurality of second conductive members disposed on the distal portion and coupled to the conductive traces. The substrate may be rigid or flexible. The first conductive members are located within an engagement area adapted to engage a semiconductor component having a plurality of conductive bumps, wherein each conductive bump engages one of the first conductive members. The first conductive members may include conductively plated vias or conductive pins.
The circuit-bearing interconnect component may advantageously permit efficient, accurate, and reliable testing of the semiconductor component when the semiconductor component is attached to a semiconductor device, such as a printed circuit board.

In an alternate aspect, a semiconductor package includes a semiconductor component having a plurality of conductive bumps disposed thereon; a substrate including a plurality of first conductive members disposed within an engagement area, each first conductive member being attached to one of the conductive bumps and extending through a thickness of the substrate, the substrate further having at least one distal portion extending away from the engagement area; a plurality of conductive traces each having a first end electrically coupled with one of the first conductive members and a second end extending away from the one first conductive member and onto the at least one distal portion; and a plurality of second conductive members disposed on the at least one distal portion, each second conductive member being coupled to one of the conductive traces.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a side cross-sectional view of an FBGA package in accordance with the prior art.

FIG. 2 is a side cross-sectional view of a semiconductor package having a bumped die engaged with a circuit-bearing interconnect component in accordance with an embodiment of the invention.

FIG. 3 is an upper plan view of an embodiment of the circuit-bearing interconnect component of FIG. 2 having a flexible substrate.

FIG. 4 is an upper plan view of a semiconductor assembly having a plurality of semiconductor components attached to a printed circuit board and including a plurality of circuit-bearing interconnect components in accordance with an alternate embodiment of the invention.

FIG. 5 is a side cross-sectional view of the semiconductor assembly of FIG. 4.

FIG. 6 is a side cross-sectional view of a semiconductor package having a bumped die engaged with a pair of circuit-bearing interconnect components in accordance with an alternate embodiment of the invention.

FIG. 7 is an upper plan view of embodiments of the circuit-bearing interconnect components of FIG. 6 having flexible substrates.

DETAILED DESCRIPTION OF THE INVENTION

The present invention is generally directed to apparatus and methods of testing and assembling fine ball grid array (FBGA) packages having circuit-bearing interconnect components. Many specific details of certain embodiments of the invention are set forth in the following description and in FIGS. 2-7 to provide a thorough understanding of such embodiments. One skilled in the art will understand, however, that the present invention may have additional embodiments, or that the present invention may be practiced without several of the details described in the following description.

Unless otherwise stated, the construction and operation of various components of the embodiments described below may be of conventional design. Such components will be referred to using the same names and designation numbers as were used in the preceding background discussion. For the sake of brevity, such components will not be described in further detail herein, as these components are within the understanding of those skilled in the relevant art.

FIG. 2 is a side cross-sectional view of a semiconductor package 110 having a bumped die 12 engaged with a circuit-bearing interconnect component 100 in accordance with an embodiment of the invention.
The circuit-bearing interconnect component 100 includes a substrate 102 having a first surface 120 and a second surface 121. As shown in the upper plan view of FIG. 3, the substrate 102 may be a flexible substrate 102 that is shapeable into a flattened position (described more fully below). A plurality of inner vias 104 are distributed throughout an inner area 105 of the substrate 102 and disposed therethrough. The inner vias 104 are positioned within the inner area 105 such that each inner via 104 is aligned with one of the conductive bumps 16 of the bumped die 12. A layer of conductive material 106 is formed within each of the inner vias 104 and forms an inner contact ring 108 at each end of the respective inner via 104. As best shown in FIG. 2, the inner contact rings 108 may project slightly beyond, and may slightly overlap, the first and second surfaces 120, 121 of the substrate 102.

In the embodiment shown in FIGS. 2 and 3, the substrate 102 further includes a pair of outer areas 112, each having a plurality of outer vias 114 disposed therethrough. A conductive layer 116 is disposed within each of the outer vias 114, each conductive layer 116 forming an outer contact ring 118 at the end of the respective outer via 114. A plurality of conductive leads (or "traces") 122 are formed on the first surface 120 of the substrate 102. Each lead 122 has an inner end coupled to one of the inner contact rings 108 and an outer end coupled to one of the outer contact rings 118.

The substrate 102 may be composed of any suitable insulative material that prevents electrical shorts between the various conductive components (vias, leads, etc.). For example, the substrate 102 may be a rigid material (FIG. 2), such as a thermoplastic or fiberglass material. Alternately, the substrate 102 may be composed of a flexible material (FIGS. 2 and 3). Suitable flexible materials for the substrate 102 include, for example, a flexible dielectric elastomeric or polymeric material, such as a polyimide material. In other embodiments, the substrate 102 may have both rigid and flexible portions. Furthermore, although the conductive leads 122 are shown as being disposed on the first surface 120 of the substrate 102, the leads 122 could be disposed on the second surface 121, on both surfaces 120, 121, or could be embedded within the interior of the substrate 102.

The embodiment of the circuit-bearing interconnect component 100 shown in FIG. 3 is configured for an FBGA package having 60 conductive bumps (four rows of 15 bumps each) at a pitch of approximately 0.50 mm. Of course, a variety of alternate embodiments of circuit-bearing interconnect components may be designed to accommodate a wide variety of different bump configurations, patterns, and spacings, and may accommodate semiconductor components having virtually any number of conductive bumps.

During assembly, the conductive bumps 16 of the bumped die 12 may be aligned with respective inner vias 104 of the circuit-bearing interconnect component 100. The conductive bumps 16 may then be attached to respective inner contact rings 108 on the first surface 120 of the substrate 102, such as by reflowing the conductive bumps 16, to form an electrical contact between the bumped die 12 and the circuit-bearing interconnect component 100.
Similarly, the contact pads 18 of the printed circuit board 20 may be aligned with respective inner vias 104 and attached to respective inner contact rings 108 on the second surface 121 using a variety of known attachment techniques, including soldering, thermal bonding, or using a layer of conductive adhesive (not shown). Thus, each inner via 104 is electrically coupled between its associated conductive bump 16 and contact pad 18, and each outer via 114 is electrically coupled with its associated inner via 104.

It should be understood that although the circuit-bearing interconnect component 100 is shown in FIG. 2 as extending beyond opposing sides of the bumped die 12, in alternate embodiments, the component may be modified to extend outwardly from beneath the bumped die 12 in a single direction (i.e., out a single side), in two directions (FIG. 2), in three directions, or in four or more directions. For example, the circuit-bearing interconnect component 100 could be split into two separate components along a dividing line 140 (FIG. 3), with each resulting circuit-bearing interconnect component extending beyond the edge of the bumped die 12 in only a single direction. Therefore, depending on the spacing of the bumped die 12 on the printed circuit board 20 in relation to other components or structures, a variety of circuit-bearing interconnect components may be conceived to meet the particular needs of the application.

The circuit-bearing interconnect component 100 advantageously allows the bumped die 12 to be tested easily and efficiently after it has been assembled with the substrate 20. Because the substrate 102 may be shaped to position the outer vias 114 proximate the backside surface of the bumped die 12, the outer vias 114 are easily accessible. As shown in FIG. 2, both the rigid and flexible substrate embodiments may be shaped to bend through an approximately 180 degree angle so that the outer vias 114 are positioned over a backside surface 130 of the bumped die 12. A test probe 150 may then be easily inserted into any desired outer via 114 for transmitting test signals to, or receiving signals from, one of the conductive leads 122 on the substrate 102 coupled to a desired conductive bump 16. In this way, using one or more test probes 150, the internal circuitry of the bumped die 12 may be selectively tested after the bumped die 12 has been coupled with the substrate 20. Alternately, electrical signals may be transmitted to the bumped die 12 using the substrate 20, and appropriate output signals from the bumped die 12 to the substrate 20 may be monitored at the conductive bumps 16 using the circuit-bearing interconnect component 100.

Another advantage afforded by the circuit-bearing interconnect component 100 is that the test signals measured using the test probe 150 inserted into the outer via 114 may be of higher quality than test signals monitored using conventional methods. Because the conductive traces 122 are relatively short, the signals measured at the outer via 114 may be stronger and less subject to interference effects.
The circuit-bearing interconnect component 100 may thereby provide more accurate and reliable test results compared with conventional test methods.

In alternate embodiments, the size and spacing of the outer vias 114 may be adjusted so that, as the outer areas 112 are positioned over the backside surface of the bumped die 12, the pattern of outer vias 114 matches a standard pattern, such as the pattern of inner vias 104 (or conductive bumps) within the engagement area. This aspect may allow the outer vias 114 of the circuit-bearing interconnect component 100 to be engaged with standardized test apparatus having multiple test probes (not shown), thereby permitting the bumped die 12 to be subjected to rapid, multi-functional testing using conventional, mass-production test equipment.

FIG. 4 is an upper plan view of a semiconductor assembly 210 having a plurality of semiconductor components 212 attached to a printed circuit board 220 and including a plurality of circuit-bearing interconnect components 100 in accordance with an alternate embodiment of the invention. FIG. 5 is a side cross-sectional view of the semiconductor assembly 210 of FIG. 4. In this embodiment, each semiconductor component 212 is attached to the printed circuit board 220 in the manner described above with reference to FIGS. 2 and 3. The circuit-bearing interconnect components 100 may include a rigid substrate 102, or may have a flexible substrate of the type shown in FIG. 3. As best shown in FIG. 5, however, the outer areas 112 of each circuit-bearing interconnect component 100 are not shaped or bent around the associated bumped die 212A; rather, the substrate 102 extends upwardly and outwardly from the bumped die 212A so that the outer areas 112 and the outer vias 114 are positioned at least partially over a backside surface of an adjacent bumped die 212B.

As shown in FIG. 4, the circuit-bearing interconnect components 100 advantageously allow testing of the semiconductor components 212 even when the semiconductor components 212 are tightly spaced on the printed circuit board 220. The substrate 102 may be shaped so that the outer vias 114 may be positioned above the backside surface 130 of an adjacent semiconductor component 212 (FIG. 4), providing easy access for one or more test probes 150 to be inserted into the outer vias 114. Therefore, the above-described advantages provided by the circuit-bearing interconnect components 100 may be realized in the semiconductor assembly 210 having a plurality of tightly spaced semiconductor components 212.

FIG. 6 is a side cross-sectional view of a semiconductor package 310 having a bumped die 12A engaging a pair of circuit-bearing interconnect components 300A, 300B in accordance with an alternate embodiment of the invention. As shown in the upper plan view of FIG. 7, each of the circuit-bearing interconnect components 300A, 300B may include a flexible substrate 302 shapeable into a flattened position and having an aft portion 303 and a lateral portion 305. The substrate 302 may be flexible, as shown in FIG. 7, or rigid, as shown in FIG. 6. A plurality of conductive pins (or posts) 304 are disposed within an engagement area 307 in a pattern corresponding to the pattern of at least some of the conductive bumps 16 on the bumped die 12A. The conductive pins 304 extend through each substrate 302 from a first outer surface 320 to a second surface 321.
A pointed tip 309 is disposed at each end of the conductive pins 304.

A plurality of conductive leads 322 are disposed on the first outer surface 320 of each substrate 302. Each conductive lead 322 is electrically coupled to a conductive pin 304 and extends outwardly from the engagement area 307. Some of the conductive leads 322 extend across the aft portion 303, while other conductive leads extend across the lateral portion 305. A plurality of test pads 314 are formed along outer (peripheral) edges 315 of the substrate 302. Each test pad 314 is coupled to one of the conductive leads 322.

During assembly, the conductive bumps 16 of the bumped die 12A may be aligned with the conductive pins 304 of each respective circuit-bearing interconnect component 300A, 300B. Also, the contact pads 18 of a printed circuit board 20A may be aligned with the conductive pins 304. Using a compressive force, the pointed tips 309 of the conductive pins 304 that protrude above the first and second surfaces 320, 321 may be driven into the surfaces of the conductive bumps 16 and the contact pads 18, respectively, thereby piercing a layer of oxide that may reside on the surfaces of these components and improving the electrical connection therebetween. Alternately, the pointed tips 309 may be eliminated, and the conductive bumps 16 and the contact pads 18 may be attached to the conductive pins 304 using other attachment techniques, such as by soldering, thermal bonding, or using conductive adhesive compounds. Following assembly, each conductive bump 16 of the bumped die 12A is electrically coupled with the corresponding contact pad 18 of the printed circuit board 20A by one of the conductive pins 304, and each conductive pin 304 is electrically coupled with its associated test pad 314 by one of the conductive leads 322.

The circuit-bearing interconnect components 300A, 300B, having the plurality of test pads 314 distributed along the lateral edges 315 of the substrate 302, further improve the ease, accuracy, and reliability of the testing process. Because the test pads 314 are distributed on the lateral edges 315, the lateral edges 315 may be inserted into a suitable socket or receptacle (not shown) for sending and receiving signals to and from the conductive bumps 16. Testing of the semiconductor package 310 may be done more rapidly than with alternate methods involving applying a test probe to each test pad 314 (or outer via 114) individually. Furthermore, the accuracy of the testing may be increased because the possibility of contacting the test probe with the wrong test pad 314 (or outer via 114) is reduced or eliminated. As described above, the circuit-bearing interconnect components 300A, 300B allow the bumped die 12A to be tested easily and efficiently after it has been assembled with the printed circuit board 20A.

Another advantage is that the circuit-bearing interconnect components 300A, 300B may be used to test the bumped die 12A in a variety of tight spaces and packaging configurations. Because the aft portion 303 of the substrate 302 includes some of the test pads 314 while the lateral portion 305 includes the remainder of the test pads 314, the circuit-bearing interconnect components 300A, 300B provide a useful alternative to embodiments having all of the conductive leads extending away from only one side of the semiconductor component being tested.
In other words, if the position of the semiconductor component within a particular package does not permit all of the test leads of the circuit-bearing component to extend from one side of the component to permit testing, alternate embodiments of circuit-bearing interconnect components having conductive leads extending from two or more sides of the component to be tested may be used.

In an alternate embodiment, the circuit-bearing interconnect component 300B includes a pair of frangible sections 400 (FIGS. 6 & 7). One of the frangible sections 400 is disposed within the aft portion 303 of the substrate 302, and the other frangible section 400 is disposed within the lateral portion 305 of the substrate 302. The frangible sections 400 are weakened, breakable sections across the aft and lateral portions 303, 305, including the conductive leads 322, that may be broken, torn apart, or otherwise parted without damaging the other components of the circuit-bearing interconnect component 300B, the bumped die 12A, or the printed circuit board 20A.

It should be understood that the frangible sections 400 may be positioned at a variety of locations along the aft and lateral portions 303, 305. Also, it is not necessary that both the aft and lateral portions 303, 305 include a frangible section 400; rather, a frangible section 400 may be included in only a single distal portion of the substrate. Of course, the frangible sections 400 may be included in either rigid or flexible substrates.

One advantage of the frangible sections 400 is that, after installation and testing of the bumped die 12A using the circuit-bearing interconnect component 300B, the aft and lateral portions 303, 305 of the substrate 302 may be removed from the semiconductor package 310. This may advantageously allow the resulting package to take up less space than the semiconductor package 310 having the aft and lateral portions 303, 305 (FIGS. 2 & 6).

The detailed descriptions of the above embodiments are not exhaustive descriptions of all embodiments contemplated by the inventors to be within the scope of the invention. Indeed, persons skilled in the art will recognize that certain elements of the above-described embodiments may variously be combined or eliminated to create further embodiments, and such further embodiments fall within the scope and teachings of the invention. It will also be apparent to those of ordinary skill in the art that the above-described embodiments may be combined in whole or in part to create additional embodiments within the scope and teachings of the invention.

Thus, although specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. The teachings provided herein can be applied to other apparatus and methods of testing and assembling FBGA packages having circuit-bearing interconnect components, and not just to the embodiments described above and shown in the accompanying figures. Accordingly, the scope of the invention should be determined from the following claims.
An embodiment of a method is disclosed for protecting sensitive data from discovery during an operation performed on input data (104) with the sensitive data (106). This embodiment of the method includes performing the operation on a first quantity of random data with the sensitive data before performing the operation with the sensitive data on the input data. After performing the operation with the sensitive data on the first quantity of the random data, the operation is performed with the sensitive data on the input data. After performing the operation with the sensitive data on the input data, the operation is performed with the sensitive data on a second quantity of random data.
CLAIMS

What is claimed is:

1. A method for protecting sensitive data from discovery during an operation performed on input data with the sensitive data, comprising: performing the operation on a first quantity of random data with the sensitive data before performing the operation with the sensitive data on the input data; after performing the operation with the sensitive data on the first quantity of the random data, performing the operation with the sensitive data on the input data; and after performing the operation with the sensitive data on the input data, performing the operation with the sensitive data on a second quantity of random data.

2. The method of claim 1, further comprising repeating, for each of a plurality of blocks of input data, the performing of the operation on the first quantity of random data before performing the operation on the block, and the performing of the operation on the second quantity of random data after performing the operation on the block.

3. The method of claim 1 or 2, further comprising: checking success of the operation performed on the input data with the sensitive data; and in response to the operation on the input data with the sensitive data being successful, continuing processing of input data.

4. The method of any one of claims 1-3, further comprising: wherein the input data includes an encrypted data stream; wherein the sensitive data is a decryption key, the operation includes decrypting with the key, and the performing the operation on the input data with the sensitive data includes decrypting a block of the input data into a decrypted block; checking consistency of the decrypted block; and in response to the decrypted block being inconsistent with an expected value, continuing decryption of the encrypted data stream using an alternative key, or continuing decryption using random data in place of the encrypted data stream.

5. The method of any one of claims 1-3, further comprising: before completing the operation with the sensitive data on the input data, checking consistency between an expected data and a portion of the input data on which the operation was performed; and in response to an inconsistency between the expected data and the portion of the input data on which the operation was performed, continuing the operation with an alternative sensitive data on the input data or continuing the operation with the sensitive data on an alternative input data.

6. The method of any one of claims 1-3, further comprising: wherein the input data includes an encrypted data stream; wherein the sensitive data is a decryption key, the operation includes decrypting with the key, and the performing the operation on the input data with the sensitive data includes decrypting an encrypted block of the input data into a decrypted block; and applying a modification function to each input encrypted block of the data stream prior to decrypting the encrypted block, wherein the modification function generates a block to be decrypted as a function of the input encrypted block of the data stream and previously decrypted blocks.

7. The method of any one of claims 1-3, further comprising: wherein the input data includes an encrypted data stream; wherein the sensitive data is a decryption key, the operation includes decrypting with the key, and the performing the operation on the input data with the sensitive data includes decrypting a block of the input data into a decrypted block; generating a pseudo-random number; after decrypting a first block of the data stream, determining whether or not the decrypted first block contains the pseudo-random number; and in response to the decrypted block not containing the pseudo-random number, continuing decryption using random data instead of the encrypted data stream.

8. The method of any one of claims 1-7, further comprising: determining a ratio of 1 values and 0 values in a memory after the memory has transitioned from a power-off state to a power-on state; and delaying performing the operation on the first quantity of random data until the ratio reaches a threshold.

9. A circuit arrangement, comprising: a controller configured to provide input data, sensitive data, and random data; and a processing circuit coupled to receive the input data, the sensitive data, and the random data, the processing circuit configured to perform an operation with the sensitive data on a first quantity of the random data before performing the operation on the input data with the sensitive data, and to perform the operation with the sensitive data on a second quantity of the random data after performing the operation with the sensitive data on the input data.

10. The circuit arrangement of claim 9, wherein the processing circuit is further configured to perform the operation on a quantity of random data with the sensitive data before and after performing the operation with the sensitive data on each block of the input data.

11. The circuit arrangement of claim 9 or 10, further comprising: wherein the input data includes an encrypted data stream; wherein the sensitive data is a decryption key, the operation includes decrypting with the key, and the performing the operation on the input data with the sensitive data includes decrypting the input data into decrypted data; a consistency check circuit coupled to receive the decrypted data from the processing circuit and coupled to the controller, wherein the consistency check circuit is configured to check consistency, before the processing circuit completes decryption of the input data, between a portion of the decrypted data and expected data, and to generate a tampering signal indicating tampering is suspected in response to finding an inconsistency; and wherein the controller, responsive to the tampering signal, selects an alternative key instead of the decryption key for input to the processing circuit, or selects an alternative data stream instead of the encrypted data stream for input to the processing circuit.

12. The circuit arrangement of claim 11, further comprising: a consistency check circuit coupled to receive the input data from the processing circuit and coupled to the controller, wherein the consistency check circuit is configured to check consistency, before the processing circuit completes the operation with the sensitive data on the input data, between an expected data and a portion of the input data on which the operation was performed, and to generate a tampering signal indicating tampering is suspected in response to finding an inconsistency; and wherein the controller, responsive to the tampering signal, selects an alternative sensitive data instead of the sensitive data for input to the processing circuit, or selects an alternative input data instead of the encrypted data stream for input to the processing circuit.

13. The circuit arrangement of claim 11, wherein the processing circuit is further configured to apply a modification function to each input encrypted block of the encrypted data stream prior to decrypting the encrypted block, wherein the modification function generates a block to be decrypted as a function of the input encrypted block of the encrypted data stream and previously decrypted blocks.

14. The circuit arrangement of claim 9 or 10, further comprising: wherein the input data includes an encrypted data stream; wherein the sensitive data is a decryption key, the operation includes decrypting with the key, and the performing the operation on the input data with the sensitive data includes decrypting the input data into decrypted data blocks; a consistency check circuit coupled to receive the decrypted data from the processing circuit and coupled to the controller, wherein the consistency check circuit is configured to generate a pseudo-random number and determine whether or not a decrypted first block of the decrypted data contains the pseudo-random number and, in response to the decrypted first block not containing the pseudo-random number, to generate a tampering signal indicating tampering is suspected; wherein the controller, responsive to the tampering signal, selects an alternative key instead of the decryption key for input to the processing circuit, or provides random data instead of the encrypted data stream for input to the processing circuit.

15. The circuit arrangement of any one of claims 9-14, further comprising: a memory coupled to the controller; wherein the controller is configured to: determine a ratio of 1 values and 0 values in the memory after the memory has transitioned from a power-off state to a power-on state; and delay processing of the input data until the ratio reaches a threshold.
PROTECTING AGAINST DIFFERENTIAL POWER ANALYSIS ATTACKS ON SENSITIVE DATA

FIELD OF THE INVENTION

An embodiment generally relates to protecting against attacks that attempt to discover a decryption key through differential power analysis.

BACKGROUND

Programmable logic circuits are integrated circuits (ICs) that are user configurable and capable of implementing digital logic operations. There are several types of programmable logic ICs, including Complex Programmable Logic Devices (CPLDs) and Field Programmable Gate Arrays (FPGAs). CPLDs include function blocks based on a programmable logic array (PLA) architecture and programmable interconnect lines to route and transmit signals between the function blocks. FPGAs include configurable logic blocks (CLBs) arranged in rows and columns, input/output blocks surrounding the CLBs, and programmable interconnect lines that route and transmit signals between the CLBs. Each CLB includes look-up tables and other configurable circuitry that is programmable to implement a logic function. The function blocks of CPLDs, the CLBs of FPGAs, and the interconnect lines are configured by data stored in a configuration memory of the respective devices.

Designs implemented in programmable logic have become complex. Due to the time and investment required for design and debugging, it is desirable to protect the design from unauthorized copying. Efforts have been made to encrypt designs and provide the encrypted designs to the target devices. Several encryption algorithms, for example, the standard Data Encryption Standard (DES) and the more secure Advanced Encryption Standard (AES) algorithms, are known for encrypting blocks of data. Additionally, a one-time encryption pad may be used as a cipher for encrypting blocks of data by XORing blocks of data with blocks of the one-time pad (OTP). These approaches require provision of a key to the structure that decrypts the design, and the key must be protected from unauthorized discovery.

A decryption key can be stored in nonvolatile memory of a programmable integrated circuit. An encrypted bitstream can then be loaded into the IC and decrypted using the key within the programmable logic. A configuration controller circuit is included in the IC to decrypt each frame of the encrypted bitstream and program the configuration memory of the IC using the decrypted frames. In this manner, an attacker is prevented from reading the bitstream as it is being loaded into the programmable logic IC. However, this structure must also protect against modes of attack in which the attacker attempts to obtain the decryption key stored in the programmable IC. If the attacker obtains the decryption key, the attacker can decrypt an intercepted bitstream to reveal the unencrypted design.

One method through which an attacker may attempt to discover the decryption key is known as power analysis. In a power analysis attack, the current used by a device is monitored while the device is operating. During normal operation, the amount of power used by a device varies depending on the logic gates activated at a given time. By monitoring variations in the power consumption while the device is performing some operation with sensitive data, for example decrypting a configuration bitstream, the attacker can identify the operations that are performed and determine the decryption key or other sensitive data.

One or more embodiments may address one or more of the above issues.
SUMMARY

One or more embodiments provide approaches for protecting sensitive data from discovery during an operation performed with the sensitive data on input data. In one embodiment, a method can perform the operation on a first quantity of random data with the sensitive data using a circuit arrangement before performing the operation with the sensitive data on the input data using the circuit arrangement. After performing the operation with the sensitive data on the first quantity of the random data, the method can perform the operation with the sensitive data on the input data using the circuit arrangement. After performing the operation with the sensitive data on the input data, the method can perform the operation with the sensitive data on a second quantity of random data using the circuit arrangement.

In this embodiment, the method can further comprise repeating, for each of a plurality of blocks of input data, the performing of the operation on the first quantity of random data before performing the operation on the block, and the performing of the operation on the second quantity of random data after performing the operation on the block.

The method can further comprise: checking success of the operation performed on the input data with the sensitive data; and, in response to the operation on the input data with the sensitive data being successful, continuing processing of the input data.

The method can further comprise: the input data can include an encrypted data stream; the sensitive data can be a decryption key, the operation can include decrypting with the key, and the performing the operation on the input data with the sensitive data can include decrypting a block of the input data into a decrypted block; checking consistency of the decrypted block; and, in response to the decrypted block being inconsistent with an expected value, continuing decryption of the encrypted data stream using an alternative key, or continuing decryption using random data in place of the encrypted data stream.

In this embodiment, the method can further comprise: before completing the operation with the sensitive data on the input data, checking consistency between an expected data and a portion of the input data on which the operation was performed; and, in response to an inconsistency between the expected data and the portion of the input data on which the operation was performed, continuing the operation with an alternative sensitive data on the input data or continuing the operation with the sensitive data on an alternative input data.

The method can further comprise: the input data can include an encrypted data stream; the sensitive data can be a decryption key, the operation can include decrypting with the key, and the performing the operation on the input data with the sensitive data can include decrypting an encrypted block of the input data into a decrypted block; and applying a modification function to each input encrypted block of the data stream prior to decrypting the encrypted block, where the modification function can generate a block to be decrypted as a function of the input encrypted block of the data stream and previously decrypted blocks.

In this embodiment, the method can further comprise: generating a pseudo-random number; determining whether or not the input data contains the pseudo-random number; and, in response to the input data not containing the pseudo-random number, generating a signal indicating tampering is suspected.
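As a rough illustration of the masking approach just summarized, the sketch below interleaves dummy decryptions of random data before and after the decryption of each real block, drawing the dummy counts at random (this summary also notes that the first and second quantities of random data can themselves be random quantities). This is a minimal sketch, assuming a 16-byte block size and a key of at least that length; decrypt_block is a stand-in for a real cipher primitive such as one AES block decryption, not the disclosed implementation.

```python
import os
import secrets

BLOCK_SIZE = 16  # assumed block size for illustration

def decrypt_block(key: bytes, block: bytes) -> bytes:
    # Placeholder primitive: XOR with the key, standing in for AES/DES.
    # The key is assumed to be at least BLOCK_SIZE bytes long.
    return bytes(b ^ k for b, k in zip(block, key))

def masked_decrypt(key: bytes, ciphertext_blocks) -> bytes:
    """Decrypt each block, hiding it among dummy decryptions of random data."""
    plaintext = []
    for block in ciphertext_blocks:
        # First quantity of random data: 1-4 dummy decryptions, results discarded.
        for _ in range(secrets.randbelow(4) + 1):
            decrypt_block(key, os.urandom(BLOCK_SIZE))
        plaintext.append(decrypt_block(key, block))  # the real operation
        # Second quantity of random data: more dummy decryptions afterwards.
        for _ in range(secrets.randbelow(4) + 1):
            decrypt_block(key, os.urandom(BLOCK_SIZE))
    return b"".join(plaintext)
```

Because an observer of a power trace sees the same primitive executed on random data immediately before and after every real block, isolating the power signature of the operations that actually involve the sensitive key becomes substantially harder.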
The method can further comprise: the input data can include an encrypted data stream; the sensitive data can be a decryption key, the operation can include decrypting with the key, and the performing the operation on the input data with the sensitive data can include decrypting a block of the input data into a decrypted block; generating a pseudo-random number; after decrypting a first block of the data stream, determining whether or not the decrypted first block contains the pseudo-random number; and in response to the decrypted first block not containing the pseudo-random number, continuing decryption using random data instead of the encrypted data stream. In this embodiment, the method can further comprise: determining a ratio of 1 values and 0 values in a memory after the memory has transitioned from a power-off state to a power-on state; and delaying performing the operation on the first quantity of random data until the ratio reaches a threshold. The first quantity of the random data can be a random quantity. The second quantity of the random data can be a random quantity. A circuit arrangement is provided in another embodiment. The circuit arrangement can include a controller configured to provide input data, sensitive data, and random data. A processing circuit can be coupled to receive the input data, the sensitive data, and the random data. The processing circuit can be configured to perform an operation with the sensitive data on a first quantity of the random data before performing the operation on the input data with the sensitive data. The processing circuit can be further configured to perform the operation with the sensitive data on a second quantity of the random data after performing the operation with the sensitive data on the input data. In this embodiment, the processing circuit can be further configured to perform the operation on a quantity of random data with the sensitive data before and after performing the operation with the sensitive data on each block of the input data. The circuit arrangement can further comprise: the input data can include an encrypted data stream; the sensitive data can be a decryption key, the operation can include decrypting with the key, and the performing the operation on the input data with the sensitive data can include decrypting the input data into decrypted data; and a consistency check circuit can be coupled to receive the decrypted data from the processing circuit and coupled to the controller, wherein the consistency check circuit is configured to check consistency, before the processing circuit completes decryption of the input data, between a portion of the decrypted data and expected data, and generate a tampering signal indicating tampering is suspected in response to finding an inconsistency; and the controller, responsive to the tampering signal, can select an alternative key instead of the decryption key for input to the processing circuit, or can select an alternative data stream instead of the encrypted data stream, for input to the processing circuit. 
In this embodiment, a consistency check circuit can be coupled to receive the input data from the processing circuit and can be coupled to the controller, where the consistency check circuit can be configured to check consistency, before the processing circuit completes the operation with the sensitive data on the input data, between an expected data and a portion of the input data on which the operation was performed, and can generate a tampering signal indicating tampering is suspected in response to finding an inconsistency; and the controller, responsive to the tampering signal, can select an alternative sensitive data instead of the sensitive data for input to the processing circuit, or can select an alternative input data instead of the input data, for input to the processing circuit. In this embodiment, the processing circuit can be further configured to apply a modification function to each input encrypted block of the data stream prior to decrypting the encrypted block, where the modification function can generate a block to be decrypted as a function of the input encrypted block of the data stream and previously decrypted blocks. In this embodiment, the input data can include an encrypted data stream; the sensitive data can be a decryption key, the operation can include decrypting with the key, and the performing the operation on the input data with the sensitive data can include decrypting the input data into decrypted data blocks; a consistency check circuit can be coupled to receive the decrypted data from the processing circuit and can be coupled to the controller, where the consistency check circuit can be configured to generate a pseudo-random number and determine whether or not a decrypted first block of the decrypted data contains the pseudo-random number, and in response to the decrypted first block not containing the pseudo-random number, can generate a tampering signal indicating tampering is suspected; where the controller, responsive to the tampering signal, can select an alternative key instead of the decryption key for input to the processing circuit, or can provide random data instead of the encrypted data stream for input to the processing circuit. In this embodiment, the circuit arrangement can further comprise: a memory coupled to the controller; where the controller is configured to: determine a ratio of 1 values and 0 values in the memory after the memory has transitioned from a power-off state to a power-on state; and delay processing of the input data until the ratio reaches a threshold. At least one of the controller and processing circuit can be implemented in a microprocessor. An embodiment of a method for protecting a key from discovery during decryption of a data stream can comprise: decrypting the data stream with the key; before completing decryption of the data stream, checking consistency between a decrypted portion of the data stream and expected data using a circuit arrangement; and in response to an inconsistency between the decrypted portion and the expected data, generating a tampering signal that indicates tampering is suspected. In this embodiment, the method can further comprise, in response to the tampering signal indicating that tampering is suspected, continuing decryption of the data stream using one or more of an alternative key or an alternative data stream. 
The method can further comprise: determining a ratio of 1 values and 0 values in a memory after the memory has transitioned from a power-off state to a power-on state; and delaying decryption of a data stream until the ratio reaches a threshold. The data stream can include blocks that begin with a first block, and the checking for consistency can include performing a cyclic redundancy check on the first block of the data stream. In this embodiment, the checking for consistency can verify the presence of a correct password in the first block. The checking for consistency can verify the presence of one or more expected instructions in the first block. The checking for consistency can verify the presence of an expected hash value in each block of the data stream. The checking for consistency can include checking consistency of only blocks i*n + 1 of the data stream, where 0 < i < the number of blocks in the data stream, and n ≥ 2. The checking for consistency can include checking consistency of only blocks i*n + 1 of the data stream, where 0 < i < the number of blocks in the data stream, and n is a different pseudo-random number at different times during decrypting of the blocks. The alternative data stream can be a stream of pseudo-random numbers. The alternative data stream can be a stream of constant values. The method can further include modifying the key during decryption of the data stream by rotating bits of the key. The method can further comprise modifying the key during decryption of the data stream by passing the key through a linear feedback shift register. The method can further comprise modifying the key during decryption of the data stream by XORing bit values of the key with bit values from a linear feedback shift register. The method can further comprise modifying the key during decryption of the data stream by using decrypted data as a new key. The method can further comprise modifying the data stream during decryption of the data stream by rotating bits of the data stream. The method can further comprise modifying the data stream during decryption of the data stream by XORing bit values of the data stream with bit values from a linear feedback shift register. The method can further comprise applying a modification function to each input encrypted block of the data stream prior to decrypting the encrypted block, wherein the modification function generates a block to be decrypted as a function of the input encrypted block of the data stream and previously decrypted blocks. The method can further comprise: decrypting a first quantity of random data before decrypting one or more blocks of the encrypted data stream; after decrypting the first quantity of the random data, decrypting a block of the encrypted data stream; and after decrypting the block of the encrypted data stream, decrypting a second quantity of random data. 
An embodiment of a decryption system can comprise: a decryption controller configured to provide an input encrypted data stream and a decryption key; a decryptor circuit coupled to receive the input data stream and the decryption key, the decryptor circuit configured to decrypt the input data stream with the decryption key and generate a decrypted data stream; and a consistency check circuit coupled to receive the decrypted data stream from the decryptor circuit and coupled to the decryption controller, where the consistency check circuit can be configured to check consistency, before the decryptor circuit completes decryption of the input data stream, between a portion of the decrypted data stream and expected data, and can generate a tampering signal indicating tampering is suspected in response to finding an inconsistency. It will be appreciated that various other embodiments are set forth in the Detailed Description and Claims which follow. BRIEF DESCRIPTION OF THE DRAWINGS Various aspects and advantages will become apparent upon review of the following detailed description and upon reference to the drawings in which: FIG. 1 is a block diagram of a decryption circuit arrangement in accordance with an embodiment; FIG. 2 is a block diagram of a decryption circuit arrangement in accordance with another embodiment; FIG. 3 is a flowchart of an example process for decrypting data in accordance with an embodiment; FIG. 4 is a flowchart of an example process for performing decryption in accordance with an embodiment that is directed to protecting against attacks that involve large data streams having legitimate data; FIG. 5 shows an example in which the key bits have been rotated right; FIG. 6 shows an example in which bits of the key or data stream are XORed with taps from an LFSR; FIG. 7 shows a flowchart of an example process for limiting repeated trials of data streams in order to reduce exposure to iterative attacks such as trial-and-error brute force attacks or differential power analysis attacks; FIG. 8 is a flowchart of a process for hiding sensitive data while an operation is being performed on that sensitive data with a circuit arrangement; FIG. 9 is a flowchart of a process for decrypting data in accordance with an embodiment; FIG. 10 is a flowchart of a process for decrypting data in accordance with another embodiment; FIG. 11 is a block diagram of a decryption circuit arrangement in accordance with an embodiment; and FIG. 12 is a block diagram of an example programmable logic integrated circuit that may be used in implementing a decryption circuit arrangement in accordance with an embodiment. DETAILED DESCRIPTION OF THE DRAWINGS During configuration of programmable logic, the configuration bitstream data can be intercepted and used to make unauthorized copies of the design. Although the configuration bitstream can be encrypted, the decryption key or other sensitive data may be vulnerable to discovery through brute-force trial-and-error attacks or side-channel attacks such as analysis of electromagnetic radiation or power analysis. In a power analysis attack, current used by a device is monitored over time. During normal operation, the amount of power used by a device varies depending on the logic gates activated at a given time. By monitoring variations in the power consumption during the decryption process, the attacker can identify operations that are performed and determine the decryption key. 
One or more embodiments provide countermeasures that may be implemented with software or hardware to improve resistance to power analysis attacks. In a simple power analysis (SPA) attack, current used by a device is monitored over time. During normal operation, the amount of power used by a device varies depending on the logic gates activated at a given time. By monitoring variations in the power consumption, the attacker can identify different operations that are performed. For example, if a programmable IC implements DES encryption, sixteen rounds of encryption/decryption are performed on each block of data. Because similar operations are performed for each round, power consumption data can be identified for each round. Comparison of power consumption of different rounds can identify key-dependent operations and, ultimately, the key used for decryption. For example, the DES key schedule is produced by rotating 28-bit key registers. The rotations are generally implemented using a right shift operation where a zero is shifted into the most significant bit by default. If the bit of the key shifted out of the register is a one, an additional operation is needed to cause the most significant bit to be equal to one. Therefore, a different power signature will be produced for each rotation depending on the bit of the decryption key. In a differential power analysis (DPA) attack, the difference in the power consumption between decrypting two different blocks of ciphertext can be used to extract information about the key. For example, in one step in many encryption and decryption operations, the ciphertext, or a value deterministically derived from the ciphertext, is EXCLUSIVE-ORed (XOR) with the key or a subkey derived deterministically from the key. An attacker can observe the ciphertext and can watch for the difference in power consumption between those ciphertext values expected to produce a 1 output from the XOR versus those expected to produce a 0, for some assumption of the key value. The attacker may attempt a large number of executions of the decryptor by providing a large amount of data to be decrypted. The attacker guesses a key value and averages together the subset of the power traces of those executions of the decryptor that are expected to produce the same value from the XOR function if the guess was correct. If that result differs significantly from the average of all executions of the decryptor, the attacker can conclude that the guess of the key value was correct. If incorrect, the attacker assumes a different key value and averages different subsets of the power traces from the large number of trials. This attack requires a large number of trials to ensure success. As used herein, a power signature may be referred to as power fluctuations, a power consumption signature, or a power consumption waveform, and such terms are used interchangeably herein. Other encryption ciphers, including both symmetric and asymmetric ciphers, also include key-dependent operations that are susceptible to power analysis. One skilled in the art will recognize that one or more embodiments are applicable to protecting key data or other sensitive data used by a number of synchronous and asynchronous encryption and decryption algorithms such as DES, DES-3, Blowfish, RSA, DSA, etc., as well as other algorithms that merely handle decrypted sensitive data. 
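As a concrete illustration of the averaging step just described, the following is a minimal difference-of-means sketch in Python. The trace format, the single-byte XOR selection function, and every name here are illustrative assumptions, not a prescribed attack implementation.

    from statistics import mean

    def dpa_difference_of_means(traces, ciphertexts, key_guess, bit=0):
        # Partition the captured power traces by the value an intermediate
        # XOR output bit would take if the key guess were correct.
        set0, set1 = [], []
        for trace, ct in zip(traces, ciphertexts):
            predicted = ((ct[0] ^ key_guess) >> bit) & 1
            (set1 if predicted else set0).append(trace)
        # Average each partition sample-by-sample; a pronounced spike in
        # the difference suggests the key guess was correct.
        avg0 = [mean(s) for s in zip(*set0)]
        avg1 = [mean(s) for s in zip(*set1)]
        return [a1 - a0 for a0, a1 in zip(avg0, avg1)]

The countermeasures that follow are aimed at denying this test the large number of key-correlated trials it depends on.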
Throughout this description, references are made to keys or key data. Those skilled in the art will recognize that key data is one example of data that is intended to be protected. Other kinds of data fall within the scope of one or more embodiments. Also, reference is made to encryption and decryption throughout the description. Those skilled in the art will recognize that these are examples of operations performed using key data, where the key data is an example of data that must be kept secret. Though an example application and embodiment involving decryption are described, operations other than decryption fall within the scope of one or more embodiments. One or more embodiments provide protection against attempts to learn the decryption key by way of differential power analysis (DPA) while attempting to decrypt a large data stream, as well as attempts to learn the decryption key by way of repeatedly attempting to decrypt using small data streams. An example scenario in which one or more embodiments protect against attempts to discover a decryption key by way of differential power analysis involves configuration bitstreams directed to programmable integrated circuits (ICs) such as field programmable gate arrays (FPGAs). The bitstream for an FPGA is of a known length, and this amount of data is provided as input for configuring the FPGA. The length of the bitstream is indicated by a value in the bitstream. In an attempt to gather a sufficient amount of power analysis data, an attacker may specify a length value in the bitstream that is greater than the number of bits required to fully configure the target FPGA. To thwart this type of attack, one or more embodiments perform consistency checks on decrypted data near the beginning of an input bitstream. In response to detecting an inconsistency, countermeasures are taken. For example, an alternative key may be used while continuing the decryption process, thereby hiding use of the true key and making any data gathered from differential power analysis irrelevant to the true key. In another embodiment, an alternative data stream may be substituted for the attacker's data stream, resulting in the true key being hidden from differential power analysis. To protect against attempts to learn the decryption key by way of repeatedly attempting to decrypt using small data streams, an embodiment responds to a failure to configure by waiting for a selected period of time before permitting another attempt to configure with an encrypted data stream. Since thousands of failed decryption attempts would be required for an attacker to learn the key, the accumulation of the delays between attempts may make a differential power analysis attack infeasible. In another embodiment, protection is provided against repeated attacks using small data streams. In these types of attacks on a programmable integrated circuit (IC) such as a field programmable gate array (FPGA), the attacker inputs a short data stream to the device and gathers the data through power analysis while the device is operating on the input data stream. The device is then rebooted and another data stream input. The rebooting may entail cycling power to the device or another device-specific action that results in resetting the device such that the device reinitializes and processes the next data of an input data stream as the beginning of a data stream. For an FPGA, the rebooting causes the FPGA to load a configuration bitstream for configuring the programmable logic and routing resources. 
To protect against these types of attacks, decryption of the actual first block of data is obscured by first decrypting some number of blocks of random or pseudo-random data, decrypting the first block, followed by decrypting some number of blocks of random or pseudo-random data again, and then checking consistency of the decrypted data. If the decrypted data are inconsistent, the countermeasures described above may be invoked. FIG. 1 is a block diagram of a decryption circuit arrangement in accordance with an embodiment. The decryption circuit arrangement 100 uses an alternative key in response to the decryption controller finding that decrypted data fails a consistency check. The decryptor circuit arrangement includes decryptor 102 that decrypts an input data stream 104 using key 106. In one embodiment, the decrypted data stream 108 is stored in memory 110. In another embodiment, the decrypted data stream may be transmitted to other circuitry. Consistency checker 112 determines whether or not the decrypted data is consistent with expected results. The decryption controller 114 is coupled to the consistency checker 112 and controls selection of either the actual key 106 or the alternative key 116 for use by the decryptor 102. Initially, the decryption controller 114 selects the key 106 for use by the decryptor 102, and the decryptor 102 continues use of that key until the consistency check of the decrypted data fails. In response to the consistency checker finding that the decrypted data is not as expected and signaling the occurrence to the decryption controller, which indicates that tampering is suspected, the decryption controller selects the alternative key for input to the decryptor via selector 116. Further decryption of the input data stream by the decryptor is performed using the alternative key. Thus, unbeknownst to the attacker, the decryptor circuit arrangement 100 switches to use the alternative key while the attack is underway, which causes meaningless data to be gathered by the attacker. The alternative key 116 may be a pre-programmed constant value, a random number, or the output of the decryptor 102, for example. In another embodiment, the effect of an alternative key is to interfere with the key schedule used by the decryptor. For example, the scheduled key values may be set to 0's or overwritten with alternative data. In one embodiment, the consistency checker performs a cyclic redundancy check (CRC) early in the input data stream. For example, the data stream may include a CRC code in the first block of the data stream, and the consistency checker checks whether or not a CRC code computed on the first block matches the value in the block. Thus, early in the decryption process the consistency checker is able to signal whether or not a decryption attack is likely to be underway. For an attack in which a long data stream is used, the approach responds to the attack early in the data stream, thereby reducing the likelihood that an attacker will be able to discover the key with ongoing power analysis of a long data stream. In another embodiment, a CRC code or a hash value for authentication may be included in every block of the encrypted data stream. In another embodiment, the first block of the data stream may include a password to be verified by the consistency checker 112. 
If the consistency checker finds that the first block of the decrypted data stream does not include a password that matches the expected password, the consistency checker signals the failure of the consistency check to the decryption controller. In response, the decryption controller 114 selects the alternative key 116 for use by the decryptor 102. The desired password is loaded onto the device containing the arrangement 100 using a boundary scan interface, for example. For a legal data stream, the tools (e.g., electronic circuit design tools) that generate the data stream include the password in the data stream and encrypt the data stream. An alternative embodiment has the consistency checker 112 configured to check the first decrypted block of the data stream for an expected sequence of instructions. In a data stream such as a configuration bitstream directed to a programmable logic integrated circuit (IC), the first block, or the first few blocks, of the bitstream is expected to have a particular sequence of instructions for commencing decryption and configuring the programmable logic. An attacker's data stream may lack the specific sequence of instructions, and the absence of the sequence of instructions in the decrypted bitstream is detected by the consistency checker. In response to detecting that the expected sequence of instructions is not present in the decrypted data stream, the consistency checker signals the decryption controller 114 as to the failed consistency check, which indicates tampering is suspected. In response to the signal of the failed consistency check, the decryption controller selects the alternative key 116 for use by the decryptor. In another embodiment, the consistency checker 112 is configured to periodically check for consistency of the decrypted data stream. In one embodiment, the consistency checker is configured to check every nth block for consistency. In another embodiment, the first block of the decrypted data stream may indicate the value of n. The consistency check may be performed irregularly in an alternative embodiment. Rather than checking the consistency of every nth block where n remains the same throughout the decryption, performing the consistency check irregularly varies n in a pseudo-random manner during the decryption. The irregular pattern of consistency checks may be controlled by a linear feedback shift register (LFSR) or a random number generator, for example. 
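The FIG. 1 behavior can be sketched in a few lines of Python. The helper decrypt_block, the placement of a CRC-32 in the last four bytes of the first block, and the key names are assumptions made for illustration, not the patent's defined interface.

    import zlib

    def decrypt_stream(blocks, real_key, alt_key, decrypt_block):
        # Start with the real key; silently switch to a decoy key if the
        # first decrypted block fails its consistency check (FIG. 1).
        key = real_key
        out = []
        for i, block in enumerate(blocks):
            plain = decrypt_block(block, key)
            if i == 0:
                # Assume the first block carries its CRC-32 in its last four
                # bytes (one of several checks the description contemplates).
                body, stored = plain[:-4], int.from_bytes(plain[-4:], "big")
                if zlib.crc32(body) != stored:
                    key = alt_key  # keep decrypting, but hide the real key
            out.append(plain)
        return out

Decryption continues to completion either way, so the power trace gathered by an attacker reflects the decoy key rather than the protected one.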
FIG. 2 is a block diagram of a decryption circuit arrangement 130 in accordance with another embodiment. The arrangement shows the use of an alternative data stream 122 in response to the consistency checker 112 finding that decrypted data fails a consistency check. Instead of using an alternative key when there is a failure in consistency of the decrypted data stream as in the decryption circuit arrangement 100 of FIG. 1, the decryption controller 114 of decryption circuit arrangement 130 selects an alternative data stream 122, via selector 124, to be decrypted by decryptor 102. Thus, unbeknownst to the attacker, the decryptor circuit arrangement 130 switches to use the alternative data stream while the attack is underway, which causes meaningless data to be gathered by the attacker. The decryption circuit arrangement 130 may be configured according to the different embodiments of the consistency checker 112 as described above in association with FIG. 1. In one embodiment, the alternative data stream 122 may be generated by a random number generator (not shown). Alternatively, the alternative data stream may be a stream of constant values such as all 0's or alternating 1's and 0's. An alternative key and an alternative data stream may both be used in another embodiment. FIG. 3 is a flowchart of an example process for decrypting data in accordance with an embodiment. The process shows an early consistency check performed on decrypted data. In response to a failed consistency check, the decryption process continues using an alternative key instead of the real key, or using an alternative data stream instead of the encrypted data stream. The use of an alternative key or the use of an alternative data stream are alternative countermeasures. At step 152 the first block of the encrypted data stream is input, and the block is decrypted at step 154. Step 156 checks the consistency of the decrypted block using one of the alternatives described above in association with FIG. 1. If the block is not consistent, different countermeasures may be taken as described above in association with FIGs. 1 and 2. According to one embodiment, in response to detecting an inconsistency between the decrypted data and expected data, an alternative key is input at step 160 for use in continuing decryption. At step 162, the process continues decryption with the alternative key. After completing decryption with the alternative key, a waiting period may be enforced in order to delay the attacker in repeating the attack. In another embodiment, the countermeasure may be to continue decryption using alternative data. At step 166, the alternative data are input, and step 168 continues the decryption using the alternative data instead of the attacker's data stream. After completing decryption of the alternative data stream, a waiting period may be enforced in order to delay the attacker in repeating the attack. Both an alternative key and an alternative data stream may be used in combination in another embodiment. Additional countermeasures that may be taken in combination with use of an alternative key or alternative data include ceasing to store decrypted data (for example, in a configuration memory of a programmable IC), burning an e-fuse to indicate an attack was detected, and/or clearing the real decryption key. Returning now to decision step 158, if the first block of decrypted data passes the consistency check, the next block of the data stream is input at step 182 and decrypted and stored at step 184. One or more blocks of the input data stream may be checked for consistency depending on the desired implementation. If only the first block is checked for consistency, then decision step 186 directs the process to step 182 to input the next block of the encrypted data stream, since the first block was checked at step 156. In embodiments where there are multiple consistency checks during the decryption, decision step 186 determines whether or not the decrypted block should be checked. As described above, the consistency check may be performed periodically (every nth block) or irregularly (pseudo-randomly). If the decrypted block is to be checked, decision step 186 directs the process to step 156 to check the consistency as described above. 
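The periodic and irregular check scheduling described above can be sketched as follows. Drawing the gap from Python's secrets module stands in for the LFSR or random number generator a hardware implementation would use, and max_gap is an assumed example bound.

    import secrets

    def blocks_to_check(num_blocks, max_gap=7):
        # Pick which block indices receive a consistency check. Redrawing
        # the gap n each time makes the pattern unpredictable to an
        # attacker; a fixed n gives the periodic (every nth block) variant.
        checks, i = [], 0
        while True:
            i += secrets.randbelow(max_gap) + 1  # next gap n in 1..max_gap
            if i >= num_blocks:
                return checks
            checks.append(i)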
FIG. 4 is a flowchart of an example process for performing decryption in accordance with an embodiment that is directed to protecting against attacks that involve large data streams having legitimate data. A scenario of attack that is addressed by the embodiments of FIG. 4 is that of the attacker using a large data stream in which an early consistency check would not detect an inconsistency. The long data stream may afford the attacker the opportunity to gather enough data through differential power analysis to discover the key. The embodiments of FIG. 4 mask either the key or the encrypted data stream during the decryption process. One embodiment shown in FIG. 4 modifies the key during the decryption process. The alternative embodiment modifies the data during the decryption process. The processes of FIG. 4 further describe the decryption step 184 of FIG. 3. One embodiment in FIG. 4 illustrates how the decryption may be obscured from differential power analysis attacks by modifying the key during decryption. The other embodiment in FIG. 4 shows how the decryption may be obscured by modifying the data during decryption. According to one embodiment, the key is changed after every n blocks have been decrypted. If it is time to change the key, decision step 188 directs the process to step 190 to modify the key. In one embodiment, the key bits may be rotated such as in a circular shift register. Alternative modifications include passing the key value through an LFSR, XORing the bits of the key with an LFSR value, or using the decrypted data as the new key. At step 192, the block of data is decrypted and stored. It will be recognized that, while not shown, a count of blocks decrypted with a current key is maintained and reset when the key is modified. In an alternative embodiment, the encrypted data stream may be changed after decrypting every n blocks. If it is time to modify the encrypted data stream, decision step 194 directs the process to step 196 where the data stream is modified. In one embodiment, an input block of the data stream is rotated. Alternatively, the block of the data stream may be modified by XORing the bits of the block with bits of an LFSR. In yet another embodiment, each block of the encrypted data stream may be modified by XORing that block with some other previously decrypted block of the data stream. For example, each of encrypted blocks n+1 through n+1000 is XORed with decrypted block n. For decrypting blocks n+1001 through n+2000, those encrypted blocks n+1001 through n+2000 are XORed with decrypted block n+1000. This process is extended to the remaining blocks of the data stream. At step 198, the block of data is decrypted and stored. It will be recognized that, while not shown, a count may be maintained of the blocks decrypted since the last modification, and the count is reset when the last block to be modified with a particular decrypted block has been modified. To support the embodiments of FIG. 4, the encrypted data stream must have been constructed to accommodate the modifications to the key or encrypted data that occur during the decryption process. FIG. 5 shows an example in which the key bits have been rotated right by 11 bits. For ease of illustration, the key is shown with only 32 bits. The value from bit 0 is rotated to bit 11, the value from bit 1 is rotated to bit 12, the value from bit 2 is rotated to bit 13, the value from bit 21 is rotated to bit 0, and the value from bit 31 is rotated to bit 10. The bits of the key may be rotated by a different number of bits and/or rotated left according to implementation requirements. FIG. 6 shows an example in which bits of the key or data stream 252 are XORed with taps from an LFSR 254. 
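A minimal Python sketch of the FIG. 5 rotation and the FIG. 6 masking follows. The 32-bit width matches the illustration, while the tap mask and the bit-numbering convention (position i moving to position (i + n) mod 32, as FIG. 5 describes) are assumptions for illustration.

    def rotate_key(key, n=11, width=32):
        # FIG. 5 mapping: the bit at position i moves to position
        # (i + n) % width, e.g. bit 21 -> bit 0 and bit 31 -> bit 10.
        mask = (1 << width) - 1
        return ((key << n) | (key >> (width - n))) & mask

    def lfsr_step(state, taps=0x80200003, width=32):
        # One step of a Fibonacci LFSR: XOR the tapped bits together to
        # form the feedback bit, shift, and insert the feedback at the top.
        # The tap mask is an arbitrary example; seed with a nonzero state.
        feedback = bin(state & taps).count("1") & 1
        return ((state >> 1) | (feedback << (width - 1))) & ((1 << width) - 1)

    def mask_word(word, state):
        # FIG. 6 masking: XOR key or data bits with the LFSR output,
        # then advance the LFSR for the next block.
        return word ^ state, lfsr_step(state)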
Other example embodiments, which are not illustrated, include passing the key through the LFSR (the key bits are taps off the LFSR). An example LFSR is described in U.S. Patent 6,181,164 to Miller. FIG. 7 shows a flowchart of an example process for limiting repeated trials of data streams in order to reduce exposure to iterative attacks such as trial-and-error brute force attacks or differential power analysis attacks. In the process of FIG. 7, an attempted attack is detected at power up of a device by way of examining the number of bits having a value 1 versus the number of bits having a value 0 in a memory. If there is a sufficient proportion of bits with value 0, then fast power cycling of the device is less likely to have occurred, and the bits of the memory are all set to the value 1 in order to put the contents of the memory in a state suitable for detecting a subsequent fast cycling of power. If there is not a sufficient proportion of bits with value 0, then fast power cycling of the device is likely to have occurred, and countermeasures may be taken. In one embodiment, the decryption controller (FIGs. 1 and 2, #114) includes a volatile memory dedicated for use in identifying fast cycling of power to the device. The process shows four example alternatives for responding to a detected attack. At step 302, the process counts the number of bits with value 1 and the number of bits of value 0 in the memory. If a sufficient number of the bits have the value 0, then the device is presumed to have been without power for a period of time that would not imply that a DPA is underway. The process proceeds to step 305, where all the bits of the memory are set to the value 1. At step 306, the decryption and configuration process continues. In response to the memory not having a sufficient number of bits with the value 0, countermeasures may be taken to slow down the cycling of power and repeated attempts at DPA or brute-force attacks. It will be appreciated that the particular proportion of 0 values to 1 values that would trigger countermeasures may vary between different memories. Specifically, once power is removed, the rate at which bits of one memory revert to 0 values may be greater than or less than the rate at which the bits of another memory revert to 0 values. If the bits of a memory are slow to revert to 0 values after power is removed, a lesser number of 0 values would be desired to trigger the countermeasures. In contrast, if the bits of a memory are fast to revert to 0 values after power is removed, a greater number of 0 values would be desired to trigger the countermeasures in order to ensure a sufficient period of time has passed before allowing the next decryption and configuration attempt. In one embodiment, one countermeasure is to report the failed configuration attempt and halt the configuration process, as shown by step 308. An alternative countermeasure is to continue decrypting using an alternative key and/or alternative data, as shown in step 310 and described above. Step 312 shows another countermeasure in which a new decryption attempt is not permitted until a prescribed period of time has passed. Instead of waiting for a prescribed period of time, another countermeasure simply returns the process to decision step 302 to once again check the number of 0 values in the memory. 
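A sketch of the FIG. 7 check in Python follows; the memory is modeled as a list of bits, and the decision threshold is an assumed example value, since the required proportion is memory-specific as noted above.

    def fast_power_cycle_suspected(memory_bits, zero_fraction=0.8):
        # After power-up, count how many bits of the dedicated volatile
        # memory have decayed to 0. Too few zeros means power was cycled
        # quickly, so a repeated-trial attack may be underway (step 302).
        zeros = memory_bits.count(0)
        return zeros < zero_fraction * len(memory_bits)

    def arm_detector(memory_bits):
        # On a clean start, set every bit to 1 so that a subsequent fast
        # power cycle leaves a detectable remanence pattern (step 305).
        return [1] * len(memory_bits)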
FIG. 8 is a flowchart of a process for hiding sensitive data while an operation is being performed on that sensitive data with a circuit arrangement. The embodiment of FIG. 8 may be applied to checking a password, for example, while the embodiment shown in FIG. 9 is directed to decryption operations. The password is the sensitive data being protected in FIG. 8, and the decryption key is the sensitive data being protected in FIG. 9. In the embodiment of FIG. 8, the operation is hidden by performing the operation on a quantity of random or pseudo-random data, then performing the operation on the input block, and then performing the operation on some additional quantity of random or pseudo-random data. In this embodiment, "random data" will be used to refer to implementations using random data and to implementations using pseudo-random data. Data are input at step 322, and a first quantity (n blocks) of random data is generated (or provided as input if previously stored) at step 324. The input data may contain only the password or the password in combination with additional application data. At step 326, the operation is performed on the first quantity of random data, and after that the operation is performed on the input data at step 328. In an example application and embodiment, the operation is checking whether the password in the input data is correct. The correct password is the sensitive data, and the operation involves comparing the correct password to the password in the input data. An additional quantity (m blocks) of random data is generated (or provided as input if previously stored) at step 330, and the operation is performed on those m blocks of random data at step 332. In one embodiment, n and m may be constant values. Alternatively, n and m may be pseudo-random values. Step 334 checks whether the operation on the input block was successful. For example, the success may be indicated by a password check as performed at step 328. If the operation was not successful (e.g., the password was incorrect), decision step 334 directs the process to step 336 where countermeasures (previously described) are taken in continuing to perform operations on additional input data. If the operation on the input data was successful, the process continues according to application requirements. 
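The FIG. 8 hiding pattern can be sketched in a few lines of Python; the operation callable, the block length, and the bound on the random quantities are illustrative assumptions. FIG. 9, below, applies the same pattern with decryption as the operation.

    import secrets

    def hidden_operation(operation, input_block, n=None, m=None, block_len=16):
        # Bury the real operation between the same operation applied to
        # random data, so its power signature is harder to isolate.
        # n and m may be fixed or, as here by default, random quantities.
        n = n if n is not None else secrets.randbelow(8) + 1
        m = m if m is not None else secrets.randbelow(8) + 1
        for _ in range(n):                        # decoy work before
            operation(secrets.token_bytes(block_len))
        result = operation(input_block)           # the protected operation
        for _ in range(m):                        # decoy work after
            operation(secrets.token_bytes(block_len))
        return result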
FIG. 9 is a flowchart of a process for decrypting data in accordance with an embodiment. The process of FIG. 9 may be used alone or in combination with the process of FIG. 3. One embodiment of FIG. 9 hides the decryption for the first block of the data stream. Another embodiment shown in FIG. 9 hides the decryption for the other blocks of the data stream as well. The decryption is hidden by decrypting a quantity of random or pseudo-random data prior to decrypting the first block, then decrypting the block, and then decrypting some additional quantity of random or pseudo-random data. In this embodiment, "random data" will be used to refer to implementations using random data and to implementations using pseudo-random data. The consistency of the decrypted data is checked after decrypting the random data. Another embodiment shown in FIG. 9 requires that the first block of the data stream contains a value that matches a pseudo-random number generated by the decrypting device, or some other data consistency check, such as a checksum or message authentication code (MAC). The first block of the encrypted data stream is input at step 352, and n blocks of pseudo-random data are generated at step 354. At step 356, the n blocks of pseudo-random data are decrypted, and after that decryption, the input block of the data stream is decrypted at step 358. An additional m blocks of pseudo-random data are generated at step 360, and those m blocks of pseudo-random data are decrypted at step 362. In one embodiment, n and m may be constant values. Alternatively, n and m may be pseudo-random values. Step 364 checks the consistency of the decrypted block. If the decrypted block is not consistent, decision step 366 directs the process to step 368 where countermeasures (previously described) are taken in continuing decryption. If the decrypted block is consistent, one alternative is to continue decryption of the remaining blocks of encrypted data as shown by step 370. If the attacker knows the correct first block and inserts his data in the second block, this attack may be addressed by repeating the process of decrypting pseudo-random data both before and after decrypting all input blocks, as shown by steps 372 and 354-362. In another alternative embodiment that addresses the scenario in which the attacker knows the correct first block and inserts his data in the second block, the decrypting circuit may generate a pseudo-random number and require that the first block include an encrypted version of the pseudo-random number. The process flow from step 362 to step 372 illustrates this embodiment. If the decrypted block is the first block, decision step 372 directs the process to step 374 where a pseudo-random number is generated. If the generated pseudo-random number matches the number present in the first block, decision step 376 directs the process back to step 364. Otherwise, the process is directed to step 368 for continuing decryption while taking countermeasures. FIG. 10 is a flowchart of a process for decrypting data in accordance with another embodiment. The embodiment of FIG. 10 modifies the process of FIG. 9 by employing a method similar to cipher block chaining, though applied to ciphertext, in combination with hiding the decryption of an input block amongst decryption of pseudo-random data. The embodiment of FIG. 10 addresses the scenario in which the attacker knows the correct first block and inserts his data in the second block. Generally, in the embodiment of FIG. 10, the encrypted value of each block is dependent on the decrypted value of a previous block. During encryption, each block of ciphertext is XORed with a preceding block of plaintext, and the result is the encrypted block of the data stream. For decryption, each block of ciphertext is XORed with a previous block of plaintext, and that resulting block is decrypted. At step 402, block x of the encrypted data stream is input. As described in the embodiment of FIG. 9, steps 404 and 406 generate n blocks of pseudo-random data and then decrypt that pseudo-random data. At step 408, the encrypted block x of the data stream is XORed with a previously decrypted block, for example block (x-1) of the data stream. Then at step 410, the resulting block from step 408 is decrypted. The decrypted block is saved for the XOR operation in the next iteration. Steps 412 and 414 generate and decrypt m blocks of pseudo-random data as explained in the description of FIG. 9. Step 416 checks the consistency of the decrypted block. If the block is consistent, decision step 418 directs the process to step 402 to input the next block of encrypted data. Otherwise, decryption continues while taking one or more of the previously described countermeasures at step 420. 
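A Python sketch of the FIG. 10 chaining follows (omitting the surrounding pseudo-random decoy decryptions shown earlier); decrypt_block and the initialization value iv are assumed placeholders for the device's block-decryption primitive and the first XOR input.

    def decrypt_chained(blocks, key, iv, decrypt_block):
        # Each ciphertext block is XORed with the previous *decrypted*
        # block before decryption (steps 408-410), so splicing a known
        # good first block onto chosen data yields garbage thereafter.
        previous_plain = iv
        out = []
        for ct in blocks:
            mixed = bytes(a ^ b for a, b in zip(ct, previous_plain))
            plain = decrypt_block(mixed, key)
            out.append(plain)
            previous_plain = plain  # saved for the next iteration
        return out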
FIG. 11 is a block diagram of a decryption circuit arrangement in accordance with an embodiment. The decryption circuit arrangement 500 of FIG. 11 is an example implementation of the embodiments described in FIGs. 9 and 10. The decryption circuit arrangement 500 includes decryptor 502, consistency checker 110, decryption controller 504, and memory 108. The consistency checker 110 functions as described in previous embodiments. The decryption circuit arrangement 500 further includes random number generator 510. Decryption controller 504 controls the input of either the encrypted data stream 106, pseudo-random numbers from random number generator 510, or an alternative data stream 122 to the decryptor 502 via selector 512. In one embodiment, the decryptor inputs a block of the encrypted data stream followed by n pseudo-random numbers from the random number generator. Before decrypting the block of the encrypted data stream, the decryptor decrypts the n pseudo-random numbers. Then the block from the encrypted data stream is decrypted, and the decryption controller inputs m additional pseudo-random numbers to the decryptor. The decryptor then decrypts the m additional pseudo-random numbers. The consistency checker 110 checks the consistency of the decrypted block and signals the decryption controller as to whether the decrypted block is consistent or inconsistent. If the block is inconsistent, the decryption controller takes countermeasures by selecting the alternative key 114, the alternative data stream 122, or both for input to the decryptor. The decryptor then continues decryption with the selected inputs. If the decrypted block is consistent, the decryption controller selects another block from the encrypted data stream and the key 102 for input to the decryptor. The decryption controller 504, decryptor 502, and consistency checker 110 may be alternatively configured according to the embodiments shown in FIGs. 9 and 10. In one embodiment, only the first block of the encrypted data stream has pseudo-random data decrypted before and after decrypting of the first block. Alternatively, the decryption controller may select pseudo-random data for the decryptor to decrypt around each block of the encrypted data stream. In another embodiment, the consistency checker may check for the presence of a pseudo-random number in the first block of the encrypted bitstream. The decryptor may be configured to XOR each input encrypted block of the data stream with the previous decrypted block of the data stream in another embodiment. FIG. 12 is a block diagram of an example programmable logic integrated circuit that may be used in implementing a decryption circuit arrangement in accordance with an embodiment. The decryption circuit arrangement and other processes, as previously described, may be implemented on the programmable logic and interconnect resources of a programmable IC or as a hard-wired circuit on the programmable IC. Those skilled in the art will recognize that in alternative embodiments, programmed processors may be used to implement the processes described herein, and the embodiments are not limited to FPGA configuration bitstreams. FPGAs can include several different types of programmable logic blocks in the array. 
For example, FIG. 12 illustrates an FPGA architecture (600) that includes a large number of different programmable tiles including multi-gigabit transceivers (MGTs 601), configurable logic blocks (CLBs 602), random access memory blocks (BRAMs 603), input/output blocks (IOBs 604), configuration and clocking logic (CONFIG/CLOCKS 605), digital signal processing blocks (DSPs 606), specialized input/output blocks (I/O 607), e.g., clock ports, and other programmable logic 608 such as digital clock managers, analog-to-digital converters, system monitoring logic, and so forth. Some FPGAs also include dedicated processor blocks (PROC 610) and internal and external reconfiguration ports (not shown). In some FPGAs, each programmable tile includes a programmable interconnect element (INT 611) having standardized connections to and from a corresponding interconnect element in each adjacent tile. Therefore, the programmable interconnect elements taken together implement the programmable interconnect structure for the illustrated FPGA. The programmable interconnect element INT 611 also includes the connections to and from the programmable logic element within the same tile, as shown by the examples included at the top of FIG. 12. For example, a CLB 602 can include a configurable logic element CLE 612 that can be programmed to implement user logic plus a single programmable interconnect element INT 611. A BRAM 603 can include a BRAM logic element (BRL 613) in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured embodiment, a BRAM tile has the same height as four CLBs, but other numbers (e.g., five) can also be used. A DSP tile 606 can include a DSP logic element (DSPL 614) in addition to an appropriate number of programmable interconnect elements. An IOB 604 can include, for example, two instances of an input/output logic element (IOL 615) in addition to one instance of the programmable interconnect element INT 611. As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 615 are manufactured using metal layered above the various illustrated logic blocks, and typically are not confined to the area of the input/output logic element 615. In the pictured embodiment, a columnar area near the center of the die (shown shaded in FIG. 12) is used for configuration, clock, and other control logic. Horizontal areas 609 extending from this column are used to distribute the clocks and configuration signals across the breadth of the FPGA. Some FPGAs utilizing the architecture illustrated in FIG. 12 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic. For example, the processor block PROC 610 shown in FIG. 12 spans several columns of CLBs and BRAMs. Note that FIG. 12 is intended to illustrate only an exemplary FPGA architecture. The numbers of logic blocks in a column, the relative widths of the columns, the number and order of columns, the types of logic blocks included in the columns, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top of FIG. 12 are purely exemplary. For example, in an actual FPGA more than one adjacent column of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic. 
In combination with or as an alternative to the circuit arrangements described above, the processes described herein may be implemented by a programmed microprocessor or an arrangement of two or more microprocessors. One or more embodiments are thought to be applicable to a variety of systems for protecting decryption keys and other data intended to be kept secret. Other aspects and embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and illustrated embodiments be considered as examples only, with a true scope and spirit of the invention being indicated by the following claims.
A computer subassembly is provided. The provision of a bulkhead component and bulkhead connectors allows for storage drives to be replaced by simply removing a S-ATA storage drive and inserting another storage drive in its place. Furthermore, the bulkhead component requires no electric interconnection, as may be the case when an electronic backplane is used.
CLAIMS What is claimed: 1. A computer subassembly, comprising: a frame; supports secured to the frame for inserting a plurality of information storage drives; a bulkhead component mounted to the frame; a plurality of bulkhead connectors mounted on the bulkhead component, each for engaging with a respective drive connector on a respective storage drive when the respective storage drive is inserted; an electronic board; a plurality of board connectors on the board, each board connector being connected to a respective bulkhead connector. 2. The computer subassembly of claim 1, wherein the bulkhead connectors and the board connectors are serial ATA connectors. 3. The computer subassembly of claim 2, further comprising: a plurality of flexible signal cables interconnecting a respective board connector with a respective bulkhead connector. 4. The computer subassembly of claim 3, wherein the flexible cables are serial ATA cables. 5. The computer subassembly of claim 1, wherein the bulkhead component has a plurality of mounting openings therein and each bulkhead connector forms part of a bulkhead connector assembly, each bulkhead connector assembly further including at least one formation on the bulkhead connector thereof, the formation having a retaining opening therein, the computer subassembly further comprising a plurality of fasteners, each fastener being inserted through a respective mounting opening and a respective retaining opening to mount a respective bulkhead connector assembly to the bulkhead component. 6. The computer subassembly of claim 5, wherein each bulkhead connector assembly includes at least two of said formations on opposing sides of the bulkhead connector thereof. 7. The computer subassembly of claim 1, further comprising: a plurality of information storage drives held by the supports; and a plurality of drive connectors, each on a respective storage drive and each mating with a respective bulkhead connector. 8. The computer subassembly of claim 1, further comprising: a computer processor; and a memory, the board connectors being coupled to the computer processor and the memory. 9. A computer subassembly, comprising: a frame; a bulkhead component, mounted to the frame, having a plurality of mounting openings; a plurality of bulkhead connector assemblies, each including a bulkhead connector and a formation, on the bulkhead connector, defining a retaining opening; a plurality of fasteners, each fastener being inserted through a respective mounting opening and a respective retaining opening to mount the respective bulkhead connector assembly to the bulkhead component; supports on the frame for inserting a plurality of information storage drives, a respective drive connector on each respective storage drive mating with a respective bulkhead connector. 10. The computer subassembly of claim 9, further comprising: an electronic board; a plurality of board connectors on the board, each board connector being connected to a respective bulkhead connector. 11. The computer subassembly of claim 9, further comprising: a plurality of flexible signal cables interconnecting a respective board connector with a respective bulkhead connector. 12. The computer subassembly of claim 9, further comprising: a plurality of information storage drives held by the supports; and a plurality of drive connectors, each on a respective storage drive and each mating with a respective bulkhead connector. 
13. A computer subassembly, comprising: a frame; a bulkhead component mounted to the frame; a plurality of serial ATA bulkhead connectors mounted on the bulkhead component; supports secured to the frame; a plurality of serial ATA storage drives, each inserted on a respective support; a plurality of serial ATA drive connectors, each mounted to a respective serial ATA storage drive and each being connected to a respective serial ATA bulkhead connector due to insertion of the respective storage drive; an electronic board; a plurality of serial ATA board connectors on the board, each serial ATA board connector being individually connected to a respective serial ATA bulkhead connector. 14. The computer subassembly of claim 13, further comprising: a plurality of flexible signal cables interconnecting a respective board connector with a respective bulkhead connector. 15. The computer subassembly of claim 13, wherein the bulkhead component has a plurality of mounting openings therein and each serial ATA bulkhead connector forms part of a bulkhead connector assembly, each bulkhead connector assembly further including at least one formation on the bulkhead connector thereof, the formation having a retaining opening therein, the computer subassembly further comprising a plurality of fasteners, each fastener being inserted through a respective mounting opening and a respective retaining opening to mount a respective bulkhead connector assembly to the bulkhead component. 16. The computer subassembly of claim 15, wherein each bulkhead connector assembly includes at least two of said formations on opposing sides of the serial ATA bulkhead connector thereof. 17. A method of constructing a computer subassembly, comprising: mounting a plurality of bulkhead connectors to a bulkhead component; connecting each bulkhead connector individually to a respective board connector on an electronic board; and inserting a plurality of information storage drives on supports, a respective drive connector on each storage drive mating with a respective bulkhead connector on the bulkhead component. 18. The method of claim 17, wherein the bulkhead connectors are mounted to the bulkhead component by inserting fasteners through mounting and retaining openings respectively in the bulkhead component and in formations on each bulkhead connector. 19. The method of claim 17, wherein the bulkhead connectors and the board connectors are serial ATA connectors and the storage drives are serial ATA storage drives.
A COMPUTER ASSEMBLY HAVING A BULKHEAD COMPONENT AND BULKHEAD CONNECTORS FOR MATING WITH STORAGE DRIVE CONNECTORS

BACKGROUND OF THE INVENTION

1). Field of the Invention

[0001] This invention relates generally to a computer assembly, and more specifically to the manner in which information storage drives of the computer assembly connect to corresponding connectors for purposes of providing logic communication between the storage drives and an electronic board, and for purposes of providing power to the storage drives.

2). Discussion of Related Art

[0002] Information storage drives, such as hard disk drives, CD drives, etc., of a computer are usually connected to an electronic board, often a motherboard, for purposes of providing logic communication between the storage drives and the electronic board.

[0003] According to one conventional approach, a cable is connected between an electronic board and a first storage drive, and the storage drive is subsequently mounted on a support secured to a frame. Another cable is then connected between the first storage drive and a second storage drive, and the second storage drive is mounted to a support connected to the frame. The process may be repeated by connecting another cable between the second storage drive and a third storage drive, and mounting the third storage drive to another support. Whenever a storage drive has to be replaced, the entire system has to be switched off, the computer has to be opened, and cables connected to the storage drive that has to be replaced have to be disconnected from behind the storage drive, whereafter the storage drive can be removed, another storage drive can be installed in its place, and the system can be switched on.

[0004] A computer system utilizing storage drives generally known as SCSI storage drives allows for replacement of storage drives without switching the system off, and also for replacement of storage drives without opening the computer. In such a system, a controller located on a backplane turns power to a specific storage drive off, and illuminates an LED to indicate which storage drive can be removed. The backplane is mounted to a frame and has a plurality of backplane connectors thereon that are connected to one another in series. Because the backplane and the backplane connectors are in a fixed relationship relative to the frame, SCSI storage drives can be inserted on supports, and drive connectors thereon can engage with the backplane connectors. An SCSI storage drive can be replaced by simply removing the SCSI storage drive from the front, without opening the computer system, and then locating another SCSI storage drive in its place. An SCSI system thus allows for "hot swapping" of storage drives. The backplane is connected to an electronic board through a ribbon cable, and a connector on the backplane connected to the ribbon cable is connected to a first of the backplane connectors. Subsequent ones of the backplane connectors are connected to one another in series.

[0005] Serial ATA (S-ATA) is a new technology that is lower in cost than SCSI technology, with similar performance. An S-ATA system has a plurality of S-ATA storage drives that are individually connected through separate cables to separate connectors on an electronic board. An S-ATA system does allow for replacement of S-ATA storage drives without switching the entire system off.
However, an S-ATA storage drive does not have a backplane, such as in an SCSI system, so that the computer system has to be opened in order to connect/disconnect cables to/from the S-ATA storage drives.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The invention is described by way of example with reference to the accompanying drawings, wherein:

[0007] Figure 1 is an elevational side view of a computer subassembly, according to an embodiment of the invention;

[0008] Figure 2 is an end view of a bulkhead connector assembly that is used in the computer subassembly of Figure 1;

[0009] Figure 3 is a perspective view of the bulkhead connector assembly;

[0010] Figure 4 is a top plan view of the bulkhead connector assembly when mounted to a bulkhead component of the computer subassembly; and

[0011] Figure 5 is a block diagram illustrating further components of a computer system which includes the subassembly of Figure 1.

DETAILED DESCRIPTION OF THE INVENTION

[0012] Figure 1 of the accompanying drawings illustrates a computer subassembly 10 according to an embodiment of the invention. The computer subassembly 10 includes structural components such as a frame 12, a plurality of storage drive supports 14, and a bulkhead component 16. The computer subassembly 10 also includes an electronic board 18 and a plurality of information storage drives in the form of S-ATA storage drives 20. The computer subassembly 10 further includes electronic interconnection components, including a plurality of drive connectors 22, a plurality of bulkhead connectors 24, a plurality of board connectors 26, a plurality of signal connectors 30, and a plurality of flexible signal cables 34. The computer subassembly 10 further includes power components, including a power supply 36, a plurality of power cables 38, a plurality of first power connectors 40, a plurality of second power connectors 32, and a plurality of flexible power lines 28. The provision of the bulkhead component 16 and the bulkhead connectors 24 allows the S-ATA storage drives 20 to be replaced by simply removing an S-ATA storage drive 20 from left to right and inserting another S-ATA storage drive from the right to the left in its place. Furthermore, the bulkhead component 16 requires no electric interconnection, as may be the case when an electronic backplane is used. Each S-ATA storage drive 20 is individually connected to the electronic board 18 as mandated by the S-ATA protocol.

[0013] The bulkhead component 16 is mounted to the frame 12 in a vertical orientation. The bulkhead component 16 has a plurality of connector openings 42 formed therein at spaced vertical locations. The storage drive supports 14 are also secured to the frame 12, either directly or indirectly through the bulkhead component 16. The storage drive supports 14 are thus placed in a fixed relationship relative to the bulkhead component 16. The storage drive supports 14 are illustratively represented as a plurality of horizontal shelves that are located above one another, although it should be understood that each storage drive support 14 may, for example, include a lower shelf, side walls, and an upper panel which jointly define a housing or a slot with a profile corresponding to a profile of one of the S-ATA storage drives 20, or may be in the form of rails on opposing sides of each S-ATA storage drive 20.
The storage drive supports 14 are all located on the right of the bulkhead component 16, and are spaced from one another at the same spacing as the connector openings 42.

[0014] The bulkhead connectors 24 are S-ATA connectors that are inserted partially through the connector openings 42 and mounted to the bulkhead component 16. Each bulkhead connector 24 has a bulkhead connector body 44 and first bulkhead connector terminals 46, with respective flexible signal cables 34 and flexible power lines 28 extending therefrom. The bulkhead connector terminals 46 are located in or on the bulkhead connector body 44, and are connected to the flexible signal cables 34 or flexible power lines 28. The bulkhead connector terminals 46 are exposed to the right of the bulkhead component 16, and the flexible signal cables 34 and flexible power lines 28 extend to the left of the bulkhead component 16.

[0015] As required by the S-ATA protocol, each flexible signal cable 34 and a respective signal connector 30 interconnects a respective one of the board connectors 26 individually to a respective one of the bulkhead connectors 24, with the signal connector 30 mating with a respective board connector 26.

[0016] One of the power cables, power cable 38A, connects the power supply 36 to the first power connector 40A illustrated at the top. The first power connector 40A at the top also mates with the second power connector 32A at the top. The second power connector 32A at the top is connected through the flexible power line 28A at the top to the bulkhead connector 24A at the top. As such, the bulkhead connector 24A at the top is connected to both the power supply 36 and the electronic board 18.

[0017] Another one of the power cables, power cable 38B, connects the first power connector 40A at the top with the first power connector 40B second from the top. The first power connector 40B second from the top mates with the second power connector 32B second from the top, which is connected through the flexible power line 28B second from the top to the bulkhead connector 24B second from the top. Further ones of the power cables 38 connect subsequent ones of the first power connectors 40 to one another and to further ones of the bulkhead connectors 24.

[0018] It can thus be seen that the bulkhead connectors 24 are all connected to be in communication with the electronic board 18, are all connected to the power supply 36, and are all mounted in a fixed relationship relative to the bulkhead component 16, the frame 12, and the storage drive supports 14.

[0019] The S-ATA storage drives 20 may, for example, be hard drives, CD-ROM drives, etc. Each S-ATA storage drive 20 has a respective drive connector 22 on a left side thereof.

[0020] With the bulkhead connectors 24 in place, the S-ATA storage drives 20 can be connected thereto. Each S-ATA storage drive 20 is positioned on a respective one of the storage drive supports 14 and moved to the left until the drive connector 22 thereon mates with the first bulkhead connector terminals 46 on a respective bulkhead connector 24. The respective S-ATA storage drive 20 is then connected through the drive connector 22 thereon, one bulkhead connector 24, one of the flexible signal cables 34, and one of the signal connectors 30 to a respective one of the board connectors 26 on the electronic board 18.
The respective S-ATA storage drive 20 is also connected through the drive connector 22 thereon, the same bulkhead connector 24, one of the second power connectors 32, one or more of the power connectors 40, and one or more of the power cables 38 to the power supply 36.

[0021] It may be required from time to time to replace one of the S-ATA storage drives 20. The S-ATA storage drive 20 can be moved to the right so that the drive connector 22 thereon disengages from the bulkhead connector 24. The S-ATA storage drive 20 can then be removed from the storage drive support 14, and another S-ATA storage drive (not shown) can be located in place of the removed S-ATA storage drive on the storage drive support 14 and connected to the same bulkhead connector 24. The S-ATA protocol allows for hot-swapping of S-ATA storage drives, i.e., while the computer system is switched on, and the structural support provided by the bulkhead component 16 allows for replacement of S-ATA storage drives by inserting new S-ATA storage drives from the right, i.e., without the need for an operator to open the casing and connect the S-ATA storage drive from the left.

[0022] As illustrated in Figures 2 to 4, each bulkhead connector 24 forms part of a bulkhead connector assembly 54, which further includes two mounting formations 56 on opposing sides of the bulkhead connector body 44. One side of the bulkhead connector is designated for power and the other side for signals. Each mounting formation 56 has a respective retaining opening 58 formed therein.

[0023] With specific reference to Figure 4, the bulkhead component 16 has a plurality of mounting openings 60 formed therein. Each retaining opening 58 is aligned with a respective mounting opening 60 after the respective bulkhead connector 24 is inserted into a respective connector opening 42. A fastener can then be used to secure the bulkhead connector 24 to the bulkhead component 16. For example, a shaft 62 of a bolt 64 can be inserted through the aligned retaining and mounting openings 58 and 60, with a head 66 on one side thereof, and a nut 68 can be placed on the shaft 62, with the mounting formation 56 and the bulkhead component 16 between the head 66 and the nut 68, to secure the mounting formation 56 to the bulkhead component 16.

[0024] Figure 5 illustrates further components of a computer system 140 which includes the computer subassembly 10 of Figure 1. The computer system 140 includes a processor 150, memory 155, and input/output capability 160 coupled to a system bus 165. The memory 155 is configured to store instructions which, when executed by the processor 150, perform the methods described herein. The memory 155 may also store the input and currently edited video content. Input/output 160 provides for the delivery and display of the video contents or portions or representations thereof. Input/output 160 also encompasses various types of computer-readable media, including the S-ATA storage drives, that are accessible by the processor 150. One of skill in the art will immediately recognize that the term "computer-readable medium/media" further encompasses a carrier wave that encodes a data signal. Input/output and related media 160 store the computer-executable instructions for the operating system and methods of the present invention as well as the video content.
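The replacement sequence of paragraphs [0020] and [0021] can be summarized in a short sketch: the bulkhead connectors stay fixed and wired, and only the drive moves. This is a minimal illustrative model only, not anything defined by the patent; all class and method names here are hypothetical.

```python
class BulkheadConnector:
    """A connector fixed to the bulkhead, wired once to a board connector and power."""
    def __init__(self, position: int):
        self.position = position   # vertical slot, matching a connector opening 42
        self.drive = None          # currently mated storage drive, if any

class Subassembly:
    def __init__(self, slots: int):
        self.connectors = [BulkheadConnector(i) for i in range(slots)]

    def insert_drive(self, slot: int, drive: str) -> None:
        # Sliding a drive along its support mates its drive connector with the
        # fixed bulkhead connector; no cabling work is needed.
        if self.connectors[slot].drive is not None:
            raise ValueError("slot occupied; remove the old drive first")
        self.connectors[slot].drive = drive

    def remove_drive(self, slot: int) -> str:
        # Hot-swap: the drive disengages while board and power wiring stay in place.
        drive, self.connectors[slot].drive = self.connectors[slot].drive, None
        return drive

# Example: replace the drive in slot 2 without touching any cables.
rack = Subassembly(slots=4)
rack.insert_drive(2, "S-ATA drive A")
rack.remove_drive(2)
rack.insert_drive(2, "S-ATA drive B")
```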
[0025] The description of Figure 5 is intended to provide an overview of computer hardware and other operating components suitable for implementing the invention, but is not intended to limit the applicable environments. It will be appreciated that the computer system 140 is one example of many possible computer systems which have different architectures. A typical computer system will usually include at least a processor, memory, and a bus coupling the memory to the processor. One of skill in the art will immediately appreciate that the invention can be practiced with other computer system configurations, including multiprocessor systems, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.

[0026] While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the current invention, and that this invention is not restricted to the specific constructions and arrangements shown and described, since modifications may occur to those ordinarily skilled in the art.
Methods, systems, and devices for bias control for a memory device are described. A memory system may store an indication of whether data is coherent. In some examples, the indication may be stored as metadata, where a first value indicates that the data is not coherent and a second value or a third value indicates that the data is coherent. When a processing unit or other component of the memory system processes a command to access data, the memory system may operate according to a device bias mode when the indication is the first value, and according to a host bias mode when the indication is the second value or the third value.
CLAIMS
What is claimed is:
1. A system, comprising: a memory; and a controller coupled with the memory, wherein the controller is configured to: store a coherency state for a set of data stored in the memory relative to a cache of a host device; and access the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data, based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, wherein the bias state is associated with control of access for the set of data by the controller.
2. The system of claim 1, wherein the controller is configured to access the memory independent of the host device based at least in part on the bias state.
3. The system of claim 2, further comprising: a cache associated with the memory, wherein the command comprises a read command, and wherein the controller is configured to: determine that the set of data is stored to the cache associated with the memory based at least in part on identifying the command; and access the set of data stored to the cache based at least in part on determining that the set of data is stored to the cache.
4. The system of claim 3, wherein the controller is configured to: determine that a second set of data is not stored to the cache based at least in part on identifying a command to perform an access operation for the second set of data; and access the memory according to a second bias state for the second set of data based at least in part on determining that the second set of data is not stored to the cache, wherein the second bias state is associated with control of access for the second set of data by the host device.
5. The system of claim 1, wherein the memory is configured to store a first value that is associated with a first coherency state for the set of data, a second value that is associated with a second coherency state for the set of data, or a third value that is associated with a third coherency state for the set of data.
6. The system of claim 5, wherein: the first value indicates that the set of data is not coherent; and the second value and the third value indicate that the set of data is coherent, wherein the controller is configured to process the command according to the bias state based at least in part on the memory storing the first value for the set of data.
7. The system of claim 1, wherein the set of data is associated with a first quantity of data, and a cache line of the host device coupled with the memory is configured to store the first quantity of data.
8. The system of claim 1, wherein the bias state corresponds to a first bias state and a second bias state is associated with control of access for the set of data by the host device.
9. A system, comprising: a memory; and a controller coupled with the memory, wherein the controller is configured to: store a coherency state for a set of data stored in the memory relative to a cache of a host device; and access the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data, based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, wherein the bias state is associated with control of access for the set of data by the host device.
10. The system of claim 9, wherein the controller is configured to store the coherency state for the set of data independent from the command, and wherein the controller is configured to: transmit, to the host device, an indication of the access operation to be performed on the set of data based at least in part on determining to process the command according to the bias state; and receive, from the host device, a second command for indicating direct access for the set of data for the access operation based at least in part on transmitting the indication of the access operation.
11. The system of claim 10, wherein the controller is configured to access the memory for the set of data based at least in part on receiving the second command from the host device.
12. The system of claim 9, wherein the memory is configured to store a first value that is associated with a first coherency state for the set of data, a second value that is associated with a second coherency state for the set of data, or a third value that is associated with a third coherency state for the set of data.
13. The system of claim 12, wherein: the first value indicates that an invalid version of the set of data is stored to the cache of the host device; the second value indicates that a shared version or an exclusive version of the set of data is stored to the cache of the host device; and the third value indicates that the shared version of the set of data is stored to the cache of the host device.
14. The system of claim 12, wherein the second value and the third value indicate that the set of data is coherent, and wherein the controller is configured to process the command according to the bias state based at least in part on the memory storing the second value or the third value for the set of data.
15. The system of claim 9, wherein the set of data is associated with a first quantity of data, and a cache line of the host device coupled with the memory is configured to store the first quantity of data.
16. The system of claim 9, wherein a first bias state is associated with control of access for the set of data by the controller, and the bias state corresponds to a second bias state.
17. A method, comprising: storing a coherency state for a set of data stored in a memory relative to a cache of a host device; and accessing the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data, based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, wherein the bias state is associated with control of access for the set of data by a controller associated with the memory.
18. The method of claim 17, further comprising: accessing the memory independent of the host device based at least in part on the bias state.
19. The method of claim 18, wherein the command comprises a read command, the method further comprising: determining that the set of data is stored to a cache associated with the memory; and accessing the set of data stored to the cache based at least in part on determining that the set of data is stored to the cache associated with the memory.
20. The method of claim 17, wherein the memory is configured to store a first value that is associated with a first coherency state for the set of data, a second value that is associated with a second coherency state for the set of data, or a third value that is associated with a third coherency state for the set of data.
21. The method of claim 20, wherein: the first value indicates that the set of data is not coherent; and the second value and the third value indicate that the set of data is coherent, wherein the controller is configured to process the command according to the bias state based at least in part on the memory storing the first value for the set of data.
22. The method of claim 17, wherein the set of data is associated with a first quantity of data, and a cache line of the host device coupled with the memory is configured to store the first quantity of data.
23. The method of claim 17, wherein the bias state corresponds to a first bias state and a second bias state is associated with control of access for the set of data by the host device.
24. A method, comprising: storing a coherency state for a set of data stored in a memory relative to a cache of a host device; and accessing the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data, based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, wherein the bias state is associated with control of access for the set of data by the host device.
25. The method of claim 24, wherein the coherency state for the set of data is stored independent from the command, the method further comprising: transmitting, to the host device, an indication of the access operation to be performed on the set of data based at least in part on determining to process the command according to the bias state; and receiving, from the host device, a second command for indicating direct access for the set of data for the access operation based at least in part on transmitting the indication of the access operation.
26. The method of claim 25, wherein the command comprises a read command, the method further comprising: accessing the memory for the set of data based at least in part on receiving the second command from the host device.
27. The method of claim 26, further comprising: transmitting the set of data received from the host device to the memory based at least in part on receiving the set of data from the host device.
28. The method of claim 24, wherein the memory is configured to store a first value that is associated with a first coherency state for the set of data, a second value that is associated with a second coherency state for the set of data, or a third value that is associated with a third coherency state for the set of data.
29. The method of claim 28, wherein: the first value indicates that an invalid version of the set of data is stored to the cache of the host device; the second value indicates that a shared version or an exclusive version of the set of data is stored to the cache of the host device; and the third value indicates that the shared version of the set of data is stored to the cache of the host device.
30. The method of claim 28, wherein the second value and the third value indicate that the set of data is coherent, and wherein the command is processed according to the bias state based at least in part on the memory storing the second value or the third value for the set of data.
31. The method of claim 24, wherein the set of data is associated with a first quantity of data, and a cache line of the host device coupled with the memory is configured to store the first quantity of data.
32. The method of claim 24, wherein a first bias state is associated with control of access for the set of data by a controller associated with the memory, and the bias state corresponds to a second bias state.
33. A non-transitory computer-readable medium storing code comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to: store a coherency state for a set of data stored in a memory relative to a cache of a host device; and access the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data, based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, wherein the bias state is associated with control of access for the set of data by a controller associated with the memory.
BIAS CONTROL FOR A MEMORY DEVICE

CROSS REFERENCE

[0001] The present Application for Patent claims priority to U.S. Patent Application No. 17/198,084 by Walker et al., entitled "BIAS CONTROL FOR A MEMORY DEVICE," filed March 10, 2021, which is assigned to the assignee hereof and which is expressly incorporated by reference herein.

STATEMENT REGARDING GOVERNMENT SUPPORT

[0002] This invention was made with Government support under Contract No. 2168213 awarded by the U.S. Department of Energy. The Government has certain rights in the invention.

FIELD OF TECHNOLOGY

[0003] The following relates generally to one or more systems for memory and more specifically to bias control for a memory device.

BACKGROUND

[0004] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming memory cells within a memory device to various states. For example, binary memory cells may be programmed to one of two supported states, often corresponding to a logic 1 or a logic 0. In some examples, a single memory cell may support more than two possible states, any one of which may be stored by the memory cell. To access information stored by a memory device, a component may read, or sense, the state of one or more memory cells within the memory device. To store information, a component may write, or program, one or more memory cells within the memory device to corresponding states.

[0005] Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), 3-dimensional cross-point memory (3D cross point), not-or (NOR) and not-and (NAND) memory devices, and others. Memory devices may be volatile or non-volatile. Volatile memory cells (e.g., DRAM cells) may lose their programmed states over time unless they are periodically refreshed by an external power source. Non-volatile memory cells (e.g., NAND memory cells) may maintain their programmed states for extended periods of time even in the absence of an external power source.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 illustrates an example of a system that supports bias control for a memory device in accordance with examples as disclosed herein.

[0007] FIG. 2 illustrates an example of a system that supports bias control for a memory device in accordance with examples as disclosed herein.

[0008] FIG. 3 illustrates an example of a system that supports bias control for a memory device in accordance with examples as disclosed herein.

[0009] FIG. 4 illustrates an example of a process flow diagram that supports bias control for a memory device in accordance with examples as disclosed herein.

[0010] FIG. 5 illustrates an example of a process flow diagram that supports bias control for a memory device in accordance with examples as disclosed herein.

[0011] FIG. 6 shows a block diagram of a memory controller that supports bias control for a memory device in accordance with examples as disclosed herein.

[0012] FIGs. 7 and 8 show flowcharts illustrating a method or methods that support bias control for a memory device in accordance with examples as disclosed herein.

DETAILED DESCRIPTION

[0013] Some interfaces (e.g., the Compute Express Link (CXL) interface) may provide a mechanism for a bias-based coherency model.
Specifically, the CXL specification defines that a device with device-attached memory can track a bias state to determine whether the device needs to send a request to the host device to resolve coherency when the device (e.g., a processing unit of the device) accesses the memory. The two bias states are host bias, where the device needs to send a request to the host device to resolve coherency, and device bias, where the memory device may access the memory independent of the host device.

[0014] The CXL specification indicates that the device will maintain a bias table for blocks (e.g., pages) of memory and a transition agent for managing bias transitions. Bias transitions are generally managed by software through the use of commands between the memory device and the host device, and are managed on a per-block basis. However, managing bias transitions on a per-block basis may result in inefficiencies when portions of a block may have different coherency states.

[0015] Methods and systems for managing bias transitions at a relatively finer granularity are described herein. The CXL specification describes metadata that can be maintained with each line of data. One of the described metadata fields is the Meta0-state, which can hold one of three values: Invalid, Any, or Shared. The Invalid state indicates that the host does not have a cacheable copy of the line, the Any state indicates that the host may have a shared, exclusive, or modified copy of the line, and the Shared state indicates that the host may have at most a shared copy of the line. The CXL specification thus indicates that bias states, which are used to synchronize control over coherency, are maintained separate from the Meta0-states, which indicate the caching or coherency state. The Meta0-states may be stored for each line, and may be stored with the line data.

[0016] As described herein, the Meta0-state field may be used to track the current bias state. In particular, according to various aspects, a Meta0-state of Invalid may be equated to device bias, and both Any and Shared may be equated to host bias. Tracking device-attached media bias at a granularity that corresponds to a cache line size of the host device may allow the device to transition the media's bias on any component that has access to that media. This allows complete hardware control of bias and eliminates the need for software to undertake the complex task of transitioning the device-attached media bias for optimal device performance. In some examples, the Meta0-state may be stored with the line of data in the memory media, or may be stored only in a device cache (e.g., maintained only for lines in the cache). If it is stored only in the cache, the device would assume a state of Any on a cache miss. In either instance, utilizing the Meta0-state of the CXL specification to track the bias state on a per-cache-line basis may improve the overall performance of the memory device while maintaining a relatively simple programming model.

[0017] Features of the disclosure are initially described in the context of systems as described with reference to FIGs. 1 through 3. Features of the disclosure are described in the context of process flow diagrams as described with reference to FIGs. 4 and 5. These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram and flowcharts that relate to bias control for a memory device as described with reference to FIGs. 6-8.
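Paragraphs [0015] and [0016] equate a Meta0-state of Invalid with device bias and both Any and Shared with host bias, with a device that keeps Meta0 only in its cache assuming Any on a miss. As a rough sketch only (the CXL specification defines the states themselves; the function and names below are hypothetical):

```python
from enum import Enum

class Meta0State(Enum):
    INVALID = "I"   # host has no cacheable copy of the line
    ANY = "A"       # host may have a shared, exclusive, or modified copy
    SHARED = "S"    # host has at most a shared copy

class Bias(Enum):
    DEVICE = 0      # device may access the line without consulting the host
    HOST = 1        # device must ask the host to resolve coherency first

def bias_for(meta0):
    """Derive the bias state from a line's Meta0-state.

    A value of None models a device-cache miss when Meta0 is kept only in
    the cache; per paragraph [0016], the device then assumes Any.
    """
    if meta0 is None:
        meta0 = Meta0State.ANY
    return Bias.DEVICE if meta0 is Meta0State.INVALID else Bias.HOST

assert bias_for(Meta0State.INVALID) is Bias.DEVICE
assert bias_for(None) is Bias.HOST   # cache miss -> assume Any -> host bias
```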
[0018] FIG. 1 illustrates an example of a system 100 that supports bias control for a memory device in accordance with examples as disclosed herein. The system 100 includes a host system 105 coupled with a memory system 110.

[0019] A memory system 110 may be or include any device or collection of devices, where the device or collection of devices includes at least one memory array. For example, a memory system 110 may be or include a Universal Flash Storage (UFS) device, an embedded Multi-Media Controller (eMMC) device, a flash device, a universal serial bus (USB) flash device, a secure digital (SD) card, a solid-state drive (SSD), a hard disk drive (HDD), a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile DIMM (NVDIMM), among other possibilities.

[0020] The system 100 may be included in a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any other computing device that includes memory and a processing device.

[0021] The system 100 may include a host system 105, which may be coupled with the memory system 110. In some examples, this coupling may include an interface with a host system controller 106, which may be an example of a control component configured to cause the host system 105 to perform various operations in accordance with examples as described herein. The host system 105 may include one or more devices, and in some cases may include a processor chipset and a software stack executed by the processor chipset. For example, the host system 105 may include an application configured for communicating with the memory system 110 or a device therein. The processor chipset may include one or more cores, one or more caches (e.g., memory local to or included in the host system 105), a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., peripheral component interconnect express (PCIe) controller, serial advanced technology attachment (SATA) controller). The host system 105 may use the memory system 110, for example, to write data to the memory system 110 and read data from the memory system 110. Although one memory system 110 is shown in FIG. 1, the host system 105 may be coupled with any quantity of memory systems 110.

[0022] The host system 105 may be coupled with the memory system 110 via at least one physical host interface. The host system 105 and the memory system 110 may in some cases be configured to communicate via a physical host interface using an associated protocol (e.g., to exchange or otherwise communicate control, address, data, and other signals between the memory system 110 and the host system 105). Examples of a physical host interface may include, but are not limited to, a SATA interface, a UFS interface, an eMMC interface, a PCIe interface, a USB interface, a Fiber Channel interface, a Small Computer System Interface (SCSI), a Serial Attached SCSI (SAS), a Double Data Rate (DDR) interface, a DIMM interface (e.g., DIMM socket interface that supports DDR), an Open NAND Flash Interface (ONFI), a Low Power Double Data Rate (LPDDR) interface, and a CXL interface.
In some examples, one or more such interfaces may be included in or otherwise supported between a host system controller 106 of the host system 105 and a memory system controller 115 of the memory system 110. In some examples, the host system 105 may be coupled with the memory system 110 (e.g., the host system controller 106 may be coupled with the memory system controller 115) via a respective physical host interface for each memory device 130 included in the memory system 110, or via a respective physical host interface for each type of memory device 130 included in the memory system 110.

[0023] The memory system 110 may include a memory system controller 115 and one or more memory devices 130. A memory device 130 may include one or more memory arrays of any type of memory cells (e.g., non-volatile memory cells, volatile memory cells, or any combination thereof). Although two memory devices 130-a and 130-b are shown in the example of FIG. 1, the memory system 110 may include any quantity of memory devices 130. Further, if the memory system 110 includes more than one memory device 130, different memory devices 130 within the memory system 110 may include the same or different types of memory cells.

[0024] The memory system controller 115 may be coupled with and communicate with the host system 105 (e.g., via the physical host interface) and may be an example of a control component configured to cause the memory system 110 to perform various operations in accordance with examples as described herein. The memory system controller 115 may also be coupled with and communicate with memory devices 130 to perform operations such as reading data, writing data, erasing data, or refreshing data at a memory device 130, among other such operations, which may generically be referred to as access operations. In some cases, the memory system controller 115 may receive commands from the host system 105 and communicate with one or more memory devices 130 to execute such commands (e.g., at memory arrays within the one or more memory devices 130). For example, the memory system controller 115 may receive commands or operations from the host system 105 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access of the memory devices 130. In some cases, the memory system controller 115 may exchange data with the host system 105 and with one or more memory devices 130 (e.g., in response to or otherwise in association with commands from the host system 105). For example, the memory system controller 115 may convert responses (e.g., data packets or other signals) associated with the memory devices 130 into corresponding signals for the host system 105.
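Paragraph [0024] describes the controller converting host commands into device-appropriate commands, and paragraph [0025] below adds translation between logical and physical addresses. The patent mandates no particular mapping scheme; as a minimal sketch under that caveat (all names hypothetical), the translation step can be pictured as a table lookup maintained by the controller:

```python
# Hypothetical sketch of logical-to-physical address translation; the patent
# does not prescribe any particular mapping structure.
class TranslationLayer:
    def __init__(self):
        self.l2p = {}          # logical block address -> physical block address
        self.next_free = 0     # naive allocator, for illustration only

    def write(self, lba: int) -> int:
        # Map (or remap) a host LBA to a physical location before programming.
        self.l2p[lba] = self.next_free
        self.next_free += 1
        return self.l2p[lba]

    def read(self, lba: int) -> int:
        # Resolve the physical address for a host read command.
        return self.l2p[lba]

ftl = TranslationLayer()
pba = ftl.write(42)        # host writes LBA 42; controller picks a physical block
assert ftl.read(42) == pba
```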
[0025] The memory system controller 115 may be configured for other operations associated with the memory devices 130. For example, the memory system controller 115 may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., logical block addresses (LBAs)) associated with commands from the host system 105 and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices 130.

[0026] The memory system controller 115 may include hardware such as one or more integrated circuits or discrete components, a buffer memory, or a combination thereof. The hardware may include circuitry with dedicated (e.g., hard-coded) logic to perform the operations ascribed herein to the memory system controller 115. The memory system controller 115 may be or include a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP)), or any other suitable processor or processing circuitry.

[0027] The memory system controller 115 may also include a local memory 120. In some cases, the local memory 120 may include read-only memory (ROM) or other memory that may store operating code (e.g., executable instructions) executable by the memory system controller 115 to perform functions ascribed herein to the memory system controller 115. In some cases, the local memory 120 may additionally or alternatively include static random access memory (SRAM) or other memory that may be used by the memory system controller 115 for internal storage or calculations, for example, related to the functions ascribed herein to the memory system controller 115. Additionally or alternatively, the local memory 120 may serve as a cache for the memory system controller 115. For example, data may be stored in the local memory 120 if read from or written to a memory device 130, and the data may be available within the local memory 120 for subsequent retrieval or manipulation (e.g., updating) by the host system 105 (e.g., with reduced latency relative to a memory device 130) in accordance with a cache policy.

[0028] Although the example of the memory system 110 in FIG. 1 has been illustrated as including the memory system controller 115, in some cases, a memory system 110 may not include a memory system controller 115. For example, the memory system 110 may additionally or alternatively rely upon an external controller (e.g., implemented by the host system 105) or one or more local controllers 135, which may be internal to memory devices 130, respectively, to perform the functions ascribed herein to the memory system controller 115. In general, one or more functions ascribed herein to the memory system controller 115 may in some cases instead be performed by the host system 105, a local controller 135, or any combination thereof. In some cases, a memory device 130 that is managed at least in part by a memory system controller 115 may be referred to as a managed memory device. An example of a managed memory device is a managed NAND (MNAND) device, though other types of managed memory devices are supported. For example, a managed memory device may include any type or quantity of volatile or non-volatile memory devices.

[0029] A memory device 130 may include one or more arrays of non-volatile memory cells.
For example, a memory device 130 may include NAND (e.g., NAND flash) memory, ROM, phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric random access memory (RAM) (FeRAM), magneto RAM (MRAM), NOR (e.g., NOR flash) memory, Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), electrically erasable programmable ROM (EEPROM), or any combination thereof. Additionally or alternatively, a memory device 130 may include one or more arrays of volatile memory cells. For example, a memory device 130 may include RAM memory cells, such as dynamic RAM (DRAM) memory cells and synchronous DRAM (SDRAM) memory cells.

[0030] In some examples, a memory device 130 may include (e.g., on a same die or within a same package) a local controller 135, which may execute operations on one or more memory cells of the respective memory device 130. A local controller 135 may operate in conjunction with a memory system controller 115 or may perform one or more functions ascribed herein to the memory system controller 115. For example, as illustrated in FIG. 1, a memory device 130-a may include a local controller 135-a and a memory device 130-b may include a local controller 135-b.

[0031] In some cases, a memory device 130 may be or include a NAND device (e.g., NAND flash device). A memory device 130 may be or include a memory die 160. For example, in some cases, a memory device 130 may be a package that includes one or more dies 160. A die 160 may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die 160 may include one or more planes 165, and each plane 165 may include a respective set of blocks 170, where each block 170 may include a respective set of pages 175, and each page 175 may include a set of memory cells.

[0032] In some cases, a NAND memory device 130 may include memory cells configured to each store one bit of information, which may be referred to as single-level cells (SLCs). Additionally or alternatively, a NAND memory device 130 may include memory cells configured to each store multiple bits of information, which may be referred to as multi-level cells (MLCs) if configured to each store two bits of information, as tri-level cells (TLCs) if configured to each store three bits of information, as quad-level cells (QLCs) if configured to each store four bits of information, or more generically as multiple-level memory cells. Multiple-level memory cells may provide greater density of storage relative to SLC memory cells but may, in some cases, involve narrower read or write margins or greater complexities for supporting circuitry.

[0033] In some cases, planes 165 may refer to groups of blocks 170, and in some cases, concurrent operations may take place within different planes 165. For example, concurrent operations may be performed on memory cells within different blocks 170 so long as the different blocks 170 are in different planes 165.
In some cases, performing concurrent operations in different planes 165 may be subject to one or more restrictions, such as identical operations being performed on memory cells within different pages 175 that have the same page address within their respective planes 165 (e.g., related to command decoding, page address decoding circuitry, or other circuitry being shared across planes 165).

[0034] In some cases, a block 170 may include memory cells organized into rows (pages 175) and columns (e.g., strings, not shown). For example, memory cells in a same page 175 may share (e.g., be coupled with) a common word line, and memory cells in a same string may share (e.g., be coupled with) a common digit line (which may alternatively be referred to as a bit line).

[0035] For some NAND architectures, memory cells may be read and programmed (e.g., written) at a first level of granularity (e.g., at the page level of granularity) but may be erased at a second level of granularity (e.g., at the block level of granularity). That is, a page 175 may be the smallest unit of memory (e.g., set of memory cells) that may be independently programmed or read (e.g., programmed or read concurrently as part of a single program or read operation), and a block 170 may be the smallest unit of memory (e.g., set of memory cells) that may be independently erased (e.g., erased concurrently as part of a single erase operation). Further, in some cases, NAND memory cells may be erased before they can be rewritten with new data. Thus, for example, a used page 175 may in some cases not be updated until the entire block 170 that includes the page 175 has been erased.
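The page/block granularity rules of paragraph [0035] can be captured in a few lines. This is an illustrative sketch under simplifying assumptions (no wear-leveling or spare area; all names are hypothetical), not an implementation of any structure claimed here:

```python
class NandBlock:
    """Toy model of one block 170: program per page, erase per block."""
    def __init__(self, pages_per_block: int = 4):
        self.pages = [None] * pages_per_block   # None means erased

    def program(self, page: int, data: bytes) -> None:
        # A page is the smallest programmable unit, and it must be erased
        # before it can accept new data (no in-place overwrite).
        if self.pages[page] is not None:
            raise ValueError("page already programmed; erase the whole block first")
        self.pages[page] = data

    def erase(self) -> None:
        # Erase is only available at block granularity: every page is cleared.
        self.pages = [None] * len(self.pages)

block = NandBlock()
block.program(0, b"old")
block.erase()             # updating page 0 requires erasing the entire block
block.program(0, b"new")
```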
[0036] The system 100 may include any quantity of non-transitory computer readable media that support bias control for a memory device. For example, the host system 105, the memory system controller 115, or a memory device 130 may include or otherwise may access one or more non-transitory computer readable media storing instructions (e.g., firmware) for performing the functions ascribed herein to the host system 105, memory system controller 115, or memory device 130. For example, such instructions, if executed by the host system 105 (e.g., by the host system controller 106), by the memory system controller 115, or by a memory device 130 (e.g., by a local controller 135), may cause the host system 105, memory system controller 115, or memory device 130 to perform one or more associated functions as described herein.

[0037] In some cases, a memory system 110 may utilize a memory system controller 115 to provide a managed memory system that may include, for example, one or more memory arrays and related circuitry combined with a local (e.g., on-die or in-package) controller (e.g., local controller 135). An example of a managed memory system is a managed NAND (MNAND) system.

[0038] In some examples, the host system 105 may include a cache that includes one or more pages (e.g., one or more pages for storing data). Each page may include one or more lines that are each associated with a quantity of data. As described herein, the memory system controller 115 may be configured to track the coherency of data shared between the host system 105 and the memory system 110. For example, the memory system controller 115 may track coherency of data shared between the memory device 130-a or the memory device 130-b and the host system 105. The coherency of the data may pertain to one of three states that indicate whether no coherent copies of the data exist at the host system 105 and the memory system 110, or whether a shared, exclusive, or modified copy of the data exists.

[0039] The memory system 110 may be an example of a CXL device, and the memory system controller 115 may be an example of a processing unit that is configured to access the memory device 130-a and the memory device 130-b. For example, the memory system controller 115 may be configured to access the memory device 130-a and the memory device 130-b according to a host bias mode or a device bias mode. When operating according to a device bias mode, the memory system controller 115 may access the memory device 130-a or the memory device 130-b (e.g., upon receiving an access request or access command) independent of the host system 105. Conversely, when operating according to a host bias mode, the memory system controller 115 may access the memory device 130-a or the memory device 130-b via the host system 105. In addition, when one or more pages are associated with host bias mode, the host may modify the version of the one or more pages maintained by the host without notifying the memory system controller 115.

[0040] In some examples, each line of data stored to the memory device 130-a or the memory device 130-b may be associated with an optional 2-bit Meta0 field. As described herein, the 2-bit Meta0 field may be used to indicate a bias state of associated data based on the three coherency states. For example, when data is associated with a first coherency state (e.g., an "invalid" state; when no coherent copy of the data exists), the Meta0 field may indicate to operate according to a device bias mode. Additionally or alternatively, when data is associated with a second coherency state (e.g., an "any" state; when a coherent copy of the data exists) or a third coherency state (e.g., a "shared" state; when a coherent copy of the data exists), the Meta0 field may indicate to operate according to a host bias mode. Thus, utilizing the Meta0-state to track device bias may allow for bias states to be tracked in a more granular manner.
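Paragraphs [0039] and [0040] describe choosing between device bias and host bias from the per-line Meta0 value, with host bias requiring a coherency exchange with the host before the access proceeds. A minimal sketch of that dispatch, assuming hypothetical names for the media path and the host request path (the patent defines no such API), might look like:

```python
from enum import Enum

class Coherency(Enum):
    INVALID = 0   # first coherency state: no coherent host copy -> device bias mode
    ANY = 1       # second coherency state: host may hold a copy -> host bias mode
    SHARED = 2    # third coherency state: host holds at most a shared copy -> host bias mode

def handle_access(meta0, access_media, request_host_resolution):
    """Process one access command according to the line's Meta0 value.

    access_media and request_host_resolution are hypothetical callables
    standing in for the controller's direct media path and its
    coherency-resolution exchange with the host device.
    """
    if meta0 is Coherency.INVALID:
        return access_media()          # device bias: no host involvement needed
    request_host_resolution()          # host bias: resolve coherency with the host,
    return access_media()              # then perform the access

# Example: a host-bias line triggers the host round-trip before the media access.
log = []
handle_access(Coherency.ANY, lambda: log.append("media"), lambda: log.append("host"))
assert log == ["host", "media"]
```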
[0041] FIG. 2 illustrates an example of a system 200 that supports bias control for a memory device in accordance with examples as disclosed herein. The system 200 may be an example of a system 100 as described with reference to FIG. 1 or aspects thereof. The system 200 may include a memory system 210 configured to store data received from the host system 205 and to send data to the host system 205, if requested by the host system 205 using access commands (e.g., read commands or write commands). The system 200 may implement aspects of the system 100 as described with reference to FIG. 1. For example, the memory system 210 and the host system 205 may be examples of the memory system 110 and the host system 105, respectively.

[0042] The memory system 210 may include memory devices 240 to store data transferred between the memory system 210 and the host system 205, e.g., in response to receiving access commands from the host system 205, as described herein. The memory devices 240 may include one or more memory devices as described with reference to FIG. 1. For example, the memory devices 240 may include NAND memory, PCM, self-selecting memory, 3D cross point, other chalcogenide-based memories, FeRAM, MRAM, NOR (e.g., NOR flash) memory, STT-MRAM, CBRAM, RRAM, or OxRAM.

[0043] The memory system 210 may include a storage controller 230 for controlling the passing of data directly to and from the memory devices 240, e.g., for storing data, retrieving data, and determining memory locations in which to store data and from which to retrieve data. The storage controller 230 may communicate with memory devices 240 directly or via a bus (not shown) using a protocol specific to each type of memory device 240. In some cases, a single storage controller 230 may be used to control multiple memory devices 240 of the same or different types. In some cases, the memory system 210 may include multiple storage controllers 230, e.g., a different storage controller 230 for each type of memory device 240. In some cases, a storage controller 230 may implement aspects of a local controller 135 as described with reference to FIG. 1.

[0044] The memory system 210 may additionally include an interface 220 for communication with the host system 205 and a buffer 225 for temporary storage of data being transferred between the host system 205 and the memory devices 240. The interface 220, buffer 225, and storage controller 230 may be for translating data between the host system 205 and the memory devices 240, e.g., as shown by a data path 250, and may be collectively referred to as data path components.

[0045] Using the buffer 225 to temporarily store data during transfers may allow data to be buffered as commands are being processed, thereby reducing latency between commands and allowing arbitrary data sizes associated with commands. This may also allow bursts of commands to be handled, and the buffered data may be stored or transmitted (or both) once a burst has stopped. The buffer 225 may include relatively fast memory (e.g., some types of volatile memory, such as SRAM or DRAM) or hardware accelerators or both to allow fast storage and retrieval of data to and from the buffer 225. The buffer 225 may include data path switching components for bi-directional data transfer between the buffer 225 and other components.

[0046] The temporary storage of data within a buffer 225 may refer to the storage of data in the buffer 225 during the execution of access commands. That is, upon completion of an access command, the associated data may no longer be maintained in the buffer 225 (e.g., may be overwritten with data for additional access commands). In addition, the buffer 225 may be a non-cache buffer. That is, data may not be read directly from the buffer 225 by the host system 205. For example, read commands may be added to a queue without an operation to match the address to addresses already in the buffer 225 (e.g., without a cache address match or lookup operation).

[0047] The memory system 210 may additionally include a memory system controller 215 for executing the commands received from the host system 205 and controlling the data path components in the moving of the data. The memory system controller 215 may be an example of the memory system controller 115 as described with reference to FIG. 1. A bus 235 may be used to communicate between the system components.

[0048] In some cases, one or more queues (e.g., a command queue 260, a buffer queue 265, and a storage queue 270) may be used to control the processing of the access commands and the movement of the corresponding data.
[0049] Data transferred between the host system 205 and the memory devices 240 may take a different path in the memory system 210 than non-data information (e.g., commands, status information). For example, the system components in the memory system 210 may communicate with each other using a bus 235, while the data may use the data path 250 through the data path components instead of the bus 235. The memory system controller 215 may control how and if data is transferred between the host system 205 and the memory devices 240 by communicating with the data path components over the bus 235 (e.g., using a protocol specific to the memory system 210).
[0050] If a host system 205 transmits access commands to the memory system 210, the commands may be received by the interface 220, e.g., according to a protocol (e.g., a UFS protocol, an eMMC protocol, a CXL protocol). Thus, the interface 220 may be considered a front end of the memory system 210. Upon receipt of each access command, the interface 220 may communicate the command to the memory system controller 215, e.g., via the bus 235. In some cases, each command may be added to a command queue 260 by the interface 220 to communicate the command to the memory system controller 215.
[0051] The memory system controller 215 may determine that an access command has been received based on the communication from the interface 220. In some cases, the memory system controller 215 may determine the access command has been received by retrieving the command from the command queue 260. The command may be removed from the command queue 260 after it has been retrieved therefrom, e.g., by the memory system controller 215. In some cases, the memory system controller 215 may cause the interface 220, e.g., via the bus 235, to remove the command from the command queue 260.
[0052] Upon the determination that an access command has been received, the memory system controller 215 may execute the access command. For a read command, this may mean obtaining data from the memory devices 240 and transmitting the data to the host system 205. For a write command, this may mean receiving data from the host system 205 and moving the data to the memory devices 240.
[0053] In either case, the memory system controller 215 may use the buffer 225 for, among other things, temporary storage of the data being received from or sent to the host system 205. The buffer 225 may be considered a middle end of the memory system 210. In some cases, buffer address management (e.g., pointers to address locations in the buffer 225) may be performed by hardware (e.g., dedicated circuits) in the interface 220, buffer 225, or storage controller 230.
[0054] To process a write command received from the host system 205, the memory system controller 215 may first determine if the buffer 225 has sufficient available space to store the data associated with the command. For example, the memory system controller 215 may determine, e.g., via firmware (e.g., controller firmware), an amount of space within the buffer 225 that may be available to store data associated with the write command.
[0055] In some cases, a buffer queue 265 may be used to control a flow of commands associated with data stored in the buffer 225, including write commands. The buffer queue 265 may include the access commands associated with data currently stored in the buffer 225. In some cases, the commands in the command queue 260 may be moved to the buffer queue 265 by the memory system controller 215 and may remain in the buffer queue 265 while the associated data is stored in the buffer 225. In some cases, each command in the buffer queue 265 may be associated with an address at the buffer 225. That is, pointers may be maintained that indicate where in the buffer 225 the data associated with each command is stored. Using the buffer queue 265, multiple access commands may be received sequentially from the host system 205 and at least portions of the access commands may be processed concurrently.
[0056] If the buffer 225 has sufficient space to store the write data, the memory system controller 215 may cause the interface 220 to transmit an indication of availability to the host system 205 (e.g., a “ready to transfer” indication), e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). As the interface 220 subsequently receives from the host system 205 the data associated with the write command, the interface 220 may transfer the data to the buffer 225 for temporary storage using the data path 250. In some cases, the interface 220 may obtain from the buffer 225 or buffer queue 265 the location within the buffer 225 to store the data. The interface 220 may indicate to the memory system controller 215, e.g., via the bus 235, if the data transfer to the buffer 225 has been completed.
[0057] Once the write data has been stored in the buffer 225 by the interface 220, the data may be transferred out of the buffer 225 and stored in a memory device 240. This may be done using the storage controller 230. For example, the memory system controller 215 may cause the storage controller 230 to retrieve the data out of the buffer 225 using the data path 250 and transfer the data to a memory device 240. The storage controller 230 may be considered a back end of the memory system 210. The storage controller 230 may indicate to the memory system controller 215, e.g., via the bus 235, that the data transfer to a memory device of the memory devices 240 has been completed.
[0058] In some cases, a storage queue 270 may be used to aid with the transfer of write data. For example, the memory system controller 215 may push (e.g., via the bus 235) write commands from the buffer queue 265 to the storage queue 270 for processing. The storage queue 270 may include entries for each access command. In some examples, the storage queue 270 may additionally include a buffer pointer (e.g., an address) that may indicate where in the buffer 225 the data associated with the command is stored and a storage pointer (e.g., an address) that may indicate the location in the memory devices 240 associated with the data. In some cases, the storage controller 230 may obtain from the buffer 225, buffer queue 265, or storage queue 270 the location within the buffer 225 from which to obtain the data. The storage controller 230 may manage the locations within the memory devices 240 to store the data (e.g., performing wear-leveling, garbage collection, and the like). The entries may be added to the storage queue 270, e.g., by the memory system controller 215. The entries may be removed from the storage queue 270, e.g., by the storage controller 230 or memory system controller 215 upon completion of the transfer of the data.
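To make the role of the storage queue 270 concrete, the following is a minimal sketch in Python of one possible entry layout and push operation. The names StorageQueueEntry, buffer_addr, and storage_addr are illustrative assumptions; the description does not prescribe any particular implementation.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class StorageQueueEntry:
        command_id: int    # identifies the access command
        buffer_addr: int   # buffer pointer: where the data sits in the buffer
        storage_addr: int  # storage pointer: destination in the memory devices

    storage_queue = deque()

    def push_write_command(command_id, buffer_addr, storage_addr):
        # Models the controller pushing a write command from the buffer queue
        # to the storage queue; the storage controller later uses buffer_addr
        # to fetch the data and storage_addr to place it in a memory device.
        storage_queue.append(StorageQueueEntry(command_id, buffer_addr, storage_addr))

Carrying both pointers in one entry is what lets the storage controller complete the transfer without consulting the buffer queue again.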
[0059] To process a read command received from the host system 205, the memory system controller 215 may again first determine if the buffer 225 has sufficient available space to store the data associated with the command. For example, the memory system controller 215 may determine, e.g., via firmware (e.g., controller firmware), an amount of space within the buffer 225 that may be available to store data associated with the read command.
[0060] In some cases, the buffer queue 265 may be used to aid with buffer storage of data associated with read commands in a similar manner as discussed above with respect to write commands. For example, if the buffer 225 has sufficient space to store the read data, the memory system controller 215 may cause the storage controller 230 to retrieve the data associated with the read command from a memory device 240 and store the data in the buffer 225 for temporary storage using the data path 250. The storage controller 230 may indicate to the memory system controller 215, e.g., via the bus 235, when the data transfer to the buffer 225 has been completed.
[0061] In some cases, the storage queue 270 may be used to aid with the transfer of read data. For example, the memory system controller 215 may push the read command to the storage queue 270 for processing. In some cases, the storage controller 230 may obtain from the buffer 225 or storage queue 270 the location within the memory devices 240 from which to retrieve the data. In some cases, the storage controller 230 may obtain from the buffer queue 265 the location within the buffer 225 to store the data. In some cases, the storage controller 230 may obtain from the storage queue 270 the location within the buffer 225 to store the data. In some cases, the memory system controller 215 may move the command processed by the storage queue 270 back to the command queue 260.
[0062] Once the data has been stored in the buffer 225 by the storage controller 230, the data may be transferred out of the buffer 225 and sent to the host system 205. For example, the memory system controller 215 may cause the interface 220 to retrieve the data out of the buffer 225 using the data path 250 and transmit the data to the host system 205, e.g., according to a protocol (e.g., a UFS protocol, an eMMC protocol, a CXL protocol). For example, the interface 220 may process the command from the command queue 260 and may indicate to the memory system controller 215, e.g., via the bus 235, that the data transmission to the host system 205 has been completed.
[0063] The memory system controller 215 may execute received commands according to an order (e.g., a first-in, first-out order, according to the order of the command queue 260). For each command, the memory system controller 215 may cause data corresponding to the command to be moved into and out of the buffer 225, as discussed above. As the data is moved into and stored within the buffer 225, the command may remain in the buffer queue 265. A command may be removed from the buffer queue 265, e.g., by the memory system controller 215, if the processing of the command has been completed (e.g., if data corresponding to the access command has been transferred out of the buffer 225). If a command is removed from the buffer queue 265, the address previously storing the data associated with that command may be available to store data associated with a new command.
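The buffer queue lifecycle just described can be summarized in a short sketch: a command stays queued, and its buffer address stays occupied, until its data has been transferred out of the buffer 225. The dict-based slot pool and the admit() and complete() helpers below are assumptions made for illustration only.

    buffer_queue = {}              # command_id -> buffer address holding its data
    free_addresses = set(range(8)) # toy pool of buffer slots

    def admit(command_id):
        # A command enters the buffer queue only if the buffer has room.
        if not free_addresses:
            return False
        buffer_queue[command_id] = free_addresses.pop()
        return True

    def complete(command_id):
        # Once the data has been transferred out of the buffer, the command is
        # removed and its address becomes available to a new command.
        free_addresses.add(buffer_queue.pop(command_id))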
[0064] The memory system controller 215 may additionally be configured for operations associated with the memory devices 240. For example, the memory system controller 215 may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., LBAs) associated with commands from the host system 205 and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices 240. That is, the host system 205 may issue commands indicating one or more LBAs and the memory system controller 215 may identify one or more physical block addresses indicated by the LBAs. In some cases, one or more contiguous LBAs may correspond to noncontiguous physical block addresses. In some cases, the storage controller 230 may be configured to perform one or more of the above operations in conjunction with or instead of the memory system controller 215. In some cases, the memory system controller 215 may perform the functions of the storage controller 230 and the storage controller 230 may be omitted.
[0065] In some examples, the host system 205 may include a cache that includes one or more pages (e.g., one or more pages for storing data). Each page may include one or more lines that are each associated with a quantity of data. As described herein, the memory system controller 215 may be configured to track the coherency of data shared between the host system 205 and the memory system 210. For example, the memory system controller 215 may track coherency of data shared between device memory (not shown) and the host system 205. For example, one or more of the memory devices 240 may be a CXL device that is configured to access data stored to memory (e.g., device memory) that is accessible to various devices. The memory system controller 215 may track the coherency of the data, which may pertain to one of three states that indicate whether no coherent copies of the data exist at the host system 205 and the memory system 210, or whether a shared, exclusive, or modified copy of the data exists.
[0066] The memory system 210 may be an example of a CXL device that may be configured to access the device memory (memory devices 240). For example, a CXL device may include a processing device (not shown) configured to access the device memory according to a host bias mode or a device bias mode. When operating according to a device bias mode, the memory system controller 215 may access the device memory (e.g., upon receiving an access request or access command) independent of the host system 205. Conversely, when operating according to a host bias mode, the memory system controller 215 may access the device memory via the host system 205. Thus, when operating according to a host bias mode, the host system 205 may be configured to access the device memory for the respective CXL device.
[0067] In some examples, each line of data stored to the device memory may be associated with an optional 2-bit MetaO field. As described herein, the 2-bit MetaO field may be used to indicate a bias state of associated data based on the three coherency states. For example, when data is associated with a first coherency state (e.g., an “invalid” state; when no coherent copy of the data exists), the MetaO field may indicate to operate according to a device bias mode. Additionally or alternatively, when data is associated with a second coherency state (e.g., an “any” state; when a coherent copy of the data exists) or a third coherency state (e.g., a “shared” state; when a coherent copy of the data exists), the MetaO field may indicate to operate according to a host bias mode. Thus, utilizing the MetaO-state to track device bias may allow for bias states to be tracked in a more granular manner.
[0068] FIG. 3 illustrates an example of a system 300 that supports bias control for a memory device in accordance with examples as disclosed herein. The system 300 may include a memory system 305 and a host device 310. The memory system 305 may be coupled with the host device 310 via an interface 315, which may be a CXL interface. The memory system 305 may include device memory 320, a cache 325, a controller 330, and a processing unit 335, and the host device 310 may include a cache 340. In some examples, the system 300 may be configured to use one or more metadata fields, such as the MetaO-state, to track coherency states of data. In addition, utilizing the MetaO-state to track the bias state on a per-cache-line basis may improve the overall performance of the memory system 305 while maintaining a relatively simplistic programming model.
[0069] The memory system 305 may be a CXL device. The device memory 320 may be accessible by one or more accelerators that include or may be associated with a processing unit 335. The processing unit 335 and controller 330 shown in FIG. 3 may be a single logic component formed on a same field programmable gate array (FPGA) or application-specific integrated circuit (ASIC). However, in some examples, the controller 330 and processing unit 335 may be individual components. For example, the processing unit 335 may be associated with a graphics processing unit (GPU) or general-purpose graphics processing unit (GPGPU) of the memory system 305. The processing unit 335 may be configured to transmit signaling and/or commands to the controller 330 for accessing the device memory 320. As described herein, the controller 330 may access the device memory 320 independent of the host device 310 in a device bias state, whereas the controller 330 may access the device memory 320 via the host device 310 in a host bias state.
[0070] In some examples, the host device 310 may include a cache 340. The cache 340 may include a set of pages, which may each include 4 kB of data. Moreover, each page may include one or more blocks, which may be referred to as cache lines, and each cache line may include 64 B of data. These page and cache line sizes are exemplary only, and each page and cache line may be configured to store a different quantity of data. In either case, the memory system 305 may be configured to track bias states on a per-cache-line basis using the MetaO-states instead of a bias table that is maintained on a page basis. That is, utilizing the MetaO-state to track device bias may allow for bias states to be tracked in a more granular manner.
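One way to picture this per-cache-line tracking is as a lookup from a line's 2-bit MetaO value to a bias mode. The sketch below is illustrative only; the description assigns meanings to three values but does not fix their bit patterns, so the encodings here are assumptions.

    # Hypothetical encodings for the 2-bit MetaO field.
    META_INVALID = 0b00  # "invalid": no coherent copy at the host
    META_ANY     = 0b01  # "any": shared, exclusive, or modified copy may exist
    META_SHARED  = 0b10  # "shared": at most a shared copy exists

    def bias_mode(meta0):
        # "invalid" maps to device bias; "any" and "shared" map to host bias.
        return "device" if meta0 == META_INVALID else "host"

    # One MetaO value per 64 B cache line, rather than one bias entry per 4 kB page.
    line_meta = {0x000: META_INVALID, 0x040: META_SHARED, 0x080: META_ANY}
    assert bias_mode(line_meta[0x000]) == "device"
    assert bias_mode(line_meta[0x040]) == "host"

The granularity is the point: a bias table kept per page would force every line in a 4 kB page into one mode, whereas a per-line MetaO value lets adjacent lines take different modes.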
[0071] The device memory 320 may be configured to store data that is accessible by the processing unit 335 (e.g., via the controller 330) and the host device 310. Because the data is accessible by both the processing unit 335 and the host device 310 (e.g., the data is shared), it is desirable for coherency of the data to be tracked. That is, it is desirable for both the processing unit 335 and the host device 310 to know whether data in the cache 340 is coherent with corresponding data (for the same address) in the device memory 320. To track cache coherency, three states of cache coherency may be maintained and tracked using metadata. For example, a first state may be associated with an “invalid” state, e.g., the cache 340 of the host device 310 does not have a cacheable copy of the data. The second state may be associated with an “any” state, e.g., the cache 340 of the host device 310 may have a shared, exclusive, or modified copy of the data. The third state may be associated with a “shared” state, e.g., the cache 340 of the host device 310 may have, at most, a shared copy of the data. As the coherency states of the data change, updated metadata associated with the coherency state may be stored to the device memory 320.
[0072] In some examples, data stored to the device memory 320 may be accessed according to a device bias mode or a host bias mode. Host bias mode may prioritize coherent access from the host device 310 to the device memory 320. Host bias mode may be used during work submission, when data is being written from the host device 310 to the device memory 320, and during work completion, when data is being read by the host device 310 from the device memory 320. In host bias mode, the device memory 320 may appear, to the memory system (or to the processing unit 335), just like memory attached to the host device 310. If the processing unit 335 requires access to the device memory 320 when operating in host bias mode, the access operation is handled by the controller 330 first transmitting a request (e.g., a command; signaling) to the host device 310.
[0073] Conversely, in device bias mode the processing unit 335 may access the device memory 320 directly. That is, the processing unit 335 may access the device memory 320 without any interaction by the host device 310. Thus, if the processing unit 335 requires access to the device memory 320 when operating in device bias mode, the access operation may be handled by the controller 330 accessing the device memory 320 directly.
[0074] The CXL specification provides for an optional 2-bit MetaO field that can be associated with device-attached media. According to various aspects described herein, the 2-bit MetaO field may be used to convey a coherency state of data (e.g., of data associated with a size of a cache line of the cache 340), and the coherency state may be used for determining a bias state of the associated data. In some examples, the MetaO field may be configured to store a first value (e.g., a first bit value) that is associated with the “invalid” coherency state. When the MetaO field includes the first value, data associated with the MetaO field may be accessed according to a device bias mode. Additionally or alternatively, the MetaO field may be configured to store a second value (e.g., a second bit value) and a third value (e.g., a third bit value) that are associated with the “shared” and “any” states. When the MetaO field includes either the second value or the third value, data associated with the MetaO field may be accessed according to a host bias mode.
[0075] In some examples, a respective MetaO field may be associated with each line of data stored to the device memory 320, and thus may be determined when the processing unit 335 requires access to the device memory 320. For example, the processing unit 335 may communicate an access request (e.g., a command; signaling) to the controller 330, and the controller may determine the MetaO value of the associated data. Depending on the MetaO value (e.g., the coherency state), the controller 330 may have direct access to the line of data (e.g., according to a device bias mode) or may communicate a request (e.g., a command; signaling) to the host device 310 for resolving coherency associated with the data (e.g., according to a host bias mode).
[0076] The host device 310 may periodically update the coherency state of the data, the MetaO field associated with the data, or both. For example, if the host device 310 performs any operations that affect the coherency of data, the host device 310 may communicate the change to the memory system 305. In some examples, the host device 310 may communicate the updated coherency state to the memory system 305 (e.g., explicitly or implicitly from access commands), and the controller 330 may use the updated coherency state to update the MetaO field of the associated data that is stored to the device memory 320 or the cache 325. In other examples, the host device 310 may communicate the updated MetaO field to the memory system 305, and the controller 330 may update the MetaO field of the associated data that is stored to the device memory 320 or the cache 325.
[0077] Additionally or alternatively, the cache 325 may be used to determine whether to access data according to a device bias state or a host bias state. For example, data and an associated MetaO field may be stored to the cache 325. That is, in some examples, the MetaO fields may be stored only with cached data and not with data stored to the device memory 320. When the processing unit 335 requires access to the data (e.g., when the processing unit 335 communicates a command for the data to the controller 330), the controller 330 may determine whether the data is stored to the cache 325. If the data is stored to the cache (e.g., when there is a cache “hit”), the data may be accessed according to the coherency state indicated by the MetaO field (e.g., by associating coherency states with bias states). In other examples, if the data is not stored to the cache (e.g., when there is a cache “miss”), the controller 330 may assume that the data is associated with a host bias state. Accordingly, when there is a cache “miss”, the processing unit 335 may access the device memory 320 via the host device 310. Whether the MetaO field is stored to the device memory 320 or to the cache 325, utilizing the MetaO-state to track the bias state on a per-cache-line basis may improve the overall performance of the memory system 305 while maintaining a relatively simplistic programming model.
[0078] FIG. 4 illustrates an example of a process flow diagram 400 that supports bias control for a memory device in accordance with examples as disclosed herein. The process flow diagram 400 may illustrate the operations of a processing unit 405, a controller 410, a device memory 415, and a cache 420. In some examples, the processing unit 405, controller 410, device memory 415, and cache 420 may be examples of a processing unit 335, a controller 330, a device memory 320, and a cache 325, respectively, of a memory system 305 as described with reference to FIG. 3. The process flow diagram 400 may illustrate utilizing a MetaO-state to track a bias state on a per-cache-line basis, which may improve the overall performance of the memory system 305 while maintaining a relatively simplistic programming model.
[0079] At 425, a coherency state for a line of data may be stored at the device memory 415. In some examples, the coherency state may be stored for a plurality of lines of data. For example, a coherency state for each line of data may be stored to the device memory 415. As described herein, the coherency state may correspond to an “invalid,” “any,” or “shared” state, and may be stored in the MetaO field associated with the respective line of data. In other examples, at 425, the coherency state for the data may be updated. That is, a coherency state for one or more lines of data may have been previously stored to the device memory 415 but, due to a change in the coherency state, the stored state may be updated. In either instance, the coherency state stored at 425 may be used for determining a bias state associated with the data. Although illustrated as being stored at the device memory 415, the coherency state may be additionally or alternatively stored at the cache 420, as described below.
[0080] At 430, the processing unit 405 may determine to access a line of data. For example, the processing unit 405 may determine to read a line of data from the device memory 415 or may determine to write a line of data to the device memory 415. For exemplary purposes, the line of data may be stored at the device memory 415. However, in some examples, the line of data may be stored to the cache 420 (e.g., the data may be cached), and the processing unit 405 or the controller 410 may determine whether the data is stored to the cache 420 or the device memory 415.
[0081] At 435, the processing unit 405 may transmit signaling to the controller 410. The signaling may be in response to determining to access a line of data (e.g., at 430), and the signaling may include a command (e.g., a read command, a write command), a request, or another type of communication. The signaling may indicate, to the controller 410, the type of access operation and an address (e.g., a memory address) of the device memory 415 for the line of data.
[0082] At 440, the controller 410 may transmit signaling to the cache 420. The signaling may be in response to the signaling indicating the type of access operation and the address of the device memory 415 (e.g., at 435), and the signaling may include a command (e.g., a read command), a request, or another type of communication. The signaling may initiate a read on the cache 420 to determine if the associated line of data is stored to the cache 420.
[0083] At 445, the controller 410 may receive signaling from the cache 420. The signaling may be in response to the signaling transmitted at 440, and may indicate whether the line of data was stored to the cache (e.g., a cache “hit” or a cache “miss”). For exemplary purposes only, the signaling at 445 may indicate a cache “miss.”
[0084] At 450, the controller 410 may transmit signaling to the device memory 415. The signaling may be in response to the signaling indicating a cache “miss”, and the signaling may include a command (e.g., a read command), a request, or another type of communication. The signaling may initiate a read on the device memory 415.
[0085] At 455, the controller 410 may receive signaling from the device memory 415. The signaling may be in response to the signaling transmitted at 450, and may include a line of data read from the device memory 415. Additionally, the signaling may include a MetaO state associated with the line of data. As described herein, the MetaO state may indicate a first value that corresponds to an “invalid” coherency state, a second value that corresponds to a “shared” coherency state, or a third value that corresponds to an “any” state. Alternatively, the MetaO state may not be stored in the device memory 415, and the controller 410 may assume a MetaO state of “any” or “shared” upon the occurrence of a cache miss.
[0086] At 460, the controller 410 may determine a bias state of the line of data read from the device memory 415. The controller 410 may determine the bias state of the line of data in response to the signaling received by the controller 410 (e.g., at 455). For example, the signaling may have indicated that the MetaO field includes a first value corresponding to an “invalid” coherency state. Thus, the controller 410 may determine a device bias state for the line of data stored to the first memory address and may complete processing the access operation without any interaction from a host device.
[0087] In other examples, the signaling (e.g., at 435) may indicate a write command at a second memory address of the device memory 415. In such examples, the controller 410 may determine the bias state for writing data to the second memory address based on the MetaO field associated with the line of data. For example, the controller 410 may transmit signaling to the device memory 415 to read a line of data from the device memory 415 that corresponds to the second memory address. In response, the controller 410 may receive signaling that indicates the MetaO state of the line of data. If the MetaO state corresponds to an “invalid” coherency state, the controller 410 may operate according to a device bias. In such an example, the controller 410 may write the data to the second memory address of the device memory 415 without any interaction from a host device.
[0088] In other examples, the controller 410 may not transmit signaling to the cache 420 and/or device memory 415 but may instead maintain a table for tracking a MetaO state of one or more lines of data. For example, for both read operations and write operations, the controller 410 may track a MetaO value on a per-line basis, such that when the processing unit determines to access data (e.g., at 430), the controller may determine the MetaO state based on the table. Accordingly, based on the MetaO state, the controller 410 may operate in either a device bias mode or a host bias mode.
[0089] At 465, data may be communicated to the processing unit 405. In some examples, the data may be communicated in response to determining the bias state of the data (e.g., at 460). The data may, in some examples, be communicated directly to the processing unit 405, while in other examples the data may be communicated to the controller 410, and the controller 410 may communicate the data to the processing unit 405.
[0090] At 470, the processing unit 405 may determine to access a line of data (e.g., another line of data). For example, the processing unit 405 may determine to read a line of data that is stored to the cache 420 (e.g., a cached line of data). For exemplary purposes, the line of data may be stored at the cache 420. However, in some examples, the line of data may not be stored to the cache 420 (e.g., the data may not be cached), resulting in a cache “miss.” When a cache “miss” occurs, the controller 410 may determine to access the data according to a host bias mode.
[0091] At 475, the processing unit 405 may transmit signaling to the controller 410. The signaling may be in response to determining to access a line of data (e.g., at 470), and the signaling may include a command (e.g., a read command, a write command), a request, or another type of communication. The signaling may indicate, to the controller 410, the type of access operation and an address (e.g., a memory address) of the device memory 415.
[0092] At 480, the controller 410 may transmit signaling to the cache 420. The signaling may be in response to the signaling indicating the type of access operation and the address of the device memory 415 (e.g., at 475), and the signaling may include a command (e.g., a read command), a request, or another type of communication. The signaling may initiate a read on the cache 420 to determine if the associated line of data is stored to the cache 420.
[0093] At 485, the controller 410 may receive signaling from the cache 420. The signaling may be in response to the signaling transmitted at 480, and may indicate whether the line of data was stored to the cache (e.g., a cache “hit” or a cache “miss”). For exemplary purposes only, the signaling at 485 may indicate a cache “hit.” Accordingly, the signaling may include the line of data and the associated MetaO state, which may indicate a first value that corresponds to an “invalid” coherency state, a second value that corresponds to a “shared” coherency state, or a third value that corresponds to an “any” state.
[0094] At 490, the controller 410 may determine a bias state of the line of data read from the cache 420. The controller 410 may determine the bias state of the line of data in response to the signaling received by the controller 410 (e.g., at 485). For example, the signaling may have indicated that the MetaO field includes a first value corresponding to an “invalid” coherency state. Thus, the controller 410 may determine a device bias state for the line of data stored to the cache and may complete processing the access operation without any interaction from a host device.
[0095] At 495, data may be communicated to the processing unit 405. In some examples, the data may be communicated in response to determining the bias state of the data (e.g., at 490). The data may, in some examples, be communicated directly to the processing unit 405, while in other examples the data may be communicated to the controller 410, and the controller 410 may communicate the data to the processing unit 405. Whether the MetaO field is stored to the device memory 415, to the cache 420, or is tracked by the controller 410, utilizing the MetaO-state to track the bias state on a per-cache-line basis may improve the overall performance of the memory system while maintaining a relatively simplistic programming model.
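The device bias read path of FIG. 4 can be condensed into a short, illustrative Python sketch: probe the cache, fall back to the device memory on a miss, and resolve the bias mode from the returned MetaO state. The Cache and DeviceMemory interfaces and the META_INVALID encoding are assumptions made for illustration, not a definitive implementation of the signaling described above.

    META_INVALID = 0b00  # hypothetical "invalid" encoding, as in the earlier sketch

    class Cache:
        def __init__(self):
            self.lines = {}  # address -> (line, meta0)
        def lookup(self, address):
            # Models steps 440/445 (or 480/485): probe for the line.
            if address in self.lines:
                line, meta0 = self.lines[address]
                return True, line, meta0   # cache "hit"
            return False, None, None       # cache "miss"

    class DeviceMemory:
        def __init__(self):
            self.lines = {}  # address -> (line, meta0)
        def read(self, address):
            # Models steps 450/455: return the line with its MetaO state.
            return self.lines[address]

    def controller_read(address, cache, memory):
        hit, line, meta0 = cache.lookup(address)
        if not hit:
            line, meta0 = memory.read(address)
        # Step 460: an "invalid" MetaO state yields device bias; anything else
        # (including an absent MetaO state) is treated as host bias here.
        if meta0 != META_INVALID:
            raise NotImplementedError("host bias: resolve coherency via the host device")
        return line  # step 465: device bias, no host interaction

    memory = DeviceMemory()
    memory.lines[0x40] = ("line-data", META_INVALID)
    assert controller_read(0x40, Cache(), memory) == "line-data"

The host bias branch is deliberately left unimplemented here; FIG. 5, discussed next, covers that leg.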
[0096] FIG. 5 illustrates an example of a process flow diagram 500 that supports bias control for a memory device in accordance with examples as disclosed herein. The process flow diagram 500 may illustrate the operations of a memory system 501 and a host device 525, which may be examples of a memory system 305 and a host device 310, respectively, as described with reference to FIG. 3. The memory system 501 may include a processing unit 505, a controller 510, a device memory 515, and a cache 520. The process flow diagram 500 may illustrate utilizing a MetaO-state to track a bias state on a per-cache-line basis, which may improve the overall performance of the memory system 501 while maintaining a relatively simplistic programming model.
[0097] At 530, a coherency state for a line of data may be stored at the device memory 515. In some examples, the coherency state may be stored for a plurality of lines of data. For example, a coherency state for each line of data may be stored to the device memory 515. As described herein, the coherency state may correspond to an “invalid,” “any,” or “shared” state, and may be stored in the MetaO field associated with the respective line of data. In other examples, at 530, the coherency state for the data may be updated. That is, a coherency state for one or more lines of data may have been previously stored to the device memory 515 but, due to a change in the coherency state, the stored state may be updated. In either instance, the coherency state stored at 530 may be used for determining a bias state associated with the data.
[0098] At 535, the processing unit 505 may determine to access a line of data. For example, the processing unit 505 may determine to read a line of data from the device memory 515 or may determine to write a line of data to the device memory 515. For exemplary purposes, the line of data may be stored at the device memory 515. However, in some examples, the line of data may be stored to the cache 520 (e.g., the data may be cached), and the processing unit 505 or the controller 510 may determine whether the data is stored to the cache 520 or the device memory 515.
[0099] At 540, the processing unit 505 may transmit signaling to the controller 510. The signaling may be in response to determining to access a line of data (e.g., at 535), and the signaling may include a command (e.g., a read command, a write command), a request, or another type of communication. The signaling may indicate, to the controller 510, the type of access operation and an address (e.g., a memory address) of the device memory 515.
[0100] After 540, the controller 510 may transmit signaling to the cache 520, which is not shown in FIG. 5. The signaling may be in response to the signaling indicating the type of access operation and the address of the device memory 515 (e.g., at 540), and the signaling may include a command (e.g., a read command), a request, or another type of communication. The signaling may initiate a read on the cache 520 to determine if the associated line of data is stored to the cache 520.
[0101] After transmitting the signaling to the cache 520, the controller 510 may receive signaling from the cache 520, which is not shown in FIG. 5. The signaling may indicate whether the line of data was stored to the cache (e.g., a cache “hit” or a cache “miss”). For exemplary purposes only, such signaling may indicate a cache “miss.” In some examples, upon a cache “miss,” the controller 510 may assume operation in a host bias mode.
[0102] After the controller 510 receives the signaling, the controller 510 may transmit signaling to the device memory 515, which is not shown in FIG. 5. The signaling may be in response to the signaling indicating a cache “miss”, and the signaling may include a command (e.g., a read command), a request, or another type of communication. The signaling may initiate a read on the device memory 515.
[0103] After transmitting the signaling to the device memory 515, the controller 510 may receive signaling from the device memory 515, which is not shown in FIG. 5. The signaling may include a line of data read from the device memory 515. Additionally, the signaling may include a MetaO state associated with the line of data. As described herein, the MetaO state may indicate a first value that corresponds to an “invalid” coherency state, a second value that corresponds to a “shared” coherency state, or a third value that corresponds to an “any” state.
[0104] At 545, the controller 510 may determine a bias state of the line of data stored to the device memory 515 in response to receiving the line of data and associated MetaO state. The controller 510 may determine the bias state based on the MetaO state. For example, the MetaO field may indicate a first value that corresponds to an “invalid” coherency state and results in a device bias for the line of data. Additionally or alternatively, the MetaO field may indicate a second value that corresponds to a “shared” state or a third value that corresponds to an “any” state, both of which result in a host bias for the line of data. For exemplary purposes only, the controller 510 may determine a host bias state for the line of data stored to the first memory address.
[0105] At 550, the controller 510 may transmit signaling to the host device 525. The signaling may be in response to determining to access the device memory 515 according to a host bias mode (e.g., at 545), and the signaling may include a command (e.g., a read command), a request, or another type of communication. For example, the signaling may include a request for the host device 525 to resolve coherency of the line of data, or may include a request for the host device 525 to grant access of the line of data to the controller 510.
[0106] At 555, the host device 525 may transmit signaling to the controller 510 (or to another component of the memory system 501). The signaling may be in response to the signaling transmitted to the host device 525 (e.g., at 550), and the signaling may include a command (e.g., a second command, a read command), a request, or another type of communication. For example, the second command may resolve coherency of the data by granting direct access for the line of data to the controller 510.
[0107] In other examples, the signaling (e.g., at 540) may indicate a write command at a second memory address of the device memory 515. In such examples, the controller 510 may determine the bias state for writing data to the second memory address based on the MetaO field associated with the line of data. For example, the controller 510 may transmit signaling to the device memory 515 to read a line of data from the device memory 515 that corresponds to the second memory address. In response, the controller 510 may receive signaling that indicates the MetaO state of the line of data. If the MetaO state corresponds to an “invalid” coherency state, the controller 510 may operate according to a device bias. In such an example, the controller 510 may write the data to the second memory address of the device memory 515 without any interaction from a host device.
[0108] In other examples, the controller 510 may not transmit signaling to the cache 520 and/or device memory 515 but may instead maintain a table for tracking a MetaO state of one or more lines of data. For example, for both read operations and write operations, the controller 510 may track a MetaO value on a per-line basis, such that when the processing unit determines to access data (e.g., at 535), the controller may determine the MetaO state based on the table. Accordingly, based on the MetaO state, the controller 510 may operate in either a device bias mode or a host bias mode.
[0109] At 560, data may be communicated to the processing unit 505. In some examples, the data may be communicated in response to signaling transmitted from the host device 525 to the controller 510 (e.g., at 555). The data may, in some examples, be communicated directly to the processing unit 505, while in other examples the data may be communicated to the controller 510, and the controller 510 may communicate the data to the processing unit 505.
[0110] At 565, the processing unit 505 may determine to access a line of data (e.g., another line of data). For example, the processing unit 505 may determine to read a line of data that may be stored to the cache 520 (e.g., a cached line of data). For exemplary purposes, the line of data may be stored at the cache 520. However, in some examples, the line of data may not be stored at the cache 520 (e.g., the data may not be cached), resulting in a cache “miss.” When a cache “hit” occurs, the controller 510 may determine to access the data according to a coherency state (e.g., a MetaO field) associated with the line of data.
[0111] At 570, the processing unit 505 may transmit signaling to the controller 510. The signaling may be in response to determining to access a line of data (e.g., at 565), and the signaling may include a command (e.g., a read command, a write command), a request, or another type of communication. The signaling may indicate, to the controller 510, the type of access operation and an address (e.g., a memory address) of the device memory 515.
[0112] After 570, the controller 510 may transmit signaling to the cache 520, which is not shown in FIG. 5. The signaling may be in response to the signaling indicating the type of access operation and the address of the device memory 515 (e.g., at 570), and the signaling may include a command (e.g., a read command), a request, or another type of communication. The signaling may initiate a read on the cache 520 to determine if the associated line of data is stored to the cache 520.
[0113] After transmitting the signaling to the cache 520, the controller 510 may receive signaling from the cache 520, which is not shown in FIG. 5. The signaling may indicate whether the line of data was stored to the cache (e.g., a cache “hit” or a cache “miss”). For exemplary purposes only, such signaling may indicate a cache “hit.” Additionally, the signaling may include a line of data read from the cache 520, and may include a MetaO state associated with the line of data. As described herein, the MetaO state may indicate a first value that corresponds to an “invalid” coherency state, a second value that corresponds to a “shared” coherency state, or a third value that corresponds to an “any” state.
[0114] At 575, the controller 510 may determine a bias state of the line of data stored to the cache 520 in response to receiving the line of data and associated MetaO state. The controller 510 may determine the bias state based on the MetaO state. For example, the MetaO field may indicate a first value that corresponds to an “invalid” coherency state and results in a device bias for the line of data. Additionally or alternatively, the MetaO field may indicate a second value that corresponds to a “shared” state or a third value that corresponds to an “any” state, both of which result in a host bias for the line of data. For exemplary purposes only, the controller 510 may determine a host bias state for the line of data stored to the cache 520.
[0115] At 580, the controller 510 may transmit signaling to the host device 525. The signaling may be in response to determining to access the cache 520 according to a host bias mode (e.g., at 575), and the signaling may include a command (e.g., a read command), a request, or another type of communication. For example, the signaling may include a request for the host device 525 to resolve coherency of the line of data, or may include a request for the host device 525 to grant access of the line of data to the controller 510.
[0116] At 585, the host device 525 may transmit signaling to the controller 510 (or to another component of the memory system 501). The signaling may be in response to the signaling transmitted to the host device 525 (e.g., at 580), and the signaling may include a command (e.g., a second command, a read command), a request, or another type of communication. For example, the second command may resolve coherency of the data by granting direct access for the line of data to the controller 510.
[0117] At 590, data may be communicated to the processing unit 505. The data may, in some examples, be communicated directly to the processing unit 505, while in other examples the data may be communicated to the controller 510, and the controller 510 may communicate the data to the processing unit 505. Whether the MetaO field is stored to the device memory 515, to the cache 520, or is tracked by the controller 510, utilizing the MetaO-state to track the bias state on a per-cache-line basis may improve the overall performance of the memory system while maintaining a relatively simplistic programming model.
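For contrast with the device bias path of FIG. 4, the host bias leg of FIG. 5 (steps 545 through 560, and again at 575 through 590) resolves coherency through the host before data is returned. The HostDevice.request_access() interface below is an assumed placeholder for the request and grant signaling at 550/555 and 580/585; it is a sketch of the flow, not the described signaling itself.

    META_INVALID = 0b00  # hypothetical encoding, as in the earlier sketches

    class HostDevice:
        def request_access(self, address):
            # Models steps 550/555 (or 580/585): the controller asks the host
            # device to resolve coherency for the line, and the host replies
            # with a second command granting direct access.
            return "access-granted"

    def resolve_and_read(line, meta0, address, host):
        if meta0 == META_INVALID:
            return line                      # device bias: no host interaction
        grant = host.request_access(address) # step 550/580: signal the host device
        assert grant == "access-granted"     # step 555/585: coherency resolved
        return line                          # step 560/590: data to the processing unit

    # A "shared" or "any" MetaO value forces the round trip through the host.
    assert resolve_and_read("line-data", 0b10, 0x40, HostDevice()) == "line-data"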
[0118] FIG. 6 shows a block diagram 600 of a memory controller 620 that supports bias control for a memory device in accordance with examples as disclosed herein. The memory controller 620 may be an example of aspects of a memory controller as described with reference to FIGs. 1 through 5. The memory controller 620, or various components thereof, may be an example of means for performing various aspects of bias control for a memory device as described herein. For example, the memory controller 620 may include a coherency component 625, a memory accessing component 630, a transmission component 635, a reception component 640, a determination component 645, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).
[0119] The coherency component 625 may be configured as or otherwise support a means for storing a coherency state for a set of data stored in a memory relative to a cache of a host device.
[0120] The memory accessing component 630 may be configured as or otherwise support a means for accessing the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, where the bias state is associated with control of access for the set of data by a controller associated with the memory.
[0121] In some examples, the memory accessing component 630 may be configured as or otherwise support a means for accessing the memory independent of the host device based at least in part on the bias state. In some examples, the command includes a read command, and the memory accessing component 630 may be configured as or otherwise support a means for accessing the set of data stored to the cache based at least in part on determining that the set of data is stored to the cache associated with the memory.
[0122] In some examples, the memory accessing component 630 may be configured as or otherwise support a means for accessing the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, where the bias state is associated with control of access for the set of data by the host device. In some examples, the command includes a read command, and the memory accessing component 630 may be configured as or otherwise support a means for accessing the memory for the set of data based at least in part on receiving the second command from the host device.
[0123] In some examples, the coherency state for the set of data is stored independent from the command, and the transmission component 635 may be configured as or otherwise support a means for transmitting, to the host device, an indication of the access operation to be performed on the set of data based at least in part on determining to process the command according to the bias state. In some examples, the transmission component 635 may be configured as or otherwise support a means for transmitting the set of data received from the host device to the memory based at least in part on receiving the set of data from the host device.
[0124] In some examples, the coherency state for the set of data is stored independent from the command, and the reception component 640 may be configured as or otherwise support a means for receiving, from the host device, a second command for indicating direct access for the set of data for the access operation based at least in part on transmitting the indication of the access operation.
[0125] In some examples, the command includes a read command, and the determination component 645 may be configured as or otherwise support a means for determining that the set of data is stored to a cache associated with the memory.
[0126] In some examples, the memory is configured to store a first value that is associated with a first coherency state for the set of data, a second value that is associated with a second coherency state for the set of data, or a third value that is associated with a third coherency state for the set of data. In some examples, the first value indicates that the set of data is not coherent. In some examples, the second value and the third value indicate that the set of data is coherent, where the controller is configured to process the command according to the bias state based at least in part on the memory storing the first value for the set of data.
[0127] In some examples, the set of data is associated with a first quantity of data. In some examples, a cache line of the host device coupled with the memory is configured to store the first quantity of data. In some examples, the bias state corresponds to a first bias state. In some examples, a second bias state is associated with control of access for the set of data by the host device. In some examples, the memory is configured to store a first value that is associated with a first coherency state for the set of data, a second value that is associated with a second coherency state for the set of data, or a third value that is associated with a third coherency state for the set of data.
[0128] In some examples, the first value indicates that an invalid version of the set of data is stored to the cache of the host device. In some examples, the second value indicates that a shared version or an exclusive version of the set of data is stored to the cache of the host device. In some examples, the third value indicates that the shared version of the set of data is stored to the cache of the host device. In some examples, the second value and the third value indicate that the set of data is coherent. In some examples, the command is processed according to the bias state based at least in part on the memory storing the second value or the third value for the set of data.
[0129] In some examples, the set of data is associated with a first quantity of data. In some examples, a cache line of the host device coupled with the memory is configured to store the first quantity of data. In some examples, a first bias state is associated with control of access for the set of data by the controller. In some examples, the bias state corresponds to a second bias state.
[0130] FIG. 7 shows a flowchart illustrating a method 700 that supports bias control for a memory device in accordance with examples as disclosed herein. The operations of method 700 may be implemented by a memory controller or its components as described herein. For example, the operations of method 700 may be performed by a memory controller as described with reference to FIGs. 1 through 5 and 6. In some examples, a memory controller may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory controller may perform aspects of the described functions using special-purpose hardware.
[0131] At 705, the method may include storing a coherency state for a set of data stored in a memory relative to a cache of a host device. The operations of 705 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 705 may be performed by a coherency component 625 as described with reference to FIG. 6.
[0132] At 710, the method may include accessing the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, where the bias state is associated with control of access for the set of data by a controller associated with the memory. The operations of 710 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 710 may be performed by a memory accessing component 630 as described with reference to FIG. 6.
[0133] In some examples, an apparatus as described herein may perform a method or methods, such as the method 700. The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for storing a coherency state for a set of data stored in a memory relative to a cache of a host device and accessing the memory according to a bias state for the set of data determined based at least in part on the stored coherency state for the set of data based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, where the bias state is associated with control of access for the set of data by a controller associated with the memory.
[0134] Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for accessing the memory independent of the host device based at least in part on the bias state.
[0135] In some examples of the method 700 and the apparatus described herein, the command includes a read command and the method, apparatuses, and non-transitory computer-readable medium may include further operations, features, circuitry, logic, means, or instructions for determining that the set of data may be stored to a cache associated with the memory and accessing the set of data stored to the cache based at least in part on determining that the set of data may be stored to the cache associated with the memory.
[0136] In some examples of the method 700 and the apparatus described herein, the memory may be configured to store a first value that may be associated with a first coherency state for the set of data, a second value that may be associated with a second coherency state for the set of data, or a third value that may be associated with a third coherency state for the set of data.
[0137] In some examples of the method 700 and the apparatus described herein, the first value indicates that the set of data may not be coherent, and the second value and the third value indicate that the set of data may be coherent, where the controller may be configured to process the command according to the bias state based at least in part on the memory storing the first value for the set of data.

[0138] In some examples of the method 700 and the apparatus described herein, the set of data may be associated with a first quantity of data, and a cache line of the host device coupled with the memory may be configured to store the first quantity of data.

[0139] In some examples of the method 700 and the apparatus described herein, the bias state corresponds to a first bias state, and a second bias state may be associated with control of access for the set of data by the host device.

[0140] FIG. 8 shows a flowchart illustrating a method 800 that supports bias control for a memory device in accordance with examples as disclosed herein. The operations of method 800 may be implemented by a memory controller or its components as described herein. For example, the operations of method 800 may be performed by a memory controller as described with reference to FIGs. 1 through 5 and 6. In some examples, a memory controller may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory controller may perform aspects of the described functions using special-purpose hardware.

[0141] At 805, the method may include storing a coherency state for a set of data stored in a memory relative to a cache of a host device. The operations of 805 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 805 may be performed by a coherency component 625 as described with reference to FIG. 6.

[0142] At 810, the method may include accessing the memory according to a bias state for the set of data, determined based at least in part on the stored coherency state for the set of data, based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, where the bias state is associated with control of access for the set of data by the host device. The operations of 810 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 810 may be performed by a memory accessing component 630 as described with reference to FIG. 6.
[0143] In some examples, an apparatus as described herein may perform a method or methods, such as the method 800. The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for storing a coherency state for a set of data stored in a memory relative to a cache of a host device and accessing the memory according to a bias state for the set of data, determined based at least in part on the stored coherency state for the set of data, based at least in part on identifying a command to perform an access operation for the set of data stored in the memory, where the bias state is associated with control of access for the set of data by the host device.

[0144] In some examples of the method 800 and the apparatus described herein, the coherency state for the set of data may be stored independent from the command, and the method, apparatuses, and non-transitory computer-readable medium may include further operations, features, circuitry, logic, means, or instructions for transmitting, to the host device, an indication of the access operation to be performed on the set of data based at least in part on determining to process the command according to the bias state, and receiving, from the host device, a second command for indicating direct access for the set of data for the access operation based at least in part on transmitting the indication of the access operation.

[0145] In some examples of the method 800 and the apparatus described herein, the command includes a read command, and the method, apparatuses, and non-transitory computer-readable medium may include further operations, features, circuitry, logic, means, or instructions for accessing the memory for the set of data based at least in part on receiving the second command from the host device.

[0146] Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for transmitting the set of data received from the host device to the memory based at least in part on receiving the set of data from the host device.

[0147] In some examples of the method 800 and the apparatus described herein, the memory may be configured to store a first value that may be associated with a first coherency state for the set of data, a second value that may be associated with a second coherency state for the set of data, or a third value that may be associated with a third coherency state for the set of data.

[0148] In some examples of the method 800 and the apparatus described herein, the first value indicates that an invalid version of the set of data may be stored to the cache of the host device, the second value indicates that a shared version or an exclusive version of the set of data may be stored to the cache of the host device, and the third value indicates that the shared version of the set of data may be stored to the cache of the host device.

[0149] In some examples of the method 800 and the apparatus described herein, the second value and the third value indicate that the set of data may be coherent, and the command may be processed according to the bias state based at least in part on the memory storing the second value or the third value for the set of data.

[0150] In some examples of the method 800 and the apparatus described herein, the set of data may be associated with a first quantity of data, and a cache line of the host device coupled with the memory may be configured to store the first quantity of data.
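By way of illustration only, the three-value coherency encoding and the bias decision described in the preceding paragraphs might be sketched in C as follows; every name here (coherency_state, bias_for, and so on) is an illustrative assumption and not part of the disclosed apparatus:

    /* Three-value coherency encoding for a set of data, as described above. */
    enum coherency_state {
        COHERENCY_FIRST  = 0, /* invalid version stored to the host cache (not coherent) */
        COHERENCY_SECOND = 1, /* shared or exclusive version stored to the host cache    */
        COHERENCY_THIRD  = 2  /* shared version stored to the host cache                 */
    };

    enum bias_state { BIAS_MEMORY_CONTROLLER, BIAS_HOST };

    /* Sketch of the decision implied above: in method 700 the command is
     * processed according to the controller-side bias state when the first
     * value is stored; in method 800 it is processed according to the
     * host-side bias state when the second or third value is stored.      */
    static enum bias_state bias_for(enum coherency_state s)
    {
        return (s == COHERENCY_FIRST) ? BIAS_MEMORY_CONTROLLER : BIAS_HOST;
    }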
[0151] In some examples of the method 800 and the apparatus described herein, a first bias state may be associated with control of access for the set of data by the controller, and the bias state corresponds to a second bias state.

[0152] It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined.

[0153] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths.

[0154] The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with, or connected with, or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with, or connected with, or coupled with each other) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components, or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.

[0155] The term “coupling” refers to a condition of moving from an open-circuit relationship between components, in which signals are not presently capable of being communicated between the components over a conductive path, to a closed-circuit relationship between components, in which signals are capable of being communicated between components over the conductive path. If a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.

[0156] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other if the switch is open. If a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.
[0157] The terms “if,” “when,” “based on,” or “based at least in part on” may be used interchangeably. In some examples, if the terms “if,” “when,” “based on,” or “based at least in part on” are used to describe a conditional action, a conditional process, or a connection between portions of a process, the terms may be interchangeable.

[0158] The term “in response to” may refer to one condition or action occurring at least partially, if not fully, as a result of a previous condition or action. For example, a first condition or action may be performed, and a second condition or action may at least partially occur as a result of the previous condition or action occurring (whether directly after or after one or more other intermediate conditions or actions occurring after the first condition or action).

[0159] Additionally, the terms “directly in response to” or “in direct response to” may refer to one condition or action occurring as a direct result of a previous condition or action. In some examples, a first condition or action may be performed, and a second condition or action may occur directly as a result of the previous condition or action occurring independent of whether other conditions or actions occur. In some examples, a first condition or action may be performed, and a second condition or action may occur directly as a result of the previous condition or action occurring, such that no other intermediate conditions or actions occur between the earlier condition or action and the second condition or action, or a limited quantity of one or more intermediate steps or actions occur between the earlier condition or action and the second condition or action. Any condition or action described herein as being performed “based on,” “based at least in part on,” or “in response to” some other step, action, event, or condition may additionally or alternatively (e.g., in an alternative example) be performed “in direct response to” or “directly in response to” such other condition or action unless otherwise specified.

[0160] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or of sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.
[0161] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and may comprise a three-terminal device including a source, a drain, and a gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or a negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” if a voltage greater than or equal to the transistor’s threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” if a voltage less than the transistor’s threshold voltage is applied to the transistor gate.

[0162] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.

[0163] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a hyphen and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

[0164] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
[0165] For example, the various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

[0166] As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”

[0167] Computer-readable media includes both non-transitory computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.

[0168] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
Active and/or proactive semaphore mechanisms and thread synchronization techniques can be applied to various visual and graphical processing techniques.
CLAIMS
What is claimed is:
1. A method comprising:
executing a first thread of instructions to process a first graphical element of an image to be displayed;
executing a second thread of instructions to process a second graphical element of the image to be displayed;
placing the first thread of instructions in an inactive state in response to detection of at least one of a set of predetermined conditions related to a relationship between the first graphical element and the second graphical element;
holding the first thread of instructions in the inactive state until a message is received from a semaphore entity; and
restarting execution of the first thread of instructions in response to receiving the message from the semaphore entity.
2. The method of claim 1, wherein the set of predetermined conditions comprises unresolved dependencies.
3. The method of claim 1, wherein the set of predetermined conditions comprises a lack of a response from a semaphore indicating that a resource corresponding to the semaphore is unavailable.
4. The method of claim 1, further comprising maintaining a status indication for the first thread of instructions and the second thread of instructions.
5. The method of claim 4, wherein the status indication for each thread includes a state variable corresponding to a dependency (if any) of the associated thread.
6. The method of claim 1, wherein the first thread comprises a first set of ray tracing instructions and the first graphical element comprises a first ray segment, and wherein the second thread comprises a second set of ray tracing instructions and the second graphical element comprises a second ray segment.
7. The method of claim 1, wherein the first thread comprises a first set of video decoding instructions and the first graphical element comprises a first picture segment, and wherein the second thread comprises a second set of video decoding instructions and the second graphical element comprises a second picture segment.
8. The method of claim 7, wherein the first picture segment comprises a first macroblock and the second picture segment comprises a second macroblock.
9. The method of claim 1, wherein the first thread comprises a first set of three-dimensional rendering instructions and the first graphical element comprises a first rendering primitive, and wherein the second thread comprises a second set of three-dimensional rendering instructions and the second graphical element comprises a second rendering primitive.
10. The method of claim 9, wherein the first rendering primitive comprises one of a first point, a first line, a first triangle, and a first triangle strip, and the second rendering primitive comprises one of a second point, a second line, a second triangle, and a second triangle strip.
11. The method of claim 9, further comprising:
determining a distance value for the first rendering primitive;
determining a distance value for the second rendering primitive;
comparing the distance values for the first rendering primitive and the second rendering primitive to determine a relationship between the first rendering primitive and the second rendering primitive; and
displaying a selected one of the first rendering primitive and the second rendering primitive based on the relationship between the first rendering primitive and the second rendering primitive.
12. An apparatus comprising:
an execution circuit to receive and execute a first thread of instructions corresponding to a first graphical element of an image and a second thread of instructions corresponding to a second graphical element of the image, wherein the execution circuit transmits a semaphore request message and places the first thread in an inactive state in response to the first thread requiring a resource having an associated semaphore; and
a semaphore entity coupled to the execution circuitry to receive the semaphore request message from the execution circuitry and to selectively grant control of the target semaphore by transmitting a semaphore acknowledgement message to the execution circuitry in response to the semaphore request message, wherein the execution circuit removes the thread of instructions from the inactive state in response to receiving the semaphore acknowledgement message.
13. The apparatus of claim 12, wherein the execution circuitry comprises:
a first execution circuit to execute the first thread of instructions; and
a second execution circuit to execute the second thread of instructions.
14. The apparatus of claim 12, wherein the first thread comprises a first set of ray tracing instructions and the first graphical element comprises a first ray segment, and the second thread comprises a second set of ray tracing instructions and the second graphical element comprises a second ray segment.
15. The apparatus of claim 12, wherein the first thread comprises a first set of video decoding instructions and the first graphical element comprises a first picture segment, and the second thread comprises a second set of video decoding instructions and the second graphical element comprises a second picture segment.
16. The apparatus of claim 15, wherein the first picture segment comprises a first macroblock and the second picture segment comprises a second macroblock.
17. The apparatus of claim 12, wherein the first thread comprises a first set of three-dimensional rendering instructions and the first graphical element comprises a first rendering primitive, and the second thread comprises a second set of three-dimensional rendering instructions and the second graphical element comprises a second rendering primitive.
18. The apparatus of claim 17, wherein the first rendering primitive comprises one of a first point, a first line, a first triangle, and a first triangle strip, and the second rendering primitive comprises one of a second point, a second line, a second triangle, and a second triangle strip.
19. The apparatus of claim 12, further comprising a memory coupled to the execution circuitry to store the first thread of instructions and the second thread of instructions.
20. The apparatus of claim 12, further comprising:
at least one additional execution circuit to execute threads of instructions; and
a thread allocator coupled to the execution circuitry and the at least one additional execution circuit to allocate threads for execution.
21. The apparatus of claim 12, wherein, when the first thread of instructions is in the inactive state, execution of the instructions is stopped and the execution circuit does not poll the semaphore entity to determine a status of the semaphore request message.
22. An apparatus comprising:
means for executing a first thread of instructions to process a first graphical element in an image to be displayed;
means for executing a second thread of instructions to process a second graphical element in the image to be displayed;
means for placing the first thread of instructions in an inactive state in response to detecting at least one of a set of predetermined conditions related to a relationship between the first graphical element and the second graphical element;
means for holding the first thread of instructions in the inactive state until a message is received from a semaphore entity; and
means for restarting execution of the first thread of instructions in response to the message received from the semaphore entity.
23. The apparatus of claim 22, wherein the first thread comprises a first set of ray tracing instructions and the first graphical element comprises a first ray segment, and the second thread comprises a second set of ray tracing instructions and the second graphical element comprises a second ray segment.
24. The apparatus of claim 22, wherein the first thread comprises a first set of video decoding instructions and the first graphical element comprises a first macroblock, and the second thread comprises a second set of video decoding instructions and the second graphical element comprises a second macroblock.
25. The apparatus of claim 22, wherein the first thread comprises a first set of three-dimensional rendering instructions and the first graphical element comprises a first rendering primitive, and wherein the second thread comprises a second set of three-dimensional rendering instructions and the second graphical element comprises a second rendering primitive.
26. A system comprising:
a memory controller;
an execution circuit coupled to the memory controller to receive and execute a first thread of instructions corresponding to a first graphical element of an image and a second thread of instructions corresponding to a second graphical element of the image, wherein the execution circuitry sends a semaphore request message and places the first thread in an inactive state in response to the first thread requiring a resource having an associated semaphore; and
a semaphore entity coupled to the execution circuitry to receive the semaphore request message from the execution circuitry and to selectively grant control of the semaphore by transmitting a semaphore acknowledgement message to the execution circuitry in response to the semaphore request message, wherein the execution circuitry removes the thread of instructions from the inactive state in response to receiving the semaphore acknowledgement message.
27. The system of claim 26, wherein the execution circuitry comprises:
a first execution circuit to execute the first thread of instructions; and
a second execution circuit to execute the second thread of instructions.
28. The system of claim 26, wherein the first thread comprises a first set of ray tracing instructions and the first graphical element comprises a first ray segment, and the second thread comprises a second set of ray tracing instructions and the second graphical element comprises a second ray segment.
29. The system of claim 26, wherein the first thread comprises a first set of video decoding instructions and the first graphical element comprises a first macroblock, and the second thread comprises a second set of video decoding instructions and the second graphical element comprises a second macroblock.
30. The system of claim 29, wherein the first picture segment comprises a first macroblock and the second picture segment comprises a second macroblock.
31. The system of claim 26, wherein the first thread comprises a first set of three-dimensional rendering instructions and the first graphical element comprises a first rendering primitive, and the second thread comprises a second set of three-dimensional rendering instructions and the second graphical element comprises a second rendering primitive.
32. The system of claim 31, wherein the first rendering primitive comprises one of a first point, a first line, a first triangle, and a first triangle strip, and the second rendering primitive comprises one of a second point, a second line, a second triangle, and a second triangle strip.
33. The system of claim 26, further comprising a memory coupled to the memory controller to store the first thread of instructions and the second thread of instructions.
34. The system of claim 26, wherein, when the first thread of instructions is in the inactive state, execution of the instructions is stopped and the execution circuitry does not poll the semaphore entity to determine a status of the semaphore request message.
Visual and graphical data processing using a multi-threaded architecture

Technical Field
The present invention relates to visual and graphical data processing. More particularly, the present invention relates to the use of an active semaphore mechanism to perform visual and graphical data processing operations.

Background
A "semaphore" (also known as a "critical section" or "mutex") is a hardware and software structure that allows coordination or synchronization of operations in which multiple processes compete for shared resources (e.g., memory, files). Typically, a semaphore is a value stored at a designated location in operating system memory. Processes can check and change the semaphore value; based on that value, a process can either access the shared resource or wait for a period of time and check the semaphore again.

Semaphores in conventional computer systems are typically implemented as hardware-supported software routines using atomic "test and set" or similar types of instructions (e.g., lock, bit test, bit test and set, bit test and reset). Using such a semaphore implementation, a producer-consumer communication relationship can be established through shared (e.g., global) data and one or more semaphores. The semaphore allows the shared data to be modified by a selected one of multiple processes attempting to modify the data, thereby providing consistency of the data.

This semaphore structure is "passive," because a thread must perform polling operations to acquire a semaphore. The polling consumes processor and system resources that could otherwise be used for other purposes. Conventional semaphores can therefore lead to inefficiencies.

DRAWINGS
The invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
FIG. 1 is a block diagram of one embodiment of a multi-threaded processor architecture.
FIGs. 2a-2d are conceptual diagrams of dependency relationships in which semaphores can be used to synchronize thread execution.
FIG. 3 is a simple example scene in which a ray is traced from a source to a viewer.
FIG. 4 is a flow diagram of one embodiment of ray tracing using active semaphores.
FIG. 5 is a flow diagram of one embodiment of Z-buffered three-dimensional graphics rendering using active ordered semaphores.
FIG. 6 is a flow diagram of one embodiment of video decoding using active and/or proactive semaphores.

Detailed Description
Methods and apparatus for visual and/or graphical data processing using active semaphores are described. In the following description, numerous specific details are set forth. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the invention.

Overview of exemplary uses of active semaphores
Described herein is an architecture and related methods in which multiple parallel threads of instructions (hereinafter "threads") utilize "active" semaphores to coordinate access to shared resources. The semaphore is said to be active because the semaphore entity sends a message to the execution and/or control circuitry to cause a thread state change.
For example, a thread scheduler can place a thread in a dormant (or inactive) mode in response to an unresolved dependency, which can be indicated by a semaphore. A thread state variable corresponding to the dependency is used to indicate that the thread is in the dormant mode. When the dependency is resolved, a message is passed to a control circuit (e.g., the thread scheduler) that causes the dependency variable to be cleared. In response to the cleared dependency variable, the thread is placed in an active (or awake) state. In the active state, execution of the thread can proceed.

Continuing the example above, if a thread attempts to acquire a semaphore and cannot obtain it, the thread is placed in an inactive state. Because the thread is inactive, it does not poll the semaphore to determine when the dependency indicated by the semaphore is resolved, as is required in the prior art. The thread remains inactive until a message is received (e.g., from a semaphore entity) indicating that the dependency has been resolved. In response to the message, the thread is placed in an active state, which allows execution to proceed.

FIG. 1 is a block diagram of one embodiment of a multi-threaded processor architecture. As used herein, the label "massively multi-threaded architecture" denotes an architecture that includes multiple processors, each of which can support multi-threaded execution. In one embodiment, each processor can support one or more threads. Multi-threading on a single processor achieves high execution efficiency by allowing active threads to be executed while other threads are inactive. Threads that are inactive, pending on a semaphore, do not consume or waste processor resources. Although described with respect to a massively multi-threaded architecture, the semaphore structures and related methods can be applied to any multi-threaded architecture regardless of the number of threads supported.

The multi-threaded system 100 includes a memory hierarchy 110 that stores data and instructions to be used during execution by one or more processing cores. The memory hierarchy 110 can include dynamic random access memory (DRAM), one or more levels of instruction cache, one or more levels of data cache, and/or one or more levels of shared instruction and data cache, arranged in any manner known in the art. Thread allocator 120 is coupled to memory hierarchy 110 to receive information, such as instruction pointers and data and/or data pointers, associated with new threads. Thread allocator 120 is also coupled to the processing cores via message bus 125. In one embodiment, thread allocator 120 is responsible for managing the thread resources of the processing cores. Upon receiving a new pending thread, thread allocator 120 selects a processing core that has resources available to execute the pending thread and assigns the thread to the selected processing core. Upon completion of an existing thread on a processing core, thread allocator 120 is notified so that the thread resources on that processing core become available to future pending threads.

System 100 is illustrated with a plurality of processor cores (130, 131, 139, 150, 151, and 159), each of which includes execution circuitry with associated control circuitry. The processor cores may be identical, or each processor core may have different functionality. Any number of processor cores can be included in system 100. In one embodiment, the processor cores are configured in rows, with one row controller per row.
For example, row controller 140 can be coupled to processor cores 130, 131, and 139 via row control bus 145. Similarly, row controller 160 can be coupled to processor cores 150, 151, and 159 via row control bus 165.

The processor cores are also coupled to semaphore entity 170 via message bus 125. Semaphore entity 170 includes memory and control logic to provide semaphore functionality as described herein. In one embodiment, semaphore entity 170 interacts with the processor cores by transmitting and receiving messages, as described in more detail below.

Thread allocator 120 is also coupled to semaphore entity 170 via message bus 125. In one embodiment, the thread allocator interacts with semaphore entity 170 on behalf of threads by sending and receiving messages, as described in more detail below.

The control circuitry in each processing core can include thread scheduling circuitry to manage the states of the multiple threads executing on that processing core, and can also include execution circuitry to execute the instructions of active threads. During execution, one or more processing cores may attempt to access shared system resources. To gain control of a shared system resource, a thread, through its corresponding execution core, must obtain control of the semaphore corresponding to the shared system resource to be accessed.

In one embodiment, to obtain control of a semaphore, the requesting processing core sends a semaphore request message to semaphore entity 170 over message bus 125. After transmitting the semaphore request message, the requesting thread is placed in an inactive state in which execution and related operations (e.g., semaphore polling) are suspended.

In response to receiving the semaphore request message, semaphore entity 170 determines whether to grant control of the semaphore to the requesting thread. When the semaphore is granted, semaphore entity 170 sends a semaphore acknowledgement message to the requesting thread. In response to the semaphore acknowledgement message, the requesting thread is restored to an active state in which execution using the requested resource continues. When the thread completes its use of the shared resource, the thread sends a release semaphore message to semaphore entity 170. In response to the release semaphore message, semaphore entity 170 releases the semaphore, allowing other threads to gain access to the system resource.

In one embodiment, semaphores are supported by instructions (semaphore instructions) executed by the processing cores and by messages (semaphore messages) communicated between the processing cores and the semaphore entity, for example over message bus 125. In alternative embodiments, different and/or additional messages or instructions may be supported.

Linked-list-based semaphore entity
In one embodiment, a conventional semaphore queue is replaced with a buffer pool whose entries form a linked list for each semaphore. Thus, each semaphore can be represented by a pointer to a linked list formed from entries in the buffer pool. The linked list can be doubly linked or singly linked.

In one embodiment, a semaphore table includes a pointer for each of the supported semaphores. In one embodiment, each pointer in the semaphore table is a head pointer that indicates the head of the linked list used for the corresponding semaphore. A free pool pointer indicates the head of the buffer pool, and an unused semaphore entry holds a NULL pointer.

In one embodiment, each semaphore entry includes a release status field, an acknowledgement-suppression field, a thread identifier field, a previous pointer, and a next pointer.
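Purely as an illustrative sketch of the organization just described, the semaphore table and its linked-list entries might be declared in C as follows; the type and field names (sem_entry, sem_table, and so on) are assumptions for illustration, not part of this disclosure:

    #include <stdint.h>

    /* One buffer-pool entry; entries are linked to form each semaphore's queue. */
    struct sem_entry {
        uint32_t thread_id;       /* thread identifier field                     */
        uint8_t  release_status;  /* set when the owning thread has released     */
        uint8_t  ack_suppress;    /* acknowledgement-suppression field           */
        struct sem_entry *prev;   /* previous pointer (omitted if singly linked) */
        struct sem_entry *next;   /* next pointer                                */
    };

    /* The semaphore table holds one head pointer per supported semaphore;
     * a NULL head pointer corresponds to an empty list for that semaphore. */
    #define NUM_SEMAPHORES 64     /* illustrative count */
    struct sem_table {
        struct sem_entry *head[NUM_SEMAPHORES];
        struct sem_entry *free_pool;  /* head of the free (unused) entry list */
    };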
In alternative embodiments, other and/or different fields may be used; for example, the previous pointer may be omitted. In one embodiment, each semaphore may also include a bit (or other indicator) to indicate whether the linked list corresponding to the semaphore is empty.

Semaphore messages
An acquire semaphore message (ACQ_MSG) is used by a thread, or by the thread allocator on behalf of a thread, to request ownership of a semaphore from the semaphore entity. In one embodiment, the ACQ_MSG includes a semaphore identifier field, a thread identifier field, an auto-release field, and an acknowledgement-suppression field. The auto-release field is used for threads that have only a head dependency; that is, the thread depends on a previous thread, but no subsequent thread depends on it. The acknowledgement-suppression field is used for threads that have only a tail dependency; that is, the thread does not depend on any previous thread, but a subsequent thread depends on it. The ACQ_MSG may be issued by the thread allocator or by other control circuitry associated with the thread.

In one embodiment, upon receiving an ACQ_MSG, the semaphore entity queues a semaphore entry for the requesting thread on the target semaphore by removing the head entry from the free pool list and adding it to the tail of the selected semaphore's linked list. The fields of the semaphore entry are updated based on the information in the ACQ_MSG: the thread identifier field, the release status field, and the acknowledgement-suppression field are set from the requester's thread identifier, auto-release field, and acknowledgement-suppression field in the ACQ_MSG. If the semaphore linked list is not empty before the ACQ_MSG, the semaphore entity does not send a message. Otherwise, if the semaphore linked list is empty before the ACQ_MSG is received, one of the following actions is taken. If the acknowledgement-suppression field is not set, an ACK_MSG with the thread identifier is sent from the semaphore entity to the requesting thread over message bus 125. If the acknowledgement-suppression field is set, no ACK_MSG is sent from the semaphore entity. If the auto-release field is not set, the just-queued semaphore entry is kept in the semaphore linked list. If the auto-release field is set, the queued semaphore entry is removed from the semaphore linked list.

A release semaphore message (REL_MSG) is used by a thread to relinquish ownership of a semaphore to the semaphore entity. In one embodiment, the REL_MSG includes a semaphore identifier field and a thread identifier field. In one embodiment, the REL_MSG may only be issued by control circuitry associated with the thread having semaphore ownership, i.e., the thread whose identifier is at the head of the semaphore linked list. Upon receiving the REL_MSG, the semaphore entity removes the entry from the head of the semaphore linked list.

In another embodiment, the REL_MSG may be issued by control circuitry associated with any thread sharing the semaphore. Upon receiving the REL_MSG, the semaphore entity sets the release status field of the corresponding entry in the semaphore linked list, regardless of the entry's position in the list. If the semaphore entry is at the head of the linked list, the entry is removed from the head of the semaphore linked list, and the next entry becomes the head of the linked list.
If the next entry is not NULL, it is examined: if the new head of the linked list has its release status field set, it too is removed from the head of the semaphore linked list. In one embodiment, this recursive process continues until the head of the linked list is NULL (the semaphore queue is empty) or the head of the linked list has its release status field reset (the semaphore is waiting to be released by the thread corresponding to that entry). If the head of the linked list is not NULL and its acknowledgement-suppression field is not set, an ACK_MSG is sent by the semaphore entity to the thread identified by the thread identifier field of the entry. If the head of the linked list is not NULL and its acknowledgement-suppression field is set, no ACK_MSG is sent.

A semaphore acknowledgement message (ACK_MSG) is generated by the semaphore entity to inform a thread that the requested semaphore has been obtained. In one embodiment, the ACK_MSG includes a semaphore identifier field and a thread identifier field. The ACK_MSG is issued only by the semaphore entity and is received by the processing core executing the thread identified by the thread identifier field. Upon receiving the ACK_MSG, the receiving processing core resets the wait-semaphore state variable of the thread identified by the thread identifier field. If the thread is inactive, the thread state becomes active.

Semaphore instructions
The acquire semaphore (ACS) instruction causes an ACQ_MSG to be sent to the semaphore entity with the semaphore identifier of the requested semaphore, the thread identifier of the requesting thread, and the auto-release field reset. The thread is placed in an inactive state with its wait-semaphore status field set. The ACS instruction is paired with a subsequent release semaphore (RLS) instruction (described below). The ACS-RLS instruction pair can be used for critical-section applications.

The acquire semaphore with auto-release (ASR) instruction causes an ACQ_MSG to be sent to the semaphore entity with the semaphore identifier of the requested semaphore, the thread identifier of the requesting thread, and the auto-release field set. The thread is placed in an inactive state with its wait-semaphore status field set. In one embodiment, the ASR instruction is not paired with an RLS instruction. In one embodiment, the ASR instruction is used for threads that have only a head dependency.

The wait semaphore (WTS) instruction causes the thread's wait-semaphore status to be checked. If the status is set, the thread is placed in an inactive state; if the status is not set, the thread remains active. No message is sent to the semaphore entity in response to the WTS instruction. Use of the WTS instruction implies that the thread allocator previously used an ACQ_MSG to acquire the semaphore for the thread at thread-allocation time. If the acknowledgement-suppression field was set in the ACQ_MSG previously issued by the thread allocator, the WTS instruction is not used.
The release semaphore (RLS) instruction causes a REL_MSG to be sent to the semaphore entity with the semaphore identifier of the released semaphore and the thread identifier of the releasing thread. The releasing thread remains active. If an ACS instruction was previously issued by the releasing thread, exactly one RLS instruction is issued. If an ASR instruction was previously issued by the releasing thread, no RLS instruction is issued. If a WTS instruction was issued by the releasing thread, the WTS instruction may or may not be followed by an RLS instruction, depending on the auto-release field of the ACQ_MSG sent by the thread allocator: if the auto-release field is reset, no RLS instruction is issued; if the auto-release field is set, an RLS instruction follows the WTS instruction.

Exemplary acquisition of an active semaphore
When a thread of instructions is executed by a processor, instructions execute as their resources are available. When a resource having a semaphore is needed, such as a shared memory location, ownership of the semaphore is required to access the resource. Until a semaphore is needed, execution of the instruction thread proceeds in any manner known in the art.

In one embodiment, an acquire semaphore (ACS) instruction is executed when a semaphore is needed. The ACS instruction can be executed by the processor executing the thread of instructions requesting the semaphore. As part of, or in response to, execution of the ACS instruction, the processing core executing the thread sends an acquire semaphore message (ACQ_MSG) to the semaphore entity over the message bus. One format of the ACQ_MSG is described above; other formats can also be used.

The thread requesting the semaphore is placed in an inactive state as part of, or in response to, execution of the ACS instruction, with its wait-semaphore status field set. Because the thread is in an inactive state, no instructions in the thread are executed, including polling of the requested semaphore should the initial semaphore request be denied. Because a thread placed in an inactive state does not poll the semaphore, it consumes no processor resources or system bandwidth. For processing cores that support multi-threading, those processor resources and that system bandwidth can be used by other, active threads.

The semaphore entity receives the ACQ_MSG and places an entry with the requester's information in the linked list of the target semaphore. If the semaphore is not owned or controlled by another thread, the semaphore entry is placed at the head of the semaphore linked list, because there are no other entries. If the semaphore is owned or controlled by another thread, the semaphore entry is placed at the tail of the semaphore linked list. In one embodiment, the tail of the linked list is found by traversing the linked-list entries in the buffer pool from the head entry to the tail entry, and the new entry becomes the new tail entry. In another embodiment, the tail of the linked list is identified directly by a tail pointer for the linked list stored in the semaphore table.

When a thread completes its use of the resource corresponding to a semaphore, the thread holding the semaphore releases control of the semaphore, as described in more detail below. When a semaphore is released, the corresponding semaphore entry at the head of the semaphore linked list is removed, and the subsequent semaphore entry in the linked list becomes the head of the list.

When a semaphore entry becomes the head of the semaphore linked list, the semaphore entity checks its status fields. If the acknowledgement-suppression field is not set, an acknowledgement message (ACK_MSG) is sent from the semaphore entity to the thread associated with the semaphore entry. One format of the ACK_MSG is described above; other formats can also be used. The ACK_MSG indicates to the receiving entity that it has been granted control of the corresponding semaphore. In response to the ACK_MSG, the corresponding thread is activated. When activated, processing of the instructions in the thread resumes, and the shared resource corresponding to the semaphore can be accessed.
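As a rough illustration of the acquire/release flow just described, a thread's use of the ACS-RLS pair around a critical section might look like the following C sketch; acs(), rls(), and SEM_SHARED_BUF are hypothetical stand-ins for the hardware instructions and a semaphore identifier, not a disclosed interface:

    #include <stdint.h>

    #define SEM_SHARED_BUF 7u  /* hypothetical semaphore identifier */

    /* Stand-ins for the ACS and RLS instructions described above. */
    extern void acs(uint32_t sem_id); /* sends ACQ_MSG; thread goes inactive
                                         until the semaphore entity's ACK_MSG */
    extern void rls(uint32_t sem_id); /* sends REL_MSG; thread remains active */

    void update_shared_buffer(volatile uint32_t *buf, uint32_t value)
    {
        acs(SEM_SHARED_BUF);   /* sleep here instead of polling the semaphore */
        /* critical section: exclusive access to the shared resource */
        *buf = value;
        rls(SEM_SHARED_BUF);   /* head of the linked list advances; the next
                                  queued thread may receive an ACK_MSG        */
    }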
When the thread completes its access to the shared resource, the semaphore is released, as described in more detail below.

Exemplary release of an active semaphore
In one embodiment, a semaphore is released when a release semaphore (RLS) instruction is executed. The RLS instruction may be executed by the processor executing the thread that acquired the semaphore. A release semaphore message (REL_MSG) is sent to the semaphore entity as part of, or in response to, execution of the RLS instruction. One format for the REL_MSG is described above; other formats can also be used.

In response to the REL_MSG, the semaphore entity matches the thread identifier field of the REL_MSG against the semaphore linked list. If the corresponding semaphore entry is at the head of the linked list, the semaphore entity removes the thread's entry from the head of the list, and the subsequent entry in the linked list becomes the head entry. The semaphore can then be granted to the thread corresponding to the new head entry. If the corresponding semaphore entry is not at the head of the linked list, the semaphore entity sets the release status field of the semaphore entry.

Behavioral models
A semaphore may be classified as an associated semaphore or an ordered semaphore, based on the way the linked list is formed, and as an active semaphore or a proactive semaphore, based on the way the ACK_MSG is transmitted from the semaphore entity. Four types of semaphores can therefore be supported.

Overview of an embodiment of an associated semaphore
Associated semaphores allow concurrent threads to access a semaphore in any order. In one embodiment, the semaphore is initialized by the thread allocator at the beginning of a session with a NULL linked list (or a single bit used to indicate an empty linked list). No other messages are sent from the thread allocator to the semaphore entity. The execution circuitry executing the multiple instruction threads builds the semaphore linked list at run time.

In one embodiment, a thread requests an associated semaphore by executing an ACS or ASR instruction, and releases an associated semaphore by executing an RLS instruction. In one embodiment, each new ACQ_MSG causes the entry corresponding to the requesting thread to be placed at the tail of the semaphore linked list. This provides a first-come, first-served (FCFS) semaphore model.

Overview of an embodiment of an ordered semaphore
Ordered semaphores allow concurrent threads to access a semaphore in a predetermined order. The order is predetermined by the thread allocator at allocation time and can be application dependent. Because thread allocation is serial in practice, the thread allocator can send an ACQ_MSG to the semaphore entity for each allocated thread to construct the semaphore linked list according to that order.

A thread entering its critical section can use the WTS instruction to wait for ownership of the semaphore. Because the thread has already been placed in the semaphore linked list, the ACS and ASR instructions are not used. In one embodiment, the semaphore entity may grant control of the semaphore only in linked-list order. The threads waiting on the semaphore will receive ACK_MSGs in the order of the linked list.
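Purely as a sketch of this ordered pattern (the allocator pre-queues the ACQ_MSG, so the thread itself only waits and releases), a thread body might be written as follows; wts() and rls() are hypothetical stand-ins for the WTS and RLS instructions:

    #include <stdint.h>

    extern void wts(void);            /* go inactive if the wait-semaphore
                                         state is still set; no message sent  */
    extern void rls(uint32_t sem_id); /* send REL_MSG for the named semaphore */

    /* Ordered access: the thread allocator already sent the ACQ_MSG for this
     * thread, fixing its queue position, so no ACS/ASR instruction is used. */
    void ordered_critical_section(uint32_t sem_id, void (*work)(void))
    {
        wts();        /* becomes active when the semaphore entity sends ACK_MSG */
        work();       /* critical section, entered in allocator-defined order   */
        rls(sem_id);  /* lets the next queued thread be acknowledged            */
    }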
Overview of an embodiment of an active semaphore
As described above, with active semaphores an ACK_MSG is used to cause a thread to change from an inactive state to an active state. The semaphore entity receives one or more ACQ_MSGs from the execution circuitry of the executing threads. The semaphore entity sends only one ACK_MSG, to the execution circuitry corresponding to the thread at the head of the semaphore linked list. Upon removal of the head of the semaphore linked list, the semaphore entity checks the status of the new head of the linked list and may send a subsequent ACK_MSG to the execution circuitry corresponding to the thread at the new head of the semaphore linked list. An active semaphore can also be an associated semaphore.

Overview of an embodiment of a proactive semaphore
A proactive semaphore sends one, and only one, ACK_MSG to the thread at the head of the semaphore linked list, whether or not that thread is inactive. This applies to threads using an ordered semaphore, with the ACQ_MSG previously sent by the thread allocator, so that only one ACK_MSG is sent to each thread. Threads that use ordered semaphores may include WTS and/or RLS instructions.

For proactive semaphores, the ACK_MSG is sent automatically by the semaphore entity to the thread at the head of the semaphore linked list. In one embodiment, a hazard condition is possible between the time an entry for a thread is queued in the semaphore linked list by the thread allocator and the time the thread becomes visible to the execution circuitry. Because these two actions are initiated by the thread allocator but occur through different data paths, the relative timing of these events must be considered.

If thread execution begins before semaphore configuration, there is no hazard condition provided the thread contains a WTS instruction. Because the wait-semaphore thread state variable is set by the thread allocator, no hazard condition occurs even if the thread's WTS instruction is reached before the thread is queued in the semaphore linked list: the WTS instruction causes the thread to enter an inactive state without sending a message to the semaphore entity, and when the semaphore entity later sends an ACK_MSG to the thread, the execution circuitry makes the thread active again.

If the semaphore is configured by the thread allocator with the acknowledgement-suppression field set, however, a hazard condition can result. In this case, the thread is not placed in an inactive state. If the thread reaches its RLS instruction and sends a REL_MSG to the semaphore entity before the semaphore has been configured for the thread, the semaphore entity will not be in a condition to process the REL_MSG. To avoid this hazard condition, the thread-execution and semaphore entities can ensure that the REL_MSG does not pass the ACQ_MSG issued by the thread allocator.

Thus, in one embodiment, to avoid hazard conditions, if the acknowledgement-suppression field is not set, the thread allocator completes the thread configuration before the semaphore configuration is completed; if the acknowledgement-suppression field is set, the thread allocator completes the semaphore configuration before completing the thread configuration. Because the thread allocator allocates prepared threads serially, this serial operation ensures the necessary ordering.
Thread synchronization

Figures 2a-2d are conceptual diagrams of dependencies for which beacons can be used to synchronize thread execution. Figure 2a shows a 1:1:1 (one-to-one) dependency, which can be either a strong serial order dependency or an associated dependency. For strong serial order dependencies, a single active ordered beacon can be used. In one embodiment, the acknowledge-suppress field and the auto-release field are both reset in the ACQ_MSG sent from the thread allocator to the beacon entity to request the beacon. The instruction thread includes a WTS-RLS instruction pair to obtain and release the beacon.

For associated dependencies, a single activity associated beacon can be used. In one embodiment, the acknowledge-suppress field and the auto-release field are both reset in the ACQ_MSG sent from the execution circuitry of the executing thread to the beacon entity to request the beacon. The instruction thread includes an ACS-RLS instruction pair to obtain and release the beacon.

Figure 2b shows a 1:N (one-to-many) dependency in which one thread depends on N other threads, where the N other threads are not dependent on each other. Here, N is a positive integer which may be one or more than one. For a 1:N dependency, a single active ordered beacon can be used. In one embodiment, the thread allocator sends an ACQ_MSG for each of the N independent threads; in these ACQ_MSGs, the acknowledgment-suppression field is set and the auto-release field is reset. For the single thread that depends on the other N threads, an ACQ_MSG is also sent by the thread allocator; in this ACQ_MSG, the acknowledgment-suppression field is reset and the auto-release field is set. The N instruction threads include only the RLS instruction to release the beacon, while the single instruction thread includes a WTS-RLS instruction pair to obtain and release the beacon.

Figure 2c shows an N:1 (many-to-one) dependency in which N threads depend on a single thread but are not dependent on each other. For an N:1 dependency, a single active ordered beacon can be used. In one embodiment, the thread allocator is responsible for sending the ACQ_MSGs for both the N dependent threads and the single thread on which they depend. For the single depended-on thread, the acknowledgment-suppression field is set and the auto-release field is reset in the ACQ_MSG requesting the beacon. For the N dependent threads, the acknowledgment-suppression field is reset and the auto-release field is set. The single thread includes only the RLS instruction to release the beacon, while the N dependent threads include the WTS-RLS instruction pair to obtain and release the beacon.

Figure 2d shows an N:M (many-to-many) dependency in which N dependent threads depend on M threads. In this case, the N dependent threads are not dependent on each other, and the M depended-on threads are not dependent on each other.
The N:M dependency case is a more general case than the 1:1:1, 1:N, and N:1 cases above. For N:M dependencies, a single active ordered beacon can be used. In one embodiment, the thread allocator is responsible for sending the ACQ_MSGs for both the N dependent threads and the M depended-on threads. For the M depended-on threads, the acknowledgment-suppression field is set and the auto-release field is reset in the ACQ_MSG requesting the beacon. For the N dependent threads, the acknowledgment-suppression field is reset and the auto-release field is set. The M instruction threads include only the RLS instruction to release the beacon, while the N dependent threads include the WTS-RLS instruction pair to obtain and release the beacon.

The dependencies of Figures 2a-2d can be combined to support more complex dependencies. For example, for an N:1:N dependency, two active ordered beacons are used: the N:1 dependency is processed as described above with reference to Figure 2c, and the 1:N dependency is processed as described above with reference to Figure 2b.
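The ACQ_MSG field settings and instruction pairs described for Figures 2a-2d can be collected into a small table. The Python dictionary below is purely illustrative; the field names and instruction mnemonics follow the text, while the data layout is an assumption made for compactness (0 = field reset, 1 = field set; "wait" threads obtain and release the beacon, "signal" threads only release it).

    BEACON_CONFIG = {
        "1:1:1 (ordered)":    {"wait":   ("WTS-RLS", {"ack_suppress": 0, "auto_release": 0})},
        "1:1:1 (associated)": {"wait":   ("ACS-RLS", {"ack_suppress": 0, "auto_release": 0})},
        "1:N": {"wait":   ("WTS-RLS", {"ack_suppress": 0, "auto_release": 1}),
                "signal": ("RLS",     {"ack_suppress": 1, "auto_release": 0})},
        "N:1": {"wait":   ("WTS-RLS", {"ack_suppress": 0, "auto_release": 1}),
                "signal": ("RLS",     {"ack_suppress": 1, "auto_release": 0})},
        "N:M": {"wait":   ("WTS-RLS", {"ack_suppress": 0, "auto_release": 1}),
                "signal": ("RLS",     {"ack_suppress": 1, "auto_release": 0})},
    }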
As described in more detail below, the above beaconing mechanisms and thread synchronization techniques can be applied to many operations performed in a computer or similar electronic system. In the examples given below, various graphics processing techniques are performed using the beacon structures described herein. While graphics processing provides useful examples of using activity and active beacons, the use of these beaconing mechanisms is not limited to graphical data processing.

Beacon and ray tracing

Ray tracing is a technique for rendering 3D graphics that can support complex light interactions such as mirrors, transparent surfaces, shadows, and more. In general, ray tracing models reflections and refractions by recursively following (tracing) light rays through a scene. A ray trajectory between two reflections (or between a screen position and a first reflection, or between a screen position or a reflection and a light source) is referred to as a ray segment. A color is determined for each pixel as it is traced from the viewing perspective (e.g., the camera) to the light source. A variety of techniques for ray tracing are known in the art. See, for example, Cook, R.L. and Torrance, K.E., "A Reflectance Model for Computer Graphics," ACM Trans. on Graphics 1, 1 (January 1982), and Glassner, A. (ed.), "An Introduction to Ray Tracing," Academic Press, New York, 1989.

When rendering an image using ray tracing techniques, the image screen can be rendered by starting an eye ray at each screen position, also referred to as the destination pixel. Each eye ray traverses the three-dimensional scene space and generates one or more ray segments through reflection and refraction by the objects in the scene. Ray segments associated with different destination pixels are independent of one another, so their processing can proceed in parallel without modifying shared resources and thus without using the beacon mechanism.

Because there are many destination pixels on an image screen, the ray tracing problem maps well onto a large multi-threaded computing architecture. For a single destination pixel, however, there can be multiple ray segments. When accumulating the contributions of multiple ray segments to a single pixel, the final color can be determined as the weighted sum of each ray segment associated with that pixel. When the ray segments of a single pixel are processed by different threads on a large multi-threaded computing architecture, the final color of the pixel is a shared resource of the threads associated with that pixel, and updating it requires the use of a beacon mechanism; for example, the N:1 dependency mechanism described above can be used for ray tracing.

Using activity and/or active beacons, operations for tracing ray segments, including ray segments associated with shared pixels, can be performed in parallel. In one embodiment, the beacon identifier can be determined by hashing the destination pixel address. Pixels can share a beacon if fewer beacons are available than independent pixels; this is a performance issue, not a functional issue. In this embodiment, beacons can be used dynamically without global synchronization. Without beacons, the operations for tracing the ray segments associated with a shared pixel would have to be performed sequentially; the use of beacons with ray tracing techniques therefore allows for greater parallel processing.

Figure 3 is a simple example scenario in which light is traced from the source to the viewer. Many rays travel between the light source 300 and the viewer 330; for simplicity of description, only a few are shown in Figure 3. Light ray 340 travels directly from light source 300 to viewer 330. Because light ray 340 is neither reflected nor refracted, the pixel corresponding to light ray 340 takes the color of the light provided by light source 300. Because light is reflected by the object 310, the ray tracing calculations for the light corresponding to ray segments 350 and 355 are more complicated.

As described above, the ray tracing operations for segments 350 and 355 can be performed in parallel. Thus, the ray tracing operations for segments 350 and 355 can be performed as two threads, with the results combined to provide the pixel color obtained from the multiple ray tracing operations. As mentioned above, the coordination of the two threads can be accomplished with an activity beacon.

Figure 4 is a flow diagram of one embodiment of ray tracing using activity beacons. At 410, the ray path is determined; this can be accomplished in any manner known in the art. At 420, the components used in the ray tracing operation (e.g., hardware components, software components, etc.) determine whether multiple ray segments affect a single pixel.

If a single ray segment affects a single pixel (420), the ray path is traced (425); any ray tracing technique known in the art can be used for this single segment. The pixel color is then determined based on the result of the ray tracing operation (475). The pixel can then be displayed, printed, or otherwise rendered for viewing.

If multiple ray segments affect a single pixel (420), one or more of the multiple ray segments can be traced in parallel. Parallel tracing of multiple ray segments forms an N:1 dependency, where the pixel result depends on the results of the ray tracing operations for the N ray segments; the N:1 dependency is processed as described above. When the dependency is resolved (440), the results of the multiple ray tracing operations are accumulated (450), and the pixel color is determined from the accumulated results (475). The pixel can then be displayed, printed, or otherwise rendered for viewing.
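A brief sketch may clarify the hashed-beacon accumulation scheme. The Python fragment below uses threading.Semaphore as a stand-in for a hardware beacon, with acquire/release playing the roles of WTS and RLS; the function names, the pool size, and the trace() stub are hypothetical.

    import threading

    NUM_BEACONS = 256                     # illustrative pool size
    beacons = [threading.Semaphore(1) for _ in range(NUM_BEACONS)]
    frame = {}                            # destination pixel -> accumulated color

    def beacon_for(pixel):
        # Hash the destination pixel address to a beacon identifier; pixels
        # may share a beacon when beacons are scarce (a performance issue,
        # not a functional one).
        return beacons[hash(pixel) % NUM_BEACONS]

    def trace(segment):
        # Stand-in for any ray tracing technique known in the art; returns
        # the segment's (r, g, b) contribution and its weight in the sum.
        return (1.0, 1.0, 1.0), 0.5

    def shade_segment(pixel, segment):
        color, weight = trace(segment)    # independent work: no beacon needed
        b = beacon_for(pixel)
        b.acquire()                       # WTS: wait for beacon ownership
        try:
            r, g, bl = frame.get(pixel, (0.0, 0.0, 0.0))
            frame[pixel] = (r + weight * color[0],
                            g + weight * color[1],
                            bl + weight * color[2])
        finally:
            b.release()                   # RLS: release the beacon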
Z-buffered 3D rendering using active beacons

In Z-buffer-based 3D graphics rendering, the objects to be rendered are divided into rendering primitives such as points, lines, triangles, and the like, and the rendering primitives are projected onto the viewing screen. Primitives that project onto different screen pixels can be rendered independently. When multiple opaque primitives project onto the same screen pixel, only the primitive in front of the others (the one having the minimum distance metric, the so-called Z value, for the destination pixel) updates the screen pixel color.

The Z buffer is a screen-sized buffer that stores, pixel by pixel, the most recently updated Z value of each screen pixel. A Z-test is used to resolve occlusion: for any primitive projected onto a screen pixel, the Z value of the primitive is compared with the Z value stored in the Z buffer for that pixel. If the Z value of the primitive is less than the Z buffer value, the destination pixel color is updated with the rendered color from the primitive, and the Z buffer value is also updated. If the Z value of the primitive is equal to or greater than the Z buffer value, the destination pixel color and the corresponding Z buffer value are left unchanged.

To generate a consistent screen image, primitives projected onto the same screen pixel must be rendered in a strict order. Z-buffer-based 3D rendering can be implemented using multithreading on a large multi-threaded computing architecture; for example, independent primitives can be rendered by independent threads. The dependencies between primitives, such as Z-buffer testing and updating of common pixels by multiple primitives, can be resolved using the beaconing mechanism described above.

Figure 5 is a flow diagram of one embodiment of Z-buffered three-dimensional graphics rendering using active ordered beacons. The object to be rendered is segmented into a plurality of primitives and primitive portions based on its projection onto the viewing screen (510). A beacon is assigned to these primitives or primitive portions based on the projected screen pixel location (520).

Rendering operations are performed on these primitives or primitive portions by a plurality of instruction threads (530). The threads may be executed by one or more processors and may resolve dependencies using one or more of the beacon mechanisms described above. For example, three-dimensional rendering by different threads of multiple primitives projected onto the same screen pixel results in a 1:1:1 dependency, where the Z-test and destination color update for each thread depend on the results of the one or more threads that updated the same screen pixel before it. This 1:1:1 dependency is processed as described above.

When the dependency is resolved (540), the thread for the given primitive or primitive portion performs a Z-test and, on a successful Z-test, updates the Z value and color value for the projected pixel (550). The final rendered image is generated after the primitives are rendered (560), and can then be displayed, printed, or otherwise rendered for viewing.
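The per-pixel Z-test and update can be sketched as follows, again with a semaphore standing in for the active ordered beacon; the class, its hash function, and the beacon pool size are illustrative assumptions, not the patent's implementation.

    import threading

    class ZBuffer:
        def __init__(self, width, height):
            self.z = [[float("inf")] * width for _ in range(height)]
            self.color = [[(0, 0, 0)] * width for _ in range(height)]
            self.beacons = [threading.Semaphore(1) for _ in range(64)]

        def _beacon(self, x, y):
            # One beacon per pixel (hashed), serializing threads whose
            # primitives project onto the same screen pixel.
            return self.beacons[(y * 131 + x) % len(self.beacons)]

        def update(self, x, y, z_value, rgb):
            b = self._beacon(x, y)
            b.acquire()                       # WTS
            try:
                if z_value < self.z[y][x]:    # Z-test: primitive is in front
                    self.z[y][x] = z_value    # update the Z buffer value
                    self.color[y][x] = rgb    # update destination pixel color
                # equal or greater Z: pixel and Z buffer are left unchanged
            finally:
                b.release()                   # RLS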
Video decoding using activity beacons

In some video coding standards, such as MPEG-2, a group of one or more segments (such as macroblocks) within a picture (visual object plane, or VOP) may be decoded by separate instruction threads. In other video coding standards, such as MPEG-4, the decoding of picture segments such as macroblocks depends on the decoding of other picture segments. A picture can thus be decoded by multiple instruction threads on a multi-threaded architecture, with the dependencies between these threads resolved using the beacon mechanism described above.

MPEG-2 is described, for example, in ISO/IEC 13818, "Generic coding of moving pictures and associated audio information," and related standards published in October 2000. MPEG-4 is described, for example, in ISO/IEC 14496, "Coding of audio-visual objects," and related standards published in March 2002.

Figure 6 is a flow diagram of one embodiment of video decoding using activity and/or active beacons. The flowchart depicts the decoding process for one picture of a video sequence; the same process can be repeated to decode multiple pictures. The segments of the picture to be decoded are determined (610). These segments may, for example, be blocks, block groups, macroblocks or macroblock groups, or any other segments of any frame to be decoded.

In one embodiment, inter-segment dependencies are determined before the decoding operations are performed on the segments by different instruction threads (640). If a segment has a head dependency, i.e., its decoding depends on the decoding result of another segment (620), one or more beacons having a head dependency are configured for the instruction thread that processes the segment (625). If a segment has a tail dependency, i.e., the decoding of a subsequent segment depends on the decoding result of the segment (630), one or more beacons having a tail dependency are configured for the instruction thread that processes the segment (635).

The segments are decoded by a plurality of instruction threads (640). These threads may be executed by one or more processors and may resolve dependencies using one or more of the beacon mechanisms described above. For example, for a segment with a head dependency on N segments, the dependency can be resolved using beacons configured in the N:1 dependency pattern: the threads of the N depended-on segments are configured with a beacon having a tail dependency, and the dependent segment is configured with a beacon having a head dependency. The N:1 dependency is processed as described above.

When a segment's dependencies are resolved, the decoded segment result is generated (650). The final picture is generated from the combined segment results (660), and the final decoded picture can then be displayed, printed, or otherwise rendered for viewing.

Conclusion

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It is apparent that various modifications and changes can be made thereto without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
A method of packaging an integrated circuit (100) includes providing a lead frame having lead fingers (115), where the lead frame has a gold layer on a top surface and a bottom surface. An integrated circuit die (105) is attached to the lead frame. The gold layer is substantially removed from portions of the top surface of the lead frame. The integrated circuit die is wire bonded to the lead fingers with a plurality of wire stitches (125) subsequent to substantially removing the gold. The die is encapsulated in a mold compound (130) to form a packaged integrated circuit.
CLAIMS

1. A method of packaging an integrated circuit, comprising: attaching an integrated circuit die to a lead frame having lead fingers, said lead frame having an outermost gold layer thereon on a top surface and a bottom surface; substantially removing said gold from a portion of the top surface of said lead frame; wire bonding said integrated circuit to said lead fingers with at least one wire stitch, subsequent to substantially removing said gold; and encapsulating said die in a mold compound to form a packaged integrated circuit.

2. The method as recited in Claim 1, wherein said gold layer is removed by a plasma process using a plasma comprising Ar.

3. The method as recited in Claim 1, wherein said gold layer is substantially removed after said attaching.

4. The method as recited in Claim 1, wherein said gold layer is removed in a plasma etch process that also removes contamination on a surface of said die.

5. The method as recited in Claim 1, wherein said gold on said top surface remains under said die.

6. The method as recited in Claim 1, wherein a portion of said gold is removed prior to said attaching.

7. The method as recited in any of Claims 1-6, wherein palladium is exposed by said removal.

8. A packaged integrated circuit, comprising: a lead assembly having a lead finger, said lead assembly having a top metallization and a bottom metallization, said bottom metallization having an outermost layer of gold located thereon; an integrated circuit die affixed to said lead assembly; an outermost wire bonding surface of said lead finger that is substantially free of gold; a wire bond having a first end connected to said integrated circuit die and a second end connected to said bonding surface; and mold compound encapsulating said die and wire bond.

9. The packaged integrated circuit recited in Claim 8, wherein said bonding surface of said lead fingers comprises palladium.

10. The packaged integrated circuit recited in Claim 9, wherein said palladium is located over a layer comprising nickel.

11. The packaged integrated circuit recited in Claim 8 or 9, wherein said lead assembly has a core, and said top metallization includes gold located between said integrated circuit die and said core.

12. A packaged integrated circuit formed by the process of: providing a lead frame having a top surface and a bottom surface, said top and bottom surfaces having a gold layer thereon, said lead frame comprising a lead finger; attaching an integrated circuit die to said lead frame over said top surface; wire bonding said integrated circuit to said lead finger; encapsulating said die in a mold compound to form a packaged integrated circuit; and substantially removing said gold on said top surface prior to wire bonding.
SELECTIVE REMOVAL OF GOLD FROM A LEAD FRAME

The invention is directed, in general, to integrated circuit packaging; and, more specifically, to a method of improving adhesion of mold compound to a lead frame.

BACKGROUND

Integrated circuits are typically produced using a semiconductor wafer on which multiple copies, or die, of the circuit are simultaneously fabricated. After fabrication, the die are tested and separated in preparation for packaging. The functional die are attached to a lead frame having positions for several die. The die is electrically connected to the lead frame by wire bonding, and encapsulated with a mold compound. The packaged die are then separated from the lead frame and may be tested prior to shipment.

The lead frame metallization may include electroplated nickel (Ni) and palladium (Pd). A gold layer is typically electroplated over the palladium layer to improve the wetting of solder to the leads when the packaged die is mounted to a circuit board. The gold is typically formed on all exposed palladium surfaces of the lead frame. The described manufacturing protocol results in packaged integrated circuits that have a favorable reliability record. However, a number of packaged die failures may still occur due to imperfect adhesion of the mold compound to the lead frame that allows ingress of moisture into the package. Such failures may result in lower packaging yield or may even occur in a customer installation. Accordingly, what is needed in the art is an improved method of packaging that reduces the failure rate of packaged integrated circuits.

SUMMARY

To address the above-discussed deficiencies of the prior art, the invention provides a method of packaging an integrated circuit. An integrated circuit (IC) die is attached to a lead frame having lead fingers, with the lead frame having a top surface and a bottom surface and an outermost gold layer thereon. The outermost gold layer is substantially removed from a portion of the top surface of the lead frame. At least one wire stitch is used to wire bond the integrated circuit die to the lead fingers subsequent to substantially removing the gold. The die and wire bonds are encapsulated in a mold compound to form a packaged integrated circuit.

Another embodiment is a packaged integrated circuit including at least a portion of a lead frame having a lead finger. The lead frame has a top metallization and a bottom metallization, with the bottom metallization having an outermost layer of gold located thereon. An integrated circuit die is affixed to the lead frame portion. The lead fingers have an outermost wire bonding surface that is substantially free of gold. A wire bond connects the die to a lead finger, and mold compound encapsulates the die and wire bond.

Another embodiment is a packaged integrated circuit. The packaged circuit is formed by the process of providing a lead frame with a top and a bottom surface. The top and bottom surfaces have a gold layer thereon, and the lead frame includes a lead finger. An integrated circuit die is attached over the top surface, and is wire bonded to the lead finger. The die is encapsulated in a mold compound to form a packaged integrated circuit. The gold on the top surface is substantially removed prior to wire bonding.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a packaged integrated circuit; FIG. 2 illustrates a lead frame; FIG. 3 illustrates a portion of a lead frame; FIG. 4 illustrates a process that substantially removes an outermost gold layer; and
FIG. 5 illustrates an embodiment of a method of manufacturing an integrated circuit.

DETAILED DESCRIPTION OF THE EMBODIMENTS

FIG. 1 illustrates a packaged integrated circuit (IC) 100 manufactured according to the invention. Without limitation, a Quad Flat No-lead (QFN) package is shown as an example. While the following description assumes the use of a QFN package, those skilled in the packaging arts will recognize that the invention may be practiced with numerous package types. The packaged IC 100 includes an IC die 105 mounted to a die pad 110 with adhesive 112, and lead fingers 115. Bond pads 120 on the IC die 105 may be connected to the lead fingers 115 by fine wire stitches 125. In the following discussion, it is assumed that gold wire is used, though in some cases other materials such as copper and aluminum may be used. In addition to the lead fingers 115, a bond pad 120 may be connected to the die pad 110 by a down bond 127. The IC die 105, stitches 125 and a portion of the lead fingers 115 are encapsulated by a mold compound 130.

The lead fingers 115 and die pad 110 have a top surface 145 and a bottom surface 150. As described further below, the bottom surface 150 may have an outermost layer of gold to improve wetting of solder thereto when the packaged IC 100 is mounted to a circuit assembly at a later stage of manufacturing. In contrast, as discussed below, portions of the top surface 145 are substantially free of gold.

FIG. 2 illustrates an example of a lead frame 200. The lead frame 200 may be received from a manufacturer with an outermost layer of gold formed thereon. For reasons discussed below, the gold on a top surface of the lead frame 200 is substantially removed prior to forming the wire stitches 125. The lead frame 200 includes a plurality of individual lead assemblies 210, which each include a die pad 110 and at least one lead finger 115. The IC die 105 is typically attached to the die pad 110 prior to wire bonding. After wire bonding, the assembly including the IC die 105, the lead assembly 210, and stitches 125 is encapsulated using the mold compound 130 to form the packaged IC 100 and separated from other packaged ICs 100 formed on the same lead frame 200.

FIG. 3A illustrates a single lead assembly 210 in greater detail prior to processing to remove gold therefrom. FIG. 3B illustrates a single lead finger 115, which is typically coplanar with the die pad 110. FIG. 3C illustrates a sectional view of the lead finger 115. The lead finger 115 includes a core 310. The core 310 may be a currently existing conventional material, such as copper, copper/beryllium alloy, or phosphor bronze, or may be a future-discovered material.

A nickel layer 320 may be formed over the core 310, and palladium layers 330, 340 may be formed thereover. An outermost layer of gold 350 overlies the palladium layer 330 on the bottom surface 150 of the lead finger 115. The gold layer 350 is believed to improve wetting of solder to an exposed portion of the lead finger 115 when the packaged IC 100 is mounted to a circuit board in a later stage of manufacturing. An outermost layer of gold 360 overlies the palladium layer 340 on the top surface 145 of the lead finger 115. As described below, however, the gold layer 360 is substantially removed prior to formation of the wire stitches 125.

The Applicants have discovered that the adhesion of some mold compounds is greater to palladium than to gold.
If the adhesion is not adequate, the mold compound may delaminate from a lead finger 115 or the die pad 110. When delamination occurs, environmental moisture may find ingress into the package, and the resulting proximity of moisture to the IC die 105 may lead to early failure thereof. Thus, one method of increasing adhesion of the mold compound 130 to the lead finger 115 might be to eliminate the gold layers 350, 360 from the lead finger 115. However, this would remove the aforementioned advantageous solder wetting properties of the gold layer 350. An alternative might be to mask the top surface 145 of the lead frame 200 to prevent deposition of the gold layer 360 while allowing deposition of the gold layer 350 when the lead frame 200 is manufactured. However, a masking process may not be possible without adding unacceptable cost to the packaged IC 100 in a highly competitive industry.

An embodiment of the invention provides for the economical and substantial removal of the gold layer 360 from the lead finger 115, and in one aspect of this embodiment, the gold layer 360 is substantially removed while retaining the benefit of the gold layer 350. In this particular embodiment, a plasma process is used to effect the substantially complete removal of the gold layer 360. The mold compound 130 may then directly contact the palladium layer 340, and adhesion to the lead finger 115 may be increased. This increase may in turn reduce the failure rate of the packaged IC 100.

FIG. 4 illustrates a process 410 designed to substantially remove the gold layer 360. As used herein, "substantially remove" is defined as removing a sufficient portion of the gold layer 360 such that a wire stitch 125 to the top surface 145 forms an intermetallic region including metal from the wire stitch 125 and palladium from the palladium layer 340. Similarly, with respect to a wire bonding surface, the surface is "substantially free" of the gold layer 360 when such an intermetallic layer may be formed or when only trace amounts 420 of the gold layer 360 remain.

Portions of the top surface 145 of the die pad 110 not protected by the IC die 105 are also exposed to the process 410. Because down bonds 127 may be formed to the die pad 110, the exposed portions of the die pad 110 may be regarded as bonding surfaces in addition to the lead fingers 115. The process 410 is expected to remove the gold layer 360 from such portions in a substantially similar manner as for the lead fingers 115. Throughout this description, it is assumed that this is so even without explicit reference to the die pad 110.

The IC die 105 acts to substantially block the process 410 from removing the gold layer 360 under the IC die 105. Thus, the gold layer 360 there will remain substantially unaltered from its condition when the IC die 105 is attached to the die pad 110. The process 410 may be designed to remove the gold layer 360 at a minimum rate high enough to result in acceptable process throughput. Further, the process may be designed to remove the gold layer 360 below a maximum rate, above which the process would not be easily controllable. Contamination present on a surface of the IC die 105 may also be removed by the process 410. The etch gas may include a noble gas such as Ar, or may include O2 or N2. A bias voltage may optionally be used.

An example of a process 410 is provided below, while recognizing that the details of such a process may depend on the thickness of the gold layer 360 and on the etch tool used to implement the process. Using a Panasonic PCX-303 etch tool, an Ar plasma may be used at a pressure of about 20 Pa and 600 W of power, without bias. Under these conditions, a removal rate of about 35 nm/min results, and a process time ranging from about 20 s to about 25 s is sufficient to substantially remove a gold layer 360 with a thickness of about 3-12 nm. Those skilled in the pertinent art may determine specific process parameters based on the characteristics of the etch tool and the thickness of the gold layer 360.
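A quick arithmetic check of the quoted figures may be helpful. The short fragment below uses only the removal rate and process times given above; the variable names are, of course, illustrative.

    rate_nm_per_min = 35.0        # quoted removal rate
    layer_nm = (3.0, 12.0)        # quoted thickness range of gold layer 360

    for t_s in (20.0, 25.0):
        removed = rate_nm_per_min * t_s / 60.0
        print(f"{t_s:.0f} s removes about {removed:.1f} nm "
              f"(layer 360 is {layer_nm[0]}-{layer_nm[1]} nm)")
    # 20 s -> ~11.7 nm and 25 s -> ~14.6 nm, consistent with the quoted
    # 20-25 s window for substantially removing a 3-12 nm gold layer.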
Without limitation, an example is provided to illustrate the improved adhesion between the mold compound 130 and the lead fingers 115 and die pad 110 resulting from use of the process 410. A package qualification test was conducted per JEDEC standard J-STD-020C using 24 IC die individually assembled in QFN packages. The J-STD-020C standard sets forth moisture sensitivity levels (MSLs) that specify a minimum lifetime of a packaged part at 30°C/60% RH. A lower MSL indicates a longer period of time for which a package may be stored at these conditions. The standard provides an accelerated soak condition for each MSL to demonstrate survival for the minimum time specified by that MSL. Without removal of the gold layer 360, all of a first group of twelve packages suffered delamination of the mold compound from the die pad 110 when tested at Level 2 soak conditions of 168 hours at 85°C/60% RH. In contrast, all of a second group of twelve packages assembled using the process 410 survived the Level 2 soak conditions without failure. Thus, substantial removal of the gold layer 360 from the lead fingers 115 and die pad 110 may increase (improve) the MSL by at least one rank.

In another embodiment, the gold layer 360 may be partially or substantially removed prior to attaching the IC die 105 to the lead frame 200. Additional gold may then be removed from the top surface 145 after the IC die 105 is attached to the die pad 110. When the gold layer 360 is removed in this manner, the time that the IC die 105 is exposed to plasma is reduced relative to removing the layer entirely after the die 105 is attached. The lower exposure time may be advantageous for IC devices having particularly sensitive circuits or a thin passivation overcoat (PO) on the IC die 105.

Substantial removal of the gold layer 360 may also improve the connection strength of the stitches 125 between the bond pad 120 and the lead finger 115. A wire connection may be characterized as having a ball end and a stitch end: the ball end is generally attached to the bond pad 120, and the stitch end is generally attached to the lead finger 115. When a stitch end is formed, heat and ultrasonic energy act to form an intermetallic region including metal from the stitch 125 and metal from the top surface 145. It is believed that when the gold layer 360 is substantially removed, the stitch forms an intermetallic region including the palladium layer 340, and the strength of the wire connection is increased relative to a stitch made to the gold layer 360.

FIG. 5 illustrates an embodiment of a method of manufacturing an integrated circuit in which an outermost layer of gold is removed from a top surface of a lead frame. In a step 510, a lead frame is received from a manufacturer with an outermost layer of gold on its top and bottom surfaces, and a layer of palladium thereunder. In a step 520, an IC die is attached to a die pad of the lead frame by a suitable means such as epoxy adhesive.
In a step 530, the gold is substantially removed from a portion of the top surface of the lead frame. The removal may be performed either before or after the IC die is attached to the lead frame; in another option, the gold may be partially removed before the IC die is attached, and substantially removed thereafter. The gold may be removed using a plasma process as previously described.

In a step 540, electrical connections from the IC die to the lead fingers are made. Such connections may be made, e.g., by gold wire bonding. In a step 550, the IC die and gold wires are encapsulated in a mold compound. In a step 560, the encapsulated IC die and lead fingers are separated from the lead frame.

Those skilled in the art to which the invention relates will appreciate that other and further additions, deletions, substitutions and modifications may be made to the described embodiments without departing from the scope of the invention.
The present invention is directed to a system and method for controlling the formation of a layer of photoresist. In one illustrative embodiment, the method comprises sensing a viscosity of the photoresist material to be applied on a process layer, providing the sensed viscosity to a controller that determines, based upon the sensed viscosity, at least one parameter of a photoresist application process used to apply the photoresist material, and applying the photoresist using an application process that is comprised of said determined parameter. In one illustrative embodiment, the system is comprised of at least one sensor for sensing the viscosity of the photoresist, a controller that receives the sensed viscosity and determines, based upon the sensed viscosity, at least one parameter of the application process used to apply the photoresist, and a tool for applying the photoresist using a process that includes the determined parameter.
What is claimed:

1. A method, comprising: sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above a semiconducting substrate; providing said sensed viscosity to a controller, said controller determining, based upon said sensed viscosity, at least one parameter of a photoresist application process whereby said photoresist material is applied above said process layer; and applying said photoresist material using a photoresist application process that is comprised of said determined parameter.

2. The method of claim 1, wherein said sensing of said viscosity of said photoresist is accomplished by a viscosity sensor positioned adjacent a reservoir containing said photoresist.

3. The method of claim 1, wherein said sensing of said viscosity of said photoresist is accomplished by a viscosity sensor positioned adjacent a conduit through which said photoresist material is supplied.

4. The method of claim 1, wherein said controller determines at least one of a rotational speed of said substrate, an acceleration of said substrate, a volume of photoresist material to be applied, a duration of a spinning process used to spread the photoresist material, and a flow rate of the photoresist material.

5. The method of claim 1, wherein said controller is a stand-alone controller.

6. The method of claim 1, wherein said controller is a component of a tool wherein said photoresist material is applied on said process layer.

7. The method of claim 1, wherein determining, based upon said sensed viscosity, at least one parameter of a photoresist application process comprises calculating at least one parameter of a photoresist application process based upon said sensed viscosity.

8. The method of claim 1, wherein determining, based upon said sensed viscosity, at least one parameter of a photoresist application process comprises correlating at least one parameter of a photoresist application process with said sensed viscosity.

9. The method of claim 1, wherein determining, based upon said sensed viscosity, at least one parameter of a photoresist application process comprises modeling at least one parameter of a photoresist application process with said sensed viscosity.

10. The method of claim 1, wherein said sensed viscosity is an average viscosity that is based upon multiple viscosity readings of said photoresist material.

11. A method, comprising: positioning a semiconducting substrate on a rotational element; sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate; determining, based upon said sensed viscosity, a rotational speed for said substrate; depositing said photoresist material above said process layer; and rotating said substrate on said rotational element at said determined rotational speed.

12. The method of claim 11, wherein positioning a semiconducting substrate on a rotational element comprises positioning a semiconducting substrate on a rotational element comprised of a vacuum chuck.

13. The method of claim 11, wherein sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate comprises sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate using a Model TT-100 viscometer manufactured by Brookfield.
14. The method of claim 11, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer while said substrate is rotating.

15. The method of claim 11, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer through a dispenser arm that moves relative to said substrate.

16. The method of claim 11, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer while said substrate is stationary.

17. The method of claim 11, wherein said sensed viscosity is an average viscosity that is based upon multiple viscosity readings of said photoresist material.

18. A method, comprising: positioning a semiconducting substrate on a rotational element; sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate; determining, based upon said sensed viscosity, an acceleration for said substrate; depositing said photoresist material above said process layer; and accelerating said substrate on said rotational element at said determined acceleration.

19. The method of claim 18, wherein positioning a semiconducting substrate on a rotational element comprises positioning a semiconducting substrate on a rotational element comprised of a vacuum chuck.

20. The method of claim 18, wherein sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate comprises sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate using a Model TT-100 viscometer manufactured by Brookfield.

21. The method of claim 18, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer while said substrate is rotating.

22. The method of claim 18, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer through a dispenser arm that moves relative to said substrate.

23. The method of claim 18, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer while said substrate is stationary.

24. The method of claim 18, wherein said sensed viscosity is an average viscosity that is based upon multiple viscosity readings of said photoresist material.

25. A method, comprising: positioning a semiconducting substrate on a rotational element; sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate; determining, based upon said sensed viscosity, a volume of said photoresist material to be applied above the process layer; depositing said determined volume of said photoresist material above said process layer; and rotating said substrate on said rotational element.

26. The method of claim 25, wherein positioning a semiconducting substrate on a rotational element comprises positioning a semiconducting substrate on a rotational element comprised of a vacuum chuck.
27. The method of claim 25, wherein sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate comprises sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate using a Model TT-100 viscometer manufactured by Brookfield.

28. The method of claim 25, wherein depositing said determined volume of said photoresist material above said process layer comprises depositing said determined volume of said photoresist material above said process layer while said substrate is rotating.

29. The method of claim 25, wherein depositing said determined volume of said photoresist material above said process layer comprises depositing said determined volume of said photoresist material above said process layer through a dispenser arm that moves relative to said substrate.

30. The method of claim 25, wherein depositing said determined volume of said photoresist material above said process layer comprises depositing said determined volume of said photoresist material above said process layer while said substrate is stationary.

31. The method of claim 25, wherein said sensed viscosity is an average viscosity that is based upon multiple viscosity readings of said photoresist material.

32. A method, comprising: positioning a semiconducting substrate on a rotational element; sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate; determining, based upon said sensed viscosity, a duration for rotating said substrate; depositing said photoresist material above said process layer; and rotating said substrate on said rotational element for said determined duration.

33. The method of claim 32, wherein positioning a semiconducting substrate on a rotational element comprises positioning a semiconducting substrate on a rotational element comprised of a vacuum chuck.

34. The method of claim 32, wherein sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate comprises sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate using a Model TT-100 viscometer manufactured by Brookfield.

35. The method of claim 32, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer while said substrate is rotating.

36. The method of claim 32, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer through a dispenser arm that moves relative to said substrate.

37. The method of claim 32, wherein depositing said photoresist material above said process layer comprises depositing said photoresist material above said process layer while said substrate is stationary.

38. The method of claim 32, wherein said sensed viscosity is an average viscosity that is based upon multiple viscosity readings of said photoresist material.
39. A method, comprising: positioning a semiconducting substrate on a rotational element; sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate; determining, based upon said sensed viscosity, a flow rate for said photoresist material; applying said photoresist material at said determined flow rate to said process layer; and rotating said substrate on said rotational element.

40. The method of claim 39, wherein positioning a semiconducting substrate on a rotational element comprises positioning a semiconducting substrate on a rotational element comprised of a vacuum chuck.

41. The method of claim 39, wherein sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate comprises sensing a viscosity of a photoresist material to be applied above a surface of a process layer formed above said semiconducting substrate using a Model TT-100 viscometer manufactured by Brookfield.

42. The method of claim 39, wherein applying said photoresist material at said determined flow rate to said process layer comprises applying said photoresist material at said determined flow rate to said process layer while said substrate is rotating.

43. The method of claim 39, wherein applying said photoresist material at said determined flow rate to said process layer comprises applying said photoresist material at said determined flow rate to said process layer through a dispenser arm that moves relative to said substrate.

44. The method of claim 39, wherein applying said photoresist material at said determined flow rate to said process layer comprises applying said photoresist material at said determined flow rate to said process layer while said substrate is stationary.

45. The method of claim 39, wherein said sensed viscosity is an average viscosity that is based upon multiple viscosity readings of said photoresist material.
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention is generally related to the field of semiconductor processing, and, more particularly, to a method of forming a layer of photoresist above a process layer formed above a semiconducting substrate.

2. Description of the Related Art

In general, semiconductor devices are manufactured by forming many process layers comprised of various materials above a semiconducting substrate, and, thereafter, removing selected portions of the layers, i.e., patterning the layers. This patterning may be accomplished using known photolithography and etching processes to define the various features of the device, e.g., the gate insulation layer, the gate electrode, sidewall spacers, metal lines and contacts, etc. This forming and patterning of the process layers is typically performed layer by layer as the individual layers are formed, although multiple layers may be patterned at any given time.

Photolithography is a common process used in patterning these various layers. Photolithography typically involves the use of a product known as photoresist. In general terms, photoresist is a product that may be changed from a relatively soluble state to a relatively insoluble state by exposure to a light source. There are positive and negative photoresists currently available on the market.

In general, the photolithography process involves forming a layer of photoresist above a previously formed process layer, and exposing selected portions of the layer of photoresist to a light source to form a pattern in the photoresist that is desired to be formed in the underlying process layer. All of these steps are typically performed in well-known photolithography modules that include a section for depositing the photoresist on the wafer, e.g., a spin-coating station, a device for selectively exposing portions of the photoresist layer to a light source through a reticle, e.g., a stepper, and a section for rinsing and developing the photoresist layer after it has been selectively exposed to the light source. Thereafter, an etching process, such as a plasma etching process, is performed to remove portions of the underlying process layer that are not covered by the patterned layer of photoresist, i.e., the patterned layer of photoresist acts as a mask. After the etching process is complete, the patterned photoresist layer is removed so that additional process layers may be formed above the now patterned process layer.

The purpose of the photoresist application step is to form a thin, uniform, defect-free film of photoresist above the substrate surface. A typical layer of photoresist may have a thickness varying from approximately 1500-15,000 Å, and it is usually required to have a uniformity of ±100 Å. Typically, when resist types are switched and/or the target thickness of the layer of photoresist is changed, test wafers are run to determine the thickness of the photoresist produced by the system. In particular, when photoresist types are switched and/or when the supply of photoresist material is replenished, variations in the viscosity of the photoresist may also adversely impact the formation of layers of photoresist. For example, since the viscosity of the photoresist material is a factor in determining the thickness of a layer of photoresist, test wafers may also be run to determine the thickness of the photoresist layers produced using the new or replenished material.
All of these qualification processes are time consuming and generally contribute to less efficient semiconductor manufacturing operations. The present invention is directed to a method of solving, or at least reducing, some or all of the aforementioned problems.

SUMMARY OF THE INVENTION

The present invention is directed to a method and system for controlling the thickness of a layer of photoresist based upon the viscosity of the photoresist. In one illustrative embodiment, the method comprises sensing a viscosity of the photoresist material to be applied above a process layer, and providing the sensed viscosity to a controller that determines, based upon the sensed viscosity, at least one parameter of a photoresist application process that will be used to apply the photoresist above the process layer. The method concludes with applying the photoresist material using the application process that includes the determined parameter.

With respect to the novel system disclosed herein, in one illustrative embodiment, it is comprised of at least one sensor for sensing the viscosity of the photoresist material, and a controller that receives the sensed viscosity and determines, based upon the sensed viscosity, at least one parameter of the application process used to apply the photoresist material above a process layer. The system further comprises a tool for applying the photoresist material using an application process that includes the determined parameter.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:

FIG. 1 is a cross-sectional view of a process whereby a quantity of photoresist is positioned on a previously formed process layer;

FIG. 2 is a cross-sectional view of a layer of photoresist formed by a spin-coating process;

FIG. 3 depicts one illustrative embodiment of a system that may be employed with the present invention;

FIG. 4 depicts one illustrative embodiment of the present invention in flowchart form; and

FIG. 5 depicts another illustrative embodiment of the present invention in flowchart form.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another.
Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.

The present invention will now be described with reference to FIGS. 1-5. Although the various regions and structures of a semiconductor device are depicted in the drawings as having very precise, sharp configurations and profiles, those skilled in the art recognize that, in reality, these regions and structures are not as precise as indicated in the drawings. Additionally, the relative sizes of the various features depicted in the drawings may be exaggerated or reduced as compared to the size of those features on fabricated devices. Nevertheless, the attached drawings are included to describe and explain illustrative examples of the present invention.

In general, the present invention is directed to a method of controlling the thickness of a layer of photoresist based upon the viscosity of the photoresist. As will be readily apparent to those skilled in the art upon a complete reading of the present application, the present method is applicable to a variety of technologies, e.g., NMOS, PMOS, CMOS, etc., and it is readily applicable to a variety of devices, including, but not limited to, logic devices, memory devices, etc.

As shown in FIG. 1, a semiconducting substrate 10, having a process layer 18 formed thereabove, is positioned on a rotational element, such as a vacuum chuck 12. A vacuum may be applied, as indicated by arrow 14, to secure the substrate 10 to the vacuum chuck 12. The vacuum chuck 12 and the substrate 10 are capable of being rotated in the direction indicated by arrow 26. Photoresist from a source (not shown) is applied on the process layer 18 via dispenser arm 20. As shown in FIG. 1, a puddle of photoresist 21 is initially formed above the process layer 18. The substrate 10 may or may not be rotating at the time the puddle of photoresist 21 is deposited on the process layer 18. Thereafter, the substrate 10 is rotated such that a layer of photoresist 23, as shown in FIG. 2, is formed above the surface 19 of the process layer 18. In some photoresist equipment, a given volume of photoresist is held in a container over the process layer 18; at the appropriate time, the photoresist is released or dumped onto the process layer 18. The present invention may also be used with this type of equipment.

As will be recognized by those skilled in the art, the process layer 18 is meant to be illustrative only in that it may be comprised of any of a variety of materials, and there may be one or more intervening process layers between the process layer 18 and the substrate 10. For example, the process layer 18 may be comprised of an oxide, an oxynitride, a nitride, silicon dioxide, silicon nitride, a metal, polycrystalline silicon ("polysilicon"), or any other of a variety of materials used in semiconductor processing operations that may be patterned using photolithographic techniques. Moreover, the photoresist used with the present invention may be either a positive or negative type photoresist.

In the disclosed embodiment, the layer of photoresist 23 is formed by a spin-coating process. In many modern fabrication facilities, the spin-coating process used to form layers of photoresist also involves movement of the dispenser arm 20 (typically in a radial fashion) as the photoresist is deposited on the process layer 18.
In that situation, the substrate 10 is rotated at a relatively low speed prior to the deposition of any photoresist material 21 on the process layer 18. As the photoresist material 21 is deposited on the substrate, the dispenser arm 20 moves in a more or less radially outward fashion, beginning at the center of the substrate 10 and moving outward. This technique is used to more evenly distribute the photoresist across the surface 19 of the process layer 18.

Of course, as will be apparent to those skilled in the art upon reading the present application, the present invention is not limited to this particular spin-coating technique. For example, the present invention may be used in situations in which the dispenser arm 20 remains at the approximate center of the substrate 10. In that situation, the substrate 10 is initially rotated at a relatively low speed and photoresist is dispensed in the approximate center of the process layer 18. At that time, the rotational speed of the substrate is increased so as to disperse the photoresist. In yet another alternative embodiment, a static-type spin-coating process may be used in which the photoresist is deposited in the approximate center of a process layer 18 while the process layer 18 is stationary and, thereafter, the substrate 10 is rotated to disperse the photoresist evenly across the surface 19 of the process layer 18. If desired or required, a separate primer coating process may also be used prior to depositing the photoresist above the process layer 18.

In general, the present invention is directed to sensing the viscosity of the photoresist, determining, based upon the sensed viscosity, at least one parameter of the photoresist application process, and applying the photoresist using an application process comprised of the determined parameter. The process described herein is useful for controlling the thickness of the layer of photoresist 23. Viscosity is a quantitative measurement of the ability of a liquid to flow. Higher viscosity liquids, such as, for example, motor oils, tend to flow in a more sluggish manner, whereas lower viscosity fluids, such as water, tend to flow more readily. In general, for a desired resist film thickness, and a constant quantity of resist, as the viscosity of the photoresist increases, a greater rotational speed or acceleration of the substrate 10 will be required to achieve the desired thickness. Conversely, for photoresist with a lower viscosity, the rotational speed or acceleration of the substrate 10 may be reduced to achieve the same thickness.

A variety of parameters of the photoresist application process may be varied to compensate for changes in the viscosity of the photoresist. For example, the rotational speed (e.g., revolutions per minute) of the substrate 10 may be changed, the acceleration (e.g., revolutions per minute per minute) of the substrate 10 may be changed, the duration of the spinning process may be increased or decreased, the volume of photoresist to be applied may be increased or decreased, the flow rate of the photoresist supplied to the photolithography tool may also be varied, etc. All of the parameters may be collectively or individually varied to compensate for changes in the viscosity of the photoresist. For example, the speed and acceleration of the spinning process may both be varied in response to a sensed viscosity of the photoresist.
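To make the viscosity-to-speed relationship concrete, the following is a minimal illustrative sketch (in Python) of how a rotational speed might be computed from a sensed viscosity. The power-law model and all constants (K, GAMMA, BETA) are assumptions introduced here purely for illustration and would have to be calibrated for a particular resist and tool; the description above does not prescribe any particular model.

# Hypothetical empirical spin-coating model (not part of the disclosure):
# film thickness t = K * (viscosity ** GAMMA) / (spin_speed ** BETA),
# with K, GAMMA, BETA calibrated offline for a given resist and tool.
K = 7.9e3      # calibration constant (illustrative)
GAMMA = 0.6    # viscosity exponent (illustrative)
BETA = 0.5     # spin-speed exponent (illustrative)

def spin_speed_for_thickness(target_thickness_nm, viscosity_cp):
    """Invert the empirical model to find the rotational speed (rpm)
    needed to reach a target resist thickness at the sensed viscosity."""
    return (K * viscosity_cp ** GAMMA / target_thickness_nm) ** (1.0 / BETA)

# Higher sensed viscosity -> higher computed spin speed for the same thickness.
for eta in (20.0, 25.0, 30.0):  # sensed viscosity in centipoise
    rpm = spin_speed_for_thickness(target_thickness_nm=1000.0, viscosity_cp=eta)
    print(f"viscosity {eta:5.1f} cP -> spin speed {rpm:7.0f} rpm")

Consistent with the discussion above, the computed speed rises monotonically as the sensed viscosity rises.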
Additionally, it should be understood that the present invention should not be considered limited to modifying, adjusting or determining the above-listed parameters, as other parameters may also be modified.

FIG. 3 depicts one illustrative embodiment of a system 30 that may be used with the present invention. As shown therein, a system 30 for processing wafers 32 is comprised of a tool 34, used for forming a layer of photoresist 23, an illustrative metrology tool 38, and an automatic process controller 36. The metrology tool 38 is used to measure or sense the viscosity of the photoresist material to be applied by the tool 34. If desired, multiple metrology tools 38 may be employed at various locations in the system. Additionally, the metrology tool 38 may be used to obtain one or more readings of the viscosity of the photoresist material.

In one embodiment, the automatic process controller 36 interfaces with the metrology tool 38 to control or determine at least one parameter of the photoresist application process, e.g., the rotational speed of the substrate 10. In particular, the viscosity of the photoresist material is sensed by the metrology tool 38, via line 31, and that information is supplied to the controller 36, via line 33. Thereafter, the controller 36 determines and/or controls at least one parameter of the photoresist application process used to form the layer of photoresist 23 above the process layer 18. That is, the viscosity of the photoresist material is fed forward to the controller 36, and one or more parameters of the photoresist application process is determined or adjusted based upon this measured or determined viscosity. For example, all other things being equal, as the viscosity of the photoresist material increases, the rotational speed of the substrate 10 must also be increased to maintain the same film thickness. Conversely, as the viscosity of the photoresist material decreases, the rotational speed of the substrate 10 must also be decreased to maintain the same film thickness.

The metrology tool 38 may be any type of device capable of measuring the viscosity of the photoresist material, e.g., a Model TT-100 in-line viscometer manufactured by Brookfield of Middleborough, Mass. Moreover, the metrology tool 38 may be a stand-alone device or system, or it may be incorporated into the tool 34, or a system containing both. Additionally, the metrology tool 38 may be located adjacent a reservoir (not shown) containing the photoresist material. The metrology tool 38 may also be positioned adjacent to, or in line with, a conduit carrying the photoresist.

The tool 34 is used to form a layer of photoresist above the process layer 18. The tool 34 may be any tool useful for forming such layers of photoresist. The tool 34 may be part of a traditional photolithography tool or module, or it may be a separate tool. For example, a TEL Mark V, made by Tokyo Electron, may be employed to form a layer of photoresist.

In the illustrated embodiment, the automatic process controller 36 is a computer programmed with software to implement the functions described. Moreover, the functions described for the controller 36 may be performed by one or more controllers spread through the system. However, as will be appreciated by those of ordinary skill in the art, a hardware controller (not shown) designed to implement the particular functions may also be used.
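The feed-forward arrangement described above can be sketched in software as follows. The linear correction rule and the recipe fields are hypothetical, chosen only to show the data flow from a sensed viscosity to a determined process parameter; a database, calculation, or model may be used instead, as discussed further below.

# A minimal feed-forward sketch of the FIG. 3 arrangement (all names are
# hypothetical; no particular implementation is prescribed): the metrology
# tool supplies a sensed viscosity, the controller maps it to one or more
# application-process parameters, and the coating tool runs with them.

def controller_determine_parameters(sensed_viscosity_cp, baseline):
    """Adjust the baseline recipe in proportion to the viscosity deviation.
    A linear correction is assumed purely for illustration."""
    scale = sensed_viscosity_cp / baseline["nominal_viscosity_cp"]
    return {
        "spin_speed_rpm": baseline["spin_speed_rpm"] * scale,
        "spin_duration_s": baseline["spin_duration_s"],
        "dispense_volume_ml": baseline["dispense_volume_ml"],
    }

baseline_recipe = {
    "nominal_viscosity_cp": 25.0,
    "spin_speed_rpm": 3000.0,
    "spin_duration_s": 30.0,
    "dispense_volume_ml": 3.0,
}

sensed = 27.5  # reading fed forward from the in-line viscometer
recipe = controller_determine_parameters(sensed, baseline_recipe)
print(recipe)  # spin_speed_rpm scales to 3300.0; other parameters held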
Portions of the invention and corresponding detailed description are presented in terms of software, or algorithms and symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art. An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

An exemplary software system capable of being adapted to perform the functions of the automatic process controller 36, as described, is the Catalyst system offered by KLA Tencor, Inc. The Catalyst system uses Semiconductor Equipment and Materials International (SEMI) Computer Integrated Manufacturing (CIM) Framework compliant system technologies, and is based on the Advanced Process Control (APC) Framework. CIM (SEMI E81-0699-Provisional Specification for CIM Framework Domain Architecture) and APC (SEMI E93-0999-Provisional Specification for CIM Framework Advanced Process Control Component) specifications are publicly available from SEMI.

One illustrative embodiment of the present invention is depicted in flowchart form in FIG. 4. As shown therein, the method comprises sensing the viscosity of a photoresist material to be applied above the surface of a process layer formed above a semiconducting substrate, as indicated at block 41. The method further comprises providing the sensed viscosity to a controller that determines, based upon the sensed viscosity, at least one parameter of a photoresist application process whereby the photoresist material will be applied above the process layer, as recited at block 43. In this particular embodiment, the method concludes with the application of the photoresist material using an application process comprised of the determined parameter.

The viscosity sensing, as indicated at block 41, may be performed by one or more sensors, and it may represent a single or multiple readings of the photoresist material. Moreover, the sensed value may represent an average or some other statistical sampling of multiple readings provided by one or more sensors 38.
With respect to the step indicated at block 43, the present invention may be used to modify or determine a single process parameter, multiple process parameters, or an entire application process recipe.

Moreover, determining at least one parameter of a photoresist application process to be performed, as indicated at block 43, may be accomplished by a variety of techniques. For example, a database may be developed that correlates an entire photoresist application recipe, or a parameter of the recipe, e.g., duration, rotational speed, acceleration, etc., with the sensed viscosity of the photoresist material. Alternatively, a calculation, based upon the sensed viscosity of the photoresist material, may be made to determine an adjustment to be made to one or more of the application process parameters, e.g., duration, rotational speed, acceleration, etc. Additionally, a model may be developed that correlates one or more parameters of the application process with the sensed viscosity of the photoresist material. Other methodologies are also possible.

Referring to FIG. 5, yet another illustrative embodiment of the present invention is depicted in flowchart form. As shown therein, the present invention comprises positioning a semiconducting substrate on a rotational element, as indicated at block 40, and sensing the viscosity of a photoresist material to be applied above the surface of a process layer formed above the semiconducting substrate, as indicated at block 42. The method further comprises determining a rotational speed of the rotational element based upon the sensed viscosity of the photoresist material, as set forth in block 44, depositing the photoresist material above the process layer, as recited at block 46, and rotating the substrate on the rotational element at the determined rotational speed, as set forth in block 48.

The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
A method and apparatus to reduce the probability of programmable logic device (PLD) failure due to single event upset (SEU) of configuration memory. A first portion of configuration memory cells is initially programmed with configuration data, leaving a second portion of configuration memory cells un-programmed. The programmed and un-programmed configuration memory cells are grouped into voting groups, where each un-programmed configuration memory cell of each voting group is programmed with the identical configuration data as contained within the originally programmed configuration memory cell of each voting group. The logic values of each configuration memory cell of each voting group are monitored by voting circuits, which enforce a triple modular redundancy (TMR) validation policy. The logical validation results are then applied to control points to mitigate PLD configuration memory errors caused by anomalous events such as neutron-induced SEUs.
What is claimed is:

1. An integrated circuit, comprising:
a plurality of configuration memory cells;
a plurality of voting circuits, each of the voting circuits being coupled to a different subset of the configuration memory cells, each of the configuration memory cells being coupled to a plurality of the voting circuits; and
a plurality of control points, each control point being coupled to an output terminal of one of the voting circuits.

2. The integrated circuit of claim 1, wherein each of the voting circuits is coupled to an odd number of the configuration memory cells.

3. The integrated circuit of claim 2, wherein each of the voting circuits is coupled to three of the configuration memory cells.

4. The integrated circuit of claim 1, wherein each of the configuration memory cells is coupled to an odd number of the voting circuits.

5. The integrated circuit of claim 4, wherein each of the configuration memory cells is coupled to three of the voting circuits.

6. The integrated circuit of claim 1, wherein each voting circuit is coupled to a neighboring subset of the configuration memory cells.

7. The integrated circuit of claim 1, wherein each voting circuit comprises a simple majority algorithm, wherein a majority vote is required between the contents of the configuration memory cells coupled to the voting circuit to validate a logic state presented to one of the control points.
FIELD OF THE INVENTION

The present invention generally relates to programmable logic devices (PLDs), and more particularly to single event upset (SEU) error mitigation of configuration memory within those PLDs.

BACKGROUND

PLDs are a well-known type of integrated circuit that may be programmed to perform specified logic functions. One type of PLD, the Field Programmable Gate Array (FPGA), typically includes an array of programmable tiles. These programmable tiles can include, for example, Input/Output Blocks (IOBs), Configurable Logic Blocks (CLBs), dedicated Random Access Memory Blocks (BRAM), multipliers, Digital Signal Processing blocks (DSPs), processors, clock managers, Delay Lock Loops (DLLs), Multi-Gigabit Transceivers (MGTs) and so forth.

Each programmable tile typically includes both programmable interconnect and programmable logic. The programmable interconnect typically includes a large number of interconnect lines of varying lengths interconnected by Programmable Interconnect Points (PIPs). The programmable logic implements the logic of a user design using programmable elements that may include, for example, function generators, registers, arithmetic logic, and so forth.

The programmable interconnect and the programmable logic are typically programmed by loading a stream of configuration data into internal configuration memory cells that define how the programmable elements are configured. The configuration data may be read from memory (e.g., from an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.

Another type of PLD is the Complex Programmable Logic Device, or CPLD. A CPLD includes two or more "function blocks" connected together and to Input/Output (I/O) resources by an interconnect switch matrix. Each function block of the CPLD includes a two-level AND/OR structure similar to those used in Programmable Logic Arrays (PLAs) and Programmable Array Logic (PAL) devices. In some CPLDs, configuration data is stored on-chip in non-volatile memory. In other CPLDs, configuration data is stored on-chip in non-volatile memory, then downloaded to volatile memory as part of an initial configuration sequence.

For all of these PLDs, the functionality of the device is controlled by data bits provided to the device for that purpose. The data bits can be stored in volatile memory (e.g., static memory cells, as in FPGAs and some CPLDs), in non-volatile memory (e.g., FLASH memory, as in some CPLDs), or in any other type of memory cell.

Some PLDs, such as the Xilinx Virtex(R) FPGA, can be programmed to incorporate blocks with pre-designed functionalities, i.e., "cores". A core can include a predetermined set of configuration bits that program the FPGA to perform one or more functions. Alternatively, a core can include source code or schematics that describe the logic and connectivity of a design. Typical cores can provide, but are not limited to, DSP functions, memories, storage elements, and math functions. Some cores include an optimally floor planned layout targeted to a specific family of FPGAs. Cores can also be parameterizable, i.e., allowing the user to enter parameters to activate or change certain core functionality.

Programmable logic devices can be susceptible to functional failure under certain circumstances. The configuration memory cells, for example, that are used to program the PLD's functionality can inadvertently "flip", or in other words, change their logic state.
Such failures may be caused by single event upsets (SEUs), or other radiation-induced errors, which can lead to functional failure.

With the ever-decreasing geometry size of semiconductor-based configuration memory cells, the configuration memory cells are becoming more susceptible to SEU failures. In particular, neutron-induced SEUs have a greater impact as geometries of the memory cells are reduced, since the relative size of the neutron with respect to the configuration memory cell grows. As such, neutrons that are incident to a silicon nucleus of a semiconductor device within a particular configuration memory cell may induce an alpha particle to be released by the semiconductor device. Once the alpha particle is released, its ionic polarization may be such that the logic state of the semiconductor device is "flipped", or reversed, potentially causing soft failure (i.e., recoverable via reconfiguration), or catastrophic failure. Efforts continue, therefore, to mitigate such SEU-based failures.

SUMMARY

To overcome limitations in the prior art, and to overcome other limitations that will become apparent upon reading and understanding the present specification, various embodiments of the present invention include apparatus and methods for a programmable logic device that provide error mitigation through usage of unused configuration memory in a given programmable logic device (PLD).

In accordance with one embodiment of the invention, an integrated circuit (IC) comprises a plurality of configuration memory cells and a plurality of voting circuits. Each of the voting circuits is coupled to a different subset of the configuration memory cells and each of the configuration memory cells is coupled to a plurality of the voting circuits. The IC further comprises a plurality of control points, where each control point is coupled to an output terminal of one of the voting circuits.

In accordance with another embodiment of the invention, a method of reliably configuring a programmable logic device (PLD) comprises storing configuration data into a first portion of configuration memory cells existent within the PLD to define a logic function, identifying a second portion of configuration memory cells that are void of the configuration data that defines the logic function, and utilizing the second portion of configuration memory cells as redundant memory cells.
The redundant memory cells are used to verify a validity of the configuration data stored within the first portion of configuration memory cells.

In accordance with another embodiment of the invention, a method of mitigating configuration memory cell errors in a programmable logic device (PLD) comprises programming a first set of configuration memory cells to implement a logic function, identifying a second set of configuration memory cells that are not programmed for the logic function, programming a portion of the second set of configuration memory cells to reflect a logic state of a corresponding portion of configuration memory cells in the first set, comparing logic values of the portion of the second set of configuration memory cells to logic values of the corresponding portion of configuration memory cells in the first set, selecting a logic value from the compared logic values that conforms to a redundancy rule, and applying the selected logic value to a control point within the PLD to implement the logic function.

BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects and advantages of the invention will become apparent upon review of the following detailed description and upon reference to the drawings, in which:

FIG. 1 illustrates an integrated circuit (IC) that exemplifies a Field Programmable Gate Array (FPGA) architecture;
FIG. 2 illustrates an exemplary memory cell contained within a programmable logic device (PLD);
FIG. 3 illustrates an exemplary memory cell voting circuit;
FIG. 4 illustrates an alternative memory cell voting circuit; and
FIG. 5 illustrates an exemplary flow diagram of a method used to reliably configure a PLD.

DETAILED DESCRIPTION

Generally, various embodiments of the present invention provide apparatus and methods of providing an error mitigation scheme to reduce the effects of single event upset (SEU) induced errors within programmable logic devices (PLDs). In particular, each PLD contains reconfigurable logic and interconnect resources whose functions are controlled via static memory cells. SEU upsets may occur which, in some instances, are effective to change the logic value contained within the memory cells. Since these memory cells can directly affect the particular logic function implemented by the reconfigurable logic and interconnect resources, functional failures within the PLD may be observed.

The memory cells are often configured into configuration data arrays, whose contents may be accessed via address and data busses. Thus, for a given address, a data word containing multiple configuration bits intended for a portion of the configuration data array may be written to the configuration data array. Incrementing the address bus throughout the configuration data array address space and changing each data word in accordance with each address change is effective to completely configure a PLD's configuration data array to implement a particular logic function.

Unlike some other memory configurations, memory cells that define a logic function within a PLD are generally sensed in a continuous fashion. As such, if a memory cell undergoes a single event upset that is effective to "flip", i.e., change, the logic state of the memory cell, then the logic function associated with that particular memory cell may be immediately affected, which may cause soft or catastrophic failure.

Statistically, only a small percentage of the available configuration memory space is typically utilized for a given logic design within a PLD.
For example, approximately 10% of the available configuration memory cells within a PLD may be utilized in a design implementation, which provides for an appreciable number of configuration memory cells that may be available for other use. As such, various embodiments in accordance with the present invention provide an apparatus and various methods to utilize the unused configuration memory cells to mitigate the effects of failures caused by SEUs, or any other mechanism that causes the logic state of a configured memory cell to flip.

As noted above, advanced integrated circuits (ICs), such as FPGAs, can include several different types of programmable logic blocks in the array. For example, FIG. 1 illustrates an IC that exemplifies FPGA architecture 100, including a large number of different programmable tiles such as Multi-Gigabit Transceivers (MGTs) 101, CLBs 102, BRAMs 103, IOBs 104, configuration and clocking logic CONFIG/CLOCKS 105, DSPs 106, specialized I/O 107, including configuration ports and clock ports, and other programmable logic 108, such as digital clock managers, analog-to-digital converters, system monitoring logic, and so forth. Some FPGAs also include dedicated processor blocks PROC 110, in which specific CPU related functionality may be utilized that is separate from the FPGA fabric.

In some FPGAs, each programmable tile includes programmable interconnect element INT 111 having standardized connections to and from a corresponding interconnect element in each adjacent tile. Therefore, the programmable interconnect elements taken together implement the programmable interconnect structure for the illustrated FPGA. INT 111 also includes the connections to and from the programmable logic element within the same tile, as shown by the examples of blocks 102 and 104.

For example, a CLB 102 may include a Configurable Logic Element CLE 112 that may be programmed to implement user logic plus a single programmable interconnect element INT 111. A BRAM 103 can include a BRAM logic element (BRL 113) in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile (as measured from right to left of FIG. 1). In the pictured embodiment, a BRAM tile has the same height as four CLBs, but other numbers (e.g., five) can also be used. A DSP tile 106 can include a DSP logic element (DSPL 114) in addition to an appropriate number of programmable interconnect elements. An IOB 104 may include, for example, two instances of an input/output logic element IOL 115 in addition to one instance of the programmable interconnect element INT 111.

As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 115 are manufactured using metal layers above the various illustrated logic blocks, and typically are not confined to the area of the input/output logic element 115.

In the pictured embodiment, a columnar area near the center of the die (shown shaded in FIG. 1) is used for configuration, clock, and other control logic. Horizontal areas 109 extending from this column are used to distribute the clocks and configuration signals across the breadth of the FPGA.

Some FPGAs utilizing the architecture illustrated in FIG. 1 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic. For example, the processor block PROC 110 shown in FIG. 1 may span several columns of CLBs and BRAMs.

Note that FIG. 1 is intended to illustrate only an exemplary FPGA architecture. The number of logic blocks in a column, the relative width of the columns, the number and order of columns, the type of logic blocks included in the columns, the relative size of the logic blocks, and the interconnect/logic implementations 102, 103, and 104 are purely exemplary. For example, in an actual FPGA more than one adjacent column of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic.

As discussed above, configuration of a PLD may be performed via memory cells that store configuration control data, where each memory cell stores a single bit of configuration control data. The configuration control data may be used to control the conductivity state of pass transistors in multiplexers, to serve as logic values in lookup tables, or to perform some other configuration function. The configuration control data bits can be stored in volatile memory (e.g., static memory cells, as in FPGAs and some CPLDs), in non-volatile memory (e.g., FLASH memory, as in some CPLDs), or in any other type of memory cell.

Turning to FIG. 2, a schematic diagram of one embodiment of a configuration memory cell is exemplified. Pass gates 202 and 208 determine access to configuration memory cell 214. In particular, pass gate 202 accepts signal ADDRESS, whose logic high level is effective to render pass gate 202 conductive if configuration memory cell 214 is addressed for either write or read access. Pass gate 208 may provide optional read/write enabling of configuration memory cell 214 and is similarly rendered conductive by signal RE/WE, whenever configuration memory cell 214 is to be read from or written to.

In operation, configuration memory cell 214 is programmed with signal DATA when both ADDRESS and WE signals are active, e.g., asserted to a logic high level. Node 210 then receives signal DATA and through operation of the inverter network of inverters 204 and 206, the logic value of signal DATA is maintained at node 210. The logic value at node 210 is then routed to FPGA control point 216, which is effective to configure a portion of the FPGA for a particular logic function. It should be noted that other memory configurations, such as capacitive storage, may be used to implement configuration memory cell 214 as a static configuration memory cell.

Whether configuration memory cell 214 is to be utilized for a particular design is generally determined by the PLD design tool. The design tools generally accept hardware description language (HDL) definitions, or schematics, which are then used to generate net lists to indicate point-to-point connectivity of reconfigurable logic and interconnect resources. From the net lists, additional tools will map the design to logic, determine the optimal placement of the logic, and then route signal paths between the logic. From this "place and route" operation, a configuration bit file is generated, which may be used to program the PLD.

A computing station, for example, may be used to execute core synthesis tools to aid in the minimization and optimization of the equations extracted from the HDL files and/or schematics. A compiler, for example, may parse through the HDL behavioral source code to extract known functions, e.g., arithmetic functions, multiplexers, memories, etc.
An optimization block, timing block, and an integrated software environment (ISE) block may then interoperate to formulate a design that is substantially dependent upon the intended PLD target's architecture and context. The context, for example, may influence inter-function optimizations such as replication, merging, re-timing, and pipelining. The context may be defined by the timing requirements and topology of the design.

As discussed above, however, configuration memory cell 214 may be susceptible to SEUs, which may be induced by a neutron strike 212, causing emission of alpha particles from within the semiconductor substrate of configuration memory cell 214. The emission of the alpha particles may then be effective to flip the logic state at node 210 to its alternate binary logic state. That is to say, for example, that a logic "1" at node 210 could be flipped to a logic "0" in response to the SEU. Conversely, a logic "0" could be flipped to a logic "1". It can be seen, therefore, that since the logic state of node 210 may be continuously sensed by FPGA control point 216, any logic reversal at node 210 could cause a failure within its respective FPGA.

Turning to FIG. 3, an alternative embodiment of a configuration memory cell is exemplified, whereby a voting circuit is employed to make use of redundant memory cells that may be available to provide a high reliability mode of operation. In particular, along with configuration memory cell 314, redundant configuration memory cells 320 and 322 are utilized, such that if an incorrect logic state of configuration memory cell 314 is sensed, the logic states of configuration memory cells 320 and 322 are used to mitigate the error.

In one embodiment, the logic values contained within redundant configuration memory cells 320 and 322 are written by the design tool that is used to program configuration memory cell 314. In particular, signal DATA that is used to program memory cell 314 is also used to program the logic value contained within redundant configuration memory cells 320 and 322 once they have been identified as being unused for a particular design. Once programmed, configuration memory cells 314, 320, and 322 combine to form a triple modular redundancy (TMR) configuration, which is then used by voting circuit 318 to ensure that the correct logic value is provided to FPGA control point 316 even when one of the three configuration memory cells is subjected to an SEU. In particular, voting circuit 318 compares the logic state at configuration memory cell 314 with the logic states of redundant configuration memory cells 320 and 322. By invoking a TMR algorithm, voting circuit 318 requires a majority vote between configuration memory cells 314, 320, and 322 in order to validate the logic state presented to FPGA control point 316.
Thus, if any one of the three configuration memory cells 314, 320, and 322 changes state (flips) due to an SEU, the value in the flipped memory cell is ignored, and the value stored in the other two memory cells is passed to FPGA control point 316.

Voting circuit 318 implements the majority-rule Boolean function of equation (1),

D = (A & B) | (B & C) | (C & A),    (1)

where D is the output of the voting circuit, A is the logic value contained within the first configuration memory cell, B is the logic value contained within the second configuration memory cell, C is the logic value contained within the third configuration memory cell, "&" is the logical AND operator, and "|" is the logical OR operator.

TABLE 1
A  B  C  D
0  0  0  0
0  0  1  0
0  1  0  0
0  1  1  1
1  0  0  0
1  0  1  1
1  1  0  1
1  1  1  1

Thus, given a majority number of logic low valued configuration memory cells, i.e., 2 or more out of 3, an output logic value of "0" will be selected by the respective voting circuit as illustrated in Table 1. On the other hand, given a majority of logic high valued memory cells, i.e., 2 or more out of 3, an output logic value of "1" will be selected by the respective voting circuit as similarly illustrated in Table 1.

In some embodiments, voting circuit 318 also includes logic to correct the flipped value of one of the memory cells based on the value stored in the majority of the remaining memory cells. Such correction logic is well known in TMR circuitry.

The number of configuration memory cells actually utilized by a given design implementation may include, for example, only 10% of the available configuration memory cells, which leaves a large percentage of configuration memory cells as redundant configuration memory cells. This condition is due to the fact that most configuration bits control interconnect multiplexers, whose control values are "don't care", i.e., unused. Thus, FIG. 3 may depict a situation whereby configuration memory cell 314 impacts a given design implementation of a PLD, and configuration memory cells 320 and 322 constitute two of the many configuration memory cells that do not impact the design implementation. In such an instance, the reliability of the design implementation may be enhanced through the utilization of configuration memory cells 320 and 322 as redundant memory elements used to provide error mitigation of the flipped logic state of configuration memory cell 314.

Turning to FIG. 4, a voting control circuit is exemplified, which provides voting control for every three configuration memory cells as discussed above in relation to FIG. 3. It should be noted, however, that a voting circuit for any odd number of configuration memory cells, e.g., 3, 5, 7, 9, etc., may be used to achieve enhanced results as discussed in more detail below.

As can be seen, voter circuits 418-430 each receive logic values from three configuration memory cells. Furthermore, two of the three logic values received by one voter circuit are also received by a neighboring voter circuit. For example, voter circuit 420 receives the logic values associated with configuration memory cells 402 C0, 404 C1, and 406 C2, and provides validated logic signal C1' to the FPGA control point affecting logic element 434.
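Referring back to equation (1) and Table 1, the following short sketch (in Python, with hypothetical indexing; the actual voting circuits are combinational hardware) reproduces the majority function and demonstrates that a single flipped cell in an identically programmed triplet is out-voted:

from itertools import product

def majority3(a, b, c):
    # Equation (1): D = (A & B) | (B & C) | (C & A)
    return (a & b) | (b & c) | (c & a)

# Reproduce Table 1.
for a, b, c in product((0, 1), repeat=3):
    print(a, b, c, "->", majority3(a, b, c))

# First-embodiment arrangement of FIG. 4: C1 and C4 carry the design,
# and each of their neighbors is programmed with the identical value.
cells = [1, 1, 1, 0, 0, 0]          # C0..C5

def voter(c, i):
    return majority3(c[i - 1], c[i], c[i + 1])

for flipped in range(len(cells)):    # inject an SEU into each cell in turn
    upset = list(cells)
    upset[flipped] ^= 1
    assert voter(upset, 1) == 1      # C1' remains validated
    assert voter(upset, 4) == 0      # C4' remains validated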
Neighboring voter circuit 422 receives the logic values associated with configuration memory cells 404 C1, 406 C2, and 408 C3, and provides validated logic signal C2' to the FPGA control point affecting logic element 436. Logic values for logic elements 432 and 438-444 may be similarly validated. For the purposes of the present exemplary description, "neighboring voter circuits" can be physically adjacent circuits (i.e., physically located next to one another), or conceptually adjacent circuits (e.g., utilizing shared memory cells programmed by bits appearing in adjacent positions in the configuration bit stream), or both.

In a first embodiment, a particular design implementation may only utilize every third configuration memory cell. That is to say, for example, that for a given design implementation, out of configuration memory cells 402-414 (C0-C6), only configuration memory cells 404 C1 and 410 C4 may be utilized. In such an instance, configuration memory cells C0, C2, C3, and C5 are considered redundant, since they are not needed to implement the design.

The neighboring configuration memory cells C0 and C2 may then be programmed to contain the same logic value as configuration memory cell C1 to form a configuration memory cell triplet input to voter circuit 420. Neighboring configuration memory cells C3 and C5 may also be programmed to contain the same logic value as configuration memory cell C4 to form another configuration memory cell triplet input to voter circuit 426. Such a design is effective to render a TMR mode of operation, such that no further action is required for validation of logic signals C1' and C4'.

In a second embodiment, a particular design implementation may not necessarily utilize only every third configuration memory cell. In such an instance, therefore, certain neighboring configuration memory cells may be utilized in the same design and are thus not available as redundant configuration memory cells. Based upon their particular logic value and their neighbor's logic values, however, they may nevertheless be utilized in a TMR validated design implementation.

For example, referring again to FIG. 4, configuration memory cells 402 C0, 404 C1, and 406 C2 contribute to provide validated logic signal C1' to control logic element 434. If C1 404 is at a logic high value, then for proper TMR operation, C0 402 and C2 406 should also be at a logic high value. For a given design implementation, however, validated logic signal C2' may also be required to control logic element 436. The configuration memory cell triplet that contributes to the logic value of C2' includes configuration memory cells C1 404, C2 406, and C3 408. Configuration memory cells C1 404 and C2 406 should be at a logic high level for proper TMR operation in providing validated logic signal C1'. Thus, if the required logic value of C2' is also a logic high value and the logic value of configuration memory cell C3 is a logic high value, then C2' is a validated logic signal and as such, may continue to be used to control logic element 436.

Should the logic values of C2' and C3 not match up with the logic values of C1 and C2, however, then TMR validation of output C2' is not possible. In such an instance, a third embodiment can be used, in which a software algorithm is invoked to modify, or re-route, the original design implementation to "free up" a configuration memory cell.
For example, given that C1' controls logic element 434, the design implementation may be altered to require, for example, that C4' and logic element 440, instead of C2' and logic element 436, be utilized in the design implementation so that proper TMR validation may be established as discussed above.

Given that only 10% of configuration memory is utilized by the exemplary design, however, a fairly low probability exists that a given configuration memory cell is utilized, such that only a small percentage of the design would need to be re-routed to free up configuration memory cells for TMR validation. For example, given that any one configuration memory cell is used in defining the exemplary design, then the probability of a neighboring configuration memory cell being used for the exemplary design is 1/10.

Furthermore, even when utilized, the probability that the logic value of the neighboring configuration memory cell is different is 1/2. Thus, the probability of the neighboring configuration memory cell being used for the exemplary design and having a different logic value is 1/10 * 1/2 = 1/20. Similarly, the probability of a third configuration memory cell being both used in the exemplary design and different is also 1/20. Thus, if TMR is needed for every configuration bit used in the design, then re-routing is only required in 1/20 + 1/20 = 1/10, or 10%, of the original design implementation.

In a fourth embodiment, TMR validated operation may not be required for all memory cells in the design. In such instances, it may be adequate to reduce the probability of the design being affected by an SEU. In this embodiment, TMR is applied where redundant memory cells are available, and voting circuits are left enabled when only two out of three cells contain the same value (simple majority); otherwise partial re-routing is performed.

For example, in the circuit of FIG. 4, configuration memory cells 402 C0, 404 C1, and 406 C2 contribute to validated logic signal C1' to help control logic element 434. If C1 404 is at a logic high value, then for simple majority operation, either of C0 402 or C2 406 should also be at a logic high value. For a given design implementation, however, voter circuit output C2' may also be required to help control logic element 436. The configuration memory cell triplet that contributes to the logic value of C2' includes configuration memory cells C1 404, C2 406, and C3 408. Given that configuration memory cells C1 404 and C2 406 are at a logic high value and that the required logic value of C2' is also a logic high value, then the logic value of configuration memory cell C3 is irrelevant. In this embodiment, C2' is a logic signal that is validated by a simple majority algorithm and may continue to be used to help control logic element 436. However, in this case the value provided by C2' is not protected.

Given that only 10% of configuration memory is utilized by the exemplary design, however, a low probability exists that another design iteration is required to free up redundant configuration memory cells for simple majority validation. For example, given that any one configuration memory cell is used in defining the exemplary design, then the probability of a neighboring configuration memory cell being used for the exemplary design is 1/10. Furthermore, the probability that the logic value of the neighboring configuration memory cell is different is 1/2.
Thus, the probability of the neighboring configuration memory cell being used for the exemplary design and having a different logic value is 1/10 * 1/2 = 1/20. Similarly, the probability of a third configuration memory cell being both used in the exemplary design and different is also 1/20. Thus, if a simple majority algorithm is utilized, then re-routing is only required in 1/20 * 1/20 = 1/400, or 0.25%, of the original configuration design implementation. TMR would still be provided for 1 - (1/20 + 1/20) = 9/10, or 90%, of the original configuration design implementation.

In a fifth embodiment, TMR validated operation may not be required for every bit in the entire design implementation. In such instances, it may be adequate to reduce the probability of the design being affected by an SEU. In this embodiment, TMR is applied where redundant memory cells are available, and is disabled where redundant memory cells are not available, optionally after attempting to re-implement the design as described above.

For example, in some embodiments the voting circuits may be disabled for cases where re-routing does not yield the desired configuration output. Referring again to FIG. 4, in one embodiment, when voting circuit 420 is enabled, the output signal C1' is a TMR logical result for the contents of memory cells 402, 404, and 406 (C0, C1, and C2). When voting circuit 420 is disabled, the output signal C1' is the same as the contents of memory cell 404 (C1).

Disabling each voting circuit of FIG. 4, however, would itself require a configuration memory cell for each voting circuit to be disabled, which effectively doubles the amount of configuration memory used. If, however, disabling of the voter circuits occurs using a coarse granularity of 10 or 100, then the number of configuration memory cells required to disable the voting circuits only increases by 10% or 1%, respectively.

Given that only 10% of configuration memory is utilized by the exemplary design, a low probability exists that voting disabling is required for a particular voting circuit. For example, given that any one configuration memory cell is used in defining the exemplary design, then the probability of a neighboring configuration memory cell being used for the exemplary design is 1/10. Furthermore, even when utilized, the probability that the logic value of the neighboring configuration memory cell is different is 1/2. Thus, the probability of the neighboring configuration memory cell being used for the exemplary design and having a different logic value is 1/10 * 1/2 = 1/20. Similarly, the probability of a third configuration memory cell being both used in the exemplary design and different is also 1/20. Thus, for a simple majority validation algorithm, disabling of voting is only required in 1/20 * 1/20 = 1/400, or 0.25%, of the original design implementation.

If the voting circuits are disabled in blocks of 100, then on average 10% of 100, or 10 configuration memory cells, would be unprotected. Since, in this example, disabling is done in blocks of 100 configuration memory cells, the proportion of unprotected configuration memory cells is 0.25% * 10, or 2.5%. TMR would still be provided for (1 - (1/20 + 1/20)) - 2.5% = 90% - 2.5%, or 87.5%, of the original design implementation.

It can be seen that by increasing the number of voting inputs from 3 to some higher odd number, such as 5, 7, etc., the correction coverage of the voting algorithm is increased.
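Before turning to wider voters, the fractions used in the preceding estimates can be checked with a short calculation. As those estimates implicitly do, the calculation assumes that cell usage (probability 1/10) and value disagreement (probability 1/2) are independent events:

from fractions import Fraction

p_used = Fraction(1, 10)   # a neighboring cell is used by the design
p_diff = Fraction(1, 2)    # and holds the opposite logic value
p_conflict = p_used * p_diff           # 1/20 per neighbor

# Full TMR on every design bit: re-route if either neighbor conflicts.
print(p_conflict + p_conflict)         # 1/10 -> 10% re-routed
# Simple-majority fallback: re-route only if both neighbors conflict.
print(p_conflict * p_conflict)         # 1/400 -> 0.25% re-routed
# Fraction of design bits still fully TMR-protected.
print(1 - 2 * p_conflict)              # 9/10 -> 90%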
Given a 5-input voting configuration, for example, a voting algorithm is effective to validate the exemplary design, even though 2 out of 5 configuration memory cells may have sustained a flipped logic state due to an SEU. Thus, whereas a 3-input voting algorithm may sustain a single SEU induced error and still maintain a validated design, a 5-input voting algorithm may sustain two SEU induced errors and still maintain a validated design.

Similarly, a reduced probability exists that the exemplary design needs to be re-routed when utilizing an increased number of voting inputs. Taking the 5-input voting configuration as discussed above, for example, one or more of the five configuration memory cells may not qualify as redundant configuration memory cells for a particular design, because they may be used to implement the design. Those configuration memory cells may nevertheless be used in a validated design, provided that the remaining configuration memory cells can support the simple majority algorithm as discussed above. In other words, if the remaining configuration memory cells are not used in other designs, or if they are used in other designs but their respective logic values are in agreement with one another, then the remaining configuration memory cells may be used by a simple majority algorithm to validate the exemplary design without the need for re-routing.

Turning to FIG. 5, a flow diagram exemplifying a method of reliably configuring a PLD is illustrated. In step 504, configuration data is programmed into a first set of configuration memory cells within the PLD. In one embodiment, the PLD design tool being utilized may direct the configuration data into adjacent and/or non-adjacent configuration memory cells as required. Once the configuration memory cells are programmed, a survey of the un-programmed configuration memory cells is taken as in step 506. Any un-programmed configuration memory cells that are found to exist are then identified as being available for TMR.

If configuration memory cell validation is possible as determined in step 508, then voting groups are formed in step 510 from the programmed and un-programmed configuration memory cells as determined in steps 504 and 506. Each voting group is configured to contain an odd number of configuration memory cells, whereby in one embodiment, a single programmed configuration memory cell is either grouped with un-programmed configuration memory cells, or with a combination of identically programmed and un-programmed configuration memory cells. Once each voting group is formed, the un-programmed configuration memory cells within each voting group are programmed in step 512 with the identical logic values that are contained within the programmed configuration memory cell voting group counterparts as formed in step 504.
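Steps 504 through 512 can be modeled informally in software as shown below; the cell map, addressing scheme, and group-formation policy are invented for illustration and stand in for the PLD design tool and configuration logic (the fallback path of steps 520 and 522 is described next):

# Informal model of the FIG. 5 flow, steps 504-512 (hypothetical structures).

def configure_with_voting(design_bits, total_cells, group_size=3):
    cells = {}                                    # address -> logic value
    cells.update(design_bits)                     # step 504: program design bits
    unused = [a for a in range(total_cells) if a not in cells]  # step 506
    groups = []
    for addr, value in design_bits.items():       # step 510: form voting groups
        if len(unused) < group_size - 1:
            return None, None                     # validation not possible (508)
        members = [addr] + [unused.pop() for _ in range(group_size - 1)]
        for m in members[1:]:
            cells[m] = value                      # step 512: copy identical value
        groups.append(members)
    return cells, groups

cells, groups = configure_with_voting({0: 1, 7: 0}, total_cells=12)
print(groups)   # [[0, 11, 10], [7, 9, 8]]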
If configuration memory cell validation is not possible, then a decision may be made, as in step 520, to re-implement all or part of the design. Such a re-implementation may be attempted, for example, in an effort to re-distribute the programmed configuration memory cells in a manner that is more conducive to validation. That is to say, for example, that the PLD design tool may attempt to create a distribution of programmed configuration memory cells that are physically, and/or logically, separated by un-programmed configuration memory cells. As such, un-programmed configuration memory cells may be strategically placed by the PLD design tool so that they may be more easily combined into voting groups with their programmed configuration memory cell counterparts. Voting circuitry may also be deactivated altogether as in step 522, in which case only the first set of configuration memory cells programmed in step 504 is used to program the PLD's control points.

As discussed above, configuration memory errors are detected and mitigated by providing the correct configuration information to the PLD control points that define a particular design. The contents of the corrupted configuration memory cells may themselves be corrected through the use of block-wise CRC techniques, or through memory scrubbing procedures. The frequency of use of these techniques may be on the order of minutes, hours, or even days due to the low occurrence of SEUs in normal operating environments.

Other aspects and embodiments of the present invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and illustrated embodiments be considered as examples only, with a true scope and spirit of the invention being indicated by the following claims.
The present disclosure provides an interconnect fabric comprising one or more switches, a memory interface coupled to the interconnect fabric, an input/output (IO) interface coupled to the interconnect fabric, and an array of processing clusters coupled to the interconnect fabric. The array of processing clusters is to process mixed-precision instructions. At least one processing cluster comprises a plurality of registers to store a plurality of packed data elements at a first precision and an execution unit to execute mixed-precision dot-product instructions. The execution unit is to perform a plurality of multiplications of different pairs of the plurality of packed data elements to generate a corresponding plurality of products and to add the corresponding plurality of products to an accumulation value stored at a second precision greater than the first precision.
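As an informal illustration of the dot-product behavior summarized above, the following NumPy sketch models packed half-precision operands whose pairwise products are accumulated into a single-precision value; it is a software analogy introduced here for clarity, not the claimed execution unit:

import numpy as np

def mixed_precision_dot(a_fp16, b_fp16, acc_fp32):
    # Pairwise products taken at full precision, summed into an FP32
    # accumulator (one common realization of mixed-precision dot product).
    products = a_fp16.astype(np.float32) * b_fp16.astype(np.float32)
    return np.float32(acc_fp32 + products.sum(dtype=np.float32))

a = np.array([0.5, 1.5, 2.0, 4.0], dtype=np.float16)   # first precision
b = np.array([2.0, 2.0, 0.25, 1.0], dtype=np.float16)
acc = np.float32(10.0)                                  # second, greater precision
print(mixed_precision_dot(a, b, acc))   # 10 + (1 + 3 + 0.5 + 4) = 18.5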
1. An apparatus comprising:
- an interconnect fabric comprising one or more switches;
- a memory interface coupled to the interconnect fabric;
- an input/output (IO) interface coupled to the interconnect fabric;
- an array of processing clusters coupled to the interconnect fabric, the array of processing clusters to process mixed-precision instructions,
at least one processing cluster comprising:
-- a plurality of registers to store a plurality of packed data elements at a first precision; and
-- an execution unit to execute mixed-precision dot-product instructions,
the execution unit to perform a plurality of multiplications of different pairs of the plurality of packed data elements to generate a corresponding plurality of products and
to add the corresponding plurality of products to an accumulation value stored at a second precision greater than the first precision.

2. The apparatus of claim 1 further comprising:
a parallel processing unit comprising the interconnect fabric, the memory interface, the input/output (IO) interface, and the array of processing clusters,
the memory interface including multiple partition units, each of the multiple partition units to couple independently with respective 3D stacked memory units of a plurality of 3D stacked memory units.

3. The apparatus of claims 1 or 2 wherein the plurality of packed data elements comprise data elements of various data sizes.

4. The apparatus of claim 1 wherein the mixed-precision dot-product instructions are primitives of a machine learning framework.

5. The apparatus of claim 3 wherein matrix multiplications are performed by a convolutional layer of the machine-learning framework.

6. The apparatus of claim 4 wherein the machine learning framework comprises a neural network.

7. The apparatus of claim 6 wherein the neural network comprises a recurrent neural network (RNN).

8. The apparatus of any of claims 1 to 6, wherein the array of processing clusters is to be shared with multiple virtual machines (VMs) in a virtualized graphics execution environment.

9. The apparatus of claim 8, wherein the virtualized graphics execution environment comprises multiple sets of registers to store an effective address pointer to a memory location.

10. The apparatus of any of claims 1 to 9 wherein the memory interface is to couple the interconnect fabric to access the 3D stacked memory units, the memory interface to use virtual channels to separate traffic streams.

11. The apparatus of any of claims 1 to 10 further comprising:
a Level 1 (L1) cache and a Level 2 (L2) cache to store data for the array of processing clusters, the L1 cache and the L2 cache to be shared among all processing clusters.

12. The apparatus of any of claims 1 to 11 further comprising:
a memory management unit (MMU) coupled to the interconnect fabric, the MMU comprising an address translation lookaside buffer for caching virtual-to-physical address translations.

13. The apparatus of claim 12 wherein the MMU is to use a shared virtual system address space distributed to the 3D stacked memory units.

14. The apparatus of any of claims 2 to 13, wherein the 3D stacked memory units comprise a High Bandwidth Memory (HBM).
FIELDEmbodiments relate generally to data processing and more particularly to data processing via a general-purpose graphics processing unit.BACKGROUND OF THE DESCRIPTIONCurrent parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc. Traditionally, graphics processors used fixed function computational units to process graphics data; however, more recently, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data.To further increase performance, graphics processors typically implement processing techniques such as pipelining that attempt to process, in parallel, as much graphics data as possible throughout the different parts of the graphics pipeline. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. A general overview of software and hardware for SIMT architectures can be found in Shane Cook, CUDA Programming, Chapter 3, pages 37-51 (2013 ) and/or Nicholas Wilt, CUDA Handbook, A Comprehensive Guide to GPU Programming, Sections 2.6.2 to 3.1.2 (June 2013 ).BRIEF DESCRIPTION OF THE DRAWINGSSo that the features of the present invention can be understood in detail, a more particular description of the invention may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments and are therefore not to be considered limiting of the scope of all embodiments.FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the embodiments described herein;FIG. 2A-2D illustrate parallel processor components, according to an embodiment;FIGs. 3A-3B are block diagrams of graphics multiprocessors, according to embodiments;FIG. 4A-4F illustrate an exemplary architecture in which a plurality of GPUs is communicatively coupled to a plurality of multi-core processors;FIG. 5 illustrates a graphics processing pipeline, according to an embodiment;FIG. 6 illustrates a machine learning software stack, according to an embodiment;FIG. 7 illustrates a highly-parallel general-purpose graphics processing unit, according to an embodiment;FIG. 8 illustrates a multi-GPU computing system, according to an embodiment;FIG. 9A-9B illustrate layers of exemplary deep neural networks;FIG. 10 illustrates an exemplary recurrent neural network;FIG. 11 illustrates training and deployment of a deep neural network;FIG. 12 is a block diagram illustrating distributed learning;FIG. 13 illustrates an exemplary inferencing system on a chip (SOC) suitable for performing inferencing using a trained model;FIG. 14 illustrates components of a dynamic precision floating point unit, according to an embodiment;FIG. 15 provides additional details with respect to a dynamic precision floating point unit, according to an embodiment;FIG. 16 illustrates thread assignments for a dynamic precision processing system, according to an embodiment;FIG. 
FIG. 17 illustrates logic to perform a numerical operation at less than a requested precision, according to an embodiment;
FIG. 18 illustrates loop vectorization for SIMD units, according to an embodiment;
FIG. 19 illustrates a thread processing system, according to an embodiment;
FIG. 20 illustrates logic to assign threads for computation, according to an embodiment;
FIG. 21 illustrates a deep neural network that may be processed using compute logic provided by embodiments described herein;
FIG. 22 is a flow diagram of logic to prevent error or significant precision loss when performing low precision operations for machine learning, according to an embodiment;
FIG. 23 is a block diagram of a processing system, according to an embodiment;
FIG. 24 is a block diagram of an embodiment of a processor having one or more processor cores, an integrated memory controller, and an integrated graphics processor;
FIG. 25 is a block diagram of a graphics processor, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores;
FIG. 26 is a block diagram of a graphics processing engine of a graphics processor in accordance with some embodiments;
FIG. 27 is a block diagram of a graphics processor provided by an additional embodiment;
FIG. 28 illustrates thread execution logic including an array of processing elements employed in some embodiments;
FIG. 29 is a block diagram illustrating graphics processor instruction formats according to some embodiments;
FIG. 30 is a block diagram of a graphics processor according to another embodiment;
FIGs. 31A-31B illustrate a graphics processor command format and command sequence, according to some embodiments;
FIG. 32 illustrates exemplary graphics software architecture for a data processing system according to some embodiments;
FIG. 33 is a block diagram illustrating an IP core development system, according to an embodiment;
FIG. 34 is a block diagram illustrating an exemplary system on a chip integrated circuit, according to an embodiment;
FIG. 35 is a block diagram illustrating an additional graphics processor, according to an embodiment; and
FIG. 36 is a block diagram illustrating an additional exemplary graphics processor of a system on a chip integrated circuit, according to an embodiment.
DETAILED DESCRIPTION
In some embodiments, a graphics processing unit (GPU) is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or another interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.
In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of skill in the art that the embodiments described herein may be practiced without one or more of these specific details.
In other instances, well-known features have not been described to avoid obscuring the details of the present embodiments.
System Overview
FIG. 1 is a block diagram illustrating a computing system 100 configured to implement one or more aspects of the embodiments described herein. The computing system 100 includes a processing subsystem 101 having one or more processor(s) 102 and a system memory 104 communicating via an interconnection path that may include a memory hub 105. The memory hub 105 may be a separate component within a chipset component or may be integrated within the one or more processor(s) 102. The memory hub 105 couples with an I/O subsystem 111 via a communication link 106. The I/O subsystem 111 includes an I/O hub 107 that can enable the computing system 100 to receive input from one or more input device(s) 108. Additionally, the I/O hub 107 can enable a display controller, which may be included in the one or more processor(s) 102, to provide outputs to one or more display device(s) 110A. In one embodiment the one or more display device(s) 110A coupled with the I/O hub 107 can include a local, internal, or embedded display device.
In one embodiment the processing subsystem 101 includes one or more parallel processor(s) 112 coupled to memory hub 105 via a bus or other communication link 113. The communication link 113 may be one of any number of standards based communication link technologies or protocols, such as, but not limited to, PCI Express, or may be a vendor specific communications interface or communications fabric. In one embodiment the one or more parallel processor(s) 112 form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor. In one embodiment the one or more parallel processor(s) 112 form a graphics processing subsystem that can output pixels to one of the one or more display device(s) 110A coupled via the I/O Hub 107. The one or more parallel processor(s) 112 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 110B.
Within the I/O subsystem 111, a system storage unit 114 can connect to the I/O hub 107 to provide a storage mechanism for the computing system 100. An I/O switch 116 can be used to provide an interface mechanism to enable connections between the I/O hub 107 and other components, such as a network adapter 118 and/or wireless network adapter 119 that may be integrated into the platform, and various other devices that can be added via one or more add-in device(s) 120. The network adapter 118 can be an Ethernet adapter or another wired network adapter. The wireless network adapter 119 can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios.
The computing system 100 can include other components not explicitly shown; USB or other port connections, optical storage drives, video capture devices, and the like may also be connected to the I/O hub 107.
Communication paths interconnecting the various components in FIG. 1 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or any other bus or point-to-point communication interfaces and/or protocol(s), such as the NV-Link high-speed interconnect, or interconnect protocols known in the art.
In one embodiment, the one or more parallel processor(s) 112 incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU). In another embodiment, the one or more parallel processor(s) 112 incorporate circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, components of the computing system 100 may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processor(s) 112, memory hub 105, processor(s) 102, and I/O hub 107 can be integrated into a system on chip (SoC) integrated circuit. Alternatively, the components of the computing system 100 can be integrated into a single package to form a system in package (SIP) configuration. In one embodiment at least a portion of the components of the computing system 100 can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system.
It will be appreciated that the computing system 100 shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of processor(s) 102, and the number of parallel processor(s) 112, may be modified as desired. For instance, in some embodiments, system memory 104 is connected to the processor(s) 102 directly rather than through a bridge, while other devices communicate with system memory 104 via the memory hub 105 and the processor(s) 102. In other alternative topologies, the parallel processor(s) 112 are connected to the I/O hub 107 or directly to one of the one or more processor(s) 102, rather than to the memory hub 105. In other embodiments, the I/O hub 107 and memory hub 105 may be integrated into a single chip. Some embodiments may include two or more sets of processor(s) 102 attached via multiple sockets, which can couple with two or more instances of the parallel processor(s) 112.
Some of the particular components shown herein are optional and may not be included in all implementations of the computing system 100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. Furthermore, some architectures may use different terminology for components similar to those illustrated in FIG. 1. For example, the memory hub 105 may be referred to as a Northbridge in some architectures, while the I/O hub 107 may be referred to as a Southbridge.
FIG. 2A illustrates a parallel processor 200, according to an embodiment. The various components of the parallel processor 200 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). The illustrated parallel processor 200 is a variant of the one or more parallel processor(s) 112 shown in FIG. 1, according to an embodiment.
In one embodiment the parallel processor 200 includes a parallel processing unit 202.
The parallel processing unit 202 includes an I/O unit 204 that enables communication with other devices, including other instances of the parallel processing unit 202. The I/O unit 204 may be directly connected to other devices. In one embodiment the I/O unit 204 connects with other devices via the use of a hub or switch interface, such as memory hub 105. The connections between the memory hub 105 and the I/O unit 204 form a communication link 113. Within the parallel processing unit 202, the I/O unit 204 connects with a host interface 206 and a memory crossbar 216, where the host interface 206 receives commands directed to performing processing operations and the memory crossbar 216 receives commands directed to performing memory operations.
When the host interface 206 receives a command buffer via the I/O unit 204, the host interface 206 can direct work operations to perform those commands to a front end 208. In one embodiment the front end 208 couples with a scheduler 210, which is configured to distribute commands or other work items to a processing cluster array 212. In one embodiment the scheduler 210 ensures that the processing cluster array 212 is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array 212. In one embodiment the scheduler 210 is implemented via firmware logic executing on a microcontroller. The microcontroller implemented scheduler 210 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on the processing array 212. In one embodiment, the host software can provide workloads for scheduling on the processing array 212 via one of multiple graphics processing doorbells. The workloads can then be automatically distributed across the processing array 212 by the scheduler 210 logic within the scheduler microcontroller.
The processing cluster array 212 can include up to "N" processing clusters (e.g., cluster 214A, cluster 214B, through cluster 214N). Each cluster 214A-214N of the processing cluster array 212 can execute a large number of concurrent threads. The scheduler 210 can allocate work to the clusters 214A-214N of the processing cluster array 212 using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. The scheduling can be handled dynamically by the scheduler 210, or can be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array 212. In one embodiment, different clusters 214A-214N of the processing cluster array 212 can be allocated for processing different types of programs or for performing different types of computations.
The processing cluster array 212 can be configured to perform various types of parallel processing operations. In one embodiment the processing cluster array 212 is configured to perform general-purpose parallel compute operations. For example, the processing cluster array 212 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.
In one embodiment the processing cluster array 212 is configured to perform parallel graphics processing operations.
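As a rough, host-side sketch of the kind of load-balanced work distribution the scheduler 210 might perform across the clusters 214A-214N (the work-item structure, cost model, and function names here are invented for illustration):

    #include <cstdint>
    #include <vector>

    // Hypothetical work item and least-loaded dispatch across N clusters,
    // approximating the load-balancing role of the scheduler 210.
    struct WorkItem { uint32_t id; uint32_t cost; };

    std::vector<std::vector<WorkItem>> distribute(const std::vector<WorkItem>& items,
                                                  int num_clusters) {
        std::vector<std::vector<WorkItem>> per_cluster(num_clusters);
        std::vector<uint64_t> load(num_clusters, 0);
        for (const WorkItem& w : items) {
            int best = 0;                            // pick the least-loaded cluster
            for (int c = 1; c < num_clusters; ++c)
                if (load[c] < load[best]) best = c;
            per_cluster[best].push_back(w);
            load[best] += w.cost;
        }
        return per_cluster;
    }

A real scheduler would also account for preemption and per-cluster configuration, but the greedy least-loaded policy conveys the basic balancing idea.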
In embodiments in which the parallel processor 200 is configured to perform graphics processing operations, the processing cluster array 212 can include additional logic to support the execution of such graphics processing operations, including, but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. Additionally, the processing cluster array 212 can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit 202 can transfer data from system memory via the I/O unit 204 for processing. During processing the transferred data can be stored to on-chip memory (e.g., parallel processor memory 222), then written back to system memory.
In one embodiment, when the parallel processing unit 202 is used to perform graphics processing, the scheduler 210 can be configured to divide the processing workload into approximately equal sized tasks, to better enable distribution of the graphics processing operations to multiple clusters 214A-214N of the processing cluster array 212. In some embodiments, portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing.
During operation, the processing cluster array 212 can receive processing tasks to be executed via the scheduler 210, which receives commands defining processing tasks from front end 208. For graphics processing operations, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The scheduler 210 may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end 208. The front end 208 can be configured to ensure the processing cluster array 212 is configured to a valid state before the workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated.
Each of the one or more instances of the parallel processing unit 202 can couple with parallel processor memory 222. The parallel processor memory 222 can be accessed via the memory crossbar 216, which can receive memory requests from the processing cluster array 212 as well as the I/O unit 204. The memory crossbar 216 can access the parallel processor memory 222 via a memory interface 218. The memory interface 218 can include multiple partition units (e.g., partition unit 220A, partition unit 220B, through partition unit 220N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 222.
In one implementation the number of partition units 220A-220N is configured to be equal to the number of memory units, such that a first partition unit 220A has a corresponding first memory unit 224A, a second partition unit 220B has a corresponding memory unit 224B, and an Nth partition unit 220N has a corresponding Nth memory unit 224N. In other embodiments, the number of partition units 220A-220N may not be equal to the number of memory devices.
In various embodiments, the memory units 224A-224N can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In one embodiment, the memory units 224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Persons skilled in the art will appreciate that the specific implementation of the memory units 224A-224N can vary, and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps, may be stored across the memory units 224A-224N, allowing partition units 220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 222. In some embodiments, a local instance of the parallel processor memory 222 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.
In one embodiment, any one of the clusters 214A-214N of the processing cluster array 212 can process data that will be written to any of the memory units 224A-224N within parallel processor memory 222. The memory crossbar 216 can be configured to transfer the output of each cluster 214A-214N to any partition unit 220A-220N or to another cluster 214A-214N, which can perform additional processing operations on the output. Each cluster 214A-214N can communicate with the memory interface 218 through the memory crossbar 216 to read from or write to various external memory devices. In one embodiment the memory crossbar 216 has a connection to the memory interface 218 to communicate with the I/O unit 204, as well as a connection to a local instance of the parallel processor memory 222, enabling the processing units within the different processing clusters 214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit 202. In one embodiment the memory crossbar 216 can use virtual channels to separate traffic streams between the clusters 214A-214N and the partition units 220A-220N.
While a single instance of the parallel processing unit 202 is illustrated within the parallel processor 200, any number of instances of the parallel processing unit 202 can be included. For example, multiple instances of the parallel processing unit 202 can be provided on a single add-in card, or multiple add-in cards can be interconnected. The different instances of the parallel processing unit 202 can be configured to inter-operate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example and in one embodiment, some instances of the parallel processing unit 202 can include higher precision floating point units relative to other instances.
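The striping of render targets across the memory units 224A-224N described above can be sketched as follows; the tile size, round-robin layout, and names are assumptions made for illustration only:

    #include <cstdint>

    // Hypothetical render-target striping: successive tiles of a surface are
    // spread round-robin across the partition units / memory units so that
    // writes to different tiles can proceed in parallel.
    struct Stripe { int partition; uint64_t offset; };

    Stripe locate(uint64_t linear_addr, int num_units, uint64_t tile_bytes) {
        uint64_t tile = linear_addr / tile_bytes;
        Stripe s;
        s.partition = static_cast<int>(tile % num_units);   // which 220A-220N
        s.offset = (tile / num_units) * tile_bytes          // offset within 224x
                   + (linear_addr % tile_bytes);
        return s;
    }

Because consecutive tiles land on different partition units, a large frame-buffer write is spread over all memory units rather than serializing on one.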
Systems incorporating one or more instances of the parallel processing unit 202 or the parallel processor 200 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.
FIG. 2B is a block diagram of a partition unit 220, according to an embodiment. In one embodiment the partition unit 220 is an instance of one of the partition units 220A-220N of FIG. 2A. As illustrated, the partition unit 220 includes an L2 cache 221, a frame buffer interface 225, and a ROP 226 (raster operations unit). The L2 cache 221 is a read/write cache that is configured to perform load and store operations received from the memory crossbar 216 and ROP 226. Read misses and urgent write-back requests are output by L2 cache 221 to frame buffer interface 225 for processing. Updates can also be sent to the frame buffer via the frame buffer interface 225 for processing. In one embodiment the frame buffer interface 225 interfaces with one of the memory units in parallel processor memory, such as the memory units 224A-224N of FIG. 2A (e.g., within parallel processor memory 222).
In graphics applications, the ROP 226 is a processing unit that performs raster operations such as stencil, z test, blending, and the like. The ROP 226 then outputs processed graphics data that is stored in graphics memory. In some embodiments the ROP 226 includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. The compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. The type of compression that is performed by the ROP 226 can vary based on the statistical characteristics of the data to be compressed. For example, in one embodiment, delta color compression is performed on depth and color data on a per-tile basis.
In some embodiments, the ROP 226 is included within each processing cluster (e.g., cluster 214A-214N of FIG. 2A) instead of within the partition unit 220. In such an embodiment, read and write requests for pixel data are transmitted over the memory crossbar 216 instead of pixel fragment data. The processed graphics data may be displayed on a display device, such as one of the one or more display device(s) 110 of FIG. 1, routed for further processing by the processor(s) 102, or routed for further processing by one of the processing entities within the parallel processor 200 of FIG. 2A.
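A minimal sketch of the raster operations attributed to the ROP 226, assuming a simple depth (z) test followed by alpha blending; the data layouts and function name are hypothetical:

    // Hypothetical fixed-function raster operation: reject fragments that fail
    // the depth test, then alpha-blend the surviving fragment into the pixel.
    struct Fragment { float z; float r, g, b, a; };
    struct Pixel    { float z; float r, g, b; };

    bool rop(Pixel& dst, const Fragment& src) {
        if (src.z >= dst.z) return false;   // z test: keep only nearer fragments
        dst.z = src.z;
        dst.r = src.a * src.r + (1.0f - src.a) * dst.r;   // blend color channels
        dst.g = src.a * src.g + (1.0f - src.a) * dst.g;
        dst.b = src.a * src.b + (1.0f - src.a) * dst.b;
        return true;
    }

Stencil testing and the compression described above would sit around this core read-modify-write, which is why the ROP couples so tightly to the L2 cache 221.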
FIG. 2C is a block diagram of a processing cluster 214 within a parallel processing unit, according to an embodiment. In one embodiment the processing cluster is an instance of one of the processing clusters 214A-214N of FIG. 2A. The processing cluster 214 can be configured to execute many threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the processing clusters. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.
Operation of the processing cluster 214 can be controlled via a pipeline manager 232 that distributes processing tasks to SIMT parallel processors. The pipeline manager 232 receives instructions from the scheduler 210 of FIG. 2A and manages execution of those instructions via a graphics multiprocessor 234 and/or a texture unit 236. The illustrated graphics multiprocessor 234 is an exemplary instance of a SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster 214. One or more instances of the graphics multiprocessor 234 can be included within a processing cluster 214. The graphics multiprocessor 234 can process data and a data crossbar 240 can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager 232 can facilitate the distribution of processed data by specifying destinations for processed data to be distributed via the data crossbar 240.
Each graphics multiprocessor 234 within the processing cluster 214 can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). The functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. The functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In one embodiment the same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present.
The instructions transmitted to the processing cluster 214 constitute a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data. Each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor 234. A thread group may include fewer threads than the number of processing engines within the graphics multiprocessor 234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor 234. When the thread group includes more threads than the number of processing engines within the graphics multiprocessor 234, processing can be performed over consecutive clock cycles. In one embodiment multiple thread groups can be executed concurrently on a graphics multiprocessor 234.
In one embodiment the graphics multiprocessor 234 includes an internal cache memory to perform load and store operations. In one embodiment, the graphics multiprocessor 234 can forego an internal cache and use a cache memory (e.g., L1 cache 308) within the processing cluster 214.
Each graphics multiprocessor 234 also has access to L2 caches within the partition units (e.g., partition units 220A-220N of FIG. 2A) that are shared among all processing clusters 214 and may be used to transfer data between threads. The graphics multiprocessor 234 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. Any memory external to the parallel processing unit 202 may be used as global memory. Embodiments in which the processing cluster 214 includes multiple instances of the graphics multiprocessor 234 can share common instructions and data, which may be stored in the L1 cache 308.
Each processing cluster 214 may include an MMU 245 (memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU 245 may reside within the memory interface 218 of FIG. 2A. The MMU 245 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. The MMU 245 may include address translation lookaside buffers (TLB) or caches that may reside within the graphics multiprocessor 234 or the L1 cache or processing cluster 214. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether a request for a cache line is a hit or miss.
In graphics and computing applications, a processing cluster 214 may be configured such that each graphics multiprocessor 234 is coupled to a texture unit 236 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within graphics multiprocessor 234 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. Each graphics multiprocessor 234 outputs processed tasks to the data crossbar 240 to provide the processed task to another processing cluster 214 for further processing or to store the processed task in an L2 cache, local parallel processor memory, or system memory via the memory crossbar 216. A preROP 242 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 234, direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 220A-220N of FIG. 2A). The preROP 242 unit can perform optimizations for color blending, organize pixel color data, and perform address translations.
It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor 234, texture units 236, preROPs 242, etc., may be included within a processing cluster 214. Further, while only one processing cluster 214 is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster 214. In one embodiment, each processing cluster 214 can be configured to operate independently of other processing clusters 214 using separate and distinct processing units, L1 caches, etc.
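A single-level translation in the spirit of the MMU 245 described above might look as follows; the page size, PTE layout, and the way the cache line index is derived are all assumptions made for illustration:

    #include <cstdint>

    // Hypothetical single-level page walk: a PTE maps a virtual page to a
    // physical tile, and low address bits supply the optional cache line index.
    constexpr uint64_t PAGE_BITS = 12;   // assumed 4 KiB pages
    constexpr uint64_t LINE_BITS = 6;    // assumed 64-byte cache lines

    struct PTE { uint64_t phys_tile; bool valid; };

    bool translate(const PTE* table, uint64_t vaddr,
                   uint64_t& paddr, uint64_t& line_index) {
        const PTE& e = table[vaddr >> PAGE_BITS];
        if (!e.valid) return false;                        // would raise a fault
        paddr = (e.phys_tile << PAGE_BITS)
                | (vaddr & ((1ull << PAGE_BITS) - 1));     // tile base + offset
        line_index = (paddr >> LINE_BITS) & 0x3F;          // used for hit/miss checks
        return true;
    }

In hardware, a TLB would cache the most recent (virtual page, PTE) pairs so that most requests skip the table lookup entirely.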
FIG. 2D shows a graphics multiprocessor 234, according to one embodiment. In such an embodiment the graphics multiprocessor 234 couples with the pipeline manager 232 of the processing cluster 214. The graphics multiprocessor 234 has an execution pipeline including but not limited to an instruction cache 252, an instruction unit 254, an address mapping unit 256, a register file 258, one or more general purpose graphics processing unit (GPGPU) cores 262, and one or more load/store units 266. The GPGPU cores 262 and load/store units 266 are coupled with cache memory 272 and shared memory 270 via a memory and cache interconnect 268.
In one embodiment, the instruction cache 252 receives a stream of instructions to execute from the pipeline manager 232. The instructions are cached in the instruction cache 252 and dispatched for execution by the instruction unit 254. The instruction unit 254 can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within GPGPU core 262. An instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. The address mapping unit 256 can be used to translate addresses in the unified address space into a distinct memory address that can be accessed by the load/store units 266.
The register file 258 provides a set of registers for the functional units of the graphics multiprocessor 234. The register file 258 provides temporary storage for operands connected to the data paths of the functional units (e.g., GPGPU cores 262, load/store units 266) of the graphics multiprocessor 234. In one embodiment, the register file 258 is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file 258. In one embodiment, the register file 258 is divided between the different warps being executed by the graphics multiprocessor 234.
The GPGPU cores 262 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor 234. The GPGPU cores 262 can be similar in architecture or can differ in architecture, according to embodiments. For example and in one embodiment, a first portion of the GPGPU cores 262 include a single precision FPU and an integer ALU while a second portion of the GPGPU cores include a double precision FPU. In one embodiment the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. The graphics multiprocessor 234 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In one embodiment one or more of the GPGPU cores can also include fixed or special function logic.
In one embodiment the GPGPU cores 262 include SIMD logic capable of performing a single instruction on multiple sets of data. In one embodiment GPGPU cores 262 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. The SIMD instructions for the GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. Multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example and in one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.
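The logical-versus-physical SIMD widths described above can be illustrated by expanding one logical SIMD32 operation into four passes over an eight-lane physical unit; the lane operation shown is a stand-in and the function name is hypothetical:

    // Hypothetical expansion of one logical SIMD32 add into four passes over a
    // physical SIMD8 unit; each inner loop models one cycle on the eight lanes.
    void simd32_add(const float* a, const float* b, float* out) {
        const int PHYS_LANES = 8;        // physical SIMD8 width
        const int LOGICAL_LANES = 32;    // logical SIMD32 width
        for (int pass = 0; pass < LOGICAL_LANES / PHYS_LANES; ++pass) {
            for (int lane = 0; lane < PHYS_LANES; ++lane) {
                int i = pass * PHYS_LANES + lane;
                out[i] = a[i] + b[i];    // same operation on every lane
            }
        }
    }

The compiler-visible instruction stays SIMD32; the hardware simply iterates the narrower physical unit, which is one way a thread group larger than the engine count is processed over consecutive clock cycles.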
The memory and cache interconnect 268 is an interconnect network that connects each of the functional units of the graphics multiprocessor 234 to the register file 258 and to the shared memory 270. In one embodiment, the memory and cache interconnect 268 is a crossbar interconnect that allows the load/store unit 266 to implement load and store operations between the shared memory 270 and the register file 258. The register file 258 can operate at the same frequency as the GPGPU cores 262, thus data transfer between the GPGPU cores 262 and the register file 258 is very low latency. The shared memory 270 can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor 234. The cache memory 272 can be used as a data cache, for example, to cache texture data communicated between the functional units and the texture unit 236. The shared memory 270 can also be used as a program managed cache. Threads executing on the GPGPU cores 262 can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory 272.
FIGs. 3A-3B illustrate additional graphics multiprocessors, according to embodiments. The illustrated graphics multiprocessors 325, 350 are variants of the graphics multiprocessor 234 of FIG. 2C. The illustrated graphics multiprocessors 325, 350 can be configured as a streaming multiprocessor (SM) capable of simultaneous execution of a large number of execution threads.
FIG. 3A shows a graphics multiprocessor 325 according to an additional embodiment. The graphics multiprocessor 325 includes multiple additional instances of execution resource units relative to the graphics multiprocessor 234 of FIG. 2D. For example, the graphics multiprocessor 325 can include multiple instances of the instruction unit 332A-332B, register file 334A-334B, and texture unit(s) 344A-344B. The graphics multiprocessor 325 also includes multiple sets of graphics or compute execution units (e.g., GPGPU core 336A-336B, GPGPU core 337A-337B, GPGPU core 338A-338B) and multiple sets of load/store units 340A-340B. In one embodiment the execution resource units have a common instruction cache 330, texture and/or data cache memory 342, and shared memory 346.
The various components can communicate via an interconnect fabric 327. In one embodiment the interconnect fabric 327 includes one or more crossbar switches to enable communication between the various components of the graphics multiprocessor 325. In one embodiment the interconnect fabric 327 is a separate, high-speed network fabric layer upon which each component of the graphics multiprocessor 325 is stacked. The components of the graphics multiprocessor 325 communicate with remote components via the interconnect fabric 327. For example, the GPGPU cores 336A-336B, 337A-337B, and 338A-338B can each communicate with shared memory 346 via the interconnect fabric 327. The interconnect fabric 327 can arbitrate communication within the graphics multiprocessor 325 to ensure a fair bandwidth allocation between components.
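Use of shared memory as a program-managed cache, as described above for shared memory 270, is commonly expressed in CUDA as a cooperative staging pattern; the kernel below is an illustrative sketch that assumes a block size of 256 threads:

    // Hypothetical staging pattern: each thread block copies a tile of the
    // input into shared memory (the program-managed cache), synchronizes, and
    // then computes from the staged copy instead of re-reading global memory.
    __global__ void scale_with_tile(const float* in, float* out, int n, float k) {
        __shared__ float tile[256];                 // program-managed buffer
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) tile[threadIdx.x] = in[i];       // cooperative load
        __syncthreads();                            // make the tile visible to all
        if (i < n) out[i] = k * tile[threadIdx.x];  // compute from the staged copy
    }

Such a kernel might be launched as scale_with_tile<<<(n + 255) / 256, 256>>>(in, out, n, k), with the block size matching the staging buffer; unlike the hardware-managed cache memory 272, the programmer decides exactly what is staged and when it is invalidated.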
FIG. 3B shows a graphics multiprocessor 350 according to an additional embodiment. The graphics processor includes multiple sets of execution resources 356A-356D, where each set of execution resources includes multiple instruction units, register files, GPGPU cores, and load store units, as illustrated in FIG. 2D and FIG. 3A. The execution resources 356A-356D can work in concert with texture unit(s) 360A-360D for texture operations, while sharing an instruction cache 354, and shared memory 362. In one embodiment the execution resources 356A-356D can share an instruction cache 354 and shared memory 362, as well as multiple instances of a texture and/or data cache memory 358A-358B. The various components can communicate via an interconnect fabric 352 similar to the interconnect fabric 327 of FIG. 3A.
Persons skilled in the art will understand that the architectures described in FIGS. 1, 2A-2D, and 3A-3B are descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit 202 of FIG. 2A, as well as one or more graphics processors or special purpose processing units, without departing from the scope of the embodiments described herein.
In some embodiments a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.
Techniques for GPU to Host Processor Interconnection
FIG. 4A illustrates an exemplary architecture in which a plurality of GPUs 410-413 are communicatively coupled to a plurality of multi-core processors 405-406 over high-speed links 440-443 (e.g., buses, point-to-point interconnects, etc.). In one embodiment, the high-speed links 440-443 support a communication throughput of 4GB/s, 30GB/s, 80GB/s or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles of the invention are not limited to any particular communication protocol or throughput.
In addition, in one embodiment, two or more of the GPUs 410-413 are interconnected over high-speed links 444-445, which may be implemented using the same or different protocols/links than those used for high-speed links 440-443. Similarly, two or more of the multi-core processors 405-406 may be connected over a high-speed link 433, which may be a symmetric multi-processor (SMP) bus operating at 20GB/s, 30GB/s, 120GB/s or higher.
Alternatively, all communication between the various system components shown in FIG. 4A may be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles of the invention are not limited to any particular type of interconnect technology.
In one embodiment, each multi-core processor 405-406 is communicatively coupled to a processor memory 401-402, via memory interconnects 430-431, respectively, and each GPU 410-413 is communicatively coupled to GPU memory 420-423 over GPU memory interconnects 450-453, respectively. The memory interconnects 430-431 and 450-453 may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories 401-402 and GPU memories 420-423 may be volatile memories such as dynamic random access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In one embodiment, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).
As described below, although the various processors 405-406 and GPUs 410-413 may be physically coupled to a particular memory 401-402, 420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the "effective address" space) is distributed among all of the various physical memories. For example, processor memories 401-402 may each comprise 64GB of the system memory address space and GPU memories 420-423 may each comprise 32GB of the system memory address space (resulting in a total of 256GB addressable memory in this example).
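The example partitioning above (two 64GB processor memories and four 32GB GPU memories, 256GB in total) might be represented by an address-range table such as the following; the contiguous layout, region names, and function name are assumptions made for illustration:

    #include <cstdint>

    // Hypothetical carve-up of the single virtual/effective address space:
    // returns which physical memory backs a given effective address.
    struct Region { const char* owner; uint64_t base, size; };

    const Region* find_region(uint64_t effective_addr) {
        static const uint64_t GB = 1ull << 30;
        static const Region map[] = {
            {"processor memory 401",   0 * GB, 64 * GB},
            {"processor memory 402",  64 * GB, 64 * GB},
            {"GPU memory 420",       128 * GB, 32 * GB},
            {"GPU memory 421",       160 * GB, 32 * GB},
            {"GPU memory 422",       192 * GB, 32 * GB},
            {"GPU memory 423",       224 * GB, 32 * GB},
        };
        for (const Region& r : map)
            if (effective_addr >= r.base && effective_addr < r.base + r.size)
                return &r;
        return nullptr;                              // unmapped address
    }

Because every processor and GPU shares this one mapping, any agent can name any physical memory with an ordinary effective address, which is what simplifies programmability in the unified design.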
FIG. 4B illustrates additional details for an interconnection between a multi-core processor 407 and a graphics acceleration module 446 in accordance with one embodiment. The graphics acceleration module 446 may include one or more GPU chips integrated on a line card which is coupled to the processor 407 via the high-speed link 440. Alternatively, the graphics acceleration module 446 may be integrated on the same package or chip as the processor 407.
The illustrated processor 407 includes a plurality of cores 460A-460D, each with a translation lookaside buffer 461A-461D and one or more caches 462A-462D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the invention (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches 462A-462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 426 may be included in the caching hierarchy and shared by sets of the cores 460A-460D. For example, one embodiment of the processor 407 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one of the L2 and L3 caches is shared by two adjacent cores. The processor 407 and the graphics accelerator integration module 446 connect with system memory 441, which may include processor memories 401-402.
Coherency is maintained for data and instructions stored in the various caches 462A-462D, 456 and system memory 441 via inter-core communication over a coherence bus 464. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate over the coherence bus 464 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus 464 to snoop cache accesses. Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles of the invention.
In one embodiment, a proxy circuit 425 communicatively couples the graphics acceleration module 446 to the coherence bus 464, allowing the graphics acceleration module 446 to participate in the cache coherence protocol as a peer of the cores. In particular, an interface 435 provides connectivity to the proxy circuit 425 over high-speed link 440 (e.g., a PCIe bus, NVLink, etc.) and an interface 437 connects the graphics acceleration module 446 to the high-speed link 440.
In one implementation, an accelerator integration circuit 436 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 431, 432, N of the graphics acceleration module 446. The graphics processing engines 431, 432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines 431, 432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines 431-432, N or the graphics processing engines 431-432, N may be individual GPUs integrated on a common package, line card, or chip.
In one embodiment, the accelerator integration circuit 436 includes a memory management unit (MMU) 439 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 441. The MMU 439 may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one embodiment, the accelerator integration circuit 436 includes a fetch unit 491 to fetch commands, instructions, work descriptors, etc., that define operations to be performed. In one implementation, a cache 438 stores commands and data for efficient access by the graphics processing engines 431-432, N. In one embodiment, the data stored in cache 438 and graphics memories 433-434, N is kept coherent with the core caches 462A-462D, 456 and system memory 411. As mentioned, this may be accomplished via proxy circuit 425 which takes part in the cache coherency mechanism on behalf of cache 438 and memories 433-434, N (e.g., sending updates to the cache 438 related to modifications/accesses of cache lines on processor caches 462A-462D, 456 and receiving updates from the cache 438).
A set of registers 449 store context data for threads executed by the graphics processing engines 431-432, N and a context management circuit 448 manages the thread contexts.
For example, the context management circuit 448 may perform save and restore operations to save and restore contexts of the various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, the context management circuit 448 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore the register values when returning to the context. In one embodiment, an interrupt management circuit 447 receives and processes interrupts received from system devices.
In one implementation, virtual/effective addresses from a graphics processing engine 431 are translated to real/physical addresses in system memory 411 by the MMU 439. One embodiment of the accelerator integration circuit 436 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 446 and/or other accelerator devices. The graphics accelerator module 446 may be dedicated to a single application executed on the processor 407 or may be shared between multiple applications. In one embodiment, a virtualized graphics execution environment is presented in which the resources of the graphics processing engines 431-432, N are shared with multiple applications or virtual machines (VMs). The resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications.
Thus, the accelerator integration circuit acts as a bridge to the system for the graphics acceleration module 446 and provides address translation and system memory cache services. In addition, the accelerator integration circuit 436 may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management.
Because hardware resources of the graphics processing engines 431-432, N are mapped explicitly to the real address space seen by the host processor 407, any host processor can address these resources directly using an effective address value. One function of the accelerator integration circuit 436, in one embodiment, is the physical separation of the graphics processing engines 431-432, N so that they appear to the system as independent units.
As mentioned, in the illustrated embodiment, one or more graphics memories 433-434, M are coupled to each of the graphics processing engines 431-432, N, respectively. The graphics memories 433-434, M store instructions and data being processed by each of the graphics processing engines 431-432, N. The graphics memories 433-434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram.
In one embodiment, to reduce data traffic over the high-speed link 440, biasing techniques are used to ensure that the data stored in graphics memories 433-434, M is data which will be used most frequently by the graphics processing engines 431-432, N and preferably not used by the cores 460A-460D (at least not frequently). Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not the graphics processing engines 431-432, N) within the caches 462A-462D, 456 of the cores and system memory 411.
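The save/restore flow performed by the context management circuit 448, described earlier in this passage, can be sketched as follows, assuming an invented register-file layout and a context pointer that names a designated memory region:

    #include <cstdint>
    #include <cstring>

    // Hypothetical thread context: on a switch, live register state is spilled
    // to the region named by the context pointer and later reloaded from it.
    struct EngineRegs { uint64_t regs[64]; };   // assumed register state layout

    void save_context(const EngineRegs& live, void* context_ptr) {
        std::memcpy(context_ptr, &live, sizeof(EngineRegs));   // spill to memory
    }

    void restore_context(EngineRegs& live, const void* context_ptr) {
        std::memcpy(&live, context_ptr, sizeof(EngineRegs));   // reload on resume
    }

The essential point is only that the context pointer, not the engine itself, owns the saved state, so a second thread can be installed while the first thread's registers sit safely in memory.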
FIG. 4C illustrates another embodiment in which the accelerator integration circuit 436 is integrated within the processor 407. In this embodiment, the graphics processing engines 431-432, N communicate directly over the high-speed link 440 to the accelerator integration circuit 436 via interface 437 and interface 435 (which, again, may utilize any form of bus or interface protocol). The accelerator integration circuit 436 may perform the same operations as those described with respect to FIG. 4B, but potentially at a higher throughput given its close proximity to the coherence bus 464 and caches 462A-462D, 426.
One embodiment supports different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The latter may include programming models which are controlled by the accelerator integration circuit 436 and programming models which are controlled by the graphics acceleration module 446.
In one embodiment of the dedicated process model, graphics processing engines 431-432, N are dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines 431-432, N, providing virtualization within a VM/partition.
In the shared programming models, the graphics processing engines 431-432, N, may be shared by multiple VM/application partitions. The shared models require a system hypervisor to virtualize the graphics processing engines 431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines 431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines 431-432, N to provide access to each process or application.
For the shared programming model, the graphics acceleration module 446 or an individual graphics processing engine 431-432, N selects a process element using a process handle. In one embodiment, process elements are stored in system memory 411 and are addressable using the effective address to real address translation techniques described herein. The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine 431-432, N (that is, calling system software to add the process element to the process element linked list). The lower 16 bits of the process handle may be the offset of the process element within the process element linked list.
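The handle decoding described above reduces to masking the low bits; the 32-bit handle width assumed here is an illustration, not a requirement of the embodiment:

    #include <cstdint>

    // Hypothetical decode: the lower 16 bits of the process handle give the
    // offset of the process element within the process element linked list.
    uint16_t process_element_offset(uint32_t process_handle) {
        return static_cast<uint16_t>(process_handle & 0xFFFFu);
    }

Keeping the offset inside the handle lets the hardware locate the element without a separate lookup, while the upper bits remain free for implementation-specific use.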
In the latter case, the WD 484 is a pointer to the job request queue in the application's address space 482.The graphics acceleration module 446 and/or the individual graphics processing engines 431-432, N can be shared by all or a subset of the processes in the system. Embodiments of the invention include an infrastructure for setting up the process state and sending a WD 484 to a graphics acceleration module 446 to start a job in a virtualized environment.In one implementation, the dedicated-process programming model is implementation-specific. In this model, a single process owns the graphics acceleration module 446 or an individual graphics processing engine 431. Because the graphics acceleration module 446 is owned by a single process, the hypervisor initializes the accelerator integration circuit 436 for the owning partition and the operating system initializes the accelerator integration circuit 436 for the owning process at the time when the graphics acceleration module 446 is assigned.In operation, a WD fetch unit 491 in the accelerator integration slice 490 fetches the next WD 484 which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module 446. Data from the WD 484 may be stored in registers 449 and used by the MMU 439, interrupt management circuit 447 and/or context management circuit 446 as illustrated. For example, one embodiment of the MMU 439 includes segment/page walk circuitry for accessing segment/page tables 486 within the OS virtual address space 485. The interrupt management circuit 447 may process interrupt events 492 received from the graphics acceleration module 446. When performing graphics operations, an effective address 493 generated by a graphics processing engine 431-432, N is translated to a real address by the MMU 439.In one embodiment, the same set of registers 449 are duplicated for each graphics processing engine 431-432, N and/or graphics acceleration module 446 and may be initialized by the hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice 490. Exemplary registers that may be initialized by the hypervisor are shown in Table 1.Table 1 - Hypervisor Initialized Registers1Slice Control Register2Real Address (RA) Scheduled Processes Area Pointer3Authority Mask Override Register4Interrupt Vector Table Entry Offset5Interrupt Vector Table Entry Limit6State Register7Logical Partition ID8Real address (RA) Hypervisor Accelerator Utilization Record Pointer9Storage Description RegisterExemplary registers that may be initialized by the operating system are shown in Table 2.Table 2 - Operating System Initialized Registers1Process and Thread Identification2Effective Address (EA) Context Save/Restore Pointer3Virtual Address (VA) Accelerator Utilization Record Pointer4Virtual Address (VA) Storage Segment Table Pointer5Authority Mask6Work descriptorIn one embodiment, each WD 484 is specific to a particular graphics acceleration module 446 and/or graphics processing engine 431-432, N. It contains all the information a graphics processing engine 431-432, N requires to do its work or it can be a pointer to a memory location where the application has set up a command queue of work to be completed.FIG. 4E illustrates additional details for one embodiment of a shared model. This embodiment includes a hypervisor real address space 498 in which a process element list 499 is stored. 
The hypervisor real address space 498 is accessible via a hypervisor 496 which virtualizes the graphics acceleration module engines for the operating system 495.

The shared programming models allow for all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module 446. There are two programming models in which the graphics acceleration module 446 is shared by multiple processes and partitions: time-sliced shared and graphics-directed shared.

In this model, the system hypervisor 496 owns the graphics acceleration module 446 and makes its function available to all operating systems 495. For a graphics acceleration module 446 to support virtualization by the system hypervisor 496, the graphics acceleration module 446 may adhere to the following requirements: 1) An application's job request must be autonomous (that is, the state does not need to be maintained between jobs), or the graphics acceleration module 446 must provide a context save and restore mechanism. 2) An application's job request is guaranteed by the graphics acceleration module 446 to complete in a specified amount of time, including any translation faults, or the graphics acceleration module 446 provides the ability to preempt the processing of the job. 3) The graphics acceleration module 446 must be guaranteed fairness between processes when operating in the directed shared programming model.

In one embodiment, for the shared model, the application 480 is required to make an operating system 495 system call with a graphics acceleration module 446 type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module 446 type describes the targeted acceleration function for the system call. The graphics acceleration module 446 type may be a system-specific value. The WD is formatted specifically for the graphics acceleration module 446 and can be in the form of a graphics acceleration module 446 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure that describes the work to be done by the graphics acceleration module 446. In one embodiment, the AMR value is the AMR state to use for the current process. The value passed to the operating system is similar to an application setting the AMR. If the accelerator integration circuit 436 and graphics acceleration module 446 implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor 496 may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element 483. In one embodiment, the CSRP is one of the registers 449 containing the effective address of an area in the application's address space 482 for the graphics acceleration module 446 to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory.

Upon receiving the system call, the operating system 495 may verify that the application 480 has registered and been given the authority to use the graphics acceleration module 446.
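A minimal sketch of this shared-model call flow follows: the application's system call carries (module type, WD, AMR, CSRP); the OS checks authority, applies the UAMOR mask to the AMR, and forwards the request to the hypervisor. All names and values are illustrative assumptions, not an actual driver or OS interface.

```python
registered_apps = {"app-480"}  # applications registered for the module

def os_system_call(app_id, module_type, wd, amr, csrp, uamor=0xFFFF):
    if app_id not in registered_apps:
        raise PermissionError("application not authorized for module")
    masked_amr = amr & uamor  # OS applies the current UAMOR value
    return hypervisor_call(module_type, wd, masked_amr, csrp,
                           pid=1234, lisn=7)

def hypervisor_call(module_type, wd, amr, csrp, pid, lisn):
    # Build a Table 4-style process element and queue it on the list
    # for the corresponding graphics acceleration module type.
    return {"type": module_type, "WD": wd, "AMR": amr,
            "CSRP": csrp, "PID": pid, "LISN": lisn}

element = os_system_call("app-480", module_type=1, wd=0x1000,
                         amr=0xFF00, csrp=0x2000)
```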
The operating system 495 then calls the hypervisor 496 with the information shown in Table 3.

Table 3 - OS to Hypervisor Call Parameters
1. A work descriptor (WD)
2. An Authority Mask Register (AMR) value (potentially masked)
3. An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4. A process ID (PID) and optional thread ID (TID)
5. A virtual address (VA) accelerator utilization record pointer (AURP)
6. The virtual address of the storage segment table pointer (SSTP)
7. A logical interrupt service number (LISN)

Upon receiving the hypervisor call, the hypervisor 496 verifies that the operating system 495 has registered and been given the authority to use the graphics acceleration module 446. The hypervisor 496 then puts the process element 483 into the process element linked list for the corresponding graphics acceleration module 446 type. The process element may include the information shown in Table 4.

Table 4 - Process Element Information
1. A work descriptor (WD)
2. An Authority Mask Register (AMR) value (potentially masked)
3. An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4. A process ID (PID) and optional thread ID (TID)
5. A virtual address (VA) accelerator utilization record pointer (AURP)
6. The virtual address of the storage segment table pointer (SSTP)
7. A logical interrupt service number (LISN)
8. Interrupt vector table, derived from the hypervisor call parameters
9. A state register (SR) value
10. A logical partition ID (LPID)
11. A real address (RA) hypervisor accelerator utilization record pointer
12. The Storage Descriptor Register (SDR)

In one embodiment, the hypervisor initializes a plurality of registers 449 of the accelerator integration slice 490.

As illustrated in FIG. 4F, one embodiment of the invention employs a unified memory addressable via a common virtual memory address space used to access the physical processor memories 401-402 and GPU memories 420-423. In this implementation, operations executed on the GPUs 410-413 utilize the same virtual/effective memory address space to access the processor memories 401-402 and vice versa, thereby simplifying programmability. In one embodiment, a first portion of the virtual/effective address space is allocated to the processor memory 401, a second portion to the second processor memory 402, a third portion to the GPU memory 420, and so on. The entire virtual/effective memory space (sometimes referred to as the effective address space) is thereby distributed across each of the processor memories 401-402 and GPU memories 420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.
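As a toy illustration of this address-space partitioning, the sketch below maps a unified virtual address to its backing processor or GPU memory; the region size and memory names are assumptions for illustration, not the actual mapping hardware.

```python
# Hypothetical sketch: one virtual/effective address space divided into
# consecutive portions, one per physical memory, so any virtual address
# resolves to exactly one backing memory and an offset within it.

REGION_SIZE = 1 << 32  # assume 4 GiB per memory, purely for illustration
MEMORIES = ["processor-memory-401", "processor-memory-402",
            "gpu-memory-420", "gpu-memory-421",
            "gpu-memory-422", "gpu-memory-423"]

def backing_memory(virtual_addr):
    """Return (memory, offset) for a unified virtual address."""
    region = virtual_addr // REGION_SIZE
    return MEMORIES[region], virtual_addr % REGION_SIZE

print(backing_memory(0x1_0000_1000))  # second portion: processor-memory-402
```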
In one embodiment, bias/coherence management circuitry 494A-494E within one or more of the MMUs 439A-439E ensures cache coherence between the caches of the host processors (e.g., 405) and the GPUs 410-413 and implements biasing techniques indicating the physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry 494A-494E are illustrated in FIG. 4F, the bias/coherence circuitry may be implemented within the MMU of one or more host processors 405 and/or within the accelerator integration circuit 436.

One embodiment allows GPU-attached memory 420-423 to be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence. The ability for GPU-attached memory 420-423 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor 405 software to set up operands and access computation results without the overhead of traditional I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. At the same time, the ability to access GPU-attached memory 420-423 without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by a GPU 410-413. The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload.

In one implementation, the selection between GPU bias and host processor bias is driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories 420-423, with or without a bias cache in the GPU 410-413 (e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU.

In one implementation, the bias table entry associated with each access to the GPU-attached memory 420-423 is accessed prior to the actual access to the GPU memory, causing the following operations. First, local requests from the GPU 410-413 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor 405 (e.g., over a high-speed link as discussed above). In one embodiment, requests from the processor 405 that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU 410-413. The GPU may then transition the page to a host processor bias if it is not currently using the page.

The bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.

One mechanism for changing the bias state employs an API call (e.g., OpenCL), which, in turn, calls the GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor 405 bias to GPU bias, but is not required for the opposite transition.

In one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by the host processor 405. To access these pages, the processor 405 may request access from the GPU 410, which may or may not grant access right away, depending on the implementation.
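The page-granular bias lookup and routing described above might be sketched as follows; the page size, table layout, and function names are assumptions for illustration, not the actual hardware mechanism.

```python
# Hypothetical sketch of bias-table routing for GPU-attached memory.
PAGE_SIZE = 4096          # assumed page granularity
HOST_BIAS, GPU_BIAS = 0, 1

bias_table = {}            # page number -> bias bit(s)

def route_gpu_request(addr):
    """Route a local GPU request according to the page's bias state."""
    page = addr // PAGE_SIZE
    if bias_table.get(page, HOST_BIAS) == GPU_BIAS:
        return "local GPU memory"            # no coherence traffic needed
    return "forward to host processor"       # host-biased page

def host_request_gpu_biased_page(addr):
    """Host access to a GPU-biased page may flip the page to host bias."""
    page = addr // PAGE_SIZE
    bias_table[page] = HOST_BIAS             # GPU grants and transitions
    return "retry as a normal memory read"

bias_table[0x100] = GPU_BIAS
print(route_gpu_request(0x100 * PAGE_SIZE))  # -> local GPU memory
```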
Thus, to reduce communication between the processor 405 and GPU 410, it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor 405, and vice versa.

Graphics Processing Pipeline

FIG. 5 illustrates a graphics processing pipeline 500, according to an embodiment. In one embodiment a graphics processor can implement the illustrated graphics processing pipeline 500. The graphics processor can be included within the parallel processing subsystems as described herein, such as the parallel processor 200 of FIG. 2, which, in one embodiment, is a variant of the parallel processor(s) 112 of FIG. 1. The various parallel processing systems can implement the graphics processing pipeline 500 via one or more instances of the parallel processing unit (e.g., parallel processing unit 202 of FIG. 2) as described herein. For example, a shader unit (e.g., graphics multiprocessor 234 of FIG. 3) may be configured to perform the functions of one or more of a vertex processing unit 504, a tessellation control processing unit 508, a tessellation evaluation processing unit 512, a geometry processing unit 516, and a fragment/pixel processing unit 524. The functions of data assembler 502, primitive assemblers 506, 514, 518, tessellation unit 510, rasterizer 522, and raster operations unit 526 may also be performed by other processing engines within a processing cluster (e.g., processing cluster 214 of FIG. 3) and a corresponding partition unit (e.g., partition unit 220A-220N of FIG. 2). The graphics processing pipeline 500 may also be implemented using dedicated processing units for one or more functions. In one embodiment, one or more portions of the graphics processing pipeline 500 can be performed by parallel processing logic within a general purpose processor (e.g., CPU). In one embodiment, one or more portions of the graphics processing pipeline 500 can access on-chip memory (e.g., parallel processor memory 222 as in FIG. 2) via a memory interface 528, which may be an instance of the memory interface 218 of FIG. 2.

In one embodiment the data assembler 502 is a processing unit that collects vertex data for surfaces and primitives. The data assembler 502 then outputs the vertex data, including the vertex attributes, to the vertex processing unit 504. The vertex processing unit 504 is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. The vertex processing unit 504 reads data that is stored in cache, local or system memory for use in processing the vertex data and may be programmed to transform the vertex data from an object-based coordinate representation to a world space coordinate space or a normalized device coordinate space.

A first instance of a primitive assembler 506 receives vertex attributes from the vertex processing unit 504. The primitive assembler 506 reads stored vertex attributes as needed and constructs graphics primitives for processing by the tessellation control processing unit 508. The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs).

The tessellation control processing unit 508 treats the input vertices as control points for a geometric patch.
The control points are transformed from an input representation from the patch (e.g., the patch's bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit 512. The tessellation control processing unit 508 can also compute tessellation factors for edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. A tessellation unit 510 is configured to receive the tessellation factors for edges of a patch and to tessellate the patch into multiple geometric primitives such as line, triangle, or quadrilateral primitives, which are transmitted to a tessellation evaluation processing unit 512. The tessellation evaluation processing unit 512 operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives.

A second instance of a primitive assembler 514 receives vertex attributes from the tessellation evaluation processing unit 512, reading stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit 516. The geometry processing unit 516 is a programmable execution unit that executes geometry shader programs to transform graphics primitives received from primitive assembler 514 as specified by the geometry shader programs. In one embodiment the geometry processing unit 516 is programmed to subdivide the graphics primitives into one or more new graphics primitives and calculate parameters used to rasterize the new graphics primitives.

In some embodiments the geometry processing unit 516 can add or delete elements in the geometry stream. The geometry processing unit 516 outputs the parameters and vertices specifying new graphics primitives to primitive assembler 518. The primitive assembler 518 receives the parameters and vertices from the geometry processing unit 516 and constructs graphics primitives for processing by a viewport scale, cull, and clip unit 520. The geometry processing unit 516 reads data that is stored in parallel processor memory or system memory for use in processing the geometry data. The viewport scale, cull, and clip unit 520 performs clipping, culling, and viewport scaling and outputs processed graphics primitives to a rasterizer 522.

The rasterizer 522 can perform depth culling and other depth-based optimizations. The rasterizer 522 also performs scan conversion on the new graphics primitives to generate fragments and outputs those fragments and associated coverage data to the fragment/pixel processing unit 524. The fragment/pixel processing unit 524 is a programmable execution unit that is configured to execute fragment shader programs or pixel shader programs. The fragment/pixel processing unit 524 transforms fragments or pixels received from rasterizer 522, as specified by the fragment or pixel shader programs. For example, the fragment/pixel processing unit 524 may be programmed to perform operations including but not limited to texture mapping, shading, blending, texture correction and perspective correction to produce shaded fragments or pixels that are output to a raster operations unit 526. The fragment/pixel processing unit 524 can read data that is stored in either the parallel processor memory or the system memory for use when processing the fragment data.
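To summarize the dataflow just described, the sketch below chains the pipeline stages of FIG. 5 as plain functions; the stand-in implementations are hypothetical and exist only to show the stage ordering.

```python
# Hypothetical sketch: the FIG. 5 stage ordering as a function pipeline.
identity = lambda data: data

# Stand-ins for the programmable and fixed-function units described above.
vertex_shader = primitive_assembly = tessellate = geometry_shader = identity
viewport_scale_cull_clip = rasterize = fragment_shader = raster_ops = identity

def graphics_pipeline(vertex_data):
    primitives = primitive_assembly(vertex_shader(vertex_data))
    tessellated = tessellate(primitives)          # control + evaluation
    geometry = geometry_shader(tessellated)
    fragments = rasterize(viewport_scale_cull_clip(geometry))
    return raster_ops(fragment_shader(fragments))

print(graphics_pipeline(["vertex attributes"]))
```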
Fragment or pixel shader programs may be configured to shade at sample, pixel, tile, or other granularities depending on the sampling rate configured for the processing units.

The raster operations unit 526 is a processing unit that performs raster operations including, but not limited to stencil, z test, blending, and the like, and outputs pixel data as processed graphics data to be stored in graphics memory (e.g., parallel processor memory 222 as in FIG. 2, and/or system memory 104 as in FIG. 1), to be displayed on the one or more display device(s) 110 or for further processing by one of the one or more processor(s) 102 or parallel processor(s) 112. In some embodiments the raster operations unit 526 is configured to compress z or color data that is written to memory and decompress z or color data that is read from memory.

Machine Learning Overview

A machine learning algorithm is an algorithm that can learn based on a set of data. Embodiments of machine learning algorithms can be designed to model high-level abstractions within a data set. For example, image recognition algorithms can be used to determine to which of several categories a given input belongs; regression algorithms can output a numerical value given an input; and pattern recognition algorithms can be used to generate translated text or perform text to speech and/or speech recognition.

An exemplary type of machine learning algorithm is a neural network. There are many types of neural networks; a simple type of neural network is a feedforward network. A feedforward network may be implemented as an acyclic graph in which the nodes are arranged in layers. Typically, a feedforward network topology includes an input layer and an output layer that are separated by at least one hidden layer. The hidden layer transforms input received by the input layer into a representation that is useful for generating output in the output layer. The network nodes are fully connected via edges to the nodes in adjacent layers, but there are no edges between nodes within each layer. Data received at the nodes of an input layer of a feedforward network are propagated (i.e., "fed forward") to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients ("weights") respectively associated with each of the edges connecting the layers. Depending on the specific model being represented by the algorithm being executed, the output from the neural network algorithm can take various forms.

Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output produced by the network in response to the input representing an instance in a training data set is compared to the "correct" labeled output for that instance, an error signal representing the difference between the output and the labeled output is calculated, and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered "trained" when the errors for each of the outputs generated from the instances of the training data set are minimized.
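A minimal NumPy sketch of this supervised loop follows: a forward pass, an error signal at the output, and weight updates backpropagated through one hidden layer. The layer sizes, learning rate, and squared-error loss are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 3))   # hidden -> output weights
x = rng.normal(size=(1, 4))               # one training instance
y = np.array([[0.0, 1.0, 0.0]])           # its "correct" labeled output

for _ in range(200):
    h = np.tanh(x @ W1)                   # hidden layer states
    out = h @ W2                          # network output
    err = out - y                         # error signal at the output
    grad_W2 = h.T @ err                   # backpropagated gradients
    grad_W1 = x.T @ ((err @ W2.T) * (1.0 - h**2))
    W2 -= 0.1 * grad_W2                   # adjust weights to reduce error
    W1 -= 0.1 * grad_W1
print(np.round(out, 3))                   # approaches the labeled output
```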
The accuracy of a machine learning algorithm can be affected significantly by the quality of the data set used to train the algorithm. The training process can be computationally intensive and may require a significant amount of time on a conventional general-purpose processor. Accordingly, parallel processing hardware is used to train many types of machine learning algorithms. This is particularly useful for optimizing the training of neural networks, as the computations performed in adjusting the coefficients in neural networks lend themselves naturally to parallel implementations. Specifically, many machine learning algorithms and software applications have been adapted to make use of the parallel processing hardware within general-purpose graphics processing devices.

FIG. 6 is a generalized diagram of a machine learning software stack 600. A machine learning application 602 can be configured to train a neural network using a training dataset or to use a trained deep neural network to implement machine intelligence. The machine learning application 602 can include training and inference functionality for a neural network and/or specialized software that can be used to train a neural network before deployment. The machine learning application 602 can implement any type of machine intelligence including but not limited to image recognition, mapping and localization, autonomous navigation, speech synthesis, medical imaging, or language translation.

Hardware acceleration for the machine learning application 602 can be enabled via a machine learning framework 604. The machine learning framework 604 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework 604, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 604. Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN). The machine learning framework 604 can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations.

The machine learning framework 604 can process input data received from the machine learning application 602 and generate the appropriate input to a compute framework 606. The compute framework 606 can abstract the underlying instructions provided to the GPGPU driver 608 to enable the machine learning framework 604 to take advantage of hardware acceleration via the GPGPU hardware 610 without requiring the machine learning framework 604 to have intimate knowledge of the architecture of the GPGPU hardware 610. Additionally, the compute framework 606 can enable hardware acceleration for the machine learning framework 604 across a variety of types and generations of the GPGPU hardware 610.

GPGPU Machine Learning Acceleration
FIG. 7 illustrates a highly-parallel general-purpose graphics processing unit 700, according to an embodiment. In one embodiment the general-purpose graphics processing unit (GPGPU) 700 can be configured to be particularly efficient in processing the type of computational workloads associated with training deep neural networks. Additionally, the GPGPU 700 can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks.

The GPGPU 700 includes a host interface 702 to enable a connection with a host processor. In one embodiment the host interface 702 is a PCI Express interface. However, the host interface can also be a vendor-specific communications interface or communications fabric. The GPGPU 700 receives commands from the host processor and uses a global scheduler 704 to distribute execution threads associated with those commands to a set of compute clusters 706A-706H. The compute clusters 706A-706H share a cache memory 708. The cache memory 708 can serve as a higher-level cache for cache memories within the compute clusters 706A-706H.

The GPGPU 700 includes memory 714A-714B coupled with the compute clusters 706A-706H via a set of memory controllers 712A-712B. In various embodiments, the memory 714A-714B can include various types of memory devices including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory, or 3D stacked memory, including but not limited to high bandwidth memory (HBM).

In one embodiment each compute cluster 706A-706H includes a set of graphics multiprocessors, such as the graphics multiprocessor 400 of FIG. 4A. The graphics multiprocessors of the compute cluster include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example and in one embodiment, at least a subset of the floating point units in each of the compute clusters 706A-706H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating point units can be configured to perform 64-bit floating point operations.

Multiple instances of the GPGPU 700 can be configured to operate as a compute cluster. The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. In one embodiment the multiple instances of the GPGPU 700 communicate over the host interface 702. In one embodiment the GPGPU 700 includes an I/O hub 709 that couples the GPGPU 700 with a GPU link 710 that enables a direct connection to other instances of the GPGPU. In one embodiment the GPU link 710 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU 700. In one embodiment the GPU link 710 couples with a high speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In one embodiment the multiple instances of the GPGPU 700 are located in separate data processing systems and communicate via a network device that is accessible via the host interface 702.
In one embodiment the GPU link 710 can be configured to enable a connection to a host processor in addition to or as an alternative to the host interface 702.

While the illustrated configuration of the GPGPU 700 can be configured to train neural networks, one embodiment provides an alternate configuration of the GPGPU 700 that can be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration, the GPGPU 700 includes fewer of the compute clusters 706A-706H relative to the training configuration. Additionally, the memory technology associated with the memory 714A-714B may differ between inferencing and training configurations. In one embodiment the inferencing configuration of the GPGPU 700 can support inferencing-specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks.

FIG. 8 illustrates a multi-GPU computing system 800, according to an embodiment. The multi-GPU computing system 800 can include a processor 802 coupled to multiple GPGPUs 806A-806D via a host interface switch 804. The host interface switch 804, in one embodiment, is a PCI express switch device that couples the processor 802 to a PCI express bus over which the processor 802 can communicate with the set of GPGPUs 806A-806D. Each of the multiple GPGPUs 806A-806D can be an instance of the GPGPU 700 of FIG. 7. The GPGPUs 806A-806D can interconnect via a set of high-speed point-to-point GPU-to-GPU links 816. The high-speed GPU-to-GPU links can connect to each of the GPGPUs 806A-806D via a dedicated GPU link, such as the GPU link 710 as in FIG. 7. The P2P GPU links 816 enable direct communication between each of the GPGPUs 806A-806D without requiring communication over the host interface bus to which the processor 802 is connected. With GPU-to-GPU traffic directed to the P2P GPU links, the host interface bus remains available for system memory access or to communicate with other instances of the multi-GPU computing system 800, for example, via one or more network devices. While in the illustrated embodiment the GPGPUs 806A-806D connect to the processor 802 via the host interface switch 804, in one embodiment the processor 802 includes direct support for the P2P GPU links 816 and can connect directly to the GPGPUs 806A-806D.

Machine Learning Neural Network Implementations

The computing architecture provided by embodiments described herein can be configured to perform the types of parallel processing that are particularly suited for training and deploying neural networks for machine learning. A neural network can be generalized as a network of functions having a graph relationship. As is well-known in the art, there are a variety of types of neural network implementations used in machine learning. One exemplary type of neural network is the feedforward network, as previously described.

A second exemplary type of neural network is the Convolutional Neural Network (CNN). A CNN is a specialized feedforward neural network for processing data having a known, grid-like topology, such as image data. Accordingly, CNNs are commonly used for computer vision and image recognition applications, but they also may be used for other types of pattern recognition such as speech and language processing.
The nodes in the CNN input layer are organized into a set of "filters" (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed by two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first function of the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various color components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.
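To make the input/kernel/feature-map terminology concrete, here is a minimal NumPy sketch of the convolution applied by one filter (as in most CNN frameworks, it computes cross-correlation, i.e., the kernel is not flipped); shapes and values are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the input to produce the feature map."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # valid padding, stride 1
    ow = image.shape[1] - kw + 1
    feature_map = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            region = image[i:i + kh, j:j + kw]
            feature_map[i, j] = np.sum(region * kernel)
    return feature_map

image = np.arange(16.0).reshape(4, 4)    # one color component of an image
kernel = np.array([[1.0, -1.0]])         # parameters learned in training
print(conv2d(image, kernel))             # the filter's feature map
```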
Recurrent neural networks (RNNs) are a family of feedforward neural networks that include feedback connections between layers. RNNs enable modeling of sequential data by sharing parameter data across different parts of the neural network. The architecture for a RNN includes cycles. The cycles represent the influence of a present value of a variable on its own value at a future time, as at least a portion of the output data from the RNN is used as feedback for processing subsequent input in a sequence. This feature makes RNNs particularly useful for language processing due to the variable nature in which language data can be composed.

The figures described below present exemplary feedforward, CNN, and RNN networks, as well as describe a general process for respectively training and deploying each of those types of networks. It will be understood that these descriptions are exemplary and nonlimiting as to any specific embodiment described herein and the concepts illustrated can be applied generally to deep neural networks and machine learning techniques in general.

The exemplary neural networks described above can be used to perform deep learning. Deep learning is machine learning using deep neural networks. The deep neural networks used in deep learning are artificial neural networks composed of multiple hidden layers, as opposed to shallow neural networks that include only a single hidden layer. Deeper neural networks are generally more computationally intensive to train. However, the additional hidden layers of the network enable multistep pattern recognition that results in reduced output error relative to shallow machine learning techniques.

Deep neural networks used in deep learning typically include a front-end network to perform feature recognition coupled to a back-end network which represents a mathematical model that can perform operations (e.g., object classification, speech recognition, etc.) based on the feature representation provided to the model. Deep learning enables machine learning to be performed without requiring hand-crafted feature engineering to be performed for the model. Instead, deep neural networks can learn features based on statistical structure or correlation within the input data. The learned features can be provided to a mathematical model that can map detected features to an output. The mathematical model used by the network is generally specialized for the specific task to be performed, and different models will be used to perform different tasks.

Once the neural network is structured, a learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights within the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. An input vector is presented to the network for processing. The output of the network is compared to the desired output using a loss function and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards until each neuron has an associated error value which roughly represents its contribution to the original output. The network can then learn from those errors using an algorithm, such as the stochastic gradient descent algorithm, to update the weights of the neural network.

FIGS. 9A-9B illustrate an exemplary convolutional neural network. FIG. 9A illustrates various layers within a CNN. As shown in FIG. 9A, an exemplary CNN used to model image processing can receive input 902 describing the red, green, and blue (RGB) components of an input image. The input 902 can be processed by multiple convolutional layers (e.g., convolutional layer 904, convolutional layer 906). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers 908. Neurons in a fully connected layer have full connections to all activations in the previous layer, as previously described for a feedforward network. The output from the fully connected layers 908 can be used to generate an output result from the network. The activations within the fully connected layers 908 can be computed using matrix multiplication instead of convolution. Not all CNN implementations make use of fully connected layers 908. For example, in some implementations the convolutional layer 906 can generate output for the CNN.

The convolutional layers are sparsely connected, which differs from the traditional neural network configuration found in the fully connected layers 908. Traditional neural network layers are fully connected, such that every output unit interacts with every input unit. However, the convolutional layers are sparsely connected because the output of the convolution of a field is input (instead of the respective state value of each of the nodes in the field) to the nodes of the subsequent layer, as illustrated. The kernels associated with the convolutional layers perform convolution operations, the output of which is sent to the next layer. The dimensionality reduction performed within the convolutional layers is one aspect that enables the CNN to scale to process large images.

FIG. 9B illustrates exemplary computation stages within a convolutional layer of a CNN. Input to a convolutional layer 912 of a CNN can be processed in three stages of a convolutional layer 914. The three stages can include a convolution stage 916, a detector stage 918, and a pooling stage 920. The convolution layer 914 can then output data to a successive convolutional layer.
The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate a classification value for the input to the CNN.

The convolution stage 916 performs several convolutions in parallel to produce a set of linear activations. The convolution stage 916 can include an affine transformation, which is any transformation that can be specified as a linear transformation plus a translation. Affine transformations include rotations, translations, scaling, and combinations of these transformations. The convolution stage computes the output of functions (e.g., neurons) that are connected to specific regions in the input, which can be determined as the local region associated with the neuron. The neurons compute a dot product between the weights of the neurons and the region in the local input to which the neurons are connected. The output from the convolution stage 916 defines a set of linear activations that are processed by successive stages of the convolutional layer 914.

The linear activations can be processed by a detector stage 918. In the detector stage 918, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the nonlinear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. One particular type is the rectified linear unit (ReLU), which uses an activation function defined as f(x) = max(0, x), such that the activation is thresholded at zero.

The pooling stage 920 uses a pooling function that replaces the output of the convolutional layer 906 with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage 920, including max pooling, average pooling, and L2-norm pooling. Additionally, some CNN implementations do not include a pooling stage. Instead, such implementations substitute an additional convolution stage having an increased stride relative to previous convolution stages.

The output from the convolutional layer 914 can then be processed by the next layer 922. The next layer 922 can be an additional convolutional layer or one of the fully connected layers 908. For example, the first convolutional layer 904 of FIG. 9A can output to the second convolutional layer 906, while the second convolutional layer can output to a first layer of the fully connected layers 908.

FIG. 10 illustrates an exemplary recurrent neural network 1000. In a recurrent neural network (RNN), the previous state of the network influences the output of the current state of the network. RNNs can be built in a variety of ways using a variety of functions. The use of RNNs generally revolves around using mathematical models to predict the future based on a prior sequence of inputs. For example, an RNN may be used to perform statistical language modeling to predict an upcoming word given a previous sequence of words.
The illustrated RNN 1000 can be described as having an input layer 1002 that receives an input vector, hidden layers 1004 to implement a recurrent function, a feedback mechanism 1005 to enable a 'memory' of previous states, and an output layer 1006 to output a result. The RNN 1000 operates based on time-steps. The state of the RNN at a given time step is influenced based on the previous time step via the feedback mechanism 1005. For a given time step, the state of the hidden layers 1004 is defined by the previous state and the input at the current time step. An initial input (x_1) at a first time step can be processed by the hidden layer 1004. A second input (x_2) can be processed by the hidden layer 1004 using state information that is determined during the processing of the initial input (x_1). A given state can be computed as s_t = f(U·x_t + W·s_{t-1}), where U and W are parameter matrices. The function f is generally a nonlinearity, such as the hyperbolic tangent function (tanh) or a variant of the rectifier function f(x) = max(0, x). However, the specific mathematical function used in the hidden layers 1004 can vary depending on the specific implementation details of the RNN 1000.
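A minimal sketch of this recurrence, with tanh as the nonlinearity f and assumed dimensions, follows; it illustrates the state update only, not any particular hardware or framework implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(16, 8))    # input-to-hidden matrix
W = rng.normal(scale=0.1, size=(16, 16))   # hidden-to-hidden (feedback)

def rnn_step(x_t, s_prev):
    """One time step of s_t = f(U*x_t + W*s_{t-1}) with f = tanh."""
    return np.tanh(U @ x_t + W @ s_prev)

state = np.zeros(16)                       # initial 'memory'
for x_t in rng.normal(size=(5, 8)):        # a five-step input sequence
    state = rnn_step(x_t, state)           # feedback carries prior state
print(state.shape)
```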
In addition to the basic CNN and RNN networks described, variations on those networks may be enabled. One example RNN variant is the long short-term memory (LSTM) RNN. LSTM RNNs are capable of learning long-term dependencies that may be necessary for processing longer sequences of language. A variant on the CNN is a convolutional deep belief network, which has a structure similar to a CNN and is trained in a manner similar to a deep belief network. A deep belief network (DBN) is a generative neural network that is composed of multiple layers of stochastic (random) variables. DBNs can be trained layer-by-layer using greedy unsupervised learning. The learned weights of the DBN can then be used to pre-train neural networks by determining an optimal initial set of weights for the neural network.

FIG. 11 illustrates training and deployment of a deep neural network. Once a given network has been structured for a task, the neural network is trained using a training dataset 1102. Various training frameworks 1104 have been developed to enable hardware acceleration of the training process. For example, the machine learning framework 604 of FIG. 6 may be configured as a training framework 1104. The training framework 1104 can hook into an untrained neural network 1106 and enable the untrained neural net to be trained using the parallel processing resources described herein to generate a trained neural net 1108.

To start the training process the initial weights may be chosen randomly or by pre-training using a deep belief network. The training cycle can then be performed in either a supervised or unsupervised manner.

Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset 1102 includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework 1104 can adjust the weights that control the untrained neural network 1106. The training framework 1104 can provide tools to monitor how well the untrained neural network 1106 is converging towards a model suitable for generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy associated with a trained neural net 1108. The trained neural network 1108 can then be deployed to implement any number of machine learning operations.

Unsupervised learning is a learning method in which the network attempts to train itself using unlabeled data. Thus, for unsupervised learning the training dataset 1102 will include input data without any associated output data. The untrained neural network 1106 can learn groupings within the unlabeled input and can determine how individual inputs are related to the overall dataset. Unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 1107 capable of performing operations useful in reducing the dimensionality of data. Unsupervised training can also be used to perform anomaly detection, which allows the identification of data points in an input dataset that deviate from the normal patterns of the data.

Variations on supervised and unsupervised training may also be employed. Semi-supervised learning is a technique in which the training dataset 1102 includes a mix of labeled and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is continuously used to further train the model. Incremental learning enables the trained neural network 1108 to adapt to the new data 1112 without forgetting the knowledge instilled within the network during initial training.

Whether supervised or unsupervised, the training process for particularly deep neural networks may be too computationally intensive for a single compute node. Instead of using a single compute node, a distributed network of computational nodes can be used to accelerate the training process.

FIG. 12 is a block diagram illustrating distributed learning. Distributed learning is a training model that uses multiple distributed computing nodes to perform supervised or unsupervised training of a neural network. The distributed computational nodes can each include one or more host processors and one or more of the general-purpose processing nodes, such as the highly-parallel general-purpose graphics processing unit 700 as in FIG. 7. As illustrated, distributed learning can be performed with model parallelism 1202, data parallelism 1204, or a combination of model and data parallelism 1206.

In model parallelism 1202, different computational nodes in a distributed system can perform training computations for different parts of a single network. For example, each layer of a neural network can be trained by a different processing node of the distributed system. The benefits of model parallelism include the ability to scale to particularly large models. Splitting the computations associated with different layers of the neural network enables the training of very large neural networks in which the weights of all layers would not fit into the memory of a single computational node.
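As a toy illustration of model parallelism, the sketch below assigns the layers of one network round-robin to the nodes of a distributed system so that no single node holds all of the weights; the layer and node names are hypothetical.

```python
# Hypothetical sketch: splitting one network's layers across nodes.
layers = ["conv1", "conv2", "conv3", "fc1", "fc2"]
nodes = ["node0", "node1", "node2"]

assignment = {layer: nodes[i % len(nodes)]
              for i, layer in enumerate(layers)}
print(assignment)
# {'conv1': 'node0', 'conv2': 'node1', 'conv3': 'node2',
#  'fc1': 'node0', 'fc2': 'node1'}
```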
In some instances, model parallelism can be particularly useful in performing unsupervised training of large neural networks.

In data parallelism 1204, the different nodes of the distributed network have a complete instance of the model and each node receives a different portion of the data. The results from the different nodes are then combined. While different approaches to data parallelism are possible, data parallel training approaches all require a technique of combining results and synchronizing the model parameters between each node. Exemplary approaches to combining data include parameter averaging and update-based data parallelism. Parameter averaging trains each node on a subset of the training data and sets the global parameters (e.g., weights, biases) to the average of the parameters from each node. Parameter averaging uses a central parameter server that maintains the parameter data. Update-based data parallelism is similar to parameter averaging except that instead of transferring parameters from the nodes to the parameter server, the updates to the model are transferred. Additionally, update-based data parallelism can be performed in a decentralized manner, where the updates are compressed and transferred between nodes.

Combined model and data parallelism 1206 can be implemented, for example, in a distributed system in which each computational node includes multiple GPUs. Each node can have a complete instance of the model, with the separate GPUs within each node used to train different portions of the model.

Distributed training has increased overhead relative to training on a single machine. However, the parallel processors and GPGPUs described herein can each implement various techniques to reduce the overhead of distributed training, including techniques to enable high bandwidth GPU-to-GPU data transfer and accelerated remote data synchronization.

Exemplary Machine Learning Applications

Machine learning can be applied to solve a variety of technological problems, including but not limited to computer vision, autonomous driving and navigation, speech recognition, and language processing. Computer vision has traditionally been one of the most active research areas for machine learning applications. Applications of computer vision range from reproducing human visual abilities, such as recognizing faces, to creating new categories of visual abilities. For example, computer vision applications can be configured to recognize sound waves from the vibrations induced in objects visible in a video. Parallel processor accelerated machine learning enables computer vision applications to be trained using significantly larger training datasets than previously feasible and enables inferencing systems to be deployed using low power parallel processors.

Parallel processor accelerated machine learning has autonomous driving applications including lane and road sign recognition, obstacle avoidance, navigation, and driving control. Accelerated machine learning techniques can be used to train driving models based on datasets that define the appropriate responses to specific training input. The parallel processors described herein can enable rapid training of the increasingly complex neural networks used for autonomous driving solutions and enable the deployment of low power inferencing processors in a mobile platform suitable for integration into autonomous vehicles.

Parallel processor accelerated deep neural networks have enabled machine learning approaches to automatic speech recognition (ASR).
ASR includes the creation of a function that computes the most probable linguistic sequence given an input acoustic sequence. Accelerated machine learning using deep neural networks has enabled the replacement of the hidden Markov models (HMMs) and Gaussian mixture models (GMMs) previously used for ASR.

Parallel processor accelerated machine learning can also be used to accelerate natural language processing. Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to erroneous or unfamiliar input. Exemplary natural language processor applications include automatic machine translation between human languages.

The parallel processing platforms used for machine learning can be divided into training platforms and deployment platforms. Training platforms are generally highly parallel and include optimizations to accelerate multi-GPU single node training and multi-node, multi-GPU training. Exemplary parallel processors suited for training include the highly-parallel general-purpose graphics processing unit 700 of FIG. 7 and the multi-GPU computing system 800 of FIG. 8. In contrast, deployed machine learning platforms generally include lower power parallel processors suitable for use in products such as cameras, autonomous robots, and autonomous vehicles.

FIG. 13 illustrates an exemplary inferencing system on a chip (SOC) 1300 suitable for performing inferencing using a trained model. The SOC 1300 can integrate processing components including a media processor 1302, a vision processor 1304, a GPGPU 1306 and a multi-core processor 1308. The SOC 1300 can additionally include on-chip memory 1305 that can enable a shared on-chip data pool that is accessible by each of the processing components. The processing components can be optimized for low power operation to enable deployment to a variety of machine learning platforms, including autonomous vehicles and autonomous robots. For example, one implementation of the SOC 1300 can be used as a portion of the main control system for an autonomous vehicle. Where the SOC 1300 is configured for use in autonomous vehicles, the SOC is designed and configured for compliance with the relevant functional safety standards of the deployment jurisdiction.

During operation, the media processor 1302 and vision processor 1304 can work in concert to accelerate computer vision operations. The media processor 1302 can enable low latency decode of multiple high-resolution (e.g., 4K, 8K) video streams. The decoded video streams can be written to a buffer in the on-chip memory 1305. The vision processor 1304 can then parse the decoded video and perform preliminary processing operations on the frames of the decoded video in preparation for processing the frames using a trained image recognition model. For example, the vision processor 1304 can accelerate convolution operations for a CNN that is used to perform image recognition on the high-resolution video data, while back end model computations are performed by the GPGPU 1306.

The multi-core processor 1308 can include control logic to assist with sequencing and synchronization of data transfers and shared memory operations performed by the media processor 1302 and the vision processor 1304. The multi-core processor 1308 can also function as an application processor to execute software applications that can make use of the inferencing compute capability of the GPGPU 1306.
For example, at least a portion of the navigation and driving logic can be implemented in software executing on the multi-core processor 1308. Such software can directly issue computational workloads to the GPGPU 1306 or the computational workloads can be issued to the multi-core processor 1308, which can offload at least a portion of those operations to the GPGPU 1306.

The GPGPU 1306 can include compute clusters such as a low power configuration of the compute clusters 706A-706H within the highly-parallel general-purpose graphics processing unit 700. The compute clusters within the GPGPU 1306 can support instructions that are specifically optimized to perform inferencing computations on a trained neural network. For example, the GPGPU 1306 can support instructions to perform low precision computations such as 8-bit and 4-bit integer vector operations.

Dynamic Floating-Point Unit Accuracy Reduction for Machine Learning Operations

The IEEE 754 single-precision binary floating point format specifies a 32-bit binary representation having a 1-bit sign, an 8-bit exponent, and a 24-bit significand, of which 23 bits are explicitly stored. The IEEE 754 half-precision binary floating point format specifies a 16-bit binary representation having a 1-bit sign, a 5-bit exponent, and an 11-bit significand, of which 10 bits are explicitly stored. The implicit significand bits are defined to be one for non-zero exponent values and zero when all exponent bits are zero. Floating point units capable of performing arithmetic operations at single and half precision are known in the art. For example, existing floating point units can perform 32-bit single precision floating point operations (FP32) or dual 16-bit half precision floating point operations (FP16).

Embodiments described herein extend this capability by providing support for instructions and associated logic to enable variable precision operations. Floating point instructions that allow variable precision operations can dynamically increase throughput by performing operations at lower precision when possible. In one embodiment a set of instructions and associated logic is provided in which throughput is increased by performing floating point operations at the lowest precision possible without significant loss of data. In one embodiment a set of instructions and associated logic is provided in which the floating point logic will verify lower precision results against results performed at a higher precision to determine if any significant loss of data has occurred.

FIG. 14 illustrates components of a dynamic precision floating point unit 1400, according to an embodiment. In one embodiment the dynamic precision floating point unit 1400 includes a control unit 1402, a set of internal registers 1404, an exponent block 1406, and a significand block 1408. In addition to floating point control logic known in the art, in one embodiment the control unit 1402 additionally includes precision tracking logic 1412 and a numerical transform unit 1422.

In one embodiment the precision tracking logic 1412 is hardware logic configured to track an available number of bits of precision for computed data relative to a target precision. The precision tracking logic 1412 can monitor precision registers within the exponent block 1406 and the significand block 1408 to track precision metrics such as the minimum number of bits of precision required to store computed values that are generated by the exponent block 1406 and the significand block 1408.
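As a software analogy for the kind of per-value metric such logic might maintain, the sketch below counts the significand bits needed to represent a Python float exactly; it is illustrative only and does not model the hardware registers.

```python
import math

def significand_bits_needed(value):
    """Count significand bits needed to represent `value` exactly."""
    if value == 0.0:
        return 0
    mantissa, _ = math.frexp(value)        # mantissa in [0.5, 1)
    bits = 0
    while mantissa != 0.0 and bits < 53:   # 53-bit double significand
        mantissa *= 2.0
        mantissa -= int(mantissa)          # strip the bit just emitted
        bits += 1
    return bits

print(significand_bits_needed(0.5))    # 1 bit  (0.1 in binary)
print(significand_bits_needed(1.25))   # 3 bits (1.01 in binary)
# Values needing more than 11 bits would lose data in FP16's significand.
```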
In one embodiment the precision metrics include a running average of the numerical precision required to represent data over a set of computations. In one embodiment the precision metrics include a maximum required precision within a given set of data. In one embodiment the dynamic precision floating point unit 1400 supports instructions to read or reset the register data used by the precision tracking logic 1412 to generate the precision metrics described herein. In one embodiment the compute unit housing the dynamic precision floating point unit supports instructions to set or reset the register data used by the precision tracking logic 1412. In one embodiment the precision tracking logic 1412 monitors an error accumulator 1434 in the set of internal registers 1404. The error accumulator 1434 can be used to track an accumulated error (e.g., rounding error) over a set of floating point operations. In one embodiment the dynamic precision floating point unit 1400 supports a set of instructions including an instruction to reset the error accumulator 1434 and an instruction to read the error accumulator 1434. In one embodiment the error accumulator can be reset in response to a bit or flag that is supplied as an operand to an instruction.

In one embodiment the numerical transform unit 1422 can be used to perform intermediate numerical transforms on data when performing lower precision operations to prevent or mitigate the possibility of overflow or underflow while performing operations. For example, when approaching the precision limits of a given datatype, the numerical transform unit 1422 can perform multiplication or division operations using logarithms and transform the resulting value back via exponentiation. Further details with respect to the precision tracking logic 1412 and the numerical transform unit 1422 are provided in FIG. 22.

The internal registers 1404 include a set of operand registers 1414 that store input values for the dynamic precision floating point unit 1400. In one embodiment the operand registers 1414 include two operands (A, B). For floating point input data, the input data values can be divided into exponent portions (EXA, EXB) and significand portions (SIGA, SIGB). In various embodiments the operand registers 1414 are not limited to supporting two floating point inputs. In one embodiment the operand registers 1414 include three input operands, for example, to support fused multiply-add, multiply-subtract, multiply-accumulate, or related operations. In one embodiment the operand registers 1414 can also store integer values, as in one embodiment the dynamic precision floating point unit supports 32-bit, 16-bit, and 8-bit integer operations. The specific data type and baseline precision are configurable, in one embodiment, via an input to the control unit 1402.

In one embodiment, floating point operations are performed at dynamic precision using the exponent block 1406 and the significand block 1408. In one embodiment, integer operations can be performed via the significand block 1408. In one embodiment, dual 8-bit integer operations can be performed using the exponent block 1406 and significand block 1408.

In one embodiment the exponent block 1406 includes a comparator 1416 and a dynamic precision exponent adder 1426. The comparator 1416 determines the difference between the exponents and identifies the smaller of the two. During floating point addition, the exponent of the smaller number is adjusted to match the exponent of the larger number.
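This exponent-alignment step can be illustrated with a small behavioral model. The C sketch below is a hypothetical illustration of the comparator-and-shift behavior just described, using a simplified unpacked format; it is not the circuit of FIG. 14, and normalization of a carry out of the significand is omitted for brevity.

```c
/* Behavioral sketch of exponent alignment for floating point addition:
 * a comparator finds the exponent difference, and the significand of
 * the smaller operand is right-shifted so both operands share the
 * larger exponent before the significand addition. */
#include <stdint.h>
#include <stdio.h>

/* Unpacked, unsigned toy floating point value: value = sig * 2^(exp-23),
 * with the implicit leading bit already made explicit at bit 23. */
typedef struct { int exp; uint32_t sig; } UnpackedFP;

static UnpackedFP fp_add_aligned(UnpackedFP a, UnpackedFP b) {
    /* Comparator: ensure 'a' holds the larger exponent. */
    if (a.exp < b.exp) { UnpackedFP t = a; a = b; b = t; }
    int diff = a.exp - b.exp;
    /* Shift unit: align the smaller operand; bits shifted out are the
     * rounding loss an error accumulator could record. */
    uint32_t aligned = (diff < 32) ? (b.sig >> diff) : 0;
    UnpackedFP r = { a.exp, a.sig + aligned };  /* carry handling omitted */
    return r;
}

int main(void) {
    UnpackedFP a = { 4, 0xC00000 };  /* 1.5 * 2^4 in 24-bit form */
    UnpackedFP b = { 1, 0x800000 };  /* 1.0 * 2^1                */
    UnpackedFP r = fp_add_aligned(a, b);
    printf("exp=%d sig=0x%X\n", r.exp, (unsigned)r.sig);  /* exp=4 sig=0xD00000 */
    return 0;
}
```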
The dynamic precision exponent adder 1426 can be used to add exponent values for FP16 or FP32 values. The significand block 1408 includes a dynamic precision multiplier 1418, a shift unit 1428, a dynamic precision significand adder 1438, and an accumulator register 1448.

In one embodiment an FP16 or FP32 data type can be specified for an operation. Where FP16 is specified, the dynamic precision floating point unit 1400 can power gate elements that are needed only for FP32 operations, while maintaining logic to track precision loss or error (e.g., via the error accumulator 1434). For example and in one embodiment, the error accumulator 1434 can be used to track a number of rounding operations over a sequence of instructions. In one embodiment the error accumulator maintains a value of the total accumulated rounding error over a set of instructions. The dynamic precision floating point unit 1400 can enable support for an instruction to clear or read the error accumulator 1434 from software. Where FP32 is specified, the dynamic precision floating point unit 1400 can attempt to perform FP32 operations at FP16 precision, while power gating elements and components beyond those required to perform operations at FP16 precision. Based on the input or intermediate values, where the dynamic precision floating point unit 1400 is requested to perform operations at FP32, the dynamic precision floating point unit 1400 can initially attempt to perform operations at FP16 and expand precision as needed up to FP32. Where FP32 operations can be performed at FP16 precision, the power demand per operation is reduced, allowing a larger number of compute elements to be enabled simultaneously. For example, dynamic capacitance and/or power budget limitations for a given configuration, such as a battery powered configuration or a passive-only cooling configuration, may not allow all floating point units or other compute elements within a GPGPU to be enabled simultaneously. Reducing the dynamic power of a set of floating point units by enabling dynamic lower precision compute can increase the overall throughput of the compute units of a GPGPU within a given power envelope, as a greater number of threads can be processed on a per-cycle basis without exceeding dynamic power limitations.

FIG. 15 provides additional details with respect to the dynamic precision floating point unit 1400 of FIG. 14, according to an embodiment. In one embodiment the dynamic precision multiplier 1418 includes a set of input buffers 1502 to store significand data. In one embodiment the set of input buffers 1502 includes two buffers to store two input values for a multiply or divide operation. For a fused operation (e.g., multiply-add, multiply-subtract) the product of the operation can be added to a third input via an adder and/or stored in an accumulator register.

In one embodiment some configurations of the dynamic precision multiplier 1418 include 24-bit input buffers that can explicitly store 24 bits of significand data for single precision floating point inputs or 11 bits of significand data for half precision floating point values. In some configurations the input buffers 1502 may also be 32-bit buffers to enable multiplication of 32-bit integer values. In one embodiment a single configuration of the input buffers 1502 is present that is selectable or configurable between 32 bits and 24 bits.
In one embodiment the output buffer 1510 is similarly configurable or selectable between 24 bits and 32 bits to selectively enable storage of a full precision 32-bit integer or a 24-bit and/or 11-bit significand value for a 32-bit or 16-bit floating point number.

In one embodiment the dynamic precision multiplier 1418 includes a multiplier 1506 and an overflow multiplier 1504. The multiplier 1506 is configurable to perform a multiply or divide operation at half precision for a data type. For example, the multiplier 1506 can perform an 11-bit multiply operation for the significand of an FP16 floating point value and/or a 16-bit multiply operation for a 16-bit integer operation. The multiplier 1506 can also perform an 8-bit multiply operation for an INT8 integer value. For a 32-bit floating point value or a 32-bit integer value, the multiplier 1506 can perform a multiplication operation for a 24-bit significand at 11 bits (e.g., FP16 precision). The multiplier 1506 can, if necessary, perform a multiplication at 16 bits of significand precision for a 24-bit FP32 significand. In one embodiment the required and resulting precision for an operation on a given set of inputs can be tracked via the precision register 1508. In one embodiment the required and resulting precision can be represented within the precision register 1508 via a loss of precision that would result should the output of the multiplier 1506 be output via the output buffer 1510. In such embodiment the precision register 1508 can track the precision loss associated with the use of lower-precision data types as well as the precision loss associated with performing operations at a lower than requested precision.

In one embodiment control logic associated with the dynamic precision multiplier 1418 (e.g., within the control unit 1402 of FIG. 14) can monitor the precision loss associated with performing higher precision (e.g., FP32, INT32) operations at lower precision (e.g., FP16, INT16, INT8). If precision loss will be significant, the control logic can enable the overflow multiplier 1504 to perform operations for the additional bits of precision. Additionally, if the control logic determines that an overflow or underflow will occur based on the current inputs, the overflow multiplier 1504 is enabled and the multiplication operation is performed using the overflow multiplier 1504 and the multiplier 1506.

Similar control operations are performed for the dynamic precision exponent adder 1426 and the dynamic precision significand adder 1438. The dynamic precision exponent adder 1426 includes a set of 8-bit input buffers 1512 that can store exponent data for FP32 (8-bit) and FP16 (5-bit). The 8-bit input buffers 1512 can also store a set of INT8 inputs. The output buffer 1520 for the dynamic precision exponent adder 1426 can be similarly configured. The dynamic precision significand adder 1438 includes a set of input buffers 1522 that can be selected from one of a set of 24-bit and 32-bit buffers or can be dynamically configurable to store input data of either 24 bits or 32 bits. In one embodiment the input buffers 1522 are simply 32-bit buffers that can also store 24-bit input data. The output buffer 1530 for the dynamic precision significand adder 1438 can be similarly configured. Precision register 1518 within the dynamic precision exponent adder 1426 and precision register 1528 within the dynamic precision significand adder 1438 can be configured to track precision loss for performed operations.
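The division of work between the multiplier 1506 and the overflow multiplier 1504 can be pictured with a partial-product decomposition. The following C sketch is an illustrative model only, with widths chosen for illustration (11-bit halves of a 22-bit value, mirroring the 11-bit FP16 versus roughly 24-bit FP32 significand split described above); the high partial products correspond to the extra work an overflow multiplier would contribute when full precision is required.

```c
/* Sketch of building a wide significand product from narrow multiplies:
 * the low 11x11 product is the work a half-precision multiplier can do
 * alone; the remaining partial products model an overflow multiplier's
 * contribution. Hypothetical model, not the patent's circuit. */
#include <stdint.h>
#include <stdio.h>

/* Multiply two 22-bit values using four 11x11-bit partial products. */
static uint64_t split_mul22(uint32_t a, uint32_t b) {
    uint32_t a_lo = a & 0x7FF, a_hi = a >> 11;   /* 11-bit halves */
    uint32_t b_lo = b & 0x7FF, b_hi = b >> 11;
    uint64_t p0 = (uint64_t)a_lo * b_lo;          /* narrow multiplier      */
    uint64_t p1 = (uint64_t)a_lo * b_hi;          /* overflow-multiplier    */
    uint64_t p2 = (uint64_t)a_hi * b_lo;          /* work, needed only for  */
    uint64_t p3 = (uint64_t)a_hi * b_hi;          /* wide inputs            */
    return p0 + ((p1 + p2) << 11) + (p3 << 22);
}

int main(void) {
    uint32_t a = 0x2ABCDE & 0x3FFFFF, b = 0x1F0F0F & 0x3FFFFF;
    printf("split : %llu\n", (unsigned long long)split_mul22(a, b));
    printf("direct: %llu\n", (unsigned long long)((uint64_t)a * b));
    return 0;
}
```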
Control logic can enable overflow adder 1514 and/or overflow adder 1524 as needed to prevent overflow or underflow conditions or to prevent precision loss exceeding a threshold.

Returning to FIG. 14, in one embodiment, dual INT8 operations can be performed by the dynamic precision floating point unit 1400 using the dynamic precision exponent adder 1426 and the dynamic precision significand adder 1438. For example, instead of disabling the exponent block 1406 during integer operations, the exponent block 1406 can be configured to perform an operation on a first set of 8-bit integer operands while the significand block 1408 can be configured to perform an operation on a second set of 8-bit operands. To enable support for dual 8-bit multiply, dual fused multiply-add, dual fused multiply-subtract, and/or other multiplication-based operations, in one embodiment the exponent block 1406 can include an additional multiplier 1436. The multiplier 1436 can be a fixed 8-bit multiplier to enable simultaneous dual 8-bit multiply operations using the exponent block 1406 and the significand block 1408.

FIG. 16 illustrates thread assignments for a dynamic precision processing system 1600, according to an embodiment. In one embodiment the dynamic precision processing system 1600 includes a set of dynamic floating-point units 1608A-1608D. The dynamic floating-point units 1608A-1608D can execute a set of operation threads 1606A-1606D that can perform mixed precision operations and generate output data at variable precisions. In one embodiment a first operation (e.g., add, subtract, multiply, divide, etc.) can be performed by a first operational thread 1606A on a first dynamic floating-point unit 1608A, where the first operational thread 1606A accepts as input two 16-bit floating point values 1602A-1602B and outputs a 16-bit (FP16) floating point value. The first operation can be performed as part of a dual operation in which a single instruction executed by a GPGPU specifies a mixed precision FP16/FP32 pair of operations. The second operation of the dual operation can be performed by a second operation thread 1606B that executes on a second dynamic floating-point unit 1608B, which can generate a second output 1612 that is a 32-bit floating point output. The second operation thread 1606B configures the second dynamic floating-point unit 1608B to receive two 32-bit floating point input values 1603A-1603B. In one embodiment the operation on the two 32-bit floating point inputs can be performed at 16 bits of precision if underflow, overflow, or excessive precision loss will not occur by performing the operation at the lower precision.

In one embodiment the dynamic precision processing system 1600 can execute a single instruction having a 16-bit operand 1604A and a 32-bit operand 1604B. Operation thread 1606C can be executed on a dynamic floating-point unit 1608C. The dynamic floating-point unit 1608C will attempt to perform a mixed precision 16-bit/32-bit operation at 16 bits of precision unless significant precision loss or error will occur. In one embodiment the dynamic precision processing system 1600 can also be configured to perform integer operations. For example, an operation on a pair of 8-bit integer inputs 1605A-1605B can be performed by an operation thread 1606D on a dynamic floating-point unit 1608D to generate an 8-bit integer output 1616.
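The dual INT8 mode described above can be pictured as two independent byte lanes sharing a wider datapath. The following C sketch is an illustrative software model, not the patent's circuit: it packs two unsigned 8-bit additions into one 16-bit word while preventing a carry from the low lane from corrupting the high lane.

```c
/* Illustrative sketch of dual 8-bit integer addition packed into one
 * 16-bit datapath (SWAR style), modeling the idea of two sub-blocks
 * handling independent INT8 lanes in a single cycle. */
#include <stdint.h>
#include <stdio.h>

/* Add two pairs of unsigned 8-bit lanes held in 16-bit words; each lane
 * wraps modulo 256 and no carry propagates between lanes. */
static uint16_t dual_u8_add(uint16_t a, uint16_t b) {
    uint16_t low  = (uint16_t)((a + b) & 0x00FFu);               /* low lane  */
    uint16_t high = (uint16_t)(((a & 0xFF00u) + (b & 0xFF00u)) & 0xFF00u);
    return (uint16_t)(high | low);                               /* high lane */
}

int main(void) {
    uint16_t a = (uint16_t)((200 << 8) | 7);    /* lanes: 200 and 7   */
    uint16_t b = (uint16_t)((100 << 8) | 250);  /* lanes: 100 and 250 */
    uint16_t r = dual_u8_add(a, b);
    printf("high lane: %u, low lane: %u\n", r >> 8, r & 0xFF);  /* 44, 1 */
    return 0;
}
```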
In one embodiment the dynamic floating-point unit 1608D is configurable to perform dual 8-bit integer operations in which two 8-bit integer operations can be performed in a single cycle.

FIG. 17 illustrates logic 1700 to perform a numerical operation at less than a requested precision, according to an embodiment. In one embodiment the logic 1700 is implemented via hardware integrated within the dynamic precision floating point unit 1400 of FIG. 14. In one embodiment the logic 1700 is performed in part via the control unit 1402 within the dynamic precision floating point unit 1400 of FIG. 14.

In one embodiment the logic 1700 can receive a request to perform a numerical operation at a first precision, as shown at block 1702. The numerical operation can be a floating-point operation or an integer operation. The first precision can be, for example, a 32-bit precision. In one embodiment the numerical operation may be an operation at the first precision that is performed upon operands having mixed precision. The logic 1700 can then perform the numerical operation using a number of bits associated with a second precision that is lower than the first precision, as shown at block 1704. For example and in one embodiment the number of bits used to perform the operation can be a number of bits associated with a 16-bit operation, while the first precision is a 32-bit precision. The logic 1700 can generate an intermediate result at the second precision at block 1706. The logic 1700 can then determine a precision loss of the intermediate result relative to the first precision. The precision loss can be read from a register that stores an indicator of precision loss that is recorded during the operation.

The logic 1700 can determine whether the precision loss is less than a threshold at block 1709. In one embodiment the threshold associated with the precision loss can be software configurable, although a hardware default threshold is used in some embodiments. In one embodiment the degree of precision loss can also be determined via execution of full precision operations in parallel on unused compute units. The reduced precision results can then be compared to the full precision results. If the precision loss is less than the threshold, the logic 1700 can output the result at the second precision, as shown at block 1712. If the precision loss is not less than the threshold at block 1709, the logic 1700 can compute the remaining bits of the result at block 1710 and output the result at the first precision, as shown at block 1714. Computing the remaining bits of the result at block 1710 can be performed, in one embodiment, via overflow logic units, such as the overflow multiplier 1504, overflow adder 1514, and/or overflow adder 1524, as in FIG. 15.

Vertical Stacking of Operations for 16-bit Floating Point Operations

When performing single instruction multiple thread (SIMT) operations at lower precision, in some circumstances it may be difficult to maintain full utilization of underlying single instruction multiple data (SIMD) logic due to the larger number of elements required to fill all SIMD lanes. For example, a SIMD logic unit configured for FP32 operation on 128-bit input registers can perform a single operation on four sets of input data. If that logic unit is configured to perform FP16 operations on the same four sets of input data, the underlying throughput for the operations may be increased due to the lower precision of the operation, but SIMD utilization is reduced by half.
One solution to SIMD underutilization is to perform the operation on eight sets of input data. However, the software executing on the logic units may not require as much parallelism as the underlying hardware can provide.

For example, a loop that performs iterative operations on input arrays can be vectorized such that each iteration of the loop is performed in parallel as a separate SIMT thread. The separate SIMT threads can be performed in a single operation on underlying SIMD/vector logic within a compute unit. When performing parallel instructions derived via compiler loop vectorization logic, a loop shorter than eight iterations will not fill all eight SIMD lanes available to execute the threads spawned for those operations, reducing overall utilization of the compute units. Additionally, where the underlying hardware has N SIMD lanes, any number of vectorized iterations that is not a multiple of N will require the execution of remainder iterations on a less than fully occupied SIMD unit. Furthermore, vectorization may require the separate execution of peel loops before executing the main body of the vectorized operations.

Some embodiments described herein can increase SIMD utilization by stacking multiple unrelated FP16 operations into a single SIMD unit for execution. Where a SIMD unit has N lanes available for execution (e.g., eight), thread scheduling logic can dispatch threads in units of N/2 or N/4 and allow unrelated sets of threads that perform the same or compatible operations to share a single SIMD unit. Additionally, one embodiment enables SIMD lane scheduling that allows the intermingling of dynamically assembled SIMT thread groups with vector SIMD threads.

FIG. 18 illustrates loop vectorization for SIMD units, according to an embodiment. In one embodiment, software logic can include a loop that is automatically vectorized by compiler software executing on a data processing system. The loop can include a peel loop 1802, a vectorized main loop 1804, and a remainder loop 1806. In some configurations loop vectorization is most efficient when the data accesses are aligned in memory. For example, a GPGPU can be configured such that vector memory accesses may be most efficiently performed in 64-byte chunks 1801A-1801F. In such a configuration, the peel loop 1802 includes a subset of loop iterations that are peeled from the main loop so that unaligned memory accesses are sectioned off from the main loop. The vectorized main loop 1804 includes the majority of the iterations of the loop. Each iteration of the vectorized main loop can be performed in parallel, and the memory accesses for each element are aligned on a specific memory boundary. The remainder loop 1806 includes the set of iterations that follow the vectorized main loop 1804. The iterations in the remainder loop 1806 generally may not be performed in parallel as efficiently as the main loop.

In one embodiment the peel loop 1802 and the remainder loop 1806 can also be vectorized. In one embodiment each of the peel loop 1802, main loop 1804, and remainder loop 1806 can be executed on FP16 SIMD8 units, where eight instances of the same operation can be performed in parallel. Loop iterations can be executed in parallel on SIMD hardware (e.g., FP16 SIMD8 units 1808A-1808C) using execution mask 1812, execution mask 1814, and execution mask 1816, which each enable and disable SIMD lanes for an operation cycle.
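The role of such execution masks can be sketched in scalar C. The model below is purely illustrative (not compiler output): an 18-iteration loop runs on a hypothetical 8-lane unit as a three-lane peel cycle, one fully enabled main cycle, and a seven-lane remainder cycle.

```c
/* Minimal scalar model of mask-driven SIMD8 loop execution with a peel
 * section, a fully-masked main section, and a remainder section. */
#include <stdio.h>

#define LANES 8

/* Run one SIMD "cycle": lane i executes iteration base+i when its mask
 * bit is set; masked-off lanes are idle for that cycle. */
static void simd_cycle(float *dst, const float *src, int base,
                       unsigned mask) {
    for (int lane = 0; lane < LANES; lane++)
        if (mask & (1u << lane))
            dst[base + lane] = src[base + lane] * 2.0f;
}

int main(void) {
    enum { N = 18 };
    float src[N], dst[N] = {0};
    for (int i = 0; i < N; i++) src[i] = (float)i;

    int peel = 3;                                    /* unaligned prefix  */
    simd_cycle(dst, src, 0, (1u << peel) - 1);       /* peel: lanes 0..2  */
    int i = peel;
    for (; i + LANES <= N; i += LANES)               /* vectorized main   */
        simd_cycle(dst, src, i, 0xFFu);              /* all lanes enabled */
    if (i < N)                                       /* remainder         */
        simd_cycle(dst, src, i, (1u << (N - i)) - 1);

    printf("dst[17] = %g\n", dst[17]);               /* prints 34 */
    return 0;
}
```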
For the illustrated peel loop 1802 and remainder loop 1806, a subset of elements is selected in execution mask 1812 and execution mask 1816. All lanes are selected in the execution mask 1814 of the vectorized main loop 1804.

In one embodiment, a SIMD unit with inactive lanes can be configured to perform other operations on those inactive lanes. For a given cycle, where scheduling logic configures a set of inactive lanes for a SIMD unit (e.g., FP16 SIMD8 1808A, FP16 SIMD8 1808C), instead of idling those lanes during the cycle, the scheduler can stack other multi-element SIMD threads or assign SIMT threads to the otherwise idle SIMD lanes.

FIG. 19 illustrates a thread processing system 1900, according to an embodiment. In one embodiment the thread processing system 1900 includes a SIMD compute unit, such as a SIMD8 floating point unit 1920 that includes multiple dynamic floating point units 1922A-1922H. Depending on the operation, the SIMD8 floating point unit 1920 can execute eight or more of the same or similar operations in a single cycle. For example and in one embodiment, each of the eight dynamic floating point units 1922A-1922H can execute a single operation at FP16 precision. In one embodiment, each of the eight dynamic floating point units 1922A-1922H can perform two paired INT8 operations in a single cycle.

Under some circumstances, such as with peel or remainder loops as illustrated in FIG. 18, not all lanes of a SIMD floating point unit will be active during a cycle. To increase utilization, SIMD slots can be assigned at smaller granularities, to enable otherwise unused SIMD lanes to be utilized. For example, the SIMD8 floating point unit 1920 would generally be assigned threads or operations at an eight-operation granularity, where fewer than eight operations present a potential loss of computational efficiency. In one embodiment, SIMD lanes can be occupied by a single vector SIMD thread that includes an execution mask that selects at least eight elements, or by a SIMT thread group having at least eight elements.

To increase SIMD utilization, one embodiment divides eight SIMD lanes into two SIMD4 slots (e.g., SIMD4 slot 1910, SIMD4 slot 1912). The SIMD4 slots can be filled in a variety of ways. In one embodiment, two separate SIMD threads (SIMD thread 1902, SIMD thread 1904) that combine to cover a total of four SIMD lanes are assigned to a SIMD4 slot (e.g., SIMD4 slot 1910). In one embodiment, the SIMT thread group 1906 can be assigned to the SIMD4 slot 1912. The SIMT thread group 1906 can include any number of threads that is a multiple of four (e.g., 4, 8, 12, 16, etc.). The threads within the SIMT thread group 1906 can be processed four threads at a time, with the number of cycles required to process all threads within the SIMT thread group 1906 dependent upon the number of threads in the group.

FIG. 20 illustrates logic 2000 to assign threads for computation, according to an embodiment. In one embodiment the logic 2000 is performed via the thread processing system 1900 as in FIG. 19. In one embodiment the logic 2000 can receive a first set of threads at a SIMD unit having a first number of lanes, as shown at block 2002. The logic 2000 can then determine if the first set of threads fills all SIMD lanes of the SIMD unit, as shown at block 2003.
If the first set of threads includes enough SIMT threads, or the threads of the first set include enough SIMD vector elements, to fill all SIMD lanes, the logic 2000 can assign the first set of threads to the SIMD unit, as shown at block 2004.

If the first set of threads does not fill all SIMD lanes, as determined at block 2003, the logic 2000 can assign the first set of threads to a second number of lanes that is less than the first number of lanes at block 2006. The assignment can be performed by assigning a SIMD thread to the SIMD unit and masking out the inactive lanes. The assignment can also be performed by assigning a set of SIMT threads to the SIMD unit. The logic 2000 can then stack one or more additional sets of threads to fill all SIMD lanes, as shown at block 2008. The additional set of threads can specify active SIMD lanes that occupy lanes that are not occupied by the initial threads.

System to Enable Normalization and Transformations for Low Precision Data

When performing operations with low precision data types, care must be taken to avoid overflow or underflow of data during numerical operations. This responsibility typically falls upon the data scientist who is developing the low precision algorithm. Due to the limitations of low precision arithmetic, many neural networks have been adapted to use binary and/or ternary values that only occupy one or two bits per element. However, there is a need for integer and floating point arithmetic logic units that can enable N-bit low precision arithmetic with guard logic to warn against or attempt to prevent significant loss of precision during arithmetic operations. In one embodiment the dynamic precision floating point units described herein include logic to warn when numerical calculations are approaching the limits of low precision calculations.

As shown in FIG. 14, a dynamic precision floating point unit 1400 can include precision tracking logic 1412 and a numerical transform unit 1422. In one embodiment the precision tracking logic 1412 tracks the available bits of precision remaining for computed data relative to a target precision. The available bits of precision can be tracked for intermediate data to determine whether an intermediate data value, which, in one embodiment, is computed at a higher precision relative to the input data or output data, can be stored at an output precision without significant precision loss or rounding error. For example and in one embodiment, specific low precision operations can be efficiently performed at a higher precision, and the precision tracking logic 1412 can determine whether a result of a calculation would overflow a given output precision. In one embodiment the logic units described herein can output status information that indicates the degree of lost precision due to rounding error. In one embodiment the logic units can perform intermediate numeric transformations on data to prevent significant data loss. The logic units can then output the transformed value. In one embodiment a full precision or near-full precision output value can be programmatically derived based on the output and status information provided with the output.

FIG. 21 illustrates a deep neural network 2100 that may be processed using compute logic provided by embodiments described herein. The deep neural network (DNN) is an artificial neural network including multiple neural network layers 2102A-2102N.
Each layer represents a set of non-linear compute operations to perform feature extraction and transformation in a manner consistent with the machine learning neural networks described herein. Each successive layer uses the output from the previous layer as input. In the case of a convolutional neural network, fused multiply-add logic (e.g., FMA logic 2104A, 2104B) can be used to compute dot products between feature map and filter data to generate activation map data that is provided as input for a successive layer.

Low precision neural networks may be implemented using binary or ternary weights in combination with binary, ternary, or N-bit feature maps. Some neural networks may still benefit from the added precision of calculations using N-bit feature maps and N-bit filters. In some implementations, N-bit features and weights for a neural network can be processed at low precision without significant reduction in output error. However, a data scientist implementing a low precision N-bit neural network (e.g., FP16, INT8) generally should be aware of rounding errors or out of bounds data that may arise due to successive calculations at low precision. Should precision tracking logic (e.g., precision tracking logic 1412 of FIG. 14) in the FMA logic 2104A-2104B determine that weight or feature map data is approaching the limits of the available precision of the data type, a status bit can be set by the FMA logic 2104A-2104B. The status bit can serve as an indicator to the data scientist developing the neural network model embodied in the neural network layers 2102A-2102N that the model may require optimization or that greater numerical precision may be warranted.

In one embodiment normalize and transform logic 2106A-2106B can be enabled to perform weight normalization or numerical transforms on feature map data before providing the feature map data to the next neural network layer for input. The application of the normalize and transform logic 2106A-2106B is optional at each stage and may be performed only if significant precision loss, overflow, or underflow conditions are likely during processing of an upcoming layer. In one embodiment weights or feature maps output from a layer of a neural network can be automatically normalized via an instance of the normalize and transform logic 2106A-2106B.

In one embodiment the normalize and transform logic 2106A-2106B can use the numerical transform unit 1422 of FIG. 14 to transform feature map data or weight data. Feature map data output from a neural network layer can be based on a set of data output from a set of functions. In one embodiment a specific set of low precision instructions is provided that enables automatic adjustment of N-bit neural network data to prevent catastrophic loss of precision. Exemplary transformations or normalizations that may be performed by the normalize and transform logic 2106A-2106B include weight normalization to a range of values or a set of persistent and reversible feature data transformations. In one embodiment, weight normalization can be performed to compress the dynamic range of a set of filter weights to within a predetermined range. Weight data can be normalized, for example, within a range of [-1,1], which can preserve the relative differences between weight values while reducing the overall magnitude of the weight values.
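The [-1,1] normalization described above can be sketched as a simple host-side helper. The function below is a hypothetical illustration of the idea, not the normalize and transform logic 2106A-2106B itself: it divides every weight by the maximum magnitude, preserving relative differences while bounding the range.

```c
/* Illustrative host-side sketch of weight normalization into [-1, 1]
 * by dividing by the maximum magnitude in the set. */
#include <stdio.h>
#include <math.h>

static void normalize_weights(float *w, int n) {
    float max_mag = 0.0f;
    for (int i = 0; i < n; i++)
        if (fabsf(w[i]) > max_mag) max_mag = fabsf(w[i]);
    if (max_mag == 0.0f) return;              /* nothing to scale */
    for (int i = 0; i < n; i++)
        w[i] /= max_mag;                      /* relative values preserved */
}

int main(void) {
    float w[4] = {-6.0f, 1.5f, 3.0f, -0.75f};
    normalize_weights(w, 4);
    for (int i = 0; i < 4; i++) printf("%g ", w[i]);  /* -1 0.25 0.5 -0.125 */
    printf("\n");
    return 0;
}
```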
In one embodiment, neural network weight or feature map data can be normalized via the mean value of the dataset.

In one embodiment, neural network calculations using data that approaches the range limits of the data type can be transformed before the data is used in calculations. For example, a multiply operation using large values that may result in an overflow can be performed as an addition of logarithms instead. While such transformations may result in some degree of precision loss, the calculations can be performed without overflowing the number of bits allocated to perform the operation. For example, a series of operations may be presented as in equation (1).

f = (A × B × C) / (D × E)    (1)

Should precision tracking logic within a compute unit determine that such an operation may overflow or underflow, the operation can be transformed to equation (2).

f = 10^(log A + log B + log C − log D − log E)    (2)

Equation (2) can be performed to produce a result without triggering an overflow of the datatype. In one embodiment, the normalize and transform logic 2106A-2106B can transform the output values to logarithmic values for storage and transform the values back via exponentiation before the values are used for the machine learning calculations described herein.

FIG. 22 is a flow diagram of logic 2200 to prevent error or significant precision loss when performing low precision operations for machine learning, according to an embodiment. In one embodiment the logic 2200 can be implemented via the precision tracking logic 1412 and the numerical transform unit 1422 within the dynamic precision floating point unit 1400, as in FIG. 14.

In one embodiment the logic 2200 can compute an activation map based on filter and feature map data associated with a layer of a neural network, as shown at block 2202. The logic 2200 can then track loss of precision that occurs during computation of the activation map for the layer of the neural network. The logic 2200 can then determine if the loss of precision is approaching a threshold at block 2205. If the loss of precision does not approach the default or configured threshold at block 2205, the logic 2200 can continue to compute activation maps (and apply activation functions) for successive layers until a loss of precision approaching the threshold occurs at block 2205. When the loss of precision approaches the threshold, the logic 2200 can determine if automatic numerical transformations are enabled at block 2207. If automatic transforms are enabled at block 2207, for example, via the instructions used to perform the set of numerical operations, the logic 2200 can transform the neural network data to reduce error due to precision loss at block 2208. The logic 2200 can perform any of the numerical transformations described herein, including data normalization within a range or via a mean value. Regardless of whether automatic transformations are enabled at block 2207, the logic 2200 can output status indicating that precision loss is approaching the threshold at block 2210. The status can be output as a status flag from the compute unit as a result of a performed operation.
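As a concrete numerical illustration of the transform that logic 2200 might apply at block 2208, the sketch below evaluates equations (1) and (2) both directly and in the log domain. The variable values are arbitrary, and double precision is used purely to demonstrate that the two forms agree; the point is that the log-domain intermediates stay small where the direct intermediates grow large.

```c
/* Numerical sketch of equations (1) and (2): a chain of multiplies and
 * divides that could overflow a narrow type is evaluated as a sum of
 * base-10 logarithms followed by one exponentiation. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double A = 3.0e4, B = 2.5e4, C = 4.0e4, D = 1.0e3, E = 2.0e3;

    /* Equation (1): direct form. The intermediate A*B*C can exceed the
     * range of a low-precision type even when f itself fits. */
    double f_direct = (A * B * C) / (D * E);

    /* Equation (2): log-domain form; intermediates stay small. */
    double f_log = pow(10.0, log10(A) + log10(B) + log10(C)
                             - log10(D) - log10(E));

    printf("direct: %g, log-domain: %g\n", f_direct, f_log);  /* both 1.5e+07 */
    return 0;
}
```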
A programmer can configure software logic to respond to such status by performing algorithmic adjustments to an executing program or adjusting the neural network model used to perform machine learning.

Additional Exemplary Graphics Processing System

Details of the embodiments described above can be incorporated within graphics processing systems and devices described below. The graphics processing systems and devices of FIG. 23 through FIG. 36 illustrate alternative systems and graphics processing hardware that can implement any and all of the techniques described above.

Additional Exemplary Graphics Processing System Overview

FIG. 23 is a block diagram of a processing system 2300, according to an embodiment. In various embodiments the system 2300 includes one or more processors 2302 and one or more graphics processors 2308, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 2302 or processor cores 2307. In one embodiment, the system 2300 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices.

An embodiment of system 2300 can include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments system 2300 is a mobile phone, smart phone, tablet computing device, or mobile Internet device. Data processing system 2300 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, data processing system 2300 is a television or set top box device having one or more processors 2302 and a graphical interface generated by one or more graphics processors 2308.

In some embodiments, the one or more processors 2302 each include one or more processor cores 2307 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 2307 is configured to process a specific instruction set 2309. In some embodiments, instruction set 2309 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). Multiple processor cores 2307 may each process a different instruction set 2309, which may include instructions to facilitate the emulation of other instruction sets. Processor core 2307 may also include other processing devices, such as a Digital Signal Processor (DSP).

In some embodiments, the processor 2302 includes cache memory 2304. Depending on the architecture, the processor 2302 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 2302. In some embodiments, the processor 2302 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 2307 using known cache coherency techniques. A register file 2306 is additionally included in processor 2302, which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register).
Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 2302.

In some embodiments, processor 2302 is coupled with a processor bus 2310 to transmit communication signals such as address, data, or control signals between processor 2302 and other components in system 2300. In one embodiment the system 2300 uses an exemplary 'hub' system architecture, including a memory controller hub 2316 and an Input Output (I/O) controller hub 2330. The memory controller hub 2316 facilitates communication between a memory device and other components of system 2300, while an I/O Controller Hub (ICH) 2330 provides connections to I/O devices via a local I/O bus. In one embodiment, the logic of the memory controller hub 2316 is integrated within the processor.

Memory device 2320 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 2320 can operate as system memory for the system 2300, to store data 2322 and instructions 2321 for use when the one or more processors 2302 execute an application or process. Memory controller hub 2316 also couples with an optional external graphics processor 2312, which may communicate with the one or more graphics processors 2308 in processors 2302 to perform graphics and media operations.

In some embodiments, ICH 2330 enables peripherals to connect to memory device 2320 and processor 2302 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 2346, a firmware interface 2328, a wireless transceiver 2326 (e.g., Wi-Fi, Bluetooth), a data storage device 2324 (e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller 2340 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 2342 connect input devices, such as keyboard and mouse 2344 combinations. A network controller 2334 may also couple with ICH 2330. In some embodiments, a high-performance network controller (not shown) couples with processor bus 2310. It will be appreciated that the system 2300 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, the I/O controller hub 2330 may be integrated within the one or more processors 2302, or the memory controller hub 2316 and I/O controller hub 2330 may be integrated into a discrete external graphics processor, such as the external graphics processor 2312.

FIG. 24 is a block diagram of an embodiment of a processor 2400 having one or more processor cores 2402A-2402N, an integrated memory controller 2414, and an integrated graphics processor 2408. Those elements of FIG. 24 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 2400 can include additional cores up to and including additional core 2402N represented by the dashed-line boxes. Each of processor cores 2402A-2402N includes one or more internal cache units 2404A-2404N. In some embodiments each processor core also has access to one or more shared cache units 2406.

The internal cache units 2404A-2404N and shared cache units 2406 represent a cache memory hierarchy within the processor 2400.
The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 2406 and 2404A-2404N.

In some embodiments, processor 2400 may also include a set of one or more bus controller units 2416 and a system agent core 2410. The one or more bus controller units 2416 manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). System agent core 2410 provides management functionality for the various processor components. In some embodiments, system agent core 2410 includes one or more integrated memory controllers 2414 to manage access to various external memory devices (not shown).

In some embodiments, one or more of the processor cores 2402A-2402N include support for simultaneous multi-threading. In such embodiment, the system agent core 2410 includes components for coordinating and operating cores 2402A-2402N during multi-threaded processing. System agent core 2410 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 2402A-2402N and graphics processor 2408.

In some embodiments, processor 2400 additionally includes graphics processor 2408 to execute graphics processing operations. In some embodiments, the graphics processor 2408 couples with the set of shared cache units 2406 and the system agent core 2410, including the one or more integrated memory controllers 2414. In some embodiments, a display controller 2411 is coupled with the graphics processor 2408 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 2411 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 2408 or system agent core 2410.

In some embodiments, a ring-based interconnect unit 2412 is used to couple the internal components of the processor 2400. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 2408 couples with the ring interconnect 2412 via an I/O link 2413.

The exemplary I/O link 2413 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 2418, such as an eDRAM module. In some embodiments, each of the processor cores 2402A-2402N and graphics processor 2408 use embedded memory modules 2418 as a shared Last Level Cache.

In some embodiments, processor cores 2402A-2402N are homogeneous cores executing the same instruction set architecture. In another embodiment, processor cores 2402A-2402N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 2402A-2402N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set.
In one embodiment processor cores 2402A-2402N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. Additionally, processor 2400 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 25 is a block diagram of a graphics processor 2500, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor 2500 includes a memory interface 2514 to access memory. Memory interface 2514 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In some embodiments, graphics processor 2500 also includes a display controller 2502 to drive display output data to a display device 2520. Display controller 2502 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. In some embodiments, graphics processor 2500 includes a video codec engine 2506 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG).

In some embodiments, graphics processor 2500 includes a block image transfer (BLIT) engine 2504 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 2510. In some embodiments, GPE 2510 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, GPE 2510 includes a 3D pipeline 2512 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 2512 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system 2515. While 3D pipeline 2512 can be used to perform media operations, an embodiment of GPE 2510 also includes a media pipeline 2516 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, media pipeline 2516 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 2506. In some embodiments, media pipeline 2516 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 2515.
The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system 2515.

In some embodiments, 3D/Media subsystem 2515 includes logic for executing threads spawned by 3D pipeline 2512 and media pipeline 2516. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem 2515, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media subsystem 2515 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

Exemplary Additional Graphics Processing Engine

FIG. 26 is a block diagram of a graphics processing engine 2610 of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE) 2610 is a version of the GPE 2510 shown in FIG. 25. Elements of FIG. 26 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 2512 and media pipeline 2516 of FIG. 25 are illustrated. The media pipeline 2516 is optional in some embodiments of the GPE 2610 and may not be explicitly included within the GPE 2610. For example and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 2610.

In some embodiments, GPE 2610 couples with or includes a command streamer 2603, which provides a command stream to the 3D pipeline 2512 and/or media pipeline 2516. In some embodiments, command streamer 2603 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 2603 receives commands from the memory and sends the commands to 3D pipeline 2512 and/or media pipeline 2516. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 2512 and media pipeline 2516. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 2512 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 2512 and/or image data and memory objects for the media pipeline 2516. The 3D pipeline 2512 and media pipeline 2516 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 2614.

In various embodiments the 3D pipeline 2512 can execute one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 2614. The graphics core array 2614 provides a unified block of execution resources.
Multi-purpose execution logic (e.g., execution units) within the graphics core array 2614 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

In some embodiments the graphics core array 2614 also includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units additionally include general-purpose logic that is programmable to perform parallel general purpose computational operations, in addition to graphics processing operations. The general purpose logic can perform processing operations in parallel or in conjunction with general purpose logic within the processor core(s) 2307 of FIG. 23, cores 2402A-2402N as in FIG. 24, or any other processor described herein.

Threads executing on the graphics core array 2614 can output data to memory in a unified return buffer (URB) 2618. The URB 2618 can store data for multiple threads. In some embodiments the URB 2618 may be used to send data between different threads executing on the graphics core array 2614. In some embodiments the URB 2618 may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic 2620.

In some embodiments, graphics core array 2614 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE 2610. In one embodiment the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.

The graphics core array 2614 couples with shared function logic 2620 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 2620 are hardware logic units that provide specialized supplemental functionality to the graphics core array 2614. In various embodiments, shared function logic 2620 includes but is not limited to sampler 2621, math 2622, and inter-thread communication (ITC) 2623 logic. Additionally, some embodiments implement one or more cache(s) 2625 within the shared function logic 2620. A shared function is implemented where the demand for a given specialized function is insufficient for inclusion within the graphics core array 2614. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 2620 and shared among the execution resources within the graphics core array 2614. The precise set of functions that are implemented within the shared function logic 2620 rather than included within the graphics core array 2614 varies between embodiments.

FIG. 27 is a block diagram of another embodiment of a graphics processor 2700. Elements of FIG. 27 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, graphics processor 2700 includes a ring interconnect 2702, a pipeline front-end 2704, a media engine 2737, and graphics cores 2780A-2780N. In some embodiments, ring interconnect 2702 couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores.
In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system.

In some embodiments, graphics processor 2700 receives batches of commands via ring interconnect 2702. The incoming commands are interpreted by a command streamer 2703 in the pipeline front-end 2704. In some embodiments, graphics processor 2700 includes scalable execution logic to perform 3D geometry processing and media processing via the graphics core(s) 2780A-2780N. For 3D geometry processing commands, command streamer 2703 supplies commands to geometry pipeline 2736. For at least some media processing commands, command streamer 2703 supplies the commands to a video front end 2734, which couples with a media engine 2737. In some embodiments, media engine 2737 includes a Video Quality Engine (VQE) 2730 for video and image post-processing and a multi-format encode/decode (MFX) engine 2733 to provide hardware-accelerated media data encode and decode. In some embodiments, geometry pipeline 2736 and media engine 2737 each generate execution threads for the thread execution resources provided by at least one graphics core 2780A.

In some embodiments, graphics processor 2700 includes scalable thread execution resources featuring modular cores 2780A-2780N (sometimes referred to as core slices), each having multiple sub-cores 2750A-2750N, 2760A-2760N (sometimes referred to as core sub-slices). In some embodiments, graphics processor 2700 can have any number of graphics cores 2780A through 2780N. In some embodiments, graphics processor 2700 includes a graphics core 2780A having at least a first sub-core 2750A and a second sub-core 2760A. In other embodiments, the graphics processor is a low power processor with a single sub-core (e.g., 2750A). In some embodiments, graphics processor 2700 includes multiple graphics cores 2780A-2780N, each including a set of first sub-cores 2750A-2750N and a set of second sub-cores 2760A-2760N. Each sub-core in the set of first sub-cores 2750A-2750N includes at least a first set of execution units 2752A-2752N and media/texture samplers 2754A-2754N. Each sub-core in the set of second sub-cores 2760A-2760N includes at least a second set of execution units 2762A-2762N and samplers 2764A-2764N. In some embodiments, each sub-core 2750A-2750N, 2760A-2760N shares a set of shared resources 2770A-2770N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor.

Exemplary Additional Execution Units

FIG. 28 illustrates thread execution logic 2800 including an array of processing elements employed in some embodiments of a GPE. Elements of FIG. 28 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, thread execution logic 2800 includes a shader processor 2802, a thread dispatcher 2804, instruction cache 2806, a scalable execution unit array including a plurality of execution units 2808A-2808N, a sampler 2810, a data cache 2812, and a data port 2814. In one embodiment the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution unit 2808A, 2808B, 2808C, 2808D, through 2808N-1 and 2808N) based on the computational requirements of a workload.
In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 2800 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 2806, data port 2814, sampler 2810, and execution units 2808A-2808N. In some embodiments, each execution unit (e.g., 2808A) is a stand-alone programmable general purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 2808A-2808N is scalable to include any number of individual execution units.
In some embodiments, the execution units 2808A-2808N are primarily used to execute shader programs. A shader processor 2802 can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 2804. In one embodiment the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution units in the execution units 2808A-2808N. For example, the geometry pipeline (e.g., 2736 of FIG. 27) can dispatch vertex, tessellation, or geometry shaders to the thread execution logic 2800 (FIG. 28) for processing. In some embodiments, thread dispatcher 2804 can also process runtime thread spawning requests from the executing shader programs.
In some embodiments, the execution units 2808A-2808N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). Each of the execution units 2808A-2808N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher-latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 2808A-2808N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader.
Each execution unit in execution units 2808A-2808N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions.
The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units 2808A-2808N support integer and floating-point data types.
The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
One or more internal instruction caches (e.g., 2806) are included in the thread execution logic 2800 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 2812) are included to cache thread data during thread execution. In some embodiments, a sampler 2810 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler 2810 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.
During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 2800 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 2802 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within the shader processor 2802 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 2802 dispatches threads to an execution unit (e.g., 2808A) via thread dispatcher 2804. In some embodiments, the shader processor 2802 uses texture sampling logic in the sampler 2810 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.
In some embodiments, the data port 2814 provides a memory access mechanism for the thread execution logic 2800 to output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, the data port 2814 includes or couples to one or more cache memories (e.g., data cache 2812) to cache data for memory access via the data port.
FIG. 29 is a block diagram illustrating graphics processor instruction formats 2900 according to some embodiments.
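Before the instruction formats of FIG. 29 are detailed, the packed-data interpretation described above can be made concrete with a short sketch. This is a software illustration only; the Vec256 type and element accessor are hypothetical stand-ins for a 256-bit register, with std::memcpy used to re-view the same bytes portably at different element widths.

```cpp
// The same 256 bits viewed as four 64-bit (QW), eight 32-bit (DW),
// sixteen 16-bit (W), or thirty-two 8-bit (B) packed data elements.
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iostream>

struct Vec256 {
  alignas(32) std::uint8_t bytes[32];  // toy stand-in for a 256-bit register
};

template <typename Elem>
Elem element(const Vec256& v, std::size_t i) {
  Elem e;
  std::memcpy(&e, v.bytes + i * sizeof(Elem), sizeof(Elem));
  return e;
}

int main() {
  Vec256 v{};
  for (int i = 0; i < 32; ++i) v.bytes[i] = static_cast<std::uint8_t>(i);
  // The element count (256 / element width) is the "execution size".
  std::cout << +element<std::uint8_t>(v, 3) << '\n';  // 32 byte channels
  std::cout << element<std::uint32_t>(v, 1) << '\n';  // 8 DW channels
  std::cout << element<std::uint64_t>(v, 0) << '\n';  // 4 QW channels
}
```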
In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, the instruction formats 2900 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.
In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 2910. A 64-bit compacted instruction format 2930 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 2910 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 2930. The native instructions available in the 64-bit format 2930 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 2913. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 2910.
For each format, instruction opcode 2912 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field 2914 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 2910, an exec-size field 2916 limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field 2916 is not available for use in the 64-bit compact instruction format 2930.
Some execution unit instructions have up to three operands including two source operands, src0 2920, src1 2922, and one destination 2918. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 2924), where the instruction opcode 2912 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.
In some embodiments, the 128-bit instruction format 2910 includes an access/address mode field 2926 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.
In some embodiments, the 128-bit instruction format 2910 includes an access/address mode field 2926, which specifies an address mode and/or an access mode for the instruction. In one embodiment the access mode is used to define a data access alignment for the instruction.
Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.
In one embodiment, the address mode portion of the access/address mode field 2926 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.
In some embodiments, instructions are grouped based on opcode 2912 bit-fields to simplify opcode decode 2940. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 2942 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 2942 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 2944 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 2946 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 2948 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 2948 performs the arithmetic operations in parallel across data channels. The vector math group 2950 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands.
Exemplary Additional Graphics Pipeline
FIG. 30 is a block diagram of another embodiment of a graphics processor 3000. Elements of FIG. 30 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
In some embodiments, graphics processor 3000 includes a graphics pipeline 3020, a media pipeline 3030, a display engine 3040, thread execution logic 3050, and a render output pipeline 3070. In some embodiments, graphics processor 3000 is a graphics processor within a multi-core processing system that includes one or more general purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 3000 via a ring interconnect 3002. In some embodiments, ring interconnect 3002 couples graphics processor 3000 to other processing components, such as other graphics processors or general-purpose processors.
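Returning briefly to the opcode grouping described above, a compact decode sketch follows. The group encodings (bits 4 through 6 of the 8-bit opcode) follow the text; the enum names and the classify function itself are hypothetical.

```cpp
#include <cstdint>
#include <iostream>

enum class OpcodeGroup {
  MoveAndLogic,   // 0000xxxxb (mov) and 0001xxxxb (logic)
  FlowControl,    // 0010xxxxb, e.g., 0x20
  Miscellaneous,  // 0011xxxxb, e.g., 0x30
  ParallelMath,   // 0100xxxxb, e.g., 0x40
  VectorMath,     // 0101xxxxb, e.g., 0x50
  Unknown
};

// Bits 4, 5, and 6 of the 8-bit opcode select the group.
OpcodeGroup classify(std::uint8_t opcode) {
  switch ((opcode >> 4) & 0x7) {
    case 0x0:
    case 0x1: return OpcodeGroup::MoveAndLogic;
    case 0x2: return OpcodeGroup::FlowControl;
    case 0x3: return OpcodeGroup::Miscellaneous;
    case 0x4: return OpcodeGroup::ParallelMath;
    case 0x5: return OpcodeGroup::VectorMath;
    default:  return OpcodeGroup::Unknown;
  }
}

int main() {
  // 0x40 falls in the parallel math group (e.g., add); 0x20 in flow control.
  std::cout << (classify(0x40) == OpcodeGroup::ParallelMath) << '\n';
  std::cout << (classify(0x20) == OpcodeGroup::FlowControl) << '\n';
}
```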
Commands from ring interconnect 3002 are interpreted by a command streamer 3003, which supplies instructions to individual components of graphics pipeline 3020 or media pipeline 3030.
In some embodiments, command streamer 3003 directs the operation of a vertex fetcher 3005 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 3003. In some embodiments, vertex fetcher 3005 provides vertex data to a vertex shader 3007, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, vertex fetcher 3005 and vertex shader 3007 execute vertex-processing instructions by dispatching execution threads to execution units 3052A-3052B via a thread dispatcher 3031.
In some embodiments, execution units 3052A-3052B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 3052A-3052B have an attached L1 cache 3051 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.
In some embodiments, graphics pipeline 3020 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 3011 configures the tessellation operations. A programmable domain shader 3017 provides back-end evaluation of tessellation output. A tessellator 3013 operates at the direction of hull shader 3011 and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to graphics pipeline 3020. In some embodiments, if tessellation is not used, the tessellation components (e.g., hull shader 3011, tessellator 3013, and domain shader 3017) can be bypassed.
In some embodiments, complete geometric objects can be processed by a geometry shader 3019 via one or more threads dispatched to execution units 3052A-3052B, or can proceed directly to the clipper 3029. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 3019 receives input from the vertex shader 3007. In some embodiments, geometry shader 3019 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.
Before rasterization, a clipper 3029 processes vertex data. The clipper 3029 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 3073 in the render output pipeline 3070 dispatches pixel shaders to convert the geometric objects into their per pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 3050. In some embodiments, an application can bypass the rasterizer and depth test component 3073 and access un-rasterized vertex data via a stream out unit 3023.
The graphics processor 3000 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor.
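The optional-stage behavior described above — tessellation components bypassed when tessellation is unused, and the geometry shader fed from the vertex shader when tessellation is disabled — can be summarized as control flow. The types and stage stubs below are hypothetical; only the bypass structure mirrors the text.

```cpp
#include <vector>

struct Vertex { float position[4]; };
using Patch = std::vector<Vertex>;

struct PipelineConfig {
  bool tessellationEnabled = false;
  bool geometryShaderEnabled = false;
};

// Each stage is stubbed as an identity transform for illustration.
Patch runVertexShader(Patch p) { return p; }
Patch runTessellation(Patch p) { return p; }  // hull -> tessellator -> domain
Patch runGeometryShader(Patch p) { return p; }
void clipAndRasterize(const Patch&) {}

void drawPatch(Patch patch, const PipelineConfig& cfg) {
  patch = runVertexShader(patch);
  if (cfg.tessellationEnabled) patch = runTessellation(patch);
  if (cfg.geometryShaderEnabled) patch = runGeometryShader(patch);
  clipAndRasterize(patch);  // fixed-function or programmable clipper
}

int main() {
  // With both flags false, vertex-shader output proceeds straight to clipping.
  drawPatch({Vertex{{0.f, 0.f, 0.f, 1.f}}}, PipelineConfig{});
}
```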
In some embodiments, execution units 3052A-3052B and associated cache(s) 3051, texture and media sampler 3054, and texture/sampler cache 3058 interconnect via a data port 3056 to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler 3054, caches 3051, 3058 and execution units 3052A-3052B each have separate memory access paths.
In some embodiments, render output pipeline 3070 contains a rasterizer and depth test component 3073 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 3078 and depth cache 3079 are also available in some embodiments. A pixel operations component 3077 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit block image transfers with blending) are performed by the 2D engine 3041, or substituted at display time by the display controller 3043 using overlay display planes. In some embodiments, a shared L3 cache 3075 is available to all graphics components, allowing the sharing of data without the use of main system memory.
In some embodiments, graphics processor media pipeline 3030 includes a media engine 3037 and a video front end 3034. In some embodiments, video front end 3034 receives pipeline commands from the command streamer 3003. In some embodiments, media pipeline 3030 includes a separate command streamer. In some embodiments, video front-end 3034 processes media commands before sending the command to the media engine 3037. In some embodiments, media engine 3037 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 3050 via thread dispatcher 3031.
In some embodiments, graphics processor 3000 includes a display engine 3040. In some embodiments, display engine 3040 is external to processor 3000 and couples with the graphics processor via the ring interconnect 3002, or some other interconnect bus or fabric. In some embodiments, display engine 3040 includes a 2D engine 3041 and a display controller 3043. In some embodiments, display engine 3040 contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 3043 couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.
In some embodiments, graphics pipeline 3020 and media pipeline 3030 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV).
A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.
Graphics Pipeline Programming
FIG. 31A is a block diagram illustrating a graphics processor command format 3100 according to some embodiments. FIG. 31B is a block diagram illustrating a graphics processor command sequence 3110 according to an embodiment. The solid lined boxes in FIG. 31A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format 3100 of FIG. 31A includes data fields to identify a target client 3102 of the command, a command operation code (opcode) 3104, and the relevant data 3106 for the command. A sub-opcode 3105 and a command size 3108 are also included in some commands.
In some embodiments, client 3102 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 3104 and, if present, sub-opcode 3105 to determine the operation to perform. The client unit performs the command using information in data field 3106. For some commands an explicit command size 3108 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word.
The flow diagram in FIG. 31B shows an exemplary graphics processor command sequence 3110. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands at least partially in concurrence.
In some embodiments, the graphics processor command sequence 3110 may begin with a pipeline flush command 3112 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 3122 and the media pipeline 3124 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory.
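As a minimal sketch of the command fields described above (client, opcode, sub-opcode, data, and command size), the following parses a hypothetical 32-bit header word. The bit layout is an assumption made for illustration; the text names the fields without fixing their positions or widths.

```cpp
#include <cstdint>
#include <iostream>

struct CommandHeader {
  std::uint8_t client;      // target client unit (e.g., render, 2D, media)
  std::uint8_t opcode;      // command operation code
  std::uint8_t subOpcode;   // optional sub-operation
  std::uint8_t sizeDwords;  // explicit command size, when present
};

// Assumed layout: client in bits 31:24, opcode in 23:16, sub-opcode in
// 15:8, size in 7:0. Real command streams may pack the fields differently.
CommandHeader parseHeader(std::uint32_t word) {
  return CommandHeader{
      static_cast<std::uint8_t>(word >> 24),
      static_cast<std::uint8_t>((word >> 16) & 0xFF),
      static_cast<std::uint8_t>((word >> 8) & 0xFF),
      static_cast<std::uint8_t>(word & 0xFF),
  };
}

int main() {
  CommandHeader h = parseHeader(0x03010204u);
  // A parser would route on the client field first, then dispatch on
  // opcode and sub-opcode, using the size to find the next command.
  std::cout << int(h.client) << ' ' << int(h.opcode) << ' '
            << int(h.subOpcode) << ' ' << int(h.sizeDwords) << '\n';
}
```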
In some embodiments, pipeline flush command 3112 can be used for pipeline synchronization or before placing the graphics processor into a low power state.
In some embodiments, a pipeline select command 3113 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 3113 is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 3112 is required immediately before a pipeline switch via the pipeline select command 3113.
In some embodiments, a pipeline control command 3114 configures a graphics pipeline for operation and is used to program the 3D pipeline 3122 and the media pipeline 3124. In some embodiments, pipeline control command 3114 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 3114 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.
In some embodiments, return buffer state commands 3116 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state 3116 includes selecting the size and number of return buffers to use for a set of pipeline operations.
The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 3120, the command sequence is tailored to the 3D pipeline 3122 beginning with the 3D pipeline state 3130 or the media pipeline 3124 beginning at the media pipeline state 3140.
The commands to configure the 3D pipeline state 3130 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 3130 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.
In some embodiments, 3D primitive 3132 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 3132 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 3132 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, 3D primitive 3132 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 3122 dispatches shader execution threads to graphics processor execution units.
In some embodiments, 3D pipeline 3122 is triggered via an execute 3134 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a 'go' or 'kick' command in the command sequence.
In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.
In some embodiments, the graphics processor command sequence 3110 follows the media pipeline 3124 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 3124 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.
In some embodiments, media pipeline 3124 is configured in a similar manner as the 3D pipeline 3122. A set of commands to configure the media pipeline state 3140 are dispatched or placed into a command queue before the media object commands 3142. In some embodiments, media pipeline state commands 3140 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, media pipeline state commands 3140 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.
In some embodiments, media object commands 3142 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 3142. Once the pipeline state is configured and media object commands 3142 are queued, the media pipeline 3124 is triggered via an execute command 3144 or an equivalent execute event (e.g., register write). Output from media pipeline 3124 may then be post-processed by operations provided by the 3D pipeline 3122 or the media pipeline 3124. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.
Graphics Software Architecture
FIG. 32 illustrates an exemplary graphics software architecture for a data processing system 3200 according to some embodiments. In some embodiments, the software architecture includes a 3D graphics application 3210, an operating system 3220, and at least one processor 3230. In some embodiments, processor 3230 includes a graphics processor 3232 and one or more general-purpose processor core(s) 3234. The graphics application 3210 and operating system 3220 each execute in the system memory 3250 of the data processing system.
In some embodiments, 3D graphics application 3210 contains one or more shader programs including shader instructions 3212.
The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 3214 in a machine language suitable for execution by the general-purpose processor core 3234. The application also includes graphics objects 3216 defined by vertex data.
In some embodiments, operating system 3220 is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 3220 can support a graphics API 3222 such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 3220 uses a front-end shader compiler 3224 to compile any shader instructions 3212 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 3210. In some embodiments, the shader instructions 3212 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.
In some embodiments, user mode graphics driver 3226 contains a back-end shader compiler 3227 to convert the shader instructions 3212 into a hardware specific representation. When the OpenGL API is in use, shader instructions 3212 in the GLSL high-level language are passed to a user mode graphics driver 3226 for compilation. In some embodiments, user mode graphics driver 3226 uses operating system kernel mode functions 3228 to communicate with a kernel mode graphics driver 3229. In some embodiments, kernel mode graphics driver 3229 communicates with graphics processor 3232 to dispatch commands and instructions.
IP Core Implementations
One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.
FIG. 33 is a block diagram illustrating an IP core development system 3300 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 3300 may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 3330 can generate a software simulation 3310 of an IP core design in a high-level programming language (e.g., C/C++).
The software simulation 3310 can be used to design, test, and verify the behavior of the IP core using a simulation model 3312. The simulation model 3312 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 3315 can then be created or synthesized from the simulation model 3312. The RTL design 3315 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 3315, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.
The RTL design 3315 or equivalent may be further synthesized by the design facility into a hardware model 3320, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 3365 using non-volatile memory 3340 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 3350 or wireless connection 3360. The fabrication facility 3365 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.
Exemplary System on a Chip Integrated Circuit
FIGs. 34-36 illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores.
FIG. 34 is a block diagram illustrating an exemplary system on a chip integrated circuit 3400 that may be fabricated using one or more IP cores, according to an embodiment. Exemplary integrated circuit 3400 includes one or more application processor(s) 3405 (e.g., CPUs), at least one graphics processor 3410, and may additionally include an image processor 3415 and/or a video processor 3420, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit 3400 includes peripheral or bus logic including a USB controller 3425, UART controller 3430, an SPI/SDIO controller 3435, and an I2S/I2C controller 3440. Additionally, the integrated circuit can include a display device 3445 coupled to one or more of a high-definition multimedia interface (HDMI) controller 3450 and a mobile industry processor interface (MIPI) display interface 3455. Storage may be provided by a flash memory subsystem 3460 including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 3465 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 3470.
FIG. 35 is a block diagram illustrating an exemplary graphics processor 3510 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor 3510 can be a variant of the graphics processor 3410 of FIG. 34.
Graphics processor 3510 includes a vertex processor 3505 and one or more fragment processor(s) 3515A-3515N (e.g., 3515A, 3515B, 3515C, 3515D, through 3515N-1, and 3515N). Graphics processor 3510 can execute different shader programs via separate logic, such that the vertex processor 3505 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 3515A-3515N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 3505 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s) 3515A-3515N use the primitive and vertex data generated by the vertex processor 3505 to produce a framebuffer that is displayed on a display device. In one embodiment, the fragment processor(s) 3515A-3515N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API.
Graphics processor 3510 additionally includes one or more memory management units (MMUs) 3520A-3520B, caches 3525A-3525B, and circuit interconnects 3530A-3530B. The one or more MMU(s) 3520A-3520B provide for virtual to physical address mapping for graphics processor 3510, including for the vertex processor 3505 and/or fragment processor(s) 3515A-3515N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more caches 3525A-3525B. In one embodiment the one or more MMU(s) 3520A-3520B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 3405, image processor 3415, and/or video processor 3420 of FIG. 34, such that each processor 3405-3420 can participate in a shared or unified virtual memory system. The one or more circuit interconnects 3530A-3530B enable graphics processor 3510 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments.
FIG. 36 is a block diagram illustrating an additional exemplary graphics processor 3610 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor 3610 can be a variant of the graphics processor 3410 of FIG. 34. Graphics processor 3610 includes the one or more MMU(s) 3520A-3520B, caches 3525A-3525B, and circuit interconnects 3530A-3530B of the integrated circuit 3500 of FIG. 35.
Graphics processor 3610 includes one or more shader cores 3615A-3615N (e.g., 3615A, 3615B, 3615C, 3615D, 3615E, 3615F, through 3615N-1, and 3615N), which provide for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations.
Additionally, graphics processor 3610 includes an inter-core task manager 3605, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 3615A-3615N, and a tiling unit 3618 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches.
The following clauses and/or examples pertain to specific embodiments or examples thereof. Specifics in the examples may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined, with some features included and others excluded, to suit a variety of different applications. Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium including instructions that, when performed by a machine, cause the machine to perform acts of the method, or of an apparatus or system according to embodiments and examples described herein. Various components can be a means for performing the operations or functions described.
The embodiments described herein refer to specific configurations of hardware, such as application specific integrated circuits (ASICs), configured to perform certain operations or having a predetermined functionality. Such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage devices of a given electronic device typically store code and/or data for execution on the set of one or more processors of that electronic device.
Of course, one or more parts of an embodiment may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the embodiments may be practiced without some of these specific details. In certain instances, well-known structures and functions were not described in elaborate detail to avoid obscuring the inventive subject matter of the embodiments. Accordingly, the scope and spirit of the invention should be judged in terms of the claims that follow.
A processor includes a decode unit to decode an instruction that is to indicate a source packed data that is to include a plurality of adjoining data elements, a number of data elements, and a destination. The processor also includes an execution unit coupled with the decode unit. The execution unit, in response to the instruction, is to store a result packed data in the destination. The result packed data is to have a plurality of lanes that are each to store a different non-overlapping set of the indicated number of adjoining data elements aligned with a least significant end of the respective lane. The different non-overlapping sets of the indicated number of the adjoining data elements in adjoining lanes of the result packed data are to be separated from one another by at least one most significant data element position of the less significant lane.
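Before the claims, a short software model may help fix intuition for the operation the abstract describes. The sketch below assumes 32-bit elements, two lanes of four element positions each, and zero-filled separator positions; all three choices are illustrative, since the text leaves element width, lane width, and the contents of the separating positions open.

```cpp
// Software model of the partition-into-lanes operation: successive groups
// of n adjoining source elements are placed at the least significant
// (low-index) end of successive lanes; the remaining most significant
// positions of each lane separate the groups (zeroed here by assumption).
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

constexpr std::size_t kElemsPerLane = 4;  // e.g., a 128-bit lane of 32-bit elements
constexpr std::size_t kLanes = 2;         // e.g., a 256-bit result

std::array<std::uint32_t, kElemsPerLane * kLanes>
partitionIntoLanes(const std::array<std::uint32_t, kElemsPerLane * kLanes>& src,
                   std::size_t n) {
  // Requires n < kElemsPerLane so at least one separator position remains.
  std::array<std::uint32_t, kElemsPerLane * kLanes> result{};  // zero separators
  for (std::size_t lane = 0; lane < kLanes; ++lane)
    for (std::size_t i = 0; i < n; ++i)
      result[lane * kElemsPerLane + i] = src[lane * n + i];
  return result;
}

int main() {
  // Two adjoining three-element structures (n = 3), e.g., x,y,z triples.
  std::array<std::uint32_t, 8> src{1, 2, 3, 4, 5, 6, 0, 0};
  auto r = partitionIntoLanes(src, 3);
  for (auto v : r) std::cout << v << ' ';  // prints: 1 2 3 0 4 5 6 0
  std::cout << '\n';
}
```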
CLAIMS
What is claimed is:
1. A processor comprising: a plurality of packed data registers; a decode unit to decode an instruction, the instruction to indicate a source packed data that is to include a plurality of adjoining data elements, the instruction to indicate a number of data elements, and the instruction to indicate a destination storage location; and an execution unit coupled with the plurality of packed data registers, and coupled with the decode unit, the execution unit, in response to the instruction, to store a result packed data in the destination storage location, the result packed data to have a plurality of lanes, each of the lanes of the result packed data to store a different non-overlapping set of the indicated number of adjoining data elements of the source packed data aligned with a least significant end of the respective lane, the different non-overlapping sets of the indicated number of the adjoining data elements in adjoining lanes of the result packed data to be separated from one another by at least one most significant data element position of the less significant lane.
2. The processor of claim 1, wherein the decode unit is to decode the instruction that is to have an immediate to indicate the number of the data elements.
3. The processor of claim 1, wherein the decode unit is to decode the instruction that is to indicate the number of data elements through an indication of a number of structures that each are to include a same number of data elements.
4. The processor of claim 1, wherein the decode unit is to decode the instruction that is to indicate the source packed data in system memory, and wherein the execution unit, in response to the instruction, is to load at least each of the different non-overlapping sets of the data elements from the system memory with a single load operation.
5. The processor of claim 4, wherein the decode unit is to decode the instruction that is to indicate a mask that is to include a plurality of mask elements, and wherein the execution unit, in response to the instruction, is to load from the system memory only data elements of the source packed data that correspond to unmasked mask elements of the mask.
6. The processor of any one of claims 1 to 5, wherein the decode unit is to decode the instruction that is to indicate the source packed data which is to include 32-bit single precision floating point data elements, and that is to indicate a 512-bit destination packed data register, and wherein the execution unit, in response to the instruction, is to store the result packed data, which is to be a 512-bit result packed data, and which is to have two 256-bit lanes, wherein each of the 256-bit lanes is to store the corresponding different non-overlapping set of the adjoining 32-bit single precision floating point data elements aligned with the least significant end of the respective 256-bit lane, wherein the different non-overlapping sets of the adjoining 32-bit single precision floating point data elements in the adjoining 256-bit lanes of the 512-bit result packed data are to be separated from one another by the at least one most significant 32-bit data element position of the less significant 256-bit lane.
7. The processor of any one of claims 1 to 5, wherein the decode unit is to decode the instruction that is to indicate the source packed data which is to include 32-bit single precision floating point data elements, and that is to indicate an at least 256-bit destination packed data register, and wherein the execution unit, in response to the instruction, is to store the result packed data, which is to be an at least 256-bit result packed data, and which is to have at least two 128-bit lanes, wherein each of the at least two 128-bit lanes is to store the corresponding different non-overlapping set of the adjoining 32-bit single precision floating point data elements aligned with the least significant end of the respective 128-bit lane, wherein the different non-overlapping sets of the adjoining 32-bit single precision floating point data elements in the adjoining 128-bit lanes of the at least 256-bit result packed data are to be separated from one another by the at least one most significant 32-bit data element position of the less significant 128-bit lane.
8. The processor of any one of claims 1 to 5, wherein the decode unit is to decode the instruction that is to indicate the source packed data that is to include 32-bit data elements.
9. The processor of any one of claims 1 to 5, wherein the decode unit is to decode the instruction that is to indicate the source packed data that is to include 64-bit data elements.
10. The processor of any one of claims 1 to 5, wherein the execution unit, in response to the instruction, is to store the result packed data in which each of the lanes is a 128-bit lane.
11. The processor of any one of claims 1 to 5, wherein the execution unit, in response to the instruction, is to store the result packed data in which each of the lanes is a 256-bit lane.
12. The processor of any one of claims 1 to 5, wherein the decode unit is to decode the instruction that is to have a field to indicate a size of the lanes of the result packed data.
13. The processor of any one of claims 1 to 5, wherein it is to be implicit to the instruction to align each different non-overlapping set of the indicated number of adjoining data elements with the least significant end of the respective lane.
14. A method performed by a processor comprising: receiving an instruction at the processor, the instruction indicating a source packed data that includes a plurality of adjoining data elements, the instruction indicating a number of data elements, and the instruction indicating a destination storage location; and storing a result packed data in the destination storage location in response to the instruction, the result packed data having a plurality of lanes, each of the lanes of the result packed data storing a different non-overlapping set of the indicated number of adjoining data elements of the source packed data aligned with a least significant end of the respective lane, the different non-overlapping sets of the indicated number of the adjoining data elements in adjoining lanes of the result packed data separated from one another by at least one most significant data element position of the less significant lane.
15. The method of claim 14, wherein receiving comprises receiving the instruction that has an immediate that indicates the number of the data elements, and wherein storing comprises storing the result packed data having the plurality of lanes that are one of 128-bit lanes and 256-bit lanes.
16. The method of claim 14, wherein receiving comprises receiving the instruction indicating the source packed data that includes a first array of multiple element structures and a second array of multiple element structures, and wherein storing comprises storing the result packed data in which the first array of the multiple element structures is stored in a least significant lane of the result packed data and the second array of the multiple element structures is stored in an adjoining more significant lane of the result packed data with the at least one most significant data element position of the least significant lane separating the first array of the multiple element structures and the second array of the multiple element structures.
17. The method of claim 14, wherein receiving comprises receiving the instruction indicating the source packed data that includes a first array of adjoining pairs of real and imaginary complex numbers and a second array of adjoining pairs of real and imaginary complex numbers, and wherein storing comprises storing the result packed data in which the first array of the adjoining pairs of the real and the imaginary complex numbers is stored in a least significant lane of the result packed data and the second array of the adjoining pairs of the real and the imaginary complex numbers is stored in an adjoining more significant lane of the result packed data with at least two most significant data element positions of the least significant lane separating the first and second arrays of the adjoining pairs of the real and the imaginary complex numbers.
18. The method of claim 14, wherein receiving comprises receiving the instruction indicating the source packed data that includes a first array of three adjoining pairs of 32-bit real and 32-bit imaginary complex numbers and a second array of three adjoining pairs of 32-bit real and 32-bit imaginary complex numbers, and wherein storing comprises storing an at least 512-bit result packed data in which the first array of the three adjoining pairs of the 32-bit real and the 32-bit imaginary complex numbers is stored in a least significant 256-bit lane of the at least 512-bit result packed data and the second array of the three adjoining pairs of the 32-bit real and the 32-bit imaginary complex numbers is stored in an adjoining more significant 256-bit lane of the at least 512-bit result packed data with at least two most significant 32-bit data element positions of the least significant 256-bit lane separating the first and second arrays of the three adjoining pairs of the 32-bit real and the 32-bit imaginary complex numbers.
19. The method of claim 14, wherein receiving comprises receiving the instruction indicating the source packed data that includes a first adjoining pair of 32-bit real and 32-bit imaginary complex numbers and a second adjoining pair of 32-bit real and 32-bit imaginary complex numbers, and wherein storing comprises storing an at least 256-bit result packed data in which the first adjoining pair of the 32-bit real and the 32-bit imaginary complex numbers is stored in a least significant 128-bit lane of the at least 256-bit result packed data and the second adjoining pair of the 32-bit real and the 32-bit imaginary complex numbers is stored in an adjoining more significant 128-bit lane of the at least 256-bit result packed data with at least two most significant 32-bit data element positions of the least significant 128-bit lane separating the first and second adjoining pairs of the 32-bit real and the 32-bit imaginary complex numbers.
20. The method of any one of claims 14 to 19, wherein receiving comprises receiving the instruction that indicates the source packed data in system memory, and further comprising loading at least each of the different non-overlapping sets of the data elements from the system memory with a single load operation.
21. The method of any one of claims 14 to 19, wherein storing comprises storing the result in which each different non-overlapping set of the indicated number of the adjoining data elements being aligned with the least significant end of the respective lane is implicit to the instruction.
22. A computer system to process instructions comprising: an interconnect; a processor coupled with the interconnect, the processor to receive an instruction that is to indicate a source packed data that is to include a plurality of adjoining data elements, the instruction to indicate a number of data elements, and the instruction to indicate a destination packed data register, the processor, in response to the instruction, to store a result packed data in the destination packed data register, the result packed data to have a plurality of lanes, each of the lanes of the result packed data to store a different non-overlapping set of the indicated number of adjoining data elements of the source packed data aligned with a least significant end of the respective lane, the different non-overlapping sets of the indicated number of the adjoining data elements in adjoining lanes of the result packed data to be separated from one another by at least one most significant data element position of the less significant lane; and a dynamic random access memory (DRAM) coupled with the interconnect, the DRAM storing a set of instructions of an algorithm, wherein the set of the instructions of the algorithm is to expect the different non-overlapping sets of the indicated number of the adjoining data elements to be aligned with the least significant ends of the respective lanes.
23. The computer system of claim 22, wherein the processor is to store the result packed data in which the lanes are one of 128-bit lanes and 256-bit lanes.
24. An apparatus comprising means for performing the method of any one of claims 14 to 19.
25. A machine-readable medium that provides an instruction that if executed by a machine is operative to cause the machine to perform the method of any one of claims 14 to 19.
PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS TO PARTITION A SOURCE PACKED DATA INTO LANES

BACKGROUND

Technical Field

Embodiments described herein generally relate to processors. In particular, embodiments described herein generally relate to processors to operate on packed data in response to instructions.

Background Information

Many processors have Single Instruction, Multiple Data (SIMD) architectures. In SIMD architectures, multiple data elements may be packed within one register or memory location as packed data or vector data. In packed or vector data, the bits of the register or memory location may be logically divided into a sequence of data elements. For example, a 128-bit wide packed data register may have two 64-bit data elements, four 32-bit data elements, eight 16-bit data elements, or sixteen 8-bit data elements. Each of the data elements may represent a separate piece of data (e.g., a pixel color component, a floating point number, etc.) that may be operated upon separately and/or independently of the others.

In such SIMD architectures, a packed data instruction, vector instruction, or SIMD instruction may be used to operate on multiple data elements of such a packed data or vector operand, or multiple pairs of data elements of two such packed data or vector operands, simultaneously and/or in parallel. The processor may have parallel execution hardware responsive to the instruction to operate on the data simultaneously and/or in parallel.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments. In the drawings:

Figure 1 is a block diagram of an embodiment of a processor that is operative to perform an embodiment of a partition into lanes instruction.

Figure 2 is a block flow diagram of an embodiment of a method of performing an embodiment of a partition source packed data into lanes instruction.

Figure 3 is a block diagram of an embodiment of a partition source packed data into lanes operation.

Figure 4 is a block diagram of a first specific example embodiment of a partition source packed data into lanes operation.

Figure 5 is a block diagram of a second even more specific example embodiment of a partition source packed data into lanes operation.

Figure 6 is a block diagram of a third specific example embodiment of a partition source packed data into lanes operation.

Figure 7 is a block diagram of a fourth even more specific example embodiment of a partition source packed data into lanes operation.

Figure 8 is a block diagram of an example embodiment of a suitable set of packed data registers.

Figure 9 is a block diagram of an example embodiment of a suitable set of packed data operation mask registers.

Figures 10A-10C are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof, according to embodiments of the invention.

Figures 11A-B are block diagrams illustrating an exemplary specific vector friendly instruction format and an opcode field, according to embodiments of the invention.

Figures 12A-D are block diagrams illustrating an exemplary specific vector friendly instruction format and fields thereof, according to embodiments of the invention.

Figure 13 is a block diagram of an embodiment of a register architecture.

Figure 14A is a block diagram illustrating an embodiment of an in-order pipeline and an embodiment of a register renaming out-of-order issue/execution pipeline.

Figure 14B is a block diagram of an embodiment of a
processor core including a front end unit coupled to an execution engine unit and both coupled to a memory unit.

Figure 15A is a block diagram of an embodiment of a single processor core, along with its connection to the on-die interconnect network, and with its local subset of the Level 2 (L2) cache.

Figure 15B is a block diagram of an embodiment of an expanded view of part of the processor core of Figure 15A.

Figure 16 is a block diagram of an embodiment of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics.

Figure 17 is a block diagram of a first embodiment of a computer architecture.

Figure 18 is a block diagram of a second embodiment of a computer architecture.

Figure 19 is a block diagram of a third embodiment of a computer architecture.

Figure 20 is a block diagram of a fourth embodiment of a computer architecture.

Figure 21 is a block diagram of use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to embodiments of the invention.

DETAILED DESCRIPTION OF EMBODIMENTS

Disclosed herein are partition into lanes instructions, processors to perform the instructions, methods performed by the processors when processing or performing the instructions, and systems incorporating one or more processors to process or perform the instructions. In some embodiments, the processors may have a decode unit or other logic to receive and/or decode the instructions, and an execution unit or other logic to execute or otherwise perform the instructions. In the following description, numerous specific details are set forth (e.g., specific instruction operations, data formats, processor configurations, microarchitectural details, sequences of operations, etc.). However, embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail to avoid obscuring the understanding of the description.

Figure 1 is a block diagram of an embodiment of a processor 100 that is operative to perform an embodiment of a partition into lanes instruction 102. In some embodiments, the processor may represent an integrated circuit and/or may include integrated circuitry or logic disposed on a semiconductor die. In some embodiments, the processor may be a general-purpose processor (e.g., a general-purpose microprocessor or central processing unit (CPU) of the type used in desktop, laptop, or other computers). Alternatively, the processor may be a special-purpose processor. Examples of suitable special-purpose processors include, but are not limited to, network processors, communications processors, cryptographic processors, graphics processors, co-processors, embedded processors, digital signal processors (DSPs), and controllers (e.g., microcontrollers). The processor may have any of various complex instruction set computing (CISC) architectures, reduced instruction set computing (RISC) architectures, very long instruction word (VLIW) architectures, hybrid architectures, other types of architectures, or have a combination of different architectures (e.g., different cores may have different architectures).

During operation, the processor 100 may receive the partition into lanes instruction 102. For example, the instruction may be received from memory over a bus or other interconnect.
The instruction may represent a macroinstruction, machine code instruction, or other instruction or control signal of an instruction set of the processor. In some embodiments, the partition into lanes instruction may explicitly specify (e.g., through one or more fields or a set of bits), or otherwise indicate (e.g., implicitly indicate), a source packed data 112 that is to include a plurality of adjoining data elements, and may explicitly specify or otherwise indicate a destination storage location 116 (e.g., a destination packed data register) where a result packed data 118 is to be stored in response to the instruction.

As shown, in some embodiments, the source packed data 112 may optionally be stored in system memory 108. In such embodiments, the instruction may specify or otherwise indicate memory address information to be used to address a memory location 110 where the source packed data is to be stored. Various different types of address information are possible. The address information may either represent absolute memory address information or relative memory address information, which may indicate a memory location relative to a base memory address or other memory location. In addition, various different indirect memory addressing modes may optionally be used. As one specific example, the instruction may implicitly indicate a register (e.g., a general-purpose register) that is used to store relative memory address information that may be combined with additional memory address information stored in another implicit register (e.g., a code, data, or extended segment register) to generate the final memory address used to identify the memory location where the source packed data is to be stored. The implicitly indicated register may be understood by the processor although not expressed through an explicit value. For example, the processor may understand or recognize after identifying an opcode of the instruction that it is inherent or implicit to use the register(s). This is just one example. Other forms of the address information are also possible. Also, rather than the address information being provided in one or more registers, potentially some or all of the address information may be provided by bits of the instruction (e.g., an immediate).

In other embodiments, the source packed data 112 may optionally be stored in one of the packed data registers 114 of the processor. As further shown, in some embodiments, the destination storage location may optionally be one of the packed data registers 114 of the processor, although this is not required. In other embodiments, other storage locations may optionally instead be used for one or more of these operands. The instruction may have source and/or destination operand specification fields to specify packed data registers, memory locations, or other storage locations for such operands. Alternatively, one or more of these storage locations may optionally be implicit to the instruction (e.g., implicit to an opcode of the instruction) instead of being explicitly specified. Moreover, in some embodiments, a packed data register or other storage location used for the source packed data may optionally be implicitly reused as a destination storage location for the result packed data, and may be specified only once.
In one aspect, a source/destination packed data register may be implicitly or impliedly understood to be used for both the source operand and the result operand.

In various embodiments, the data elements of the source packed data may be 8-bit data elements, 16-bit data elements, 32-bit data elements, or 64-bit data elements. The data elements may be integer, fixed point, or floating point. In some embodiments, the data elements may optionally be floating point data elements, such as, for example, 32-bit single precision floating point data elements or 64-bit double precision floating point data elements, although the scope of the invention is not so limited. The data elements of the source packed data are adjoining data elements in that the data elements are contiguous and/or conterminous and/or in that there are no extra intervening data elements or bits between the adjoining data elements. For example, the most significant bit of the less significant data element in each pair of adjoining data elements may be one bit less than the least significant bit of the more significant data element in each pair of adjoining data elements.

In some embodiments, the instruction may also indicate a number of data elements, which may be used to partition, split, or divide the source packed data. For example, the instruction may indicate the number of data elements as two data elements, three data elements, six data elements, or some other number of data elements, to indicate the point or points where the source packed data is to be partitioned, split, or divided into equal sized non-overlapping segments or portions of data elements. The instruction may indicate the number of data elements in different ways in different embodiments. In some embodiments, the instruction may have an immediate (e.g., a two, four, six, or eight bit immediate) or other field to have a value to specify or otherwise indicate the actual number of data elements. In other embodiments, the instruction may have an immediate or other field to have a value to specify or otherwise indicate a number of multiple data element structures (e.g., a number of two element structures or three element structures) to thereby indirectly indicate the actual number of data elements.

Referring again to Figure 1, the processor includes a decode unit or decoder 104. The decode unit may receive and decode the partition into lanes instruction. The decode unit may output one or more relatively lower-level instructions or control signals (e.g., one or more microinstructions, micro-operations, micro-code entry points, decoded instructions or control signals, etc.), which reflect, represent, and/or are derived from the relatively higher-level partition into lanes instruction. In some embodiments, the decode unit may include one or more input structures (e.g., port(s), interconnect(s), an interface) to receive the partition into lanes instruction, instruction recognition and decode logic coupled therewith to recognize and decode the partition into lanes instruction, and one or more output structures (e.g., port(s), interconnect(s), an interface) coupled therewith to output the lower-level instruction(s) or control signal(s). The decode unit may be implemented using various different mechanisms including, but not limited to, microcode read only memories (ROMs), look-up tables, hardware implementations, programmable logic arrays (PLAs), and other mechanisms suitable to implement decode units.
In some embodiments, the decode unit may be included on a die of the processor.

In some embodiments, instead of the partition into lanes instruction being provided directly to the decode unit, an instruction emulator, translator, morpher, interpreter, or other instruction conversion module may optionally be used. Various types of instruction conversion modules may be implemented in software, hardware, firmware, or a combination thereof. In some embodiments, the instruction conversion module may be located outside the processor, such as, for example, on a separate die and/or in a memory (e.g., as a static, dynamic, or runtime emulation module). By way of example, the instruction conversion module may receive the partition into lanes instruction, which may be of a first instruction set, and may emulate, translate, morph, interpret, or otherwise convert the partition into lanes instruction into one or more corresponding intermediate instructions or control signals, which may be of a second different instruction set. The one or more intermediate instructions or control signals of the second instruction set may be provided to a decode unit (e.g., decode unit 104), which may decode them into one or more lower-level instructions or control signals executable by native hardware of the processor (e.g., one or more execution units).

The processor 100 also includes the set of packed data registers 114. Each of the packed data registers may represent an on-die storage location that is operative to store packed data, vector data, or SIMD data. The packed data registers may represent architecturally-visible or architectural registers that are visible to software and/or a programmer and/or are the registers indicated by instructions of the instruction set of the processor to identify operands. These architectural registers are contrasted to other non-architectural registers in a given microarchitecture (e.g., temporary registers, reorder buffers, retirement registers, etc.). The packed data registers may be implemented in different ways in different microarchitectures and are not limited to any particular type of design. Examples of suitable types of registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, and combinations thereof.

Referring again to Figure 1, the execution unit 106 is coupled with the decode unit 104 and is coupled with the packed data registers 114. In some embodiments, the execution unit may be on-die with the decode unit. During operation, if the source packed data is in memory, the execution unit may be coupled with the memory in order to receive the source packed data. The execution unit may be coupled with these components through other intervening components (not shown). The execution unit may receive the one or more decoded or otherwise converted instructions or control signals that represent and/or are derived from the partition into lanes instruction. The execution unit may also receive the source packed data 112. The execution unit may be operative in response to and/or as a result of the partition into lanes instruction (e.g., in response to one or more instructions or control signals decoded from the instruction) to store the result packed data 118 in the destination storage location (e.g., a destination packed data register) indicated by the instruction.

In some embodiments, the result packed data 118 and/or the destination storage location may have a plurality of lanes.
In various embodiments, the lanes of the result packed data and/or the destination storage location may be 64-bit lanes, 128-bit lanes, 256-bit lanes, or 512-bit lanes, although the scope of the invention is not so limited. In some embodiments, the lanes may have a same size as a size of an architectural packed data register of the processor. In some embodiments, the lanes may be half, or one quarter, the size of the largest architectural packed data registers of the processor. In some embodiments, each of the lanes may be large enough to store a plurality of 32-bit data elements, or may be large enough to store a plurality of 64-bit data elements. In some embodiments, the partition into lanes instruction may optionally have one or more bits or a field to indicate a size of the lanes. For example, a single bit may indicate a size of a lane as being either 128 bits or 256 bits, or two bits may be used to indicate up to four different lane sizes. Alternatively, the size of the lane may optionally be implicit to the instruction (e.g., implicit to an opcode).

In some embodiments, each of the lanes of the result packed data and/or destination storage location may store a different non-overlapping set of the indicated number (i.e., the number indicated by the instruction) of adjoining data elements of the source packed data. In other words, each lane may store a different non-overlapping same sized portion of the source packed data. Each of these different non-overlapping sets of (the indicated number of) adjoining data elements (or different non-overlapping same sized portions of the source packed data) may be aligned with a least significant end of the respective lanes in which they are stored. In some embodiments, the different non-overlapping sets of (the indicated number of) the adjoining data elements (or different non-overlapping same sized portions of the source packed data) that are stored in adjoining lanes of the result packed data (or destination storage location) may be separated from one another by at least one most significant data element position of the less significant lane. For example, a second non-overlapping same sized portion of the source packed data that is stored in a next-to-least significant lane of the result packed data may be separated from a first non-overlapping same sized portion of the source packed data that is stored in a least significant lane of the result packed data by at least one most significant data element position of the least significant lane which is not used to store either of these two non-overlapping same sized portions of the source packed data. Rather, zeroes, existing values, or some other values may be stored in these positions. In some embodiments, the result packed data may be any of those shown and described for Figures 3-7, although the scope of the invention is not so limited.

In the source packed data, all of the data elements that are to be stored to the result packed data may be adjoining or contiguous. In the case of the source packed data being in memory, the execution unit and/or the processor, in response to the instruction, may be operative to load each of the data elements of the source packed data (or at least those that are to be stored to the result packed data) from the system memory by performing a single load operation.
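To make this layout concrete, the following C sketch is a behavioral reference model of the operation just described. It is only a hedged illustration under stated assumptions: the function and parameter names are invented here, 32-bit data elements are assumed, and the separating most significant positions are zeroed, which is just one of the value choices (zeroes, existing values, or other values) the description permits.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Behavioral reference model of the partition into lanes operation.
 *   src        - adjoining 32-bit source data elements, least significant first
 *   n          - number of adjoining elements per set (indicated by the instruction)
 *   lane_elems - element positions per lane (lane width / element width), >= n
 *   num_lanes  - number of lanes in the result
 *   dst        - result storage, num_lanes * lane_elems element positions
 * Each lane receives a different non-overlapping set of n adjoining source
 * elements aligned with its least significant end; the remaining most
 * significant positions of each lane, which separate adjoining sets, are
 * zeroed in this model. */
static void partition_into_lanes(const uint32_t *src, size_t n,
                                 size_t lane_elems, size_t num_lanes,
                                 uint32_t *dst)
{
    for (size_t lane = 0; lane < num_lanes; lane++) {
        uint32_t *lane_base = dst + lane * lane_elems;
        /* Copy the lane's non-overlapping set, aligned with the low end. */
        memcpy(lane_base, src + lane * n, n * sizeof *src);
        /* Zero the separating most significant element positions. */
        memset(lane_base + n, 0, (lane_elems - n) * sizeof *src);
    }
}
```

With n = 6, lane_elems = 8, and num_lanes = 2, for instance, this model produces the two 256-bit lane layout of the Figure 4 and Figure 5 examples discussed below, with the two most significant element positions of the least significant lane separating the sets.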
The execution unit, in response to the instruction, may split or partition the data elements of the source packed data into the multiple non-overlapping segments or portions of contiguous data elements, and then distribute or apportion each of these different segments or portions to a different corresponding lane. This may be useful for various different purposes. For example, it may be easier or more convenient for an overall algorithm to process the different data element segments or portions of the source packed data if they are split and aligned along lane boundaries.

Advantageously, the partition source packed data into lanes instruction 102 may allow the source packed data 112 to be split into two or more portions and have those portions aligned along respective lane boundaries within the confines of the performance of a single instruction. In addition, in some embodiments where the source packed data is initially stored in memory (which is not required), the instruction may allow each of the portions to be loaded from memory with a single load operation (e.g., use of a single load port once) also within the confines of performing the same single instruction. Another possible approach would be to perform multiple instructions to mimic this operation. As one illustrative example, a first instruction may be performed to load a first segment or portion from memory into a first packed data register, and then a second instruction may be performed to load a second segment or portion from the memory, with broadcasting of the loaded data into both an upper and lower lane of a second packed data register, and with masking (e.g., using a mask in a mask register) to mask out the lower lane of the second packed data register and merge the first and second packed data registers. However, one possible drawback to such an alternate approach is that two loads are used, instead of just a single load. This may tend to unnecessarily consume load ports and other resources and/or reduce performance. Additionally, two instructions are performed, instead of just a single instruction, which may also tend to reduce performance and/or increase power consumption. Also, multiple packed data registers and a mask register are used, instead of just a single destination packed data register. Such registers generally tend to be somewhat scarce resources which could instead be used for other purposes.

In some embodiments, it may be implicit to the instruction and/or fixed for the instruction (e.g., implicit to an opcode of the instruction or fixed for the opcode of the instruction) to split the source operand into multiple non-overlapping portions and align each of the different non-overlapping portions with the least significant ends of the respective lanes. This may be implicitly or impliedly understood by the processor, although not explicitly expressed other than through the opcode and any opcode-related bits. For example, the processor may understand or recognize after identifying an opcode of the instruction that such an operation is inherent or implicit. An alternate possible approach would be to use a flexible instruction, such as a shuffle or permute instruction having a set of shuffle or permute control bits to control flexible shuffling or permutation of source data elements into flexible positions in the destination according to the control bits.
The shuffle or permute instruction may be used to shuffle or permute data elements from one or more source packed data, to different data element positions in a result packed data, according to the corresponding shuffle or permute control bits for each data element that is shuffled or permuted. These sets of shuffle or permute control bits may be provided in an immediate of the instruction, or in another source operand generally stored in a register, for example. However, there are potential drawbacks with such an alternate approach of using such flexible shuffle or permute instructions, at least for certain applications. For one thing, it generally takes extra time and/or effort to generate the sets of shuffle or permute control bits. For example, either a programmer may need to generate these explicitly, or a compiler may need to generate them through additional workload on the compiler. In addition, storing the shuffle or permute control bits in a register may tie up the register and prevent it from being used for another purpose. Further, when the instruction has an additional field to specify a register to store the shuffle or permute control bits, or when the shuffle or permute control bits are provided by an immediate of the instruction, the length of the instruction may be increased. This may tend to reduce the number of instructions that can be fetched in an instruction bundle and/or increase the complexity of decoding the instruction and/or the time needed to decode the instruction, which may tend to reduce front end throughput. Also, this may tend to increase code size. In addition, in the case of an immediate, generally only a certain number of control bits are able to fit within the immediate, which may limit the number of data elements that can be shuffled or permuted.

The execution unit and/or the processor may include specific or particular logic (e.g., transistors, integrated circuitry, or other hardware potentially combined with firmware (e.g., instructions stored in non-volatile memory) and/or software) that is operative to perform the partition into lanes instruction and/or store the result packed data in response to and/or as a result of the partition into lanes instruction (e.g., in response to one or more instructions or control signals decoded from the partition into lanes instruction). In some embodiments, the execution unit may include one or more input structures (e.g., port(s), interconnect(s), an interface) to receive the source packed data, logic coupled therewith to split the source packed data into portions each having the indicated number of data elements, and one or more output structures (e.g., port(s), interconnect(s), an interface) coupled therewith to distribute, apportion, or otherwise output these portions to the corresponding different lanes of the result packed data and/or destination storage location.

To avoid obscuring the description, a relatively simple processor 100 has been shown and described. However, the processor may optionally include other processor components. For example, various different embodiments may include various different combinations and configurations of the components shown and described for any of Figures 13, 14A-B, 15A-B, and 16. By way of example, considering Figure 14B, the instruction fetch unit 1438 may fetch the instruction, the decode unit 1440 may decode the instruction, the scheduler unit 1456 may schedule the associated operations, the retirement unit 1454 may retire the instruction, etc.
All of the components of the processor may be coupled together to allow them to operate as intended.

Figure 2 is a block flow diagram of an embodiment of a method 220 of performing an embodiment of a partition source packed data into lanes instruction. In various embodiments, the method may be performed by a processor, instruction processing apparatus, digital logic device, or integrated circuit. In some embodiments, the method 220 may be performed by and/or with the processor of Figure 1 and/or using the instruction of Figure 1. The components, features, and specific optional details described herein for the processor and/or the instruction of Figure 1 also optionally apply to the method 220, which may optionally be performed by the processor and/or using the instruction. Alternatively, the method 220 may be performed by and/or within a similar or different processor or apparatus and/or using a similar or different instruction. Moreover, the processor of Figure 1 may perform methods the same as, similar to, or different than the method of Figure 2.

The method includes receiving the partition into lanes instruction, at block 221. In various aspects, the instruction may be received at a processor or a portion thereof (e.g., an instruction fetch unit, a decode unit, a bus interface unit, etc.). In various aspects, the instruction may be received from an off-processor and/or off-die source (e.g., from memory, interconnect, etc.), or from an on-processor and/or on-die source (e.g., from an instruction cache, an instruction fetch unit, etc.). The instruction may specify or otherwise indicate a source packed data that includes a plurality of adjoining data elements. The instruction may also specify or otherwise indicate a number of data elements. The instruction may also specify or otherwise indicate a destination packed data register or other storage location.

A result packed data may be stored in the indicated destination storage location in response to and/or as a result of the instruction, at block 222. In some embodiments, the result packed data and/or the destination storage location may have a plurality of lanes. In some embodiments, each of the lanes of the result packed data and/or destination storage location may store a different non-overlapping set of the indicated number of adjoining data elements of the source packed data. In other words, each lane may store a different non-overlapping same sized portion of the source packed data. Each of these different non-overlapping sets of data elements or different non-overlapping same sized portions of the source packed data may be aligned with a least significant end of the respective lanes in which they are stored. In some embodiments, the different non-overlapping sets of the indicated number of the adjoining data elements that are stored in adjoining lanes of the result packed data may be separated from one another by at least one most significant data element position of the less significant lane. For example, a second non-overlapping same sized portion of the source packed data that is stored in a next-to-least significant lane of the result packed data may be separated from a first non-overlapping same sized portion of the source packed data that is stored in a least significant lane of the result packed data by at least one most significant data element position of the least significant lane which is not used to store either of these two non-overlapping same sized portions of the source packed data.
Rather, zeroes, existing values, or some other values may be stored in these positions.

The illustrated method involves architectural operations (e.g., those visible from a software perspective). In other embodiments, the method may optionally include one or more microarchitectural operations. By way of example, the instruction may be fetched, decoded, scheduled out-of-order, a source packed data may be accessed, an execution unit may perform microarchitectural operations to implement the instruction, etc. In some embodiments, the operations to implement the instruction may include generating memory address information and accessing the source packed data from system memory using the generated memory address information. In some embodiments, the operations to implement the instruction may also include zeroing at least one most significant data element position in each lane which separates the non-overlapping portions stored from the source packed data.

Figure 3 is a block diagram illustrating an embodiment of a partition source packed data into lanes operation 330 that may be performed in response to an embodiment of a partition source packed data into lanes instruction. The instruction may specify or otherwise indicate a source packed data 312 (e.g., as a source operand) that is to have a plurality of adjoining data elements. In some embodiments, the source packed data may be stored in memory, whereas in other embodiments the source packed data may be stored in a packed data register or other storage location. The instruction may also specify or otherwise indicate a number of data elements at which the source packed data is to be split. The instruction may also specify or otherwise indicate a destination storage location where a result packed data 318 is to be stored.

As shown, the source packed data may include at least two, or in some cases more than two (e.g., at least four), different non-overlapping sets of the indicated number of adjoining data elements. In the specific illustrated embodiment, the source packed data includes a first set of the indicated number of adjoining data elements 332, and a second set of the indicated number of adjoining data elements 334. Each set may include the number of data elements indicated by the instruction. For example, if the number of data elements indicated by the instruction is six, then each of the sets may include six data elements. The data elements of the source packed data are adjoining data elements, in that the data elements are contiguous and/or conterminous and/or in that there are no extra intervening data elements or bits between the adjoining data elements. In addition, in the source packed data, the first set of the indicated number of adjoining data elements 332 is adjoining, contiguous, or conterminous with the second set of the indicated number of adjoining data elements 334. For example, the most significant bit of the most significant data element of the first set of the indicated number of adjoining data elements 332 may be one bit less than the least significant bit of the least significant data element of the adjoining second set of the indicated number of adjoining data elements 334. In the illustration, the least significant bit (e.g., labeled as bit-0) of the source packed data is the rightmost bit, whereas the most significant bit of the source packed data is the leftmost bit.
The source packed data may also optionally include one or more other data elements 336, although this is not required.

Commonly, the number of data elements in the source packed data 312 may be equal to the size in bits of the source packed data divided by the size in bits of a single data element. In various embodiments, the width of the source packed data may be 128 bits, 256 bits, 512 bits, or 1024 bits, although the scope of the invention is not so limited. In various embodiments, the size of each data element may be 8 bits, 16 bits, 32 bits, or 64 bits, although the scope of the invention is not so limited. Other sizes or widths of the source packed data and the data elements are also suitable. In various embodiments, there may be at least four, at least eight, at least sixteen, at least thirty-two, or more than thirty-two data elements (e.g., at least sixty-four) in the source packed data. The data elements may be integer, fixed point, or floating point. In some embodiments, the data elements may optionally be floating point data elements, such as, for example, 32-bit single precision floating point data elements, or 64-bit double precision floating point data elements, although the scope of the invention is not so limited.

During the partition source packed data into lanes operation, the source packed data 312 may be provided to an execution unit 306. The execution unit may generate and store a result packed data 318 in response to and/or as a consequence of the instruction and/or operation. In some embodiments, the result packed data may be stored in a destination packed data register, or other destination storage location, that is specified or otherwise indicated by the instruction. In some embodiments, the result packed data may have at least two, or in some cases more than two (e.g., at least four), different non-overlapping lanes. In the specific illustrated embodiment, the result packed data includes a first, least significant lane 346, and an adjoining second, more significant lane 348. In other embodiments, the result packed data may optionally include additional lanes (e.g., a same number of lanes as the number of different non-overlapping sets of the indicated number of adjoining data elements of the source packed data). In some embodiments, each of the lanes of the result packed data may be used to store a different non-overlapping set of the indicated number of adjoining data elements of the source packed data which is aligned with a least significant end of the respective lane. For example, as shown in the specific illustrated embodiment, in the result packed data, the first set of the indicated number of adjoining data elements 338 may be stored in the first, least significant lane 346, and the second set of the indicated number of adjoining data elements 342 may be stored in the second, more significant lane 348.

In some embodiments, in the result packed data, the different non-overlapping sets of the indicated number of the adjoining data elements that are stored in the adjoining lanes of the result packed data may be separated from one another by at least one most significant data element position of the less significant lane.
For example, as shown in the specific illustrated embodiment, at least one most significant (e.g., leftmost as viewed) data element 340 of the first, least significant lane may separate the most significant data element of the first set of the indicated number of adjoining data elements 338 from the least significant data element of the second set of the indicated number of adjoining data elements 342. Similarly, at least one most significant (e.g., leftmost as viewed) data element 344 of the second, more significant lane would separate the second set of the indicated number of adjoining data elements 342 from an optional additional set of the indicated number of adjoining data elements (if there were one, which is not shown in the illustrated example). In the source packed data, the first set of the indicated number of adjoining data elements 332 was adjoining, contiguous, or conterminous with the second set of the indicated number of adjoining data elements 334, whereas in the result packed data the first and second sets of the indicated number of adjoining data elements have been split so that they are no longer adjoining, contiguous, or conterminous with one another, and so that each is aligned with the least significant bit or end of the corresponding lane in which they are stored. The first set 338 is aligned with bit-0 of the result, and the second set 342 is aligned with the least significant bit (shown by reference numeral 349) of the second lane 348. Since these sets do not entirely fill the corresponding lanes, in at least some embodiments, one or more additional data element positions in each lane separate these different sets of the indicated number of adjoining data elements.

In some embodiments, when the source packed data is in memory, the instruction may merely not load the other data elements 336. In other embodiments, the instruction may load these data elements but based on the indicated number of data elements may implicitly or inherently not use these other data elements 336. In still other embodiments, the instruction may indicate a mask or mask operand that may include a plurality of mask bits or other mask elements. The execution unit, in response to the instruction, may load from memory only data elements of the source packed data that correspond to unmasked mask elements of the mask. For example, each of the mask elements corresponding to the other data elements 336 may be masked out (e.g., cleared to zero), whereas each of the mask elements corresponding to the first and second sets 332, 334 may be unmasked (e.g., set to one).

Figure 4 is a block diagram illustrating a first specific example embodiment of a partition source packed data into lanes operation 450 that may be performed in response to a first specific example embodiment of a partition source packed data into lanes instruction. The instruction may specify or otherwise indicate a source packed data 412 that is to have a plurality of adjoining data elements A1 through A16. The source packed data, and the data elements, may have various sizes or widths as previously described. Also, the data elements may have floating point, integer, or fixed point formats, as previously described. In some embodiments, the instruction may also specify or otherwise indicate a number of data elements at which the source packed data is to be split. In this specific example, the number of data elements indicated by the instruction is six data elements, although this is merely illustrative.
The instruction may also specify or otherwise indicate a destination storage location.

In this specific example, since the number of data elements indicated by the instruction is six data elements, the data elements A1-A6 of the source packed data represent a first non-overlapping set of six adjoining data elements, and the data elements A7-A12 represent a second non-overlapping set of six adjoining data elements. In the source packed data, the data elements A1-A6 are adjoining, contiguous, or conterminous with the data elements A7-A12 (e.g., the most significant bit of the data element A6 is adjoining, contiguous, or conterminous with the least significant bit of the data element A7).

During the partition source packed data into lanes operation, the source packed data 412 may be provided to an execution unit 406. The execution unit may generate and store the result packed data 418 in response to and/or as a consequence of the instruction and/or operation. In some embodiments, the result packed data may be stored in a destination packed data register or other destination storage location indicated by the instruction. The result packed data includes a first, least significant lane 446, and an adjoining second, more significant lane 448. In other embodiments, the result packed data may optionally include additional lanes. In this specific example, the first set of the data elements A1-A6 is stored in the first lane 446, and the second set of the data elements A7-A12 is stored in the second adjoining lane.

In the result packed data, the first set of the data elements A1-A6 stored in the first lane is separated from the second set of the data elements A7-A12 stored in the second lane, in this specific example, by the two most significant (leftmost as viewed) data element positions of the first, least significant lane. Whereas in the source packed data the data elements A1-A6 were adjoining, contiguous, or conterminous with the data elements A7-A12, in the result packed data the elements A1-A6 and A7-A12 are split or separated from one another with, in this specific example, two intervening data element positions disposed between them. Also, the data elements A1-A6 are aligned with the least significant bit or end of the first lane, whereas the data elements A7-A12 are aligned with the least significant bit or end of the second lane.

One use, but certainly not the only use, of the partition into lanes instructions and/or operations as disclosed herein is to process vectors or arrays of two data element, three data element, four data element, or other multiple data element structures. A complex number is one example of a two data element structure that includes a real number or component and an imaginary number or component. The real and imaginary numbers together constitute the complex number. One example of a three data element structure is a red, green, and blue color component structure for a pixel. Various other types of multiple data element structures are also known in the arts.

Such multiple data element structures are commonly moved and/or processed together in various different algorithms. Furthermore, arrays of such multiple data element structures are often moved and/or processed together in algorithms. For example, this is often the case when adding, multiplying, or otherwise processing matrices of complex numbers.
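In C terms, such a two data element structure can be sketched as below; this is a purely illustrative definition (the type name and field layout are assumptions of this description), with 32-bit single precision components as in the examples that follow.

```c
/* Illustrative two data element structure: a complex number stored as
 * adjoining 32-bit real and imaginary components, with the real
 * component in the less significant element position. An array of
 * three of these occupies six adjoining 32-bit element positions, so
 * indicating six data elements (or three two element structures)
 * splits the source only on whole complex numbers. */
typedef struct {
    float re;   /* real component, less significant position       */
    float im;   /* imaginary component, adjoining more significant */
} complex2x32_t;
```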
By way of example, an algorithm may operate on an array having a relatively small number of complex numbers (e.g., often ranging from two to six) that is a portion of a potentially much larger matrix of complex numbers. As one specific illustrative example, certain algorithms may process arrays of three complex numbers each, in which each complex number includes a 32-bit single precision floating point real component and a 32-bit single precision floating point imaginary component. Such arrays may represent portions of different adjoining rows of the larger matrix of complex numbers. Other algorithms may process arrays of complex numbers of different sizes.

Certain algorithms that process arrays of complex numbers and/or other multiple data element structures may expect each of the arrays to be aligned along lane boundaries. As one specific illustrative example, certain algorithms that process arrays of three complex numbers having 32-bit real and imaginary components may expect each of the arrays to be aligned along 256-bit lane boundaries instead of spanning different lanes or being offset from a lane boundary. In other cases, even if the algorithm doesn't expect or require such alignment, it may be convenient and/or efficient for a given algorithm or application to have the arrays aligned along lane boundaries based on the ways in which the arrays can be processed, moved, or managed by the overall algorithm, or the like.

Figure 5 is a block diagram illustrating a second even more specific example embodiment of a partition source packed data into lanes operation 552 that may be performed on first and second arrays of three complex numbers in response to a second even more specific example embodiment of a partition source packed data into lanes instruction. The instruction may specify or otherwise indicate a source packed data 512. In some embodiments, the source packed data may be in memory, whereas in other embodiments it may be in a packed data register. In this specific example, the source packed data has eight complex numbers, each stored as a pair of adjoining real and imaginary numbers. Each complex number includes a real number (r) and an imaginary number (i). For example, the least significant (rightmost as viewed) complex number includes a first real number (r1) and a first imaginary number (i1), the next to least significant complex number includes a second real number (r2) and a second imaginary number (i2), and so on. Conventionally, complex numbers are often stored in memory with the real numbers being stored in relatively less significant bit positions, and the corresponding imaginary numbers being stored in adjoining relatively more significant bit positions, although this is not required. The eight real numbers r1-r8, and eight imaginary numbers i1-i8, respectively form eight complex numbers. Each of the complex numbers may broadly represent a two data element structure. In some embodiments, the source packed data may include two arrays each having three complex numbers. By way of example, each of these arrays may represent a portion of a different adjoining row of a larger matrix of complex numbers, although the scope of the invention is not so limited. In this specific example, the real and imaginary numbers or components are each represented by a 32-bit single precision floating point data element, although in other embodiments other data element sizes and formats may optionally be used.
In this specific example, the source packed data is a 512-bit source packed data, although in other embodiments other sizes may optionally be used.

As previously mentioned, in some embodiments, the instruction may also specify or otherwise indicate a number of data elements at which the source packed data is to be split (e.g., with a value of an immediate or a value in a specified or implicit register). In this specific example, the number of data elements indicated by the instruction is six data elements, although this is merely one illustrative example. For structures, this number may be indicated in different ways. As one example, the instruction may indicate a value of six to indicate the six data elements. As another example, the instruction may indicate a value of three two data element structures to indicate the six data elements. In this specific example, since the number of data elements indicated by the instruction is six data elements, the least significant six real and imaginary numbers (i.e., r1, i1, r2, i2, r3, and i3) represent a first array of three complex numbers, and the adjoining more significant six real and imaginary numbers (i.e., r4, i4, r5, i5, r6, and i6) represent a second array of three complex numbers. In the source packed data, the first and second arrays of complex numbers (or two data element structures) are adjoining, contiguous, or conterminous with one another.

During the operation, the source packed data 512 may be provided to an execution unit 506. The execution unit may generate and store the result packed data 518 in response to and/or as a consequence of the instruction and/or operation. In some embodiments, the result packed data may be stored in a destination packed data register or other destination storage location indicated by the instruction. The result packed data includes a first, least significant lane 546, and an adjoining second, more significant lane 548. In other embodiments, the result packed data may optionally include additional lanes. As shown, in this specific example, each of the lanes is a 256-bit lane, although the scope of the invention is not so limited. In this specific example, the first array of three complex numbers 554 (i.e., r1, i1, r2, i2, r3, and i3) is stored in the first lane, and the second array of three complex numbers 556 (i.e., r4, i4, r5, i5, r6, and i6) is stored in the second adjoining lane.

In the result packed data, the first array of three complex numbers 554 in the first lane is separated from the second array of three complex numbers 556 in the second lane, in this specific example, by the two most significant (leftmost as viewed) 32-bit data element positions of the first lane. Whereas in the source packed data the first and second arrays of complex numbers were adjoining, contiguous, or conterminous, in the result packed data the first and second arrays of complex numbers are split or separated from one another with, in this specific example, two intervening 32-bit data element positions disposed between them. In some embodiments, the split may be along whole complex number or other multiple data element structure boundaries so that no complex number or other multiple data element structure is split. Also, each of the first and second arrays of three complex numbers is aligned with the least significant bit or end of the respective lane in which it is stored.

It is to be appreciated that this is just one illustrative example and that many other examples are also contemplated.
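Before turning to those other examples, the Figure 5 layout just described can be traced with the hedged reference model sketched earlier; again, the model, its function name, and the zeroing of the separating positions are assumptions of this description rather than the patent's implementation.

```c
#include <stdint.h>

/* Trace of the Figure 5 example using the partition_into_lanes() model
 * sketched earlier: eight complex numbers stored as sixteen adjoining
 * 32-bit elements r1,i1,r2,i2,...,r8,i8 are split at n = 6 elements
 * (three complex numbers) into two 256-bit lanes of eight 32-bit
 * element positions each. */
static void figure5_trace(void)
{
    uint32_t src[16];            /* r1,i1,...,r8,i8, least significant first */
    uint32_t dst[16] = {0};      /* two 256-bit lanes, eight positions each  */

    for (uint32_t k = 0; k < 16; k++)
        src[k] = k;              /* stand-ins for the real/imaginary values  */

    partition_into_lanes(src, 6, 8, 2, dst);
    /* dst lane 0: r1 i1 r2 i2 r3 i3 0 0 -- first array of three complex numbers  */
    /* dst lane 1: r4 i4 r5 i5 r6 i6 0 0 -- second array of three complex numbers */
}
```

Because n = 6 is a whole multiple of the two element structure size, the split falls only on whole complex number boundaries, matching the structure boundary behavior described above.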
As one example, the approach may be extended to 1024-bit operands where the result packed data has four 256-bit lanes. As another example, an analogous approach may be used with 16-bit data elements and 128-bit lanes of a 256-bit, 512-bit, or 1024-bit result packed data. Alternatively, in the case of 16-bit data elements, each 256-bit lane may be used to store twelve data elements (e.g., an array of six two data element structures). As another example, an analogous approach may be used with 8-bit data elements and 64-bit lanes of a 128-bit, 256-bit, or 512-bit result packed data. As a still further example, the arrays of three two data element structures may each be replaced by an array of two three data element structures. Still other variations will be apparent to those skilled in the art and having the benefit of the present disclosure. In addition, it is to be appreciated that the instructions/operations are not limited to operating on multiple data element structures, but in embodiments may represent general-purpose instructions/operations that may be used to operate on data elements that are not parts of multiple data element structures.

Figure 6 is a block diagram illustrating a third specific example embodiment of a partition source packed data into lanes operation 660 that may be performed in response to a third specific example embodiment of a partition source packed data into lanes instruction. The instruction may specify or otherwise indicate a source packed data 612 that is to have a plurality of adjoining data elements A1 through A16. The source packed data, and the data elements, may have various sizes or widths as previously described. Also, the data elements may have floating point, integer, or fixed point formats, as previously described. In some embodiments, the instruction may also specify or otherwise indicate a number of data elements at which the source packed data is to be split. In this specific example, the number of data elements indicated by the instruction is two data elements, although this is merely illustrative. The instruction may also specify or otherwise indicate a destination storage location.

In this specific example, since the number of data elements indicated by the instruction is two data elements, the data elements A1-A2 of the source packed data represent a first non-overlapping set of two adjoining data elements, the data elements A3-A4 represent a second non-overlapping set of two adjoining data elements, the data elements A5-A6 of the source packed data represent a third non-overlapping set of two adjoining data elements, and the data elements A7-A8 represent a fourth non-overlapping set of two adjoining data elements. In the source packed data, the data elements A1-A2 are adjoining, contiguous, or conterminous with the data elements A3-A4, the data elements A3-A4 are adjoining, contiguous, or conterminous with the data elements A5-A6, and the data elements A5-A6 are adjoining, contiguous, or conterminous with the data elements A7-A8.

During the operation, the source packed data 612 may be provided to an execution unit 606. The execution unit may generate and store the result packed data 618 in response to and/or as a consequence of the instruction and/or operation. In some embodiments, the result packed data may be stored in a destination packed data register or other destination storage location indicated by the instruction.
The result packed data includes a first, least significant lane 662, a second, more significant lane 663, a third still more significant lane 664, and a fourth most significant lane 665. In other embodiments, the result packed data may optionally include fewer or more lanes. In this specific example, the first set of the data elements A1-A2 is stored in the first lane 662, the second set of the data elements A3-A4 is stored in the second lane 663, the third set of the data elements A5-A6 is stored in the third lane 664, and the fourth set of the data elements A7-A8 is stored in the fourth lane 665.

In the result packed data, the first set of the data elements A1-A2 is separated from the second set of the data elements A3-A4, in this specific example, by the two most significant (leftmost as viewed) data element positions of the first, least significant lane. Likewise, in the result packed data, the third set of the data elements A5-A6 is separated from the second set of the data elements A3-A4, in this specific example, by the two most significant (leftmost as viewed) data element positions of the second lane. Similarly, in the result packed data, the fourth set of the data elements A7-A8 is separated from the third set of the data elements A5-A6, in this specific example, by the two most significant (leftmost as viewed) data element positions of the third lane. Also, the first set of the data elements A1-A2 is aligned with the least significant bit or end of the corresponding first lane, the second set of the data elements A3-A4 is aligned with the least significant bit or end of the corresponding second lane, the third set of the data elements A5-A6 is aligned with the least significant bit or end of the corresponding third lane, and the fourth set of the data elements A7-A8 is aligned with the least significant bit or end of the corresponding fourth lane.

Figure 7 is a block diagram illustrating a fourth even more specific example embodiment of a partition source packed data into lanes operation 770 that may be performed on complex numbers in response to a fourth even more specific example embodiment of a partition source packed data into lanes instruction. The instruction may specify or otherwise indicate a source packed data 712. In this specific example, the source packed data has eight real numbers r1-r8, and eight imaginary numbers i1-i8, that pairwise form eight complex numbers. In this specific example, the real and imaginary numbers or components are each represented by a 32-bit single precision floating point data element, although in other embodiments other data element sizes and formats may optionally be used. In this specific example, the source packed data is a 512-bit source packed data, although in other embodiments other sizes may optionally be used.

In some embodiments, the instruction may specify or otherwise indicate a number of data elements at which the source packed data is to be split. In this specific example, the number of data elements indicated by the instruction is two data elements, although this is merely one illustrative example. As one example, the instruction may indicate a value of two to indicate the two data elements, or, as another example, the instruction may indicate a value of one, denoting one two data element structure, to indicate the two data elements.

During the operation, the source packed data 712 may be provided to an execution unit 706.
The execution unit may generate and store the result packed data 718 in response to and/or as a consequence of the instruction and/or operation. In some embodiments, the result packed data may be stored in a destination packed data register or other destination storage location indicated by the instruction. The result packed data includes a first, least significant lane 762, a second, more significant lane 763, a third still more significant lane 764, and a fourth most significant lane 765. In other embodiments, the result packed data may optionally include fewer or more lanes. In this specific example, each of the lanes is a 128-bit lane, although the scope of the invention is not so limited. In this specific example, the first, least significant complex number 771 (i.e., r1 and i1) is stored in the first lane, the second least significant complex number 772 (i.e., r2 and i2) is stored in the second lane, the third least significant complex number 773 (i.e., r3 and i3) is stored in the third lane, and the fourth least significant complex number 774 (i.e., r4 and i4) is stored in the fourth lane.

In the result packed data, the first complex number 771 is separated from the second complex number 772, in this specific example, by the two most significant 32-bit data element positions of the first lane. Likewise, in the result packed data, the third complex number 773 is separated from the second complex number 772, in this specific example, by the two most significant 32-bit data element positions of the second lane. Similarly, in the result packed data, the fourth complex number 774 is separated from the third complex number 773, in this specific example, by the two most significant 32-bit data element positions of the third lane. Also, in the result packed data, the first complex number is aligned with the least significant bit or end of the first lane, the second complex number is aligned with the least significant bit or end of the second lane, the third complex number is aligned with the least significant bit or end of the third lane, and the fourth complex number is aligned with the least significant bit or end of the fourth lane.

To avoid obscuring the description, the different and/or additional characteristics for the respective operations of Figures 3-7 have primarily been described, without repeating all the optionally similar or common characteristics and details. However, it is to be appreciated that the characteristics and details described for one of the operations may also optionally apply to the other operations, unless contrary to the description or otherwise clearly apparent.

Examples of suitable packed data formats include, but are not limited to, 64-bit wide, 128-bit wide, 256-bit wide, and 512-bit wide packed data formats. A 64-bit packed byte format may include eight 8-bit byte data elements. A 64-bit packed word format may include four 16-bit word data elements. A 64-bit packed doubleword format may include two 32-bit doubleword data elements. A 128-bit packed byte format may include sixteen 8-bit byte data elements. A 128-bit packed word format may include eight 16-bit word data elements. A 128-bit packed doubleword format may include four 32-bit doubleword data elements. A 128-bit packed quadword format may include two 64-bit quadword data elements. A 256-bit packed byte format may include thirty-two 8-bit byte data elements. A 256-bit packed word format may include sixteen 16-bit word data elements.
A 256-bit packed doubleword format may include eight 32-bit doubleword data elements. A 256-bit packed quadword format may include four 64-bit quadword data elements. A 512-bit packed byte format may include sixty-four 8-bit byte data elements. A 512-bit packed word format may include thirty-two 16-bit word data elements. A 512-bit packed doubleword format may include sixteen 32-bit doubleword data elements. A 512-bit packed quadword format may include eight 64-bit quadword data elements. Other packed data formats may include packed 32-bit single-precision floating point data elements or packed 64-bit double-precision floating point data elements. Moreover, wider packed data widths (e.g., 1024-bit wide packed data) and/or narrower packed data widths (e.g., 32-bit wide packed data) are also suitable. Generally, the number of packed data elements in a packed data operand is equal to the size in bits of the packed data operand divided by the size in bits of each of the packed data elements.

Figure 8 is a block diagram of an example embodiment of a suitable set of packed data registers 814. The packed data registers include thirty-two 512-bit packed data registers labeled ZMM0 through ZMM31. In the illustrated embodiment, the lower order 256-bits of the lower sixteen registers, namely ZMM0-ZMM15, are aliased or overlaid on respective 256-bit packed data registers labeled YMM0-YMM15, although this is not required. Likewise, in the illustrated embodiment, the lower order 128-bits of the registers YMM0-YMM15 are aliased or overlaid on respective 128-bit packed data registers labeled XMM0-XMM15, although this also is not required. The 512-bit registers ZMM0 through ZMM31 are operative to hold 512-bit packed data, 256-bit packed data, or 128-bit packed data. The 256-bit registers YMM0-YMM15 are operative to hold 256-bit packed data or 128-bit packed data. The 128-bit registers XMM0-XMM15 are operative to hold 128-bit packed data. In some embodiments, each of the registers may be used to store either packed floating-point data or packed integer data. Different data element sizes are supported including at least 8-bit byte data, 16-bit word data, 32-bit doubleword, 32-bit single-precision floating point data, 64-bit quadword, and 64-bit double-precision floating point data. In alternate embodiments, different numbers of registers and/or different sizes of registers may be used. In still other embodiments, registers may or may not use aliasing of larger registers on smaller registers and/or may or may not be used to store floating point data.

Figure 9 is a block diagram of an example embodiment of a suitable set of packed data operation mask registers 990. In the illustrated embodiment, the set includes eight registers labeled k0 through k7. Alternate embodiments may include either fewer than eight registers (e.g., two, four, six, etc.), or more than eight registers (e.g., sixteen, thirty-two, etc.). Each of these registers may be used to store a packed data operation mask. In the illustrated embodiment, each of the registers is 64-bits. In alternate embodiments, the widths of the registers may be either wider than 64-bits (e.g., 80-bits, 128-bits, etc.), or narrower than 64-bits (e.g., 8-bits, 16-bits, 32-bits, etc.). The registers may be implemented in different ways and are not limited to any particular type of circuit or design.
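The ZMM/YMM/XMM aliasing described above for Figure 8 can be pictured with a minimal C sketch; this is an illustrative model only (the type name is hypothetical), relying on the fact that all members of a C union share the same starting storage.

    #include <stdint.h>

    /* Illustrative model of a 512-bit register whose lower 256 and 128 bits
     * alias the YMM and XMM views, respectively. */
    typedef union {
        uint8_t zmm[64];  /* full 512-bit view (ZMMn) */
        uint8_t ymm[32];  /* lower 256 bits alias YMMn */
        uint8_t xmm[16];  /* lower 128 bits alias XMMn */
    } packed_data_reg;

Writing through the zmm member and reading the ymm or xmm member observes the same low-order bytes, mirroring how a write to a 512-bit register is visible through the overlaid 256-bit and 128-bit registers.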
Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, and combinations thereof.

In some embodiments, the packed data operation mask registers 990 may be a separate, dedicated set of architectural registers. In some embodiments, the instructions may encode or specify the packed data operation mask registers in different bits or one or more different fields of an instruction format than those used to encode or specify other types of registers (e.g., packed data registers). By way of example, an instruction may use three bits (e.g., a 3-bit field) to encode or specify any one of the eight packed data operation mask registers k0 through k7. In alternate embodiments, either fewer or more bits may be used, respectively, when there are fewer or more packed data operation mask registers. In one particular implementation, only packed data operation mask registers k1 through k7 (but not k0) may be addressed as a predicate operand to predicate a masked packed data operation. The register k0 may be used as a regular source or destination, but may not be encoded as a predicate operand (e.g., if k0 is specified it has a "no mask" encoding), although this is not required.

An instruction set includes one or more instruction formats. A given instruction format defines various fields (number of bits, location of bits) to specify, among other things, the operation to be performed (opcode) and the operand(s) on which that operation is to be performed. Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2), using the Vector Extensions (VEX) coding scheme, has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, October 2011; and see Intel® Advanced Vector Extensions Programming Reference, June 2011).

Exemplary Instruction Formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

VEX Instruction Format

VEX encoding allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The use of a VEX prefix provides for three-operand (or more) syntax.
For example, previous two-operand instructions performed operations such as A = A + B, which overwrites a source operand. The use of a VEX prefix enables instructions to perform nondestructive operations such as A = B + C.

Figure 10A illustrates an exemplary AVX instruction format including a VEX prefix 1002, real opcode field 1030, Mod R/M byte 1040, SIB byte 1050, displacement field 1062, and IMM8 1072. Figure 10B illustrates which fields from Figure 10A make up a full opcode field 1074 and a base operation field 1042. Figure 10C illustrates which fields from Figure 10A make up a register index field 1044.

VEX Prefix (Bytes 0-2) 1002 is encoded in a three-byte form. The first byte is the Format Field 1040 (VEX Byte 0, bits [7:0]), which contains an explicit C4 byte value (the unique value used for distinguishing the C4 instruction format). The second-third bytes (VEX Bytes 1-2) include a number of bit fields providing specific capability. Specifically, REX field 1005 (VEX Byte 1, bits [7-5]) consists of a VEX.R bit field (VEX Byte 1, bit [7] - R), VEX.X bit field (VEX byte 1, bit [6] - X), and VEX.B bit field (VEX byte 1, bit [5] - B). Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding VEX.R, VEX.X, and VEX.B. Opcode map field 1015 (VEX byte 1, bits [4:0] - mmmmm) includes content to encode an implied leading opcode byte. W Field 1064 (VEX byte 2, bit [7] - W) is represented by the notation VEX.W, and provides different functions depending on the instruction. The role of VEX.vvvv 1020 (VEX Byte 2, bits [6:3] - vvvv) may include the following: 1) VEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) VEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) VEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. If VEX.L 1068 Size field (VEX byte 2, bit [2] - L) = 0, it indicates a 128 bit vector; if VEX.L = 1, it indicates a 256 bit vector. Prefix encoding field 1025 (VEX byte 2, bits [1:0] - pp) provides additional bits for the base operation field. Real Opcode Field 1030 (Byte 3) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 1040 (Byte 4) includes MOD field 1042 (bits [7-6]), Reg field 1044 (bits [5-3]), and R/M field 1046 (bits [2-0]). The role of Reg field 1044 may include the following: encoding either the destination register operand or a source register operand (the rrr of Rrrr), or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 1046 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) - The content of Scale field 1050 (Byte 5) includes SS 1052 (bits [7-6]), which is used for memory address generation. The contents of SIB.xxx 1054 (bits [5-3]) and SIB.bbb 1056 (bits [2-0]) have been previously referred to with regard to the register indexes Xxxx and Bbbb.

The Displacement Field 1062 and the immediate field (IMM8) 1072 contain address data.
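As a concrete illustration of the nondestructive three-operand syntax that the VEX prefix enables, the following C sketch uses an AVX compiler intrinsic; with typical compilers it is emitted as a VEX-encoded vaddps whose two source operands remain intact. The function name is illustrative only, and this is a sketch rather than a definitive code generation claim.

    #include <immintrin.h>

    /* Three-operand, nondestructive form: result = a + b, both sources
     * preserved. A legacy (non-VEX) SSE encoding would instead overwrite
     * one source, i.e., a = a + b. */
    __m256 add_nondestructive(__m256 a, __m256 b) {
        return _mm256_add_ps(a, b);
    }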
Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

Figures 11A-11B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention. Figure 11A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention; while Figure 11B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention. Specifically, a generic vector friendly instruction format 1100 is shown for which class A and class B instruction templates are defined, both of which include no memory access 1105 instruction templates and memory access 1120 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).

The class A instruction templates in Figure 11A include: 1) within the no memory access 1105 instruction templates there is shown a no memory access, full round control type operation 1110 instruction template and a no memory access, data transform type operation 1115 instruction template; and 2) within the memory access 1120 instruction templates there is shown a memory access, temporal 1125 instruction template and a memory access, non-temporal 1130 instruction template.
The class B instruction templates in Figure 11B include: 1) within the no memory access 1105 instruction templates there is shown a no memory access, write mask control, partial round control type operation 1112 instruction template and a no memory access, write mask control, vsize type operation 1117 instruction template; and 2) within the memory access 1120 instruction templates there is shown a memory access, write mask control 1127 instruction template.

The generic vector friendly instruction format 1100 includes the following fields listed below in the order illustrated in Figures 11A-11B.

Format field 1140 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 1142 - its content distinguishes different base operations.

Register index field 1144 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. It includes a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

Modifier field 1146 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 1105 instruction templates and memory access 1120 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 1150 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 1168, an alpha field 1152, and a beta field 1154.
The augmentation operation field 1150 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 1160 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement Field 1162A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement Factor Field 1162B (note that the juxtaposition of displacement field 1162A directly over displacement factor field 1162B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 1174 (described later herein) and the data manipulation field 1154C. The displacement field 1162A and the displacement factor field 1162B are optional in the sense that they are not used for the no memory access 1105 instruction templates and/or different embodiments may implement only one or none of the two.

Data element width field 1164 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 1170 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 1170 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
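The merging and zeroing forms of writemasking just described can be illustrated with AVX-512 compiler intrinsics; the function names below are hypothetical, and the sketch is illustrative rather than a description of any claimed embodiment.

    #include <immintrin.h>

    /* Merging-masking: destination elements whose mask bit is 0 keep the
     * corresponding old value supplied in 'src'. */
    __m512 masked_add_merge(__m512 src, __mmask16 k, __m512 a, __m512 b) {
        return _mm512_mask_add_ps(src, k, a, b);
    }

    /* Zeroing-masking: destination elements whose mask bit is 0 are zeroed. */
    __m512 masked_add_zero(__mmask16 k, __m512 a, __m512 b) {
        return _mm512_maskz_add_ps(k, a, b);
    }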
While embodiments of the invention are described in which the write mask field's 1170 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 1170 content indirectly identifies the masking to be performed), alternative embodiments instead or in addition allow the write mask field's 1170 content to directly specify the masking to be performed.

Immediate field 1172 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and it is not present in instructions that do not use an immediate.

Class field 1168 - its content distinguishes between different classes of instructions. With reference to Figures 11A-B, the contents of this field select between class A and class B instructions. In Figures 11A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 1168A and class B 1168B for the class field 1168 respectively in Figures 11A-B).

Instruction Templates of Class A

In the case of the non-memory access 1105 instruction templates of class A, the alpha field 1152 is interpreted as an RS field 1152A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 1152A.1 and data transform 1152A.2 are respectively specified for the no memory access, round type operation 1110 and the no memory access, data transform type operation 1115 instruction templates), while the beta field 1154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 1105 instruction templates, the scale field 1160, the displacement field 1162A, and the displacement scale field 1162B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

In the no memory access full round control type operation 1110 instruction template, the beta field 1154 is interpreted as a round control field 1154A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 1154A includes a suppress all floating point exceptions (SAE) field 1156 and a round operation control field 1158, alternative embodiments may encode both of these concepts into the same field or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 1158).

SAE field 1156 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 1156 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

Round operation control field 1158 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 1158 allows for the changing of the rounding mode on a per instruction basis.
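For illustration, per-instruction static rounding together with suppression of floating-point exceptions can be expressed with an AVX-512 intrinsic as sketched below; the function name is hypothetical, and the rounding constant shown is one of several the intrinsic accepts.

    #include <immintrin.h>

    /* Adds with a statically selected round-toward-zero mode and suppressed
     * floating-point exceptions (SAE), regardless of the rounding mode held
     * in the control register. */
    __m512 add_round_toward_zero(__m512 a, __m512 b) {
        return _mm512_add_round_ps(a, b, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
    }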
In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 1158 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

In the no memory access data transform type operation 1115 instruction template, the beta field 1154 is interpreted as a data transform field 1154B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a memory access 1120 instruction template of class A, the alpha field 1152 is interpreted as an eviction hint field 1152B, whose content distinguishes which one of the eviction hints is to be used (in Figure 11A, temporal 1152B.1 and non-temporal 1152B.2 are respectively specified for the memory access, temporal 1125 instruction template and the memory access, non-temporal 1130 instruction template), while the beta field 1154 is interpreted as a data manipulation field 1154C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 1120 instruction templates include the scale field 1160, and optionally the displacement field 1162A or the displacement scale field 1162B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

In the case of the instruction templates of class B, the alpha field 1152 is interpreted as a write mask control (Z) field 1152C, whose content distinguishes whether the write masking controlled by the write mask field 1170 should be a merging or a zeroing.

In the case of the non-memory access 1105 instruction templates of class B, part of the beta field 1154 is interpreted as an RL field 1157A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 1157A.1 and vector length (VSIZE) 1157A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 1112 instruction template and the no memory access, write mask control, VSIZE type operation 1117 instruction template), while the rest of the beta field 1154 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 1105 instruction templates, the scale field 1160, the displacement field 1162A, and the displacement scale field 1162B are not present.

In the no memory access, write mask control, partial round control type operation 1112 instruction template, the rest of the beta field 1154 is interpreted as a round operation field 1159A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

Round operation control field 1159A - just as round operation control field 1158, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 1159A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 1159A content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 1117 instruction template, the rest of the beta field 1154 is interpreted as a vector length field 1159B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).

In the case of a memory access 1120 instruction template of class B, part of the beta field 1154 is interpreted as a broadcast field 1157B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 1154 is interpreted as the vector length field 1159B. The memory access 1120 instruction templates include the scale field 1160, and optionally the displacement field 1162A or the displacement scale field 1162B.

With regard to the generic vector friendly instruction format 1100, a full opcode field 1174 is shown including the format field 1140, the base operation field 1142, and the data element width field 1164. While one embodiment is shown where the full opcode field 1174 includes all of these fields, the full opcode field 1174 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 1174 provides the operation code (opcode).

The augmentation operation field 1150, the data element width field 1164, and the write mask field 1170 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention).
Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

Exemplary Specific Vector Friendly Instruction Format

Figure 12 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention. Figure 12 shows a specific vector friendly instruction format 1200 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 1200 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 11 into which the fields from Figure 12 map are illustrated.

It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 1200 in the context of the generic vector friendly instruction format 1100 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 1200 except where claimed. For example, the generic vector friendly instruction format 1100 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 1200 is shown as having fields of specific sizes.
By way of specific example, while the data element width field 1164 is illustrated as a one bit field in the specific vector friendly instruction format 1200, the invention is not so limited (that is, the generic vector friendly instruction format 1100 contemplates other sizes of the data element width field 1164).

The specific vector friendly instruction format 1200 includes the following fields listed below in the order illustrated in Figure 12A.

EVEX Prefix (Bytes 0-3) 1202 - is encoded in a four-byte form.

Format Field 1140 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 1140 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

The second-fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 1205 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), EVEX.X bit field (EVEX byte 1, bit [6] - X), and EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 1210 - this is the first part of the REX' field 1210 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 1215 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 1164 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 1220 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 1220 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form.
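The inverted (1s complement) encoding just described can be sketched in C as follows; encode_vvvv is a hypothetical helper for illustration, not part of any embodiment.

    #include <stdint.h>
    #include <assert.h>

    /* Encode a register number 0-15 into the 4-bit vvvv field in inverted
     * (1s complement) form: register 0 -> 1111b, register 15 -> 0000b. */
    uint8_t encode_vvvv(uint8_t reg) {
        assert(reg < 16);
        return (uint8_t)(~reg) & 0xF;
    }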
Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 1168 Class field (EVEX byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 1225 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 1152 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 1154 (EVEX byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 1210 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 1170 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real Opcode Field 1230 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 1240 (Byte 5) includes MOD field 1242, Reg field 1244, and R/M field 1246. As previously described, the MOD field's 1242 content distinguishes between memory access and non-memory access operations. The role of Reg field 1244 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand.
The role of R/M field 1246 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the scale field's 1160 content is used for memory address generation. SIB.xxx 1254 and SIB.bbb 1256 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 1162A (Bytes 7-10) - when MOD field 1242 contains 10, bytes 7-10 are the displacement field 1162A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 1162B (Byte 7) - when MOD field 1242 contains 01, byte 7 is the displacement factor field 1162B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 1162B is a reinterpretation of disp8; when using displacement factor field 1162B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement but with a much greater range). Such a compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 1162B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 1162B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). An illustrative arithmetic sketch of this disp8*N scaling is given below, after the register index field description.

Immediate field 1172 operates as previously described.

Full Opcode Field

Figure 12B is a block diagram illustrating the fields of the specific vector friendly instruction format 1200 that make up the full opcode field 1174 according to one embodiment of the invention. Specifically, the full opcode field 1174 includes the format field 1140, the base operation field 1142, and the data element width (W) field 1164. The base operation field 1142 includes the prefix encoding field 1225, the opcode map field 1215, and the real opcode field 1230.

Register Index Field

Figure 12C is a block diagram illustrating the fields of the specific vector friendly instruction format 1200 that make up the register index field 1144 according to one embodiment of the invention.
Specifically, the register index field 1144 includes the REX field 1205, the REX' field 1210, the MODR/M.reg field 1244, the MODR/M.r/m field 1246, the VVVV field 1220, the xxx field 1254, and the bbb field 1256.

Augmentation Operation Field

Figure 12D is a block diagram illustrating the fields of the specific vector friendly instruction format 1200 that make up the augmentation operation field 1150 according to one embodiment of the invention. When the class (U) field 1168 contains 0, it signifies EVEX.U0 (class A 1168A); when it contains 1, it signifies EVEX.U1 (class B 1168B). When U=0 and the MOD field 1242 contains 11 (signifying a no memory access operation), the alpha field 1152 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 1152A. When the rs field 1152A contains a 1 (round 1152A.1), the beta field 1154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 1154A. The round control field 1154A includes a one bit SAE field 1156 and a two bit round operation field 1158. When the rs field 1152A contains a 0 (data transform 1152A.2), the beta field 1154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data transform field 1154B. When U=0 and the MOD field 1242 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 1152 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 1152B and the beta field 1154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data manipulation field 1154C.

When U=1, the alpha field 1152 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 1152C. When U=1 and the MOD field 1242 contains 11 (signifying a no memory access operation), part of the beta field 1154 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 1157A; when it contains a 1 (round 1157A.1) the rest of the beta field 1154 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 1159A, while when the RL field 1157A contains a 0 (VSIZE 1157A.2) the rest of the beta field 1154 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 1159B (EVEX byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 1242 contains 00, 01, or 10 (signifying a memory access operation), the beta field 1154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 1159B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 1157B (EVEX byte 3, bit [4] - B).

Exemplary Register Architecture

Figure 13 is a block diagram of a register architecture 1300 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 1310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 1200 operates on this overlaid register file as described below.

In other words, the vector length field 1159B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 1159B operate on the maximum vector length.
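Returning to the compressed displacement described earlier, the disp8*N scaling and the 2^scale * index + base address generation can be sketched arithmetically as follows; the helper names are hypothetical and the sketch is illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /* Effective address = base + (index << scale) + disp8 * N, where N is
     * the memory-operand size in bytes determined by the hardware from the
     * full opcode at runtime. */
    int64_t effective_address(int64_t base, int64_t index, unsigned scale,
                              int8_t disp8, int64_t N) {
        return base + (index << scale) + (int64_t)disp8 * N;
    }

    int main(void) {
        /* With N = 64 (a 64-byte access), the one-byte factor reaches byte
         * offsets from -128*64 to 127*64 instead of -128 to 127. */
        printf("%lld\n", (long long)effective_address(0x1000, 2, 3, 4, 64));
        return 0;
    }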
Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 1200 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.

Write mask registers 1315 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 1315 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 1325 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1345, on which is aliased the MMX packed integer flat register file 1350 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality.
Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

Figure 14A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 14B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 14A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 14A, a processor pipeline 1400 includes a fetch stage 1402, a length decode stage 1404, a decode stage 1406, an allocation stage 1408, a renaming stage 1410, a scheduling (also known as a dispatch or issue) stage 1412, a register read/memory read stage 1414, an execute stage 1416, a write back/memory write stage 1418, an exception handling stage 1422, and a commit stage 1424.

Figure 14B shows processor core 1490 including a front end unit 1430 coupled to an execution engine unit 1450, and both are coupled to a memory unit 1470. The core 1490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 1430 includes a branch prediction unit 1432 coupled to an instruction cache unit 1434, which is coupled to an instruction translation lookaside buffer (TLB) 1436, which is coupled to an instruction fetch unit 1438, which is coupled to a decode unit 1440. The decode unit 1440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1440 or otherwise within the front end unit 1430). The decode unit 1440 is coupled to a rename/allocator unit 1452 in the execution engine unit 1450.

The execution engine unit 1450 includes the rename/allocator unit 1452 coupled to a retirement unit 1454 and a set of one or more scheduler unit(s) 1456. The scheduler unit(s) 1456 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1456 is coupled to the physical register file(s) unit(s) 1458.
Each of the physical register file(s) units 1458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1458 is overlapped by the retirement unit 1454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1454 and the physical register file(s) unit(s) 1458 are coupled to the execution cluster(s) 1460. The execution cluster(s) 1460 includes a set of one or more execution units 1462 and a set of one or more memory access units 1464. The execution units 1462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1456, physical register file(s) unit(s) 1458, and execution cluster(s) 1460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1464 is coupled to the memory unit 1470, which includes a data TLB unit 1472 coupled to a data cache unit 1474 coupled to a level 2 (L2) cache unit 1476. In one exemplary embodiment, the memory access units 1464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1472 in the memory unit 1470. The instruction cache unit 1434 is further coupled to a level 2 (L2) cache unit 1476 in the memory unit 1470.
By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1400 as follows: 1) the instruction fetch unit 1438 performs the fetch and length decoding stages 1402 and 1404; 2) the decode unit 1440 performs the decode stage 1406; 3) the rename/allocator unit 1452 performs the allocation stage 1408 and renaming stage 1410; 4) the scheduler unit(s) 1456 performs the schedule stage 1412; 5) the physical register file(s) unit(s) 1458 and the memory unit 1470 perform the register read/memory read stage 1414, and the execution cluster 1460 performs the execute stage 1416; 6) the memory unit 1470 and the physical register file(s) unit(s) 1458 perform the write back/memory write stage 1418; 7) various units may be involved in the exception handling stage 1422; and 8) the retirement unit 1454 and the physical register file(s) unit(s) 1458 perform the commit stage 1424.

The core 1490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture.
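To make the renaming options listed earlier (e.g., a reorder buffer with a retirement register file) concrete, here is a minimal C++ sketch of one such scheme: architectural destinations are mapped onto physical registers drawn from a free list, and a superseded mapping is recycled only when the renaming instruction retires. The structure names, sizes, and the simplified retirement model are assumptions for illustration, not details of core 1490:

```cpp
#include <cstdio>
#include <deque>
#include <unordered_map>

// Minimal rename-map sketch: each architectural destination is mapped to a
// physical register taken from a free list; a superseded mapping is parked
// in a reorder buffer and recycled only at retirement.
struct RenameState {
    std::unordered_map<int, int> map;   // architectural reg -> physical reg
    std::deque<int> free_list;          // available physical registers
    std::deque<int> reorder_buffer;     // superseded mappings, program order

    explicit RenameState(int num_phys) {
        for (int p = 0; p < num_phys; ++p) free_list.push_back(p);
    }

    // Rename the destination of an instruction that writes arch reg rd.
    int rename_dest(int rd) {
        int p = free_list.front();
        free_list.pop_front();
        auto it = map.find(rd);
        // The old mapping may still be read by in-flight consumers, so it
        // can only be freed once this instruction retires.
        if (it != map.end()) reorder_buffer.push_back(it->second);
        map[rd] = p;
        return p;
    }

    // Retire the oldest instruction, recycling the register it superseded.
    void retire() {
        if (reorder_buffer.empty()) return;
        free_list.push_back(reorder_buffer.front());
        reorder_buffer.pop_front();
    }
};

int main() {
    RenameState rs(8);
    // Two writes to the same architectural register get distinct physical
    // registers, eliminating the write-after-write hazard between them.
    std::printf("r1 -> p%d\n", rs.rename_dest(1));
    std::printf("r1 -> p%d\n", rs.rename_dest(1));
    rs.retire();  // the older mapping is now safe to recycle
    return 0;
}
```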
While the illustrated embodiment of the processor also includes separate instruction and data cache units 1434/1474 and a shared L2 cache unit 1476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

Figures 15A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 15A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1502 and with its local subset of the Level 2 (L2) cache 1504, according to embodiments of the invention. In one embodiment, an instruction decoder 1500 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1506 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1508 and a vector unit 1510 use separate register sets (respectively, scalar registers 1512 and vector registers 1514) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1506, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1504 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1504. Data read by a processor core is stored in its L2 cache subset 1504 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring datapath is 1012-bits wide per direction.

Figure 15B is an expanded view of part of the processor core in Figure 15A according to embodiments of the invention. Figure 15B includes an L1 data cache 1506A, part of the L1 cache 1504, as well as more detail regarding the vector unit 1510 and the vector registers 1514. Specifically, the vector unit 1510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1528), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1520, numeric conversion with numeric convert units 1522A-B, and replication with replication unit 1524 on the memory input. Write mask registers 1526 allow predicating resulting vector writes.
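The effect of such write masks can be modeled in scalar C++ as follows; the element type, the four-element vector length, and the merging (rather than zeroing) treatment of masked-off elements are illustrative assumptions rather than details of the VPU:

```cpp
#include <array>
#include <cstddef>
#include <cstdio>

// Scalar model of a write-masked vector add: each destination element is
// written only if its mask bit is set; masked-off elements keep their prior
// contents (a merging convention).
template <std::size_t N>
void masked_add(std::array<float, N>& dst, const std::array<float, N>& a,
                const std::array<float, N>& b, unsigned mask) {
    for (std::size_t i = 0; i < N; ++i)
        if (mask & (1u << i)) dst[i] = a[i] + b[i];  // predicated write
}

int main() {
    std::array<float, 4> dst = {0.0f, 0.0f, 0.0f, 0.0f};
    std::array<float, 4> a = {1, 2, 3, 4};
    std::array<float, 4> b = {10, 20, 30, 40};
    masked_add(dst, a, b, 0b0101u);             // only elements 0 and 2 written
    for (float v : dst) std::printf("%g ", v);  // prints: 11 0 33 0
    std::printf("\n");
    return 0;
}
```

Only the elements whose mask bits are set are updated, so a vector operation can be applied selectively without branching.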
Processor with integrated memory controller and graphics

Figure 16 is a block diagram of a processor 1600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 16 illustrate a processor 1600 with a single core 1602A, a system agent 1610, and a set of one or more bus controller units 1616, while the optional addition of the dashed lined boxes illustrates an alternative processor 1600 with multiple cores 1602A-N, a set of one or more integrated memory controller unit(s) 1614 in the system agent unit 1610, and special purpose logic 1608.

Thus, different implementations of the processor 1600 may include: 1) a CPU with the special purpose logic 1608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1602A-N being a large number of general purpose in-order cores. Thus, the processor 1600 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1600 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1606, and external memory (not shown) coupled to the set of integrated memory controller units 1614. The set of shared cache units 1606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1612 interconnects the integrated graphics logic 1608, the set of shared cache units 1606, and the system agent unit 1610/integrated memory controller unit(s) 1614, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1606 and cores 1602A-N.

In some embodiments, one or more of the cores 1602A-N are capable of multi-threading. The system agent 1610 includes those components coordinating and operating cores 1602A-N. The system agent unit 1610 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1602A-N and the integrated graphics logic 1608. The display unit is for driving one or more externally connected displays.

The cores 1602A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1602A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

Figures 17-21 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 17, shown is a block diagram of a system 1700 in accordance with one embodiment of the present invention. The system 1700 may include one or more processors 1710, 1715, which are coupled to a controller hub 1720. In one embodiment, the controller hub 1720 includes a graphics memory controller hub (GMCH) 1790 and an Input/Output Hub (IOH) 1750 (which may be on separate chips); the GMCH 1790 includes memory and graphics controllers to which are coupled memory 1740 and a coprocessor 1745; the IOH 1750 couples input/output (I/O) devices 1760 to the GMCH 1790. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1740 and the coprocessor 1745 are coupled directly to the processor 1710, and the controller hub 1720 is in a single chip with the IOH 1750.

The optional nature of additional processors 1715 is denoted in Figure 17 with broken lines. Each processor 1710, 1715 may include one or more of the processing cores described herein and may be some version of the processor 1600.

The memory 1740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1720 communicates with the processor(s) 1710, 1715 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1795.

In one embodiment, the coprocessor 1745 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1720 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1710, 1715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1745. Accordingly, the processor 1710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 1745. Coprocessor(s) 1745 accept and execute the received coprocessor instructions.
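This recognize-and-forward arrangement can be sketched as follows. The toy encoding in which the high bit marks a coprocessor instruction, and every identifier in the sketch, are invented for illustration and do not describe the actual processor 1710 or coprocessor 1745 interface:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative dispatch: the host executes general instructions itself and
// forwards recognized coprocessor instructions over an interconnect.
struct Coprocessor {
    void execute(uint32_t insn) {
        std::printf("coprocessor executes 0x%08x\n", insn);
    }
};

struct Processor {
    Coprocessor* copro;
    // Invented convention: the high bit marks a coprocessor instruction.
    static bool is_coprocessor_op(uint32_t insn) { return (insn >> 31) != 0; }
    void issue(uint32_t insn) {
        if (is_coprocessor_op(insn) && copro)
            copro->execute(insn);  // forwarded on the coprocessor bus
        else
            std::printf("host executes 0x%08x\n", insn);
    }
};

int main() {
    Coprocessor c;
    Processor p{&c};
    p.issue(0x00000012u);  // general-purpose instruction: executed locally
    p.issue(0x80000034u);  // coprocessor instruction: forwarded
    return 0;
}
```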
Referring now to Figure 18, shown is a block diagram of a first more specific exemplary system 1800 in accordance with an embodiment of the present invention. As shown in Figure 18, multiprocessor system 1800 is a point-to-point interconnect system, and includes a first processor 1870 and a second processor 1880 coupled via a point-to-point interconnect 1850. Each of processors 1870 and 1880 may be some version of the processor 1600. In one embodiment of the invention, processors 1870 and 1880 are respectively processors 1710 and 1715, while coprocessor 1838 is coprocessor 1745. In another embodiment, processors 1870 and 1880 are respectively processor 1710 and coprocessor 1745.

Processors 1870 and 1880 are shown including integrated memory controller (IMC) units 1872 and 1882, respectively. Processor 1870 also includes as part of its bus controller units point-to-point (P-P) interfaces 1876 and 1878; similarly, second processor 1880 includes P-P interfaces 1886 and 1888. Processors 1870, 1880 may exchange information via a point-to-point (P-P) interface 1850 using P-P interface circuits 1878, 1888. As shown in Figure 18, IMCs 1872 and 1882 couple the processors to respective memories, namely a memory 1832 and a memory 1834, which may be portions of main memory locally attached to the respective processors.

Processors 1870, 1880 may each exchange information with a chipset 1890 via individual P-P interfaces 1852, 1854 using point-to-point interface circuits 1876, 1894, 1886, 1898. Chipset 1890 may optionally exchange information with the coprocessor 1838 via a high-performance interface 1839. In one embodiment, the coprocessor 1838 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1890 may be coupled to a first bus 1816 via an interface 1896. In one embodiment, first bus 1816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 18, various I/O devices 1814 may be coupled to first bus 1816, along with a bus bridge 1818 which couples first bus 1816 to a second bus 1820. In one embodiment, one or more additional processor(s) 1815, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1816. In one embodiment, second bus 1820 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1820 including, for example, a keyboard and/or mouse 1822, communication devices 1827 and a storage unit 1828 such as a disk drive or other mass storage device which may include instructions/code and data 1830, in one embodiment. Further, an audio I/O 1824 may be coupled to the second bus 1820. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 18, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 19, shown is a block diagram of a second more specific exemplary system 1900 in accordance with an embodiment of the present invention.
Like elements in Figures 18 and 19 bear like reference numerals, and certain aspects of Figure 18 have been omitted from Figure 19 in order to avoid obscuring other aspects of Figure 19.

Figure 19 illustrates that the processors 1870, 1880 may include integrated memory and I/O control logic ("CL") 1872 and 1882, respectively. Thus, the CL 1872, 1882 include integrated memory controller units and include I/O control logic. Figure 19 illustrates that not only are the memories 1832, 1834 coupled to the CL 1872, 1882, but also that I/O devices 1914 are also coupled to the control logic 1872, 1882. Legacy I/O devices 1915 are coupled to the chipset 1890.

Referring now to Figure 20, shown is a block diagram of a SoC 2000 in accordance with an embodiment of the present invention. Similar elements in Figure 16 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 20, an interconnect unit(s) 2002 is coupled to: an application processor 2010 which includes a set of one or more cores 1602A-N and shared cache unit(s) 1606; a system agent unit 1610; a bus controller unit(s) 1616; an integrated memory controller unit(s) 1614; a set of one or more coprocessors 2020 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 2030; a direct memory access (DMA) unit 2032; and a display unit 2040 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 2020 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 1830 illustrated in Figure 18, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 21 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 21 shows that a program in a high level language 2102 may be compiled using an x86 compiler 2104 to generate x86 binary code 2106 that may be natively executed by a processor with at least one x86 instruction set core 2116. The processor with at least one x86 instruction set core 2116 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
The x86 compiler 2104 represents a compiler that is operable to generate x86 binary code 2106 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 2116. Similarly, Figure 21 shows that the program in the high level language 2102 may be compiled using an alternative instruction set compiler 2108 to generate alternative instruction set binary code 2110 that may be natively executed by a processor without at least one x86 instruction set core 2114 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 2112 is used to convert the x86 binary code 2106 into code that may be natively executed by the processor without an x86 instruction set core 2114. This converted code is not likely to be the same as the alternative instruction set binary code 2110, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 2112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 2106.
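The outer loop of such an instruction converter can be sketched with a toy pair of instruction sets. Everything below (both opcode sets, the one-to-many mapping, and all identifiers) is invented to illustrate the translation idea and bears no relation to the actual x86 or alternative instruction sets:

```cpp
#include <cstdio>
#include <vector>

// Toy binary translation: map each "source" opcode to one or more "target"
// opcodes. A real converter (static or dynamic) is vastly more involved.
enum SrcOp { SRC_ADD, SRC_MUL };
enum TgtOp { TGT_ADD, TGT_SHIFT_ADD_LOOP };

std::vector<TgtOp> convert(const std::vector<SrcOp>& src) {
    std::vector<TgtOp> out;
    for (SrcOp op : src) {
        switch (op) {
            case SRC_ADD:
                out.push_back(TGT_ADD);  // direct one-to-one equivalent
                break;
            case SRC_MUL:
                // A source op with no direct target equivalent is emulated
                // by a sequence of target ops (condensed to one marker here).
                out.push_back(TGT_SHIFT_ADD_LOOP);
                break;
        }
    }
    return out;
}

int main() {
    std::vector<SrcOp> program = {SRC_ADD, SRC_MUL};
    std::vector<TgtOp> translated = convert(program);
    std::printf("translated %zu source ops into %zu target ops\n",
                program.size(), translated.size());
    return 0;
}
```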
Components, features, and details described for any of Figures 3-9 may also optionally apply to any of Figures 1-2. Components, features, and details described for any of the processors disclosed herein (e.g., processor 100) may optionally apply to any of the methods disclosed herein (e.g., method 220), which in embodiments may optionally be performed by and/or with such processors. Any of the processors described herein (e.g., processor 100) in embodiments may optionally be included in any of the systems disclosed herein (e.g., any of the systems of Figures 17-20). Any of the processors described herein (e.g., processor 100) in embodiments may optionally have any of the microarchitectures shown herein (e.g., Figures 14B, 15A/B). In addition, any of the instructions disclosed herein may in some embodiments optionally have any of the features or details of the instruction formats shown herein (e.g., the formats described for Figures 10A/B/C, 11A/B, 12A/B/C/D).

In the description and claims, the terms "coupled" and/or "connected," along with their derivatives, may have been used. These terms are not intended as synonyms for each other. Rather, in embodiments, "connected" may be used to indicate that two or more elements are in direct physical and/or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical and/or electrical contact with each other. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. For example, an execution unit may be coupled with a register and/or a decode unit through one or more intervening components. In the figures, arrows are used to show connections and couplings.

The components disclosed herein and the methods depicted in the preceding figures may be implemented with logic, modules, or units that include hardware (e.g., transistors, gates, circuitry, etc.), firmware (e.g., a non-volatile memory storing microcode or control signals), software (e.g., stored on a non-transitory computer readable storage medium), or a combination thereof. In some embodiments, the logic, modules, or units may include at least some or predominantly a mixture of hardware and/or firmware potentially combined with some optional software.

The term "and/or" may have been used. As used herein, the term "and/or" means one or the other or both (e.g., A and/or B means A or B or both A and B).

In the description above, specific details have been set forth in order to provide a thorough understanding of the embodiments. However, other embodiments may be practiced without some of these specific details. The scope of the invention is not to be determined by the specific examples provided above, but only by the claims below. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form and/or without detail in order to avoid obscuring the understanding of the description. Where considered appropriate, reference numerals, or terminal portions of reference numerals, have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar or the same characteristics, unless specified or clearly apparent otherwise.

Certain operations may be performed by hardware components, or may be embodied in machine-executable or circuit-executable instructions, that may be used to cause and/or result in a machine, circuit, or hardware component (e.g., a processor, portion of a processor, circuit, etc.) programmed with the instructions performing the operations. The operations may also optionally be performed by a combination of hardware and software. A processor, machine, circuit, or hardware may include specific or particular circuitry or other logic (e.g., hardware potentially combined with firmware and/or software) that is operative to execute and/or process the instruction and store a result in response to the instruction.

Some embodiments include an article of manufacture (e.g., a computer program product) that includes a machine-readable medium. The medium may include a mechanism that provides, for example stores, information in a form that is readable by the machine. The machine-readable medium may provide, or have stored thereon, an instruction or sequence of instructions, that if and/or when executed by a machine are operative to cause the machine to perform and/or result in the machine performing one or more operations, methods, or techniques disclosed herein.

In some embodiments, the machine-readable medium may include a tangible and/or non-transitory machine-readable storage medium.
For example, the non-transitory machine-readable storage medium may include a floppy diskette, an optical storage medium, an optical disk, an optical data storage device, a CD-ROM, a magnetic disk, a magneto-optical disk, a read only memory (ROM), a programmable ROM (PROM), an erasable-and-programmable ROM (EPROM), an electrically-erasable-and-programmable ROM (EEPROM), a random access memory (RAM), a static-RAM (SRAM), a dynamic-RAM (DRAM), a Flash memory, a phase-change memory, a phase-change data storage material, a non-volatile memory, a non-volatile data storage device, a non-transitory memory, a non-transitory data storage device, or the like. The non-transitory machine-readable storage medium does not consist of a transitory propagated signal. In some embodiments, the storage medium may include a tangible medium that includes solid-state matter or material, such as, for example, a semiconductor material, a phase change material, a magnetic solid material, a solid data storage material, etc. Alternatively, non-tangible transitory computer-readable transmission media, such as, for example, electrical, optical, acoustical or other forms of propagated signals - such as carrier waves, infrared signals, and digital signals - may optionally be used.

Examples of suitable machines include, but are not limited to, a general-purpose processor, a special-purpose processor, a digital logic circuit, an integrated circuit, or the like. Still other examples of suitable machines include a computer system or other electronic device that includes a processor, a digital logic circuit, or an integrated circuit. Examples of such computer systems or electronic devices include, but are not limited to, desktop computers, laptop computers, notebook computers, tablet computers, netbooks, smartphones, cellular phones, servers, network devices (e.g., routers and switches), Mobile Internet devices (MIDs), media players, smart televisions, nettops, set-top boxes, and video game controllers.

Reference throughout this specification to "one embodiment," "an embodiment," "one or more embodiments," or "some embodiments," for example, indicates that a particular feature may be included in the practice of the invention but is not necessarily required to be. Similarly, in the description various features are sometimes grouped together in a single embodiment, Figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the invention.

EXAMPLE EMBODIMENTS

The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments.

Example 1 is a processor including a plurality of packed data registers, and a decode unit to decode an instruction. The instruction to indicate a source packed data that is to include a plurality of adjoining data elements, the instruction to indicate a number of data elements, and the instruction to indicate a destination storage location.
The processor also includes an execution unit coupled with the plurality of packed data registers, and coupled with the decode unit. The execution unit, in response to the instruction, to store a result packed data in the destination storage location. The result packed data to have a plurality of lanes. Each of the lanes of the result packed data to store a different non-overlapping set of the indicated number of adjoining data elements of the source packed data aligned with a least significant end of the respective lane. The different non-overlapping sets of the indicated number of the adjoining data elements in adjoining lanes of the result packed data to be separated from one another by at least one most significant data element position of the less significant lane.

Example 2 includes the processor of Example 1, in which the decode unit is to decode the instruction that is to have an immediate to indicate the number of the data elements.

Example 3 includes the processor of any one of Examples 1 to 2, in which the decode unit is to decode the instruction that is to indicate the number of data elements through an indication of a number of structures that each are to include a same number of data elements.

Example 4 includes the processor of any one of Examples 1 to 3, in which the decode unit is to decode the instruction that is to indicate the source packed data in system memory. Also, optionally in which the execution unit, in response to the instruction, is to load at least each of the different non-overlapping sets of the data elements from the system memory with a single load operation.

Example 5 includes the processor of Example 4, in which the decode unit is to decode the instruction that is to indicate a mask that is to include a plurality of mask elements. Also, optionally in which the execution unit, in response to the instruction, is to load from the system memory only data elements of the source packed data that correspond to unmasked mask elements of the mask.

Example 6 includes the processor of any one of Examples 1 to 5, in which the decode unit is to decode the instruction that is to indicate the source packed data which is to include 32-bit single precision floating point data elements, and that is to indicate a 512-bit destination packed data register. Also, optionally in which the execution unit, in response to the instruction, is to store the result packed data, which is to be a 512-bit result packed data, and which is to have two 256-bit lanes. Also, optionally in which each of the 256-bit lanes is to store the corresponding different non-overlapping set of the adjoining 32-bit single precision floating point data elements aligned with the least significant end of the respective 256-bit lane. Also, optionally in which the different non-overlapping sets of the adjoining 32-bit single precision floating point data elements in the adjoining 256-bit lanes of the 512-bit result packed data are to be separated from one another by the at least one most significant 32-bit data element position of the less significant 256-bit lane.

Example 7 includes the processor of any one of Examples 1 to 5, in which the decode unit is to decode the instruction that is to indicate the source packed data which is to include 32-bit single precision floating point data elements, and that is to indicate an at least 256-bit destination packed data register.
Also, optionally in which the execution unit, in response to the instruction, is to store the result packed data, which is to be an at least 256-bit result packed data, and which is to have at least two 128-bit lanes. Also, optionally in which each of the at least two 128-bit lanes is to store the corresponding different non-overlapping set of the adjoining 32-bit single precision floating point data elements aligned with the least significant end of the respective 128-bit lane. Also, optionally in which the different non-overlapping sets of the adjoining 32-bit single precision floating point data elements in the adjoining 128-bit lanes of the at least 256-bit result packed data are to be separated from one another by the at least one most significant 32-bit data element position of the less significant 128-bit lane.

Example 8 includes the processor of any one of Examples 1 to 5, in which the decode unit is to decode the instruction that is to indicate the source packed data that is to include 32-bit data elements.

Example 9 includes the processor of any one of Examples 1 to 5, in which the decode unit is to decode the instruction that is to indicate the source packed data that is to include 64-bit data elements.

Example 10 includes the processor of any one of Examples 1 to 5, in which the execution unit, in response to the instruction, is to store the result packed data in which each of the lanes is a 128-bit lane.

Example 11 includes the processor of any one of Examples 1 to 5, in which the execution unit, in response to the instruction, is to store the result packed data in which each of the lanes is a 256-bit lane.

Example 12 includes the processor of any one of Examples 1 to 11, in which the decode unit is to decode the instruction that is to have a field to indicate a size of the lanes of the result packed data.

Example 13 includes the processor of any one of Examples 1 to 11, in which it is to be implicit to the instruction to align each different non-overlapping set of the indicated number of adjoining data elements with the least significant end of the respective lane.

Example 14 is a method performed by a processor including receiving an instruction at the processor. The instruction indicating a source packed data that includes a plurality of adjoining data elements, the instruction indicating a number of data elements, and the instruction indicating a destination storage location. The method also includes storing a result packed data in the destination storage location in response to the instruction. The result packed data having a plurality of lanes. Each of the lanes of the result packed data storing a different non-overlapping set of the indicated number of adjoining data elements of the source packed data aligned with a least significant end of the respective lane. The different non-overlapping sets of the indicated number of the adjoining data elements in adjoining lanes of the result packed data separated from one another by at least one most significant data element position of the less significant lane.

Example 15 includes the method of Example 14, in which receiving includes receiving the instruction that has an immediate that indicates the number of the data elements.
Also, optionally in which storing includes storing the result packed data having the plurality of lanes that are one of 128-bit lanes and 256-bit lanes.

Example 16 includes the method of any one of Examples 14 to 15, in which receiving includes receiving the instruction indicating the source packed data that includes a first array of multiple element structures and a second array of multiple element structures. Also, optionally in which storing includes storing the result packed data in which the first array of the multiple element structures is stored in a least significant lane of the result packed data and the second array of the multiple element structures is stored in an adjoining more significant lane of the result packed data with the at least one most significant data element position of the least significant lane separating the first array of the multiple element structures and the second array of the multiple element structures.

Example 17 includes the method of Example 14, in which receiving includes receiving the instruction indicating the source packed data that includes a first array of adjoining pairs of real and imaginary complex numbers and a second array of adjoining pairs of real and imaginary complex numbers. Also, optionally in which storing includes storing the result packed data in which the first array of the adjoining pairs of the real and the imaginary complex numbers is stored in a least significant lane of the result packed data and the second array of the adjoining pairs of the real and the imaginary complex numbers is stored in an adjoining more significant lane of the result packed data with at least two most significant data element positions of the least significant lane separating the first and second arrays of the adjoining pairs of the real and the imaginary complex numbers.

Example 18 includes the method of Example 14, in which receiving includes receiving the instruction indicating the source packed data that includes a first array of three adjoining pairs of 32-bit real and 32-bit imaginary complex numbers and a second array of three adjoining pairs of 32-bit real and 32-bit imaginary complex numbers. Also, optionally in which storing includes storing an at least 512-bit result packed data in which the first array of the three adjoining pairs of the 32-bit real and the 32-bit imaginary complex numbers is stored in a least significant 256-bit lane of the at least 512-bit result packed data and the second array of the three adjoining pairs of the 32-bit real and the 32-bit imaginary complex numbers is stored in an adjoining more significant 256-bit lane of the at least 512-bit result packed data with at least two most significant 32-bit data element positions of the least significant 256-bit lane separating the first and second arrays of the three adjoining pairs of the 32-bit real and the 32-bit imaginary complex numbers.

Example 19 includes the method of Example 14, in which receiving includes receiving the instruction indicating the source packed data that includes a first adjoining pair of 32-bit real and 32-bit imaginary complex numbers and a second adjoining pair of 32-bit real and 32-bit imaginary complex numbers.
Also, optionally in which storing includes storing an at least 256-bit result packed data in which the first adjoining pair of the 32-bit real and the 32-bit imaginary complex numbers is stored in a least significant 128-bit lane of the at least 256-bit result packed data and the second adjoining pair of the 32-bit real and the 32-bit imaginary complex numbers is stored in an adjoining more significant 128-bit lane of the at least 256-bit result packed data with at least two most significant 32-bit data element positions of the least significant 128-bit lane separating the first and second adjoining pairs of the 32-bit real and the 32-bit imaginary complex numbers.

Example 20 includes the method of any one of Examples 14 to 19, in which receiving includes receiving the instruction that indicates the source packed data in system memory. Also further including optionally loading at least each of the different non-overlapping sets of the data elements from the system memory with a single load operation.

Example 21 includes the method of any one of Examples 14 to 19, in which storing includes storing the result in which each different non-overlapping set of the indicated number of the adjoining data elements being aligned with the least significant end of the respective lane is implicit to the instruction.

Example 22 is a computer system to process instructions including an interconnect, and a processor coupled with the interconnect. The processor to receive an instruction that is to indicate a source packed data that is to include a plurality of adjoining data elements, the instruction to indicate a number of data elements, and the instruction to indicate a destination packed data register. The processor, in response to the instruction, to store a result packed data in the destination packed data register, the result packed data to have a plurality of lanes. Each of the lanes of the result packed data to store a different non-overlapping set of the indicated number of adjoining data elements of the source packed data aligned with a least significant end of the respective lane. The different non-overlapping sets of the indicated number of the adjoining data elements in adjoining lanes of the result packed data to be separated from one another by at least one most significant data element position of the less significant lane. The computer system also includes a dynamic random access memory (DRAM) coupled with the interconnect. The DRAM storing a set of instructions of an algorithm. The set of the instructions of the algorithm is to expect the different non-overlapping sets of the indicated number of the adjoining data elements to be aligned with the least significant ends of the respective lanes.

Example 23 includes the computer system of Example 22, in which the processor is to store the result packed data in which the lanes are one of 128-bit lanes and 256-bit lanes.

Example 24 is an article of manufacture including a non-transitory machine-readable storage medium. The non-transitory machine-readable storage medium storing an instruction. The instruction to indicate a source packed data that is to include a plurality of adjoining data elements, the instruction to indicate a number of data elements, and the instruction to indicate a destination packed data register. The instruction if executed by a machine is to cause the machine to perform operations including store a result packed data in the destination packed data register. The result packed data to have a plurality of lanes.
Each of the lanes of the result packed data to store a different non-overlapping set of the indicated number of adjoining data elements of the source packed data that are to be aligned with a least significant end of the respective lane. The different non-overlapping sets of the indicated number of the adjoining data elements in adjoining lanes of the result packed data to be separated from one another by at least one most significant data element position of the less significant lane.

Example 25 includes the article of manufacture of Example 24, in which the instruction is to have an immediate to indicate the number of the data elements. Also, optionally in which the lanes are to be one of 128-bit lanes and 256-bit lanes. Also, optionally in which it is to be implicit to the instruction to align each different non-overlapping set of the indicated number of adjoining data elements with the least significant end of the respective lane.

Example 26 includes the processor of any one of Examples 1 to 13, further including an optional branch prediction unit to predict branches, and an optional instruction prefetch unit, coupled with the branch prediction unit, the instruction prefetch unit to prefetch instructions including the instruction. The processor may also optionally include an optional level 1 (L1) instruction cache coupled with the instruction prefetch unit, the L1 instruction cache to store instructions, an optional L1 data cache to store data, and an optional level 2 (L2) cache to store data and instructions. The processor may also optionally include an instruction fetch unit coupled with the decode unit, the L1 instruction cache, and the L2 cache, to fetch the instruction, in some cases from one of the L1 instruction cache and the L2 cache, and to provide the instruction to the decode unit.
The processor may also optionally include a register rename unit to rename registers, an optional scheduler to schedule one or more operations that have been decoded from the instruction for execution, and an optional commit unit to commit execution results of the instruction.

Example 27 includes a system-on-chip that includes at least one interconnect, the processor of any one of Examples 1 to 13 coupled with the at least one interconnect, an optional graphics processing unit (GPU) coupled with the at least one interconnect, an optional digital signal processor (DSP) coupled with the at least one interconnect, an optional display controller coupled with the at least one interconnect, an optional memory controller coupled with the at least one interconnect, an optional wireless modem coupled with the at least one interconnect, an optional image signal processor coupled with the at least one interconnect, an optional Universal Serial Bus (USB) 3.0 compatible controller coupled with the at least one interconnect, an optional Bluetooth 4.1 compatible controller coupled with the at least one interconnect, and an optional wireless transceiver controller coupled with the at least one interconnect.

Example 28 is a processor or other apparatus operative to perform the method of any one of Examples 14 to 21.

Example 29 is a processor or other apparatus that includes means for performing the method of any one of Examples 14 to 21.

Example 30 is a processor or other apparatus that includes any combination of modules and/or units and/or logic and/or circuitry and/or means operative to perform the method of any one of Examples 14 to 21.

Example 31 is an optionally non-transitory and/or tangible machine-readable medium, which optionally stores or otherwise provides instructions including a first instruction, the first instruction if and/or when executed by a processor, computer system, electronic device, or other machine, is operative to cause the machine to perform the method of any one of Examples 14 to 21.

Example 32 is a processor or other apparatus substantially as described herein.

Example 33 is a processor or other apparatus that is operative to perform any method substantially as described herein.

Example 34 is a processor or other apparatus that is operative to perform any partition into lanes instruction substantially as described herein.

Example 35 is a computer system or other electronic device that includes a processor having a decode unit operative to decode instructions of a first instruction set. The processor also has one or more execution units. The electronic device also includes a storage device coupled with the processor. The storage device is operative to store a first instruction, which may be any of the instructions substantially as disclosed herein, and which is to be of a second different instruction set. The storage device is also operative to store instructions to convert the first instruction into one or more instructions of the first instruction set. The one or more instructions of the first instruction set, when performed by the processor, are operative to cause the processor to store a result that would be stored by the first instruction (e.g., store any of the results of the instructions as described elsewhere herein).
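As a reference-model illustration of the lane-partitioning behavior recited in Example 1 (and the method of Example 14), the following C++ sketch copies each non-overlapping set of n adjoining source elements to the least significant end of its own result lane. The int element type, the chosen sizes, and the zeroing of the separating most significant element positions are illustrative assumptions; the examples above only require that the sets be separated, not that the separator positions hold any particular value:

```cpp
#include <cstdio>
#include <vector>

// Reference-model sketch of the partition-into-lanes operation: each result
// lane receives the next non-overlapping set of n adjoining source elements,
// aligned with the lane's least significant end. Element index 0 is the
// least significant position. Separator positions are zeroed here (an
// assumed convention).
std::vector<int> partition_into_lanes(const std::vector<int>& src,
                                      int n,           // elements per set
                                      int lane_elems,  // elements per lane
                                      int num_lanes) {
    std::vector<int> result(lane_elems * num_lanes, 0);
    for (int lane = 0; lane < num_lanes; ++lane)
        for (int i = 0; i < n; ++i)
            result[lane * lane_elems + i] = src[lane * n + i];
    return result;
}

int main() {
    // Example: three adjoining two-element structures (e.g., real/imaginary
    // pairs) from a packed source, one structure per four-element lane.
    std::vector<int> src = {1, 2, 3, 4, 5, 6};
    std::vector<int> r = partition_into_lanes(src, /*n=*/2,
                                              /*lane_elems=*/4,
                                              /*num_lanes=*/3);
    for (int v : r) std::printf("%d ", v);  // prints: 1 2 0 0 3 4 0 0 5 6 0 0
    std::printf("\n");
    return 0;
}
```

Each structure lands at the least significant end of its lane, with the remaining most significant element positions of the less significant lane separating adjoining sets, as the examples describe.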
In a pipelined processor where instructions are pre-decoded prior to being stored in a cache, an incorrectly pre-decoded instruction is detected during execution in the pipeline. The corresponding instruction is invalidated in the cache, and the instruction is forced to evaluate as a branch instruction. In particular, the branch instruction is evaluated as "mispredicted not taken" with a branch target address of the incorrectly pre-decoded instruction's address. This, together with the invalidated cache line, causes the incorrectly pre-decoded instruction to be re-fetched from memory with a precise address. The re-fetched instruction is then correctly pre-decoded, written to the cache, and executed.
1. A method of correcting instructions that are not properly pre-decoded, comprising:
detecting a pre-decoding error; and
in response to detecting the error, forcing a branch correction procedure with the target address of the instruction that was not correctly pre-decoded.
2. The method of claim 1, further comprising invalidating said incorrectly pre-decoded instruction in a cache prior to forcing said branch correction procedure.
3. The method of claim 2, further comprising fetching the instruction from memory in response to the branch correction procedure.
4. The method of claim 3, further comprising pre-decoding the instruction and storing the instruction and pre-decode information associated with the instruction in the cache.
5. The method of claim 1, wherein forcing the branch correction procedure comprises forcing the branch condition "true" and forcing the branch prediction "false".
6. The method of claim 1, wherein forcing the branch correction procedure with the target address of the instruction that is not properly pre-decoded comprises storing the address in a target address register and forcing a register branch instruction correction.
7. The method of claim 6, wherein storing the address in the target address register comprises, if the target address register is loaded with the result of an arithmetic operation on the contents of two operand registers, storing in the operand registers values calculated to generate the address from the arithmetic operation.
8. The method of claim 1, wherein forcing the branch correction procedure with the target address of the incorrectly pre-decoded instruction comprises forcing a PC-relative branch instruction correction with a branch offset of zero.
9. A processor comprising:
a pre-decoder interposed in an instruction fetch path, the pre-decoder generating pre-decode information associated with an instruction; and
a pre-decode error detector and corrector that detects incorrect pre-decode information associated with the instruction and forces the instruction to be executed as a mispredicted branch with the address of the instruction as the branch target address.
10. The processor of claim 9, further comprising a cache memory storing the instruction and the pre-decode information, and wherein the pre-decode error detector and corrector further invalidates the instruction in the cache when the pre-decode error is detected.
11. The processor of claim 9, further comprising a branch predictor and a branch correction path responsive to a conditional branch that was predicted not taken but evaluated taken, to provide a corrected branch target address for instruction fetching.
12. The processor of claim 11, wherein said pre-decode error detector and corrector utilizes said branch correction path to force said instruction that is not correctly pre-decoded to be executed as a branch instruction that was erroneously predicted not taken.
13. A method of correcting instructions that are not properly pre-decoded, comprising:
detecting a pre-decoding error; and
in response to detecting the error, correcting the pre-decoding error by fetching the instruction from memory and pre-decoding the instruction.
14. The method of claim 13, wherein fetching the instruction from memory comprises:
invalidating the instruction in a cache; and
after invalidating the instruction, attempting to fetch the instruction from the cache.
15. The method of claim 13, wherein fetching the instruction from memory comprises evaluating the instruction as a branch,
wherein the address of the instruction is the branch target address.
16. The method of claim 15, wherein evaluating the instruction as a branch comprises evaluating the instruction as a branch that is erroneously predicted not taken.
Pre-decoding error handling via branch correction

Technical field

The present invention relates generally to the field of processors, and in particular to a method of correcting erroneously pre-decoded data associated with an instruction by forcing a branch correction procedure with a target address of the instruction.

Background

Microprocessors perform computational tasks in a variety of applications. Improved processor performance is almost always desirable, to allow for faster operation and/or increased functionality through software changes. In many embedded applications (e.g., portable electronic devices), conserving power is also an important goal in processor design and implementation.

Many modern processors use a pipeline structure in which successive instructions are overlapped in execution to increase total processor throughput. Maintaining smooth execution through the pipeline is critical to achieving high performance. Many modern processors also use a hierarchical memory, in which fast, on-chip caches store local copies of recently accessed data and instructions. One pipeline optimization technique known in the art is pre-decoding instructions. That is, instructions are examined as they are read from memory, partially decoded, and some information about the instructions (referred to as pre-decode information) is stored in a cache along with the associated instructions. When an instruction is later fetched from the cache, the pre-decode information is also fetched and used to assist in fully decoding the instruction.

Sometimes the pre-decode information contains an error. These errors can be detected during the decode stages in the pipeline. When an error is discovered, an exception occurs, and the pipeline must be flushed and all instructions, including the erroneously pre-decoded instruction, must be re-fetched. This process incurs a significant penalty in both performance and power management.

Summary of the invention

The present invention, in one embodiment, relates to a method of correcting instructions that are not properly pre-decoded. A pre-decoding error is detected. In response to detecting the error, a branch correction procedure is forced with the target address of the instruction that was not correctly pre-decoded.

In another embodiment, the invention relates to a processor. The processor includes a pre-decoder interposed in an instruction fetch path, the pre-decoder generating pre-decode information associated with a particular instruction. The processor also includes a pre-decode error detector and corrector that detects incorrect pre-decode information associated with the instruction and forces the instruction to be executed as a mispredicted branch with the address of the instruction as the branch target address.

Brief description of the drawings

Figure 1 is a functional block diagram of a processor.

Figure 2 is a functional block diagram of a portion of a memory, predecoder, instruction cache, and processor pipeline.

Figure 3 is a functional block diagram of branch correction logic.

Detailed description

A pipelined processor architecture exploits parallelism by overlapping the execution of multiple sequential instructions, each of which has multiple execution steps. Typical instruction steps include instruction fetch, decode, execute, and write back. Each step is performed in the pipeline by one or more pipe stages, comprising logic and memory elements such as latches or registers. The pipe stages are joined together to form the pipeline.
Instructions enter the pipeline and are processed sequentially through the stages. New instructions enter the pipeline before previous instructions complete execution, so multiple instructions may be processed within the pipeline at any given time. This ability to exploit parallelism among instructions in a sequential instruction stream contributes significantly to improved processor performance. Under ideal conditions, and in a processor that completes each pipe stage in one cycle, following the brief initial process of filling the pipeline, an instruction may complete execution every cycle. Numerous real-world constraints prevent this ideal condition from being sustained; however, keeping the pipeline full and flowing smoothly is a common goal of processor design.
Modern processors also typically employ a memory hierarchy that places small amounts of fast, expensive memory close to the processor, backed by large amounts of slower, inexpensive memory. A typical processor memory hierarchy may comprise registers in the processor at the top level; backed by one or more on-chip caches (e.g., SRAM); possibly an off-chip cache, referred to as a level-2 or L2 cache (e.g., SRAM); main memory (commonly DRAM); disk storage (magnetic media); and tape or CD (magnetic or optical media) at the lowest level. In embedded applications, such as portable electronic devices, there may be limited, if any, disk storage, and hence main memory (commonly limited in size) may be the lowest level in the memory hierarchy.
Figure 1 depicts a functional block diagram of a representative processor 10, employing both a pipelined architecture and a hierarchical memory structure. The processor 10 executes instructions in an execution pipeline 12 according to control logic 14. The pipeline includes various registers or latches 16, organized in pipe stages, and one or more arithmetic logic units (ALUs) 18. A general purpose register (GPR) file 20 provides registers comprising the top of the memory hierarchy. The pipeline fetches instructions from an instruction cache 22, with memory addressing and permissions managed by an instruction-side translation lookaside buffer (ITLB) 24, and with some initial decoding of instructions performed by a predecoder 21. Data is accessed from a data cache 26, with memory addressing and permissions managed by a main translation lookaside buffer (TLB) 28. In various embodiments, the ITLB may comprise a copy of part of the TLB. Alternatively, the ITLB and TLB may be integrated. Similarly, in various embodiments of the processor 10, the I-cache 22 and D-cache 26 may be integrated, or unified. Accesses that miss in the I-cache 22 and/or the D-cache 26 cause an access to main (off-chip) memory 32, under the control of a memory interface 30. The processor 10 may include an input/output (I/O) interface 34, controlling access to various peripheral devices 36. Those of skill in the art will recognize that numerous variations of the processor 10 are possible. For example, the processor 10 may include a second-level (L2) cache for either or both the I and D caches. In addition, one or more of the functional blocks depicted in the processor 10 may be omitted from a particular embodiment.
One known technique for improving processor performance and reducing power consumption is known as pre-decoding. The predecoder 21 comprises logic interposed in the path between main memory 32 and the instruction cache 22.
Some of the instructions fetched from memory may be pre-decoded, with the pre-decode information generated and written to the I-cache 22 along with the instructions. The pre-decode information may assist one or more decode stages in decoding the instructions when they are fetched from the cache for execution. For example, a predecoder may determine the length of variable-length instructions, and write pre-decode information into the cache that assists a decode pipe stage in retrieving the correct number of bits for a variable-length instruction. A variety of information may be pre-decoded and stored in the I-cache 22.
The predecoder 21 improves performance by removing logic from one or more decode pipe stages, allowing the logic to run earlier and possibly permitting a shorter machine cycle time. The predecoder 21 also reduces power consumption by performing the pre-decode operation only once. Since the hit rate of the I-cache 22 is commonly 90% or higher, considerable power savings may be realized by not having to perform the logic operations every time an instruction is executed from the I-cache 22.
Occasionally, the predecoder 21 makes errors. For example, if data such as parameters or immediate values are stored in memory along with the instructions, a pre-decode operation that determines instruction lengths by simply counting bytes from the beginning of a cache line may erroneously identify bytes of such parameters or immediate values as instructions further down the line. Other types of errors are possible, including random bit errors in the predecoder 21 or in the I-cache 22. Such errors are discovered in one or more decode stages, and commonly cause an exception, requiring the pipeline to be flushed and restarted, incurring a significant performance and power penalty.
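By way of illustration only, the following minimal C sketch shows how such a byte-counting predecoder can be misled by data embedded in a cache line. The two-bit length rule, the structure layout, and all names are hypothetical and do not describe any particular instruction set.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical ISA: the low two bits of the first halfword distinguish
 * 32-bit from 16-bit encodings. The predecoder marks instruction
 * starts by counting from the beginning of the cache line. */
typedef struct {
    uint16_t halfword[8];  /* raw contents of one fetched cache line   */
    uint8_t  is_start[8];  /* pre-decode bit: an instruction starts here */
} line_t;

static void predecode_line(line_t *line) {
    for (size_t i = 0; i < 8; ) {
        line->is_start[i] = 1;
        /* If a parameter or immediate value stored in the line happens
         * to match the length rule, every subsequent boundary is
         * derived incorrectly - the error the decode stages catch. */
        i += ((line->halfword[i] & 0x3u) == 0x3u) ? 2 : 1;
    }
}
```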
There are a number of ways to correct pre-decode errors that do not incur an exception and the attendant flushing of the pipeline 12. Figure 2 is a functional block diagram depicting portions of the processor 10 and the pipeline 12. Figure 2 also depicts an instruction cache address register (ICAR) 48, which indexes the I-cache 22. The address loaded into the ICAR 48 is generated and/or selected by next fetch address calculation circuit 46. When an instruction is fetched from memory 32 (or an L2 cache), the predecoder 21 pre-decodes the instruction, and the pre-decode information 23 is stored in the instruction cache 22 along with the corresponding instruction.
In the pipeline 12, an instruction and its associated pre-decode information 23 are fetched from the I-cache 22 and at least partially decoded by decode logic 40, with the results stored in a DCD1 pipe stage latch 42. In many processors 10, the DCD1 pipe stage contains a branch predictor. In the event the branch predictor predicts a branch will be taken, the pipe stage may calculate the branch target address and provide it to the next fetch address calculation logic 46 along branch prediction address path 44. This is one example of an address path from a pipe stage to the next fetch address calculation logic 46 (a branch predicted not taken simply allows sequential instruction fetching to continue).
In an exemplary embodiment, the fetched and partially decoded instruction then flows to pipe stage DCD2, which contains incorrect pre-decode detection and correction logic 50. If an error in the pre-decode information is detected, the DCD2 pipe stage could signal an exception and flush the pipeline 12, as discussed above.
Alternatively, the pre-decode error may be corrected by re-fetching the instruction from memory 32. One way to accomplish this is to invalidate the instruction in the cache 22 and provide the instruction address along path 54 to the next fetch address circuit 46. This address is then loaded into the ICAR 48. Because the instruction has been invalidated in the cache 22, the cache access will miss, causing an access to main memory 32. The instruction fetched from main memory 32 is then correctly pre-decoded by the predecoder 21 and placed back into the instruction cache 22. The instruction may then be re-fetched from the cache 22, along with correct pre-decode information 23.
The next fetch address calculation logic 46 commonly lies on a critical path of most processor designs, and thus limits machine cycle time. Adding path 54 for the addresses of incorrectly pre-decoded instructions adds logic to the next fetch address calculation 46, increasing machine cycle time and degrading performance. This performance penalty is particularly unfortunate considering that the pre-decode information 23 is only rarely incorrect. Optimizing performance for a rare case at the expense of the common case commonly reduces overall processor performance.
In accordance with one embodiment of the present invention, the incorrect pre-decode path 54 to the next fetch address calculator 46 is eliminated (as indicated by the dashed line in Figure 2). Rather than providing a dedicated path to the next fetch address calculator 46, the incorrect pre-decode detection and correction logic 50 causes the pipeline 12 to evaluate the incorrectly pre-decoded instruction as a branch instruction. The pre-decode correction logic 50 may change the semantics of the incorrectly pre-decoded instruction to those of a branch instruction, or alternatively may set a flag that is carried along the pipeline, indicating to the execute stages that the instruction is to be treated as a branch.
In particular, the incorrectly pre-decoded instruction is evaluated as a branch that was predicted not taken but evaluates as taken, with the branch target address being the address of the incorrectly pre-decoded instruction itself. At some point down the pipeline 12 (depending on implementation details), the instruction reaches a branch resolution stage 56 that evaluates the branch-taken condition and generates the branch target address. The branch target address is provided to the next fetch address calculator 46 along branch correction path 58. The branch condition evaluation logic, the branch target address generation logic, and the associated control of the branch correction path 58 and the next fetch address calculator 46 already exist in any pipelined processor 10 that predicts branch behavior.
Figure 3 is a functional diagram of one possible implementation of the branch correction logic. Within the EXE pipe stage latch 56 are a branch-predicted-taken (BPT) bit 60 and a branch condition evaluation (COND) bit 62. The BPT bit 60 is one if a branch predictor earlier in the pipeline 12 predicted the branch taken, and zero if the branch was predicted not taken. The COND bit 62 is one if the branch evaluates as taken, and zero if the branch evaluates as not taken.
These two bits may be XORed, as indicated by gate 66, to generate a multiplexer select or similar control signal provided to the next fetch address calculator 46, indicating that the branch correction path 58 should be selected as the next fetch address. Table 1 below depicts the truth table of the XOR 66.

Table 1: Branch prediction resolution truth table

BPT 60 | COND 62 | XOR 66 | Resolution
0      | 0       | 0      | predicted not taken, not taken: no correction
0      | 1       | 1      | predicted not taken, taken: branch correction
1      | 0       | 1      | predicted taken, not taken: branch correction
1      | 1       | 0      | predicted taken, taken: no correction

The condition evaluation bit 62 may additionally be used as the select input to a multiplexer 68, selecting between the sequential address and the calculated branch target address 64 to generate the address placed on the branch correction path 58.
In accordance with one embodiment of the present invention, to handle an incorrectly pre-decoded instruction, the BPT bit 60 is set or forced to zero, and the COND bit 62 is set or forced to one, forcing the "branch mispredicted as not taken" case. In this case, the calculated branch target address 64 is directed to the next fetch address circuit 46 via the branch correction path 58.
According to one embodiment of the invention, the incorrectly pre-decoded instruction is evaluated as a PC-relative branch instruction with a branch displacement of zero. When this instruction is evaluated in the EXE pipe stage 56, the calculated branch target address will comprise the address of the erroneously pre-decoded instruction (offset by zero). In another embodiment of the invention, the incorrectly pre-decoded instruction is evaluated as a register branch instruction, and additionally the branch target address register is loaded with the address of the incorrectly pre-decoded instruction. Where the branch target address register is loaded via an arithmetic operation, operand registers may be loaded so as to generate the address of the incorrectly pre-decoded instruction. Those of skill in the art will readily recognize numerous other ways an incorrectly pre-decoded instruction may be evaluated as a branch instruction having the instruction's own address as a target address, all of which fall within the scope of the present invention.
Referring again to Figure 2, the instruction forced to evaluate as a branch mispredicted as not taken resolves at the EXE stage 56, and a branch target address comprising the address of the incorrectly pre-decoded instruction is placed on the branch correction path 58. This address is selected by the next fetch address calculator 46 and loaded into the ICAR 48, and an instruction fetch is performed on the I-cache 22.
Because the incorrect pre-decode detection and correction logic 50 invalidated the cache line containing the incorrectly pre-decoded instruction, the I-cache 22 access will miss, forcing the instruction to be fetched from memory 32 (or an L2 cache). The instruction is then correctly pre-decoded by the predecoder 21 and placed in the I-cache 22 along with correct pre-decode information 23. The instruction and pre-decode information 23 may then be re-fetched from the I-cache 22, correctly decoded, and correctly executed in the pipeline 12. An offset error due to, for example, data interleaved with instructions will not recur in the predecoder 21, because the memory access is to the precise address of the instruction, and not to the beginning of a cache line.
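The following C sketch models the Figure 3 logic and the forcing of the BPT and COND bits described above. The field and function names are ours; a hardware implementation would of course realize this as gates and latches rather than code.

```c
#include <stdint.h>

/* Illustrative model of the Figure 3 branch correction logic. */
typedef struct {
    uint8_t  bpt;       /* BPT bit 60: branch predicted taken       */
    uint8_t  cond;      /* COND bit 62: branch evaluated as taken   */
    uint32_t seq_addr;  /* sequential (fall-through) fetch address  */
    uint32_t target;    /* calculated branch target address 64      */
} exe_latch;

/* XOR gate 66: a correction is needed whenever prediction != outcome. */
static int needs_correction(const exe_latch *l) { return l->bpt ^ l->cond; }

/* Multiplexer 68: the address driven onto branch correction path 58. */
static uint32_t correction_address(const exe_latch *l) {
    return l->cond ? l->target : l->seq_addr;
}

/* Handling an incorrectly pre-decoded instruction: force the
 * "mispredicted as not taken" case with the instruction's own address
 * as the target, so the existing correction path re-fetches it (the
 * line having been invalidated in the I-cache). */
static void force_predecode_correction(exe_latch *l, uint32_t insn_addr) {
    l->bpt    = 0;
    l->cond   = 1;
    l->target = insn_addr;
}
```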
It should be noted that the above description of the memory access is conceptual. In any given implementation, the access to memory 32 may proceed in parallel with the I-cache 22 access; the I-cache 22 access may be predicted to miss and hence be avoided altogether; the results of the memory 32 access may be fed directly into the pipeline 12 while being written to the I-cache 22 in parallel; and so on. In general, the present invention encompasses all memory and/or cache performance optimizations that may deviate operationally from the above description.
Although the present invention has been described herein with respect to particular features, aspects and embodiments thereof, it will be apparent that numerous variations, modifications, and other embodiments are possible within the broad scope of the present invention, and accordingly, all variations, modifications and embodiments are to be regarded as being within the scope of the invention. The present embodiments are therefore to be construed in all aspects as illustrative and not restrictive.
Disclosed embodiments relate to an interleaved pipeline of floating-point (FP) adders. In one example, a processor is to execute an instruction specifying an opcode and locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, the opcode indicating that execution circuitry, for each FP element (M, N) of the destination matrix, is to launch K instances of a pipeline having: a first, MULTIPLY stage, during which an FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix are multiplied; concurrently, an EXPDIFF stage, to determine an exponent difference between the product and a previous FP value of the element (M, N) of the destination matrix; and a second, ADD-BYPASS stage, to accumulate the product with the previous FP value and, concurrently, bypass the accumulated sum to a subsequent pipeline instance.
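As a rough behavioral illustration of this interleaving (not the disclosed hardware itself), the following C sketch computes one destination element with K chained pipeline instances. Plain double arithmetic stands in for the EXPDIFF/alignment and rounding hardware, and all names are ours.

```c
/* Behavioral model: instance k multiplies in cycle k and
 * adds/bypasses in cycle k+1, so K instances complete in K+1 cycles. */
static double fp_element_pipeline(const double *a_row, const double *b_col,
                                  double c_prev, int k_dim) {
    double acc = c_prev;            /* first instance reads C[m][n]   */
    for (int k = 0; k < k_dim; ++k) {
        /* cycle k: MULTIPLY stage (concurrently, EXPDIFF compares the
         * exponents of the product and of acc)                       */
        double product = a_row[k] * b_col[k];
        /* cycle k+1: ADD-BYPASS stage - accumulate, and bypass the sum
         * directly to instance k+1 instead of writing it back        */
        acc += product;
    }
    return acc;                     /* stored to the destination element */
}
```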
1. A processor comprising:
decode circuitry to decode an instruction, the instruction specifying locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and specifying an opcode, the opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K pipeline instances over K cycles, each pipeline instance comprising:
in a first, multiply stage, generating a product of an FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix;
concurrently, in an exponent difference stage, determining an exponent difference between the product and a previous FP value of the element (M, N) of the destination matrix; and
in a second, add-bypass stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, and, if rounding is determined to be required, adding one in a next pipeline instance;
wherein, before the accumulation, the product is aligned by shifting a mantissa of the product by the exponent difference; and, concurrently in the add-bypass stage, bypassing the accumulated sum to a subsequent instance of the pipeline; and
the execution circuitry to execute the decoded instruction according to the opcode.
2. The processor of claim 1, wherein the execution circuitry is to complete execution of the K instances of the pipeline in K plus one cycles.
3. The processor of claim 1 or 2, wherein, during the multiply stage, the execution circuitry is to perform rounding of the generated product when necessary.
4. The processor of claim 1 or 2, wherein, during the add-bypass stage, the execution circuitry is to perform saturation of the accumulated sum when necessary.
5. The processor of claim 1 or 2, wherein M is one of 1, 2, 3, 4, 8, and 16, N is one of 1, 2, 3, 4, 8, and 16, and K is one of 1, 2, 3, 4, 8, and 16.
6. The processor of claim 1 or 2, wherein the first source matrix, the second source matrix, and the destination matrix are each located in one of: a set of vector registers of a register file, a set of tile registers, and a plurality of memory locations representing a matrix.
7. The processor of claim 1 or 2, wherein the execution circuitry is to save state after executing the K pipeline instances for each element (M, N) of the destination matrix, and, in the event of a fault, to use the saved state to continue execution after recovering from the fault.
8. The processor of claim 1 or 2, wherein the exponent difference pipeline stage and the add-bypass pipeline stage of a first executed instance of the pipeline receive the previous FP value of the element (M, N) of the destination matrix from the location of the destination matrix specified by the instruction, and the exponent difference pipeline stage and the add-bypass pipeline stage of subsequently executed instances of the pipeline receive the previous FP value of the element (M, N) of the destination matrix as a bypass from the add-bypass stage of an immediately preceding instance of the pipeline.
9. The processor of claim 1 or 2, wherein the instruction further specifies a multi-bit write mask, each bit of the multi-bit write mask either masking or otherwise allowing writes to the corresponding element (M, N) of the destination matrix.
10. The processor of claim 9, wherein each masked element is to be either zeroed or merged.
11. A method to be performed by a processor, the method comprising:
decoding, using decode circuitry, an instruction specifying locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and specifying an opcode, the opcode instructing execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, to launch K instances of a pipeline over K cycles; and
executing, using execution circuitry, the decoded instruction according to the opcode;
wherein each instance of the pipeline comprises:
in a first, multiply stage, generating a product of an FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix;
concurrently, in an exponent difference stage, determining an exponent difference between the product and a previous FP value of the element (M, N) of the destination matrix; and
in a second, add-bypass stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, wherein, before the accumulation is performed, the product is aligned by shifting a mantissa of the product by the exponent difference; and, concurrently in the add-bypass stage, bypassing the accumulated sum to a subsequent instance of the pipeline.
12. The method of claim 11, wherein the execution circuitry is to complete execution of the K instances of the pipeline in K plus one cycles.
13. The method of claim 11 or 12, wherein, during the multiply stage, the execution circuitry is to perform rounding of the generated product when necessary.
14. The method of claim 11 or 12, wherein, during the add-bypass stage, the execution circuitry is to perform saturation of the accumulated sum when necessary.
15. The method of claim 11 or 12, wherein M is one of 1, 2, 3, 4, 8, and 16, N is one of 1, 2, 3, 4, 8, and 16, and K is one of 1, 2, 3, 4, 8, and 16.
16. The method of claim 11 or 12, wherein the first source matrix, the second source matrix, and the destination matrix are each located in one of: a set of vector registers of a register file, a set of tile registers, and a plurality of memory locations representing a matrix.
17. The method of claim 11 or 12, wherein the execution circuitry saves state after executing the K pipeline instances for each element (M, N) of the destination matrix, and, in the event of a fault, uses the saved state to continue execution after recovering from the fault.
18. The method of claim 11 or 12, wherein the exponent difference pipeline stage and the add-bypass pipeline stage of a first executed instance of the pipeline receive the previous FP value of the element (M, N) of the destination matrix from the location of the destination matrix specified by the instruction, and the exponent difference pipeline stage and the add-bypass pipeline stage of subsequently executed instances of the pipeline receive the previous FP value of the element (M, N) of the destination matrix as a bypass from the add-bypass stage of an immediately preceding instance of the pipeline.
19. The method of claim 11 or 12, wherein the instruction further specifies a multi-bit write mask, each bit of the multi-bit write mask either masking or otherwise allowing writes to the corresponding element (M, N) of the destination matrix.
20. The method of claim 19, wherein each masked element is to be either zeroed or merged.
Interleaved pipeline of floating-point adders

Technical field
The technical field relates generally to computer processor architecture, and, more specifically, to systems and methods for executing an interleaved pipeline of floating-point adders.

Background
Matrices are increasingly important in many computing tasks such as machine learning and other bulk data processing. Deep learning is a class of machine learning algorithms. Deep learning architectures, such as deep neural networks, have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, and drug design.
Inference and training, two tools used for deep learning, are tending toward low-precision arithmetic. Maximizing the throughput of deep learning algorithms and computations may assist in meeting the needs of deep learning processors, for example, those performing deep learning in a data center.
Matrix-matrix multiplication (also known as GEMM, or general matrix multiplication) is a common compute-heavy operation on today's processors. Special hardware for matrix multiplication (e.g., GEMM) is a good option for improving the peak compute (and energy efficiency) of certain applications, such as deep learning. Some of these applications, including deep learning, can operate on input data elements with relatively few bits without losing accuracy, as long as the output elements have enough bits (i.e., more than the inputs).
A common operation performed in the context of machine learning is the matrix (tile) floating-point fused multiply-accumulate (FMA) instruction, be it single-precision or double-precision. Improving the power and performance of FMA instructions is desirable, as it improves the power and performance of applications that use those instructions, including machine learning training and inference applications.

Description of the drawings
The present invention is illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:
Figure 1A illustrates an embodiment of configured tiles;
Figure 1B illustrates an embodiment of configured tiles;
Figure 2 illustrates several examples of matrix storage;
Figure 3 illustrates an embodiment of a system utilizing a matrix (tile) operations accelerator;
Figures 4 and 5 show different embodiments of how memory is shared using a matrix operations accelerator;
Figure 6 illustrates an embodiment of a matrix multiply accumulate operation using tiles ("TMMA");
Figure 7 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction;
Figure 8 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction;
Figure 9 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction;
Figure 10 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction;
Figure 11 illustrates power-of-two-sized SIMD implementations according to an embodiment, in which the accumulators use input sizes that are larger than the inputs to the multipliers;
Figure 12 illustrates an embodiment of a system utilizing matrix operations circuitry;
Figure 13 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles;
Figure 14 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles;
Figure 15 illustrates an example of a matrix expressed in row-major format and column-major format;
Figure 16 illustrates an example of usage of matrices (tiles);
Figure 17 illustrates an embodiment of a method of usage of matrices (tiles);
Figure 18 illustrates support for configuration of the usage of tiles according to an embodiment;
Figure 19 illustrates an embodiment of a description of the matrices (tiles) to be supported;
Figures 20(A)-(D) illustrate examples of register(s);
Figures 21A-21B illustrate floating-point multiply-accumulate pipelines;
Figure 21A illustrates a basic floating-point multiply-accumulate pipeline;
Figure 21B illustrates an interleaved matrix (tile) floating-point multiply-accumulate pipeline according to some embodiments;
Figure 22A is a block diagram illustrating execution of a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction according to some embodiments;
Figure 22B is a block diagram illustrating use of an interleaved pipeline to execute a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction according to some embodiments;
Figure 23 illustrates an embodiment of a processor executing a flow to process a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction;
Figure 24 is a block diagram illustrating a format of a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction according to some embodiments;
Figures 25A-25B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments;
Figure 25A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments;
Figure 25B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments;
Figure 26A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments;
Figure 26B is a block diagram illustrating the fields of the specific vector friendly instruction format that make up a full opcode field according to one embodiment;
Figure 26C is a block diagram illustrating the fields of the specific vector friendly instruction format that make up a register index field according to one embodiment;
Figure 26D is a block diagram illustrating the fields of the specific vector friendly instruction format that make up an augmentation operation field according to one embodiment;
Figure 27 is a block diagram of a register architecture according to one embodiment;
Figure 28A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments;
Figure 28B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments;
Figures 29A-29B illustrate block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;
Figure 29A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments;
Figure 29B is an expanded view of part of the processor core in Figure 29A according to embodiments;
Figure 30 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments;
Figures 31-34 are block diagrams of exemplary computer architectures;
Figure 31 shows a block diagram of a system in accordance with an embodiment of the present invention;
Figure 32 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention;
Figure 33 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention;
Figure 34 is a block diagram of a system-on-a-chip (SoC) in accordance with an embodiment of the present invention; and
Figure 35 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments.

Detailed description
In the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
References in the specification to "one embodiment," "an embodiment," "an exemplary embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
In many mainstream processors, handling matrices is a difficult and/or instruction-intensive task. For example, rows of a matrix could be put into a plurality of packed data (e.g., SIMD or vector) registers, and then operated on individually. For example, an add of two 8x2 matrices may require a load or gather into four packed data registers, depending upon data sizes.
Then a first add of the packed data registers corresponding to a first row from each matrix is performed, and a second add of the packed data registers corresponding to a second row from each matrix is performed. The resulting packed data registers are then scattered back to memory. While this scenario may be acceptable for small matrices, it is often unacceptable with larger matrices.

Discussion
Described herein are mechanisms to support matrix operations in computer hardware such as central processing units (CPUs), graphics processing units (GPUs), and accelerators. The matrix operations utilize two-dimensional (2-D) data structures representing one or more packed regions of memory, such as registers. Throughout this description, these 2-D data structures are referred to as tiles. Note that a matrix may be smaller than a tile (using less than all of a tile), or may utilize a plurality of tiles (the matrix being larger than the size of any one tile). Throughout this description, matrix (tile) language is used to indicate operations performed using tiles that impact a matrix; whether or not the matrix is larger than any one tile is not usually relevant.
Each tile may be acted upon by different operations, such as those detailed herein, including, but not limited to: matrix (tile) multiplication, tile add, tile subtract, tile diagonal, tile zero, tile transpose, tile dot product, tile broadcast, tile row broadcast, tile column broadcast, tile multiplication, tile multiplication and accumulation, tile move, etc. Additionally, support for operators such as scale and/or bias may be used with these operations, or in the future to support non-numeric applications, e.g., OpenCL "local memory", data compression/decompression, etc. Also described herein are instructions for executing a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction.
Portions of storage (such as (non-volatile and volatile) memory, registers, caches, etc.) are arranged into tiles of different horizontal and vertical dimensions. For example, a tile may have a horizontal dimension of 4 (e.g., four rows of a matrix) and a vertical dimension of 8 (e.g., 8 columns of the matrix). Typically, the horizontal dimension is related to element sizes (e.g., 2-, 4-, 8-, 16-, 32-, 64-, 128-bit, etc.). Multiple data types (single-precision floating point, double-precision floating point, integer, etc.) may be supported.

Exemplary usage of configured tiles
In some embodiments, tile parameters can be configured. For example, a given tile may be configured to provide tile options. Exemplary tile options include, but are not limited to: a number of rows of the tile, a number of columns of the tile, whether the tile is VALID, and whether the tile consists of a pair of equal-sized tiles.
Figure 1A illustrates an embodiment of configured tiles. As shown, 4 kB of application memory 102 has stored thereon four 1 kB tiles: tile t0 104, tile t1 106, tile t2 108, and tile t3 110. In this example, the 4 tiles do not consist of pairs, and each has elements arranged in rows and columns. Tile t0 104 and tile t1 106 have K rows and N columns of 4-byte elements (e.g., single-precision data), where K=8 and N=32. Tile t2 108 and tile t3 110 have K rows and N/2 columns of 8-byte elements (e.g., double-precision data).
As double-precision operands are twice the width of single-precision operands, this configuration is consistent with a palette used to provide tile options, supplying at least 4 names with a total storage of at least 4 kB. In operation, the tiles can be loaded from and stored to memory using load and store operations. The amount of available application memory, as well as the size, number, and configuration of available tiles, varies depending upon the instruction encoding scheme used.
Figure 1B illustrates an embodiment of configured tiles. As shown, 4 kB of application memory 122 has stored thereon 2 pairs of 1 kB tiles, the first pair being tile t4L 124 and tile t4R 126, and the second pair being tile t5L 128 and tile t5R 130. As shown, the pairs of tiles are divided into a left tile and a right tile. In other embodiments, the pairs of tiles are divided into an even tile and an odd tile. In this example, each of the 4 tiles has elements arranged in rows and columns. Tile t4L 124 and tile t4R 126 have K rows and N columns of 4-byte elements (e.g., single-precision floating-point data), where K=8 and N=32. Tile t5L 128 and tile t5R 130 have K rows and N/2 columns of 8-byte elements (e.g., double-precision floating-point data). As double-precision operands are twice the width of single-precision operands, this configuration is consistent with a palette used to provide tile options, supplying at least 2 names with a total storage of at least 4 kB. The four tiles of Figure 1A use 4 names, each naming a 1 kB tile, whereas the 2 pairs of tiles in Figure 1B can use 2 names to specify the paired tiles. In some embodiments, tile instructions accept the name of a paired tile as an operand. In operation, the tiles can be loaded from and stored to memory using load and store operations. The amount of available application memory, as well as the size, number, and configuration of available tiles, varies depending upon the instruction encoding scheme used.
In some embodiments, tile parameters are definable. For example, a "palette" is used to provide tile options. Exemplary options include, but are not limited to: the number of tile names, the number of bytes in a row of storage, the number of rows and columns in a tile, etc. For example, the maximum "height" (number of rows) of a tile may be defined as:

Tile Max Rows = Architected Storage / (Number of Palette Names * Bytes per Row)

As such, an application can be written such that a fixed usage of names will be able to take advantage of different storage sizes across implementations.
Configuration of tiles is done using a tile configuration ("TILECONFIG") instruction, where a particular tile usage is defined in a selected palette. This declaration includes the number of tile names to be used, the requested number of rows and columns per name (tile), and, in some embodiments, the requested datatype of each tile. In some embodiments, consistency checks are performed during the execution of the TILECONFIG instruction to determine that it matches the restrictions of the palette entry.
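A minimal C sketch of the row-bound formula above, using illustrative numbers consistent with the 4 kB examples; the function and parameter names are ours.

```c
#include <stdint.h>

/* Maximum tile height permitted by a palette entry. */
static uint32_t tile_max_rows(uint32_t architected_storage_bytes,
                              uint32_t num_palette_names,
                              uint32_t bytes_per_row) {
    return architected_storage_bytes / (num_palette_names * bytes_per_row);
}

/* E.g., 4096 bytes of storage, 4 names, 64-byte rows -> 16 rows max,
 * consistent with the 1 kB tiles (K=8 rows used) of Figure 1A. */
```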
Exemplary tile storage types
Figure 2 illustrates several examples of matrix storage. In (A), a tile is stored in memory. As shown, each "row" consists of four packed data elements. To get to the next "row", a stride value is used. Note that rows may be consecutively stored in memory. Strided memory accesses allow access of one row and then the next when the tile storage does not map the underlying memory array row width.
Tile loads from memory and stores to memory are typically strided accesses from the application memory to packed rows of data. Exemplary TILELOAD and TILESTORE instructions, or other instruction references to application memory as a TILE operand in load-op instructions, are, in some embodiments, restartable to handle (up to) 2*rows of page faults, unmasked floating-point exceptions, and/or interrupts per instruction.
In (B), a matrix is stored in a tile comprised of a plurality of registers, such as packed data registers (single instruction, multiple data (SIMD) or vector registers). In this example, the tile is overlaid on three physical registers. Typically, consecutive registers are used; however, this need not be the case.
In (C), a matrix is stored in a tile in non-register storage accessible to fused multiply accumulate (FMA) circuits used in tile operations. This storage may be inside an FMA, or adjacent to it. Additionally, in some embodiments discussed below, the storage may be for a data element, and not an entire row or tile.
The supported parameters for the TMMA architecture are reported via CPUID. In some embodiments, the list of information includes a maximum height and a maximum SIMD dimension. Configuring the TMMA architecture requires specifying the dimensions for each tile, the element size for each tile, and a palette identifier. This configuration is done by executing the TILECONFIG instruction.
Successful execution of a TILECONFIG instruction enables subsequent TILE operators. A TILERELEASEALL instruction clears the tile configuration and disables the TILE operations (until the next TILECONFIG instruction executes). In some embodiments, XSAVE, XSTORE, etc. are used in context switching using tiles. In some embodiments, 2 XCR0 bits are used in XSAVE, one for TILECONFIG metadata and one bit corresponding to actual tile payload data.
TILECONFIG not only configures the tile usage, but also sets a state variable indicating that the program is in a region of code with tiles configured. An implementation may enumerate restrictions on other instructions that can be used with a tile region, such as no usage of an existing register set, etc.
Exiting a tile region is typically done with the TILERELEASEALL instruction. It takes no parameters and swiftly invalidates all tiles (indicating that the data no longer needs any saving or restoring) and clears the internal state corresponding to being in a tile region.
In some embodiments, tile operations will zero any rows and any columns beyond the dimensions specified by the tile configuration. For example, as each row is written, a tile operation will zero the data beyond the configured number of columns (factoring in the size of the elements). For example, with 64-byte rows and a tile configured with 10 rows and 12 columns, an operation writing FP32 elements would write each of the first 10 rows with 12*4 bytes of output/result data and zero the remaining 4*4 bytes in each row. Tile operations also fully zero any rows after the first 10 configured rows. With 1 kB tiles having 64-byte rows, there are 16 rows, so in this example the last 6 rows would also be zeroed.
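The zeroing rule can be sketched in C as follows, assuming 64-byte physical rows and the 10-row by 12-column FP32 configuration of the example; the function shape and names are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Writes one row of a tile configured as cfg_rows x cfg_cols FP32
 * elements over 64-byte rows (e.g., 10 x 12, with 16 physical rows
 * in a 1 kB tile), zeroing everything beyond the configured extent. */
static void tile_write_row_fp32(uint8_t storage[][64], const float *src,
                                size_t row, size_t cfg_rows,
                                size_t cfg_cols, size_t phys_rows) {
    size_t data_bytes = cfg_cols * sizeof(float);        /* 12*4 = 48 */
    memcpy(storage[row], src, data_bytes);               /* result data */
    memset(storage[row] + data_bytes, 0, 64 - data_bytes); /* 4*4 = 16 */
    if (row == cfg_rows - 1)        /* after the last configured row,  */
        for (size_t r = cfg_rows; r < phys_rows; ++r)  /* rows 10..15 */
            memset(storage[r], 0, 64);                 /* fully zeroed */
}
```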
In some embodiments, when loading data, a context restore instruction (e.g., XRSTOR) forces the data beyond the configured rows of a tile to be maintained as zero. If there is no valid configuration, all rows are zeroed. An XRSTOR of tile data can load garbage in the columns beyond those configured. It should not be possible for XRSTOR to clear beyond the configured number of columns, because there is no element width associated with the tile configuration.
A context save (e.g., XSAVE) exposes the entire TILE storage area when writing it to memory. If XRSTOR loaded garbage data into the rightmost part of a tile, that data will be saved by XSAVE. XSAVE will write zeros for rows beyond the number specified for each tile.
In some embodiments, tile instructions are restartable. The operations that access memory are allowed to restart after page faults. The computational instructions that deal with floating-point operations also allow for unmasked floating-point exceptions, with the masking of the exceptions controlled by a control and/or status register.
To support restarting instructions after these events, the instructions store information in the start registers detailed below.
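By way of illustration, a strided, restartable tile load such as storage example (A) above might behave like the following C sketch; the names and the memcpy realization are ours, with the start_row argument standing in for the start-register state just described.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Row r of the tile begins at base + r*stride in application memory,
 * so rows need not be contiguous. The checkpoint lets the operation
 * resume after a page fault on a later row without redoing earlier
 * rows. */
static void tile_load_strided(uint8_t *tile, const uint8_t *base,
                              size_t stride, size_t rows,
                              size_t row_bytes, size_t *start_row) {
    for (size_t r = *start_row; r < rows; ++r) {
        memcpy(tile + r * row_bytes, base + r * stride, row_bytes);
        *start_row = r + 1;  /* checkpoint: resume here after a fault */
    }
}
```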
Matrix (tile) operation systems

Exemplary hardware support
Figure 3 illustrates an embodiment of a system utilizing a matrix (tile) operations accelerator. In this illustration, a host processor/processing system 301 communicates commands 311 (e.g., matrix manipulation operations such as arithmetic or matrix manipulation operations, or load and store operations) to a matrix operations accelerator 307. However, this is shown this way for discussion purposes only. As detailed later, this accelerator 307 may be a part of a processing core. Typically, commands 311 that are tile manipulation operator instructions refer to tiles in register-register ("reg-reg") or register-memory ("reg-mem") format. Other commands such as TILESTORE, TILELOAD, TILECONFIG, etc., do not perform data operations on a tile. Commands may be decoded instructions (e.g., micro-ops) or macro-instructions for the accelerator 307 to handle.
In this example, a coherent memory interface 303 is coupled to the host processor/processing system 301 and the matrix operations accelerator 307 such that they can share memory. Figures 4 and 5 show different embodiments of how memory is shared using a matrix operations accelerator. As shown in Figure 4, the host processor 401 and the matrix operations accelerator circuitry 405 share the same memory 403. Figure 5 illustrates an embodiment in which the host processor 501 and the matrix operations accelerator 505 do not share memory, but can access each other's memory. For example, the processor 501 can access tile memory 507 and utilize its host memory 503 as normal. Similarly, the matrix operations accelerator 505 can access the host memory 503, but more typically uses its own memory 507. Note that these memories may be of different types.
In some embodiments, tiles are supported using an overlay over physical registers. For example, a tile may utilize 16 1,024-bit registers, 32 512-bit registers, etc., depending on the implementation. In some embodiments, the matrix operations utilize 2-dimensional (2-D) data structures representing one or more packed regions of memory such as registers. Throughout this description, these 2-D data structures are referred to as tiles or tile registers.
In some embodiments, the matrix operations accelerator 307 includes a plurality of FMAs 309 coupled to data buffers 305 (in some implementations, one or more of these buffers 305 are stored in the FMAs of the grid as shown). The data buffers 305 buffer tiles loaded from memory and/or tiles to be stored to memory (e.g., using tile load or tile store instructions). Data buffers may be, for example, a plurality of registers. Typically, these FMAs are arranged as a grid of chained FMAs 309 which are able to read and write tiles. In this example, the matrix operations accelerator 307 is to perform a matrix multiply operation using tiles T0, T1, and T2. At least one of the tiles is housed in the FMA grid 309. In some embodiments, all tiles in an operation are stored in the FMA grid 309. In other embodiments, only a subset is stored in the FMA grid 309. As shown, T1 is housed and T0 and T2 are not. Note that A, B, and C refer to the matrices of these tiles, which may or may not take up the entire space of a tile.
Figure 6 illustrates an embodiment of a matrix multiply accumulate operation using tiles ("TMMA").
The number of rows in the matrix (tile A 601) matches the number of serial (chained) FMAs comprising the computation's latency. An implementation is free to recirculate on a grid of smaller height, but the computation remains the same.
The source/destination vector comes from a tile of N rows (tile C 605), and the grid of FMAs 611 performs N vector-matrix operations, resulting in a complete instruction performing a matrix multiplication of tiles. Tile B 603 is the other vector source and supplies "broadcast" terms to the FMAs in each stage.
In operation, in some embodiments, the elements of matrix B (stored in tile B 603) are spread across the rectangular grid of FMAs. Matrix A (stored in tile A 601) has its elements of a row transposed to match up with the columnar dimension of the rectangular grid of FMAs. At each FMA in the grid, an element of A and an element of B are multiplied and added to the incoming summand (from above in the figure), and the outgoing sum is passed to the next row of FMAs (or to the final output).
The latency of a single step is proportional to K (the row height of matrix B), and dependent TMMAs typically have enough source-destination rows (either in a single tile or across tiles) to hide that latency. An implementation may also split the SIMD (packed data element) dimension M (the row height of matrix A) across time steps, but this simply changes the constant that K is multiplied by. When a program specifies a smaller K than the maximum enumerated by the TMMA, an implementation is free to implement this with "masking" or "early outs".
The latency of an entire TMMA is proportional to N*K. The repeat rate is proportional to N. The number of MACs per TMMA instruction is N*K*M.
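Behaviorally (ignoring the pipelining of the FMA grid), the Figure 6 dataflow reduces to the following C sketch, which also makes the N*K*M MAC count visible; this is illustrative only, using C99 variable-length arrays.

```c
/* Each (m, n) destination element is produced by a chain of K
 * multiply-adds, with B supplying the "broadcast" term at each stage. */
static void tmma(int M, int N, int K,
                 const float A[M][K], const float B[K][N], float C[M][N]) {
    for (int m = 0; m < M; ++m)
        for (int n = 0; n < N; ++n) {
            float sum = C[m][n];            /* incoming summand   */
            for (int k = 0; k < K; ++k)     /* chained FMA column */
                sum += A[m][k] * B[k][n];
            C[m][n] = sum;                  /* final output       */
        }
}
```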
Figure 7 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply accumulate operates on signed sources, wherein the accumulator is twice the size of the input data.
A first signed source (source 1 701) and a second signed source (source 2 703) each have four packed data elements. Each of these packed data elements stores signed data such as floating-point data. A third signed source (source 3 709) has two packed data elements, each of which stores signed data. The sizes of the first signed source 701 and of the second signed source 703 are half that of the third signed source (initial value or previous result) 709. For example, the first signed source 701 and the second signed source 703 could have 32-bit packed data elements (e.g., single-precision floating point), while the third signed source 709 could have 64-bit packed data elements (e.g., double-precision floating point).
In this illustration, only the two most significant packed data element positions of the first signed source 701 and the second signed source 703, and the most significant packed data element position of the third signed source 709, are shown. Of course, the other packed data element positions would also be processed.
As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first signed source 701 and the second signed source 703 are multiplied using a multiplier circuit 705, and the data from the second most significant packed data element positions of the first signed source 701 and the second signed source 703 are multiplied using a multiplier circuit 707. In some embodiments, these multiplier circuits 705 and 707 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 709. The results of each of the multiplications are added using an addition circuit 711.
The result of the addition of the results of the multiplications is added (using a different adder 713 or the same adder 711) to the data from the most significant packed data element position of the signed source 3 709.
Finally, the result of the second addition is either stored into the signed destination 715 in a packed data element position that corresponds to the packed data element position used from the signed third source 709, or passed on to the next iteration if there is one. In some embodiments, a write mask is applied to this store, such that if a corresponding write mask (bit) is set, the store occurs, and, if not set, the store does not occur.
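One iteration of the Figure 7 datapath can be sketched in C as follows, assuming 32-bit single-precision inputs and a 64-bit double-precision accumulator as in the example above; the names are ours.

```c
/* Two single-precision products are summed (adder 711), then added
 * (adder 713) to the most significant element of the double-width
 * third source, yielding the value for destination 715. */
static double chained_fma_2x(const float src1[4], const float src2[4],
                             double src3_most_significant) {
    double p_hi = (double)src1[3] * (double)src2[3]; /* multiplier 705 */
    double p_lo = (double)src1[2] * (double)src2[2]; /* multiplier 707 */
    return (p_hi + p_lo) + src3_most_significant;    /* to dest 715 or
                                                        next iteration */
}
```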
Figure 8 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply accumulate operates on signed sources, wherein the accumulator is twice the size of the input data.
A first signed source (source 1 801) and a second signed source (source 2 803) each have four packed data elements. Each of these packed data elements stores signed data such as integer data. A third signed source (source 3 809) has two packed data elements, each of which stores signed data. The sizes of the first signed source 801 and of the second signed source 803 are half that of the third signed source 809. For example, the first signed source 801 and the second signed source 803 could have 32-bit packed data elements (e.g., single-precision floating point), while the third signed source 809 could have 64-bit packed data elements (e.g., double-precision floating point).
In this illustration, only the two most significant packed data element positions of the first signed source 801 and the second signed source 803, and the most significant packed data element position of the third signed source 809, are shown. Of course, the other packed data element positions would also be processed.
As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first signed source 801 and the second signed source 803 are multiplied using a multiplier circuit 805, and the data from the second most significant packed data element positions of the first signed source 801 and the second signed source 803 are multiplied using a multiplier circuit 807. In some embodiments, these multiplier circuits 805 and 807 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source (initial value or previous iteration result) 809. The result of each of the multiplications is added to the signed third source 809 using an addition/saturation circuit 813.
The addition/saturation (accumulator) circuit 813 preserves the sign of an operand when the addition results in a value that is too big. In particular, saturation evaluation occurs on the infinite-precision result between the multi-way add and the write to the destination or the next iteration. When the accumulator 813 is floating point and the input terms are integer, the sum of products and the floating-point accumulator input value are turned into infinite-precision values (fixed-point numbers of hundreds of bits), the addition of the multiplication results to the third input is performed, and a single rounding to the actual accumulator type is performed.
Unsigned saturation means the output values are limited to the maximum unsigned number for that element width (all 1s). Signed saturation means a value is limited to be in the range between a minimum negative number and a maximum positive number for that element width (e.g., for bytes, the range is from -128 (= -2^7) to 127 (= 2^7 - 1)).
The result of the addition and saturation check is either stored into the signed result 815 in a packed data element position that corresponds to the packed data element position used from the signed third source 809, or passed on to the next iteration if there is one. In some embodiments, a write mask is applied to this store, such that if a corresponding write mask (bit) is set, the store occurs, and, if not set, the store does not occur.
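The saturation rules just described can be sketched in C for byte-wide elements as follows; the 64-bit argument stands in for the infinite-precision sum evaluated before the write, and the helper names are ours.

```c
#include <stdint.h>

/* Signed saturation: clamp to [-2^7, 2^7 - 1] for byte elements. */
static int8_t saturate_signed8(int64_t x) {
    if (x > 127)  return 127;
    if (x < -128) return -128;
    return (int8_t)x;
}

/* Unsigned saturation: clamp to the all-1s maximum for the width. */
static uint8_t saturate_unsigned8(int64_t x) {
    if (x > 255) return 255;
    if (x < 0)   return 0;
    return (uint8_t)x;
}
```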
Figure 9 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply accumulate operates on a signed source and an unsigned source, wherein the accumulator is 4 times the size of the input data.
A first signed source (source 1 901) and a second unsigned source (source 2 903) each have four packed data elements. Each of these packed data elements has data such as floating-point or integer data. A third signed source (initial value or result 915) has a packed data element that stores signed data. The sizes of the first source 901 and of the second source 903 are a quarter of that of the third signed source 915. For example, the first source 901 and the second source 903 could have 16-bit packed data elements (e.g., word), and the third signed source 915 could have 64-bit packed data elements (e.g., double-precision floating point or 64-bit integer).
In this illustration, only the four most significant packed data element positions of the first source 901 and the second source 903, and the most significant packed data element position of the third signed source 915, are shown. Of course, other packed data element positions, if there are any, would also be processed.
As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first source 901 and the second source 903 are multiplied using a multiplier circuit 905, the data from the second most significant packed data element positions of the first source 901 and the second source 903 are multiplied using a multiplier circuit 907, the data from the third most significant packed data element positions of the first source 901 and the second source 903 are multiplied using a multiplier circuit 909, and the data from the least significant packed data element positions of the first source 901 and the second source 903 are multiplied using a multiplier circuit 911. In some embodiments, prior to the multiplications, the signed packed data elements of the first source 901 are sign-extended and the unsigned packed data elements of the second source 903 are zero-extended.
In some embodiments, these multiplier circuits 905-911 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 915.
An addition circuit 913 adds the results of these multiplications. The result of that addition is then added (using a different adder 917 or the same adder 913) to the data of the most significant packed data element position of the signed source 3 915. Finally, the result of the second addition 919 is stored into the packed data element position of the signed destination that corresponds to the packed data element position used from the signed third source 915, or is passed on to the next iteration. In some embodiments, a write mask is applied to this storage such that if a corresponding write mask (bit) is set, the storage occurs, and if it is not set, the storage does not occur.
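The quadruple multiply-and-accumulate step of Figure 9 can be sketched in software as follows; this is a behavioral illustration only, assuming 16-bit lanes and a 64-bit accumulator, and the helper names are hypothetical:

```python
def sign_extend(v: int, bits: int) -> int:
    # Interpret the low `bits` of v as a two's-complement signed value.
    mask = (1 << bits) - 1
    v &= mask
    return v - (1 << bits) if v & (1 << (bits - 1)) else v

def quad_fma_lane(src1_words, src2_words, acc: int) -> int:
    # src1 lanes are treated as signed (sign-extended), src2 lanes as
    # unsigned (zero-extended); the four products are summed into acc.
    products = [sign_extend(a, 16) * (b & 0xFFFF)
                for a, b in zip(src1_words, src2_words)]
    return acc + sum(products)

# Example: (-2)*3 + 4*5 + 1*1 + 0*9 accumulated onto 10 -> 25.
assert quad_fma_lane([-2, 4, 1, 0], [3, 5, 1, 9], 10) == 25
```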
Figure 10 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, it illustrates execution circuitry for an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate operates on a signed source and an unsigned source, where the accumulator is 4 times the input data size.

The first signed source 1001 and the second unsigned source 1003 each have four packed data elements. Each of these packed data elements stores data such as floating-point data or integer data. The third signed source 1015 (initial or previous result) has a packed data element that stores signed data. The sizes of the first source and the second source are a quarter of the size of the third signed source 1015 (initial or previous result). For example, the first source and the second source may have 16-bit packed data elements (e.g., words), and the third signed source 1015 (initial or previous result) may have 64-bit packed data elements (e.g., double-precision floating point or 64-bit integer).

In this illustration, the four most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 and the most significant packed data element position of the third signed source 1015 are shown. Of course, any other packed data element positions are also processed.

As shown, packed data elements are processed in quadruplets. For example, multiplier circuit 1005 multiplies the data of the most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003, multiplier circuit 1007 multiplies the data of the second most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003, multiplier circuit 1009 multiplies the data of the third most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003, and multiplier circuit 1011 multiplies the data of the least significant packed data element positions of the first signed source 1001 and the second unsigned source 1003. In some embodiments, before the multiplication, the signed packed data elements of the first signed source 1001 are sign extended and the unsigned packed data elements of the second unsigned source 1003 are zero extended.

In some embodiments, these multiplier circuits 1005-1011 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is accomplished using a lane that is the size of the third signed source 1015 (initial or previous result). An adder/saturation circuit 1013 adds the result of the addition of these multiplication results to the data of the most significant packed data element position of the third signed source 1015 (initial or previous result).

When the addition produces a value that is too large or too small for signed saturation, the addition/saturation (accumulator) circuit 1013 preserves the sign of the operand. In particular, saturation evaluation occurs on the infinite-precision result between the multi-way addition and the write to the destination. When the accumulator 1013 is floating point and the inputs are integer, the sum of products and the floating-point accumulator input value are converted to infinite-precision values (fixed-point numbers of hundreds of bits), the addition of the multiplication results to the third input is performed, and a single rounding to the actual accumulator type is performed.

The result of the addition and saturation check 1019 is stored into the packed data element position of the signed destination that corresponds to the packed data element position used from the third signed source 1015 (initial or previous result), or is passed on to the next iteration. In some embodiments, a write mask is applied to this storage such that if a corresponding write mask (bit) is set, the storage occurs, and if it is not set, the storage does not occur.

Figure 11 illustrates power-of-two-sized SIMD implementations according to an embodiment, in which the accumulators use input sizes that are larger than the sizes of the inputs to the multipliers. Note that the source values and the accumulator values (the inputs to the multipliers) may be signed or unsigned values. For an accumulator with a 2X input size (in other words, the accumulator input value is twice the size of the packed data elements of the sources), table 1101 illustrates different configurations. For byte-sized sources, the accumulator uses 16-bit word or half-precision floating-point (HPFP) values. For word-sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32 bits in size. For SPFP or 32-bit integer-sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64 bits in size.

For an accumulator with a 4X input size (in other words, the accumulator input value is 4 times the size of the packed data elements of the sources), table 1103 illustrates different configurations. For byte-sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32 bits in size. In some embodiments, for word-sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64 bits in size.

For an accumulator with an 8X input size (in other words, the accumulator input value is 8 times the size of the packed data elements of the sources), table 1105 illustrates a configuration. For byte-sized sources, the accumulator uses 64-bit integers.
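The accumulator-sizing choices of tables 1101, 1103, and 1105 can be summarized as a simple mapping; a sketch, assuming the configurations described above, with illustrative labels:

```python
# (source element size in bits, accumulator/source size ratio) -> accumulator type
ACCUMULATOR_TYPES = {
    (8, 2):  "16-bit integer (word) or half-precision float (HPFP)",
    (16, 2): "32-bit integer or single-precision float (SPFP)",
    (32, 2): "64-bit integer or double-precision float (DPFP)",
    (8, 4):  "32-bit integer or single-precision float (SPFP)",
    (16, 4): "64-bit integer or double-precision float (DPFP)",
    (8, 8):  "64-bit integer",
}

print(ACCUMULATOR_TYPES[(16, 4)])  # word-sized sources with a 4X accumulator
```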
As previously indicated, a matrix operations circuit can be included in a core or can be provided as an external accelerator. Figure 12 illustrates an embodiment of a system using matrix operations circuits. In this illustration, multiple entities are coupled to a ring interconnect 1245.

A plurality of cores, core 0 1201, core 1 1203, core 2 1205, and core N 1207, provide non-tile-based instruction support. In some embodiments, a matrix operations circuit 1251 is provided in core 1203, while in other embodiments matrix operations circuits 1211 and 1213 are accessible on the ring interconnect 1245. In addition, one or more memory controllers 1223-1225 are provided to communicate with memories 1233 and 1231 on behalf of the cores and/or matrix operations circuits.

Figure 13 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles. Branch prediction and decode circuitry 1303 performs branch prediction of instructions from instructions stored in an instruction store 1301, decoding of those instructions, and/or both. For example, the instructions detailed herein may be stored in the instruction store. In some implementations, separate circuitry is used for branch prediction, and in some embodiments at least some instructions are decoded into one or more micro-operations, micro-code entry points, micro-instructions, other instructions, or other control signals, using micro-code 1305. The branch prediction and decode circuitry 1303 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), micro-code read-only memories (ROMs), etc.

The branch prediction and decode circuitry 1303 is coupled to allocate/rename 1307 circuitry, which in some embodiments is coupled to scheduler circuitry 1309. In some embodiments, these circuits provide register renaming, register allocation, and/or scheduling functionality by performing one or more of: 1) renaming logical operand values to physical operand values (e.g., using a register alias table in some embodiments); 2) allocating status bits and flags to the decoded instruction; and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments).

The scheduler circuitry 1309 represents any number of different schedulers, including reservation stations, a central instruction window, etc. The scheduler circuitry 1309 is coupled to, or includes, physical register file(s) 1315. Each of the physical register file(s) 1315 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), tiles, etc.
In one embodiment, the physical register file(s) 1315 comprise vector register circuitry, write mask register circuitry, and scalar register circuitry. These register circuits may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) 1315 is overlapped by retirement circuitry 1317 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement circuitry 1317 and the physical register file(s) 1315 are coupled to execution circuitry 1311.

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

The execution circuitry 1311 is a set of one or more execution units, including scalar circuitry 1321, vector/SIMD circuitry 1323, matrix operations circuitry 1327, and memory access circuitry 1325 to access a cache 1313. The execution circuits perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scalar circuitry 1321 performs scalar operations, the vector/SIMD circuitry 1323 performs vector/SIMD operations, and the matrix operations circuitry 1327 performs the matrix (tile) operations detailed herein.

By way of example, the exemplary register-renaming, out-of-order issue/execution core architecture may implement a pipeline as follows: 1) an instruction fetch circuit performs the fetch and length decode stages; 2) the branch prediction and decode circuitry 1303 performs the decode stage; 3) the allocate/rename 1307 circuitry performs the allocation stage and the renaming stage; 4) the scheduler circuitry 1309 performs the schedule stage; 5) the physical register file(s) (coupled to, or included in, the scheduler circuitry 1309 and the allocate/rename 1307 circuitry) and a memory unit perform the register read/memory read stage, and the execution circuitry 1311 performs the execute stage; 6) the memory unit and the physical register file unit(s) perform the write back/memory write stage; 7) various units may be involved in the exception handling stage; and 8) a retirement unit and the physical register file unit(s) perform the commit stage.

The core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; the ARM instruction set of ARM Holdings of Sunnyvale, California (with optional additional extensions such as NEON)), including the instruction(s) described herein. In one embodiment, the core 1390 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in hyperthreading technology).

Figure 14 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles. Branch prediction and decode circuitry 1403 performs branch prediction of instructions from instructions stored in an instruction store 1401, decoding of those instructions, and/or both. For example, the instructions detailed herein may be stored in the instruction store. In some implementations, separate circuitry is used for branch prediction, and in some embodiments at least some instructions are decoded into one or more micro-operations, micro-code entry points, micro-instructions, other instructions, or other control signals, using micro-code 1405. The branch prediction and decode circuitry 1403 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), micro-code read-only memories (ROMs), etc.

The branch prediction and decode circuitry 1403 is coupled to allocate/rename 1407 circuitry, which in some embodiments is coupled to scheduler circuitry 1409.
In some embodiments, these circuits provide register renaming, register allocation, and/or scheduling functionality by performing one or more of: 1) renaming logical operand values to physical operand values (e.g., using a register alias table in some embodiments); 2) allocating status bits and flags to the decoded instruction; and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments).

The scheduler circuitry 1409 represents any number of different schedulers, including reservation stations, a central instruction window, etc. The scheduler circuitry 1409 is coupled to, or includes, physical register file(s) 1415. Each of the physical register file(s) 1415 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), tiles, etc. In one embodiment, the physical register file(s) 1415 comprise vector register circuitry, write mask register circuitry, and scalar register circuitry. These register circuits may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) 1415 is overlapped by retirement circuitry 1417 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement circuitry 1417 and the physical register file(s) 1415 are coupled to execution circuitry 1411.

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

The execution circuitry 1411 includes a set of one or more execution circuits 1427 and a set of one or more memory access circuits 1425 to access a cache 1413.
The execution circuits 1427 perform the matrix (tile) operations detailed herein.

By way of example, the exemplary register-renaming, out-of-order issue/execution core architecture may implement a pipeline as follows: 1) an instruction fetch circuit performs the fetch and length decode stages; 2) the branch prediction and decode circuitry 1403 performs the decode stage; 3) the allocate/rename 1407 circuitry performs the allocation stage and the renaming stage; 4) the scheduler circuitry 1409 performs the schedule stage; 5) the physical register file(s) (coupled to, or included in, the scheduler circuitry 1409 and the allocate/rename 1407 circuitry) and a memory unit perform the register read/memory read stage, and the execution circuitry 1411 performs the execute stage; 6) the memory unit and the physical register file unit(s) perform the write back/memory write stage; 7) various units may be involved in the exception handling stage; and 8) a retirement unit and the physical register file unit(s) perform the commit stage.

The core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; the ARM instruction set of ARM Holdings of Sunnyvale, California (with optional additional extensions such as NEON)), including the instruction(s) described herein. In one embodiment, the core 1490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in hyperthreading technology).

Layout

Throughout this description, data is expressed using a row-major data layout. Column-major users should translate the terms according to their orientation. Figure 15 illustrates an example of a matrix expressed in row-major format and in column-major format. As shown, matrix A is a 2x3 matrix. When this matrix is stored in row-major format, the data elements of a row are consecutive. When this matrix is stored in column-major format, the data elements of a column are consecutive. It is a well-known property of matrices that A^T * B^T = (BA)^T, where superscript T means transpose. Reading column-major data as row-major data results in a matrix that looks like the transpose matrix.

In some embodiments, row-major semantics are used in the hardware, and the column-major data is to swap the operand order, with the result being transposes of matrices; but for subsequent column-major reads from memory, it is the correct, non-transposed matrix.

For example, if there are two column-major matrices to multiply:

    a b      g i k      ag+bh ai+bj ak+bl
    c d   *  h j l   =  cg+dh ci+dj ck+dl
    e f                 eg+fh ei+fj ek+fl

the input matrices would be stored in linear memory (column-major) as:

    a c e b d f

and

    g h i j k l.

Reading those matrices as row-major with dimensions 2x3 and 3x2, they would appear as:

    a c e        g h
    b d f   and  i j
                 k l

Swapping the order and matrix multiplying:

    g h      a c e      ag+bh cg+dh eg+fh
    i j   *  b d f   =  ai+bj ci+dj ei+fj
    k l                 ak+bl ck+dl ek+fl

The transpose matrix is out and can then be stored in row-major order:

    ag+bh cg+dh eg+fh ai+bj ci+dj ei+fj ak+bl ck+dl ek+fl

and used in subsequent column-major computations, where it is the correct, untransposed matrix:

    ag+bh ai+bj ak+bl
    cg+dh ci+dj ck+dl
    eg+fh ei+fj ek+fl
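The example above can be checked mechanically; the following is a minimal sketch (using numpy as a stand-in for the memory reinterpretation, not the hardware path) of how swapping the operand order on the transposed row-major views reproduces the correct column-major product:

```python
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])     # logical 3x2 matrix A
B = np.array([[7, 9, 11], [8, 10, 12]])    # logical 2x3 matrix B

a_mem = A.flatten(order="F")               # A's column-major linear storage
b_mem = B.flatten(order="F")               # B's column-major linear storage

a_seen = a_mem.reshape(2, 3)               # row-major view: this is A.T
b_seen = b_mem.reshape(3, 2)               # row-major view: this is B.T

# Swap the operand order: B.T @ A.T equals (A @ B).T ...
c_mem = (b_seen @ a_seen).flatten()        # store the result row-major

# ... which, read back column-major, is the correct untransposed product.
C = c_mem.reshape(3, 3, order="F")
assert np.array_equal(C, A @ B)
```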
Exemplary use

Figure 16 illustrates an example of the use of matrices (tiles). In this example, matrix C 1601 includes two tiles, matrix A 1603 includes one tile, and matrix B 1605 includes two tiles. The figure shows an example of the inner loop of an algorithm for computing a matrix multiplication. In this example, two result tiles, tmm0 and tmm1, from matrix C 1601 are used to accumulate intermediate results. One tile from matrix A 1603 (tmm2) is reused twice as it is multiplied by two tiles from matrix B 1605. Pointers are used to load a new A matrix (tile) and two new B matrices (tiles) from the directions indicated by the arrows. An outer loop, not shown, adjusts the pointers for the C tiles.

The exemplary code as shown includes the use of a tile configuration instruction and is executed to configure tile usage, load the tiles, loop over tile processing, store the tiles to memory, and release tile usage.

Figure 17 illustrates an embodiment of the use of matrices (tiles). At 1701, tile usage is configured. For example, a TILECONFIG instruction is executed to configure tile usage, including setting the number of rows and columns per tile. Typically, at 1703, at least one matrix (tile) is loaded from memory. At 1705, at least one matrix (tile) operation is performed using the matrices (tiles). At 1707, at least one matrix (tile) is stored out to memory, and at 1709, a context switch may occur.

Exemplary configuration

Tile configuration hardware support

As discussed above, tile usage typically requires configuration before use. For example, full use of all rows and columns may not be needed. Not configuring those rows and columns in some embodiments not only saves power, but the configuration can also be used to determine whether an operation will generate an error. For example, a matrix multiplication of the form (N x M) * (L x N) will typically not work if M and L are not the same.

Before using matrices that use tiles, in some embodiments tile support is to be configured: for example, how many rows and columns each tile has, which tiles are to be used, and so on. A TILECONFIG instruction is an improvement to the computer itself in that it provides support for configuring the computer to use a matrix accelerator (either as part of a processor core or as an external device). Specifically, execution of the TILECONFIG instruction causes a configuration to be retrieved from memory and applied to the matrix (tile) settings within the matrix accelerator.
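The flow of Figure 17 can be sketched as a small software model; the TileState class and its method names below are hypothetical illustrations of the described configure/load/operate/store/release steps, not a real API:

```python
import numpy as np

class TileState:
    def __init__(self):
        self.configured = False
        self.tiles = {}

    def configure(self, shapes):          # models TILECONFIG (step 1701)
        self.shapes = shapes
        self.configured = True

    def load(self, tid, data):            # models a tile load (step 1703)
        assert self.configured and data.shape == self.shapes[tid]
        self.tiles[tid] = data.astype(np.float32)

    def fma(self, dst, src1, src2):       # models a tile operation: C += A @ B
        self.tiles[dst] = self.tiles[dst] + self.tiles[src1] @ self.tiles[src2]

    def store(self, tid):                 # models a tile store (step 1707)
        return self.tiles[tid]

    def release(self):                    # models releasing tile usage
        self.__init__()

ts = TileState()
ts.configure({0: (2, 2), 1: (2, 3), 2: (3, 2)})
ts.load(0, np.zeros((2, 2)))
ts.load(1, np.ones((2, 3)))
ts.load(2, np.ones((3, 2)))
ts.fma(0, 1, 2)
print(ts.store(0))   # every element is 3.0
ts.release()
```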
Tile configuration

Figure 18 illustrates support for configuration of the usage of tiles according to an embodiment. A memory 1801 contains a tile description 1803 of the matrices (tiles) to be supported.

Instruction execution resources 1811 of a processor/core 1805 store aspects of the tile description 1803 into tile configurations 1817. The tile configurations 1817 include a palette table 1813 detailing what tiles for a palette are configured (the number of rows and columns in each tile) and a marking that matrix support is in use. In particular, the instruction execution resources 1811 are configured to use tiles as specified by the tile configurations 1817. The instruction execution resources 1811 may also include machine-specific or configuration registers to indicate tile usage. Additional values, such as in-use and start values, are also set. The tile configurations 1817 utilize register(s) 1819 to store tile usage and configuration information.

Figure 19 illustrates an embodiment of a description of the matrices (tiles) to be supported. This is the description that is to be stored upon execution of an STTILECFG instruction. In this example, each field is a byte. In byte [0], a palette ID 1901 is stored. The palette ID is used to index a palette table 1813 which, as defined by the configuration, stores, per palette ID, the number of bytes in a tile and the bytes per row of the tiles associated with that ID.

Byte 1 stores a value to be stored in a "startRow" register 1903, and byte 2 stores a value to be stored in a startP register 1905. To support restarting instructions after interrupt events such as those detailed above, the instructions store information in these registers. The startRow value indicates the row that should be used for restart. The startP value indicates the position within the row for store operations when pairs are used and, in some embodiments, indicates the lower half of the row (in the lower tile of a pair) or the upper half of the row (in the higher tile of a pair). Generally, this position in the row (the column) is not needed.

With the exception of TILECONFIG and STTILECFG, successfully executing matrix (tile) instructions will set both startRow and startP to zero.

Any time an interrupted matrix (tile) instruction is not restarted, it is the responsibility of software to zero the startRow and startP values. For example, an unmasked floating-point exception handler might decide to finish the operation in software and change the program counter value to another instruction, usually the next instruction. In this case, the software exception handler must zero the startRow and startP values in the exception presented to it by the operating system before resuming the program. The operating system will subsequently reload those values using a restore instruction.

Byte 3 stores an indication of pairs of tiles (1b per tile) 1907.

Bytes 16-17 store the number of rows 1913 and columns 1915 for tile 0, bytes 18-19 store the number of rows and columns for tile 1, and so on. In other words, each 2-byte group specifies the number of rows and columns for a tile. If a group of 2 bytes is not used to specify tile parameters, it should have the value zero. Specifying tile parameters for more tiles than the implementation limit or the palette limit results in a fault. Unconfigured tiles are set to an initial state with 0 rows and 0 columns.

Finally, the configuration in memory typically ends with an ending delineation, such as all zeros for several consecutive bytes.
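A sketch of reading such a description, assuming the byte layout described above (palette ID at byte 0, startRow at byte 1, startP at byte 2, pair bits at byte 3, and 2-byte row/column groups starting at byte 16); this is an illustration of the described format, not production code:

```python
def parse_tile_description(desc: bytes):
    config = {
        "palette_id": desc[0],
        "start_row": desc[1],
        "start_p": desc[2],
        "pair_bits": desc[3],
        "tiles": [],
    }
    # Each 2-byte group from byte 16 onward holds one tile's rows/columns;
    # unused groups must be zero, which also terminates the list here.
    for offset in range(16, len(desc) - 1, 2):
        rows, cols = desc[offset], desc[offset + 1]
        if rows == 0 and cols == 0:
            break
        config["tiles"].append({"rows": rows, "cols": cols})
    return config

desc = bytes([1, 0, 0, 0] + [0] * 12 + [16, 64, 16, 64] + [0] * 44)
print(parse_tile_description(desc))  # palette 1, two 16x64 tiles
```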
Exemplary tile and tile configuration storage

Figures 20(A)-20(D) illustrate examples of register(s) 1819. Figure 20(A) illustrates a plurality of registers 1819. As shown, each tile (TMM0 2001 ... TMMN 2003) has a separate register, with each register storing a row size and a column size for that particular tile. StartP 2011 and StartRow 2013 are stored in separate registers. One or more status registers 2015 are set (e.g., TILES_CONFIGURED = 1) to indicate that the tiles are configured for use.

Figure 20(B) illustrates a plurality of registers 1819. As shown, each tile has separate registers for its rows and for its columns. For example, TMM0 rows configuration 2021 and TMM0 columns configuration 2023 are stored in separate registers, as are StartP 2011 and StartRow 2013. One or more status registers 2015 are set (e.g., TILES_CONFIGURED = 1) to indicate that the tiles are configured for use.

Figure 20(C) illustrates a single register 1819. As shown, this register stores the tile configurations (rows and columns per tile) 2031, StartP 2011, and StartRow 2013 in a single register as packed data registers. One or more status registers 2015 are set (e.g., TILES_CONFIGURED = 1) to indicate that the tiles are configured for use.

Figure 20(D) illustrates a plurality of registers 1819. As shown, a single register stores the tile configurations (rows and columns per tile) 2031. StartP and StartRow are stored in separate registers 2011 and 2013. One or more status registers 2015 are set (e.g., TILES_CONFIGURED = 1) to indicate that the tiles are configured for use.

Other combinations are contemplated, such as combining the start registers into a single register in which they are shown separately, and so on.

Matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction

As mentioned above, a common operation performed in machine-learning contexts is the matrix (tile) floating-point fused multiply-accumulate (FMA) instruction, be it single precision or double precision. Improving the power and performance of FMA instructions is desirable in order to improve the power and performance of the applications that use them, including machine-learning training and inference applications.

Accordingly, the disclosed methods and systems execute a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction using an interleaved pipeline, which performs tile and matrix floating-point fused multiply-accumulates at a rate of one FMA per cycle. In other words, the disclosed interleaved pipeline executes an FMA operation one cycle after the immediately preceding FMA operation has generated the source with which that FMA operation accumulates the product it generates.
This optimization allows easier scheduling of matrix (tile) multiplications (at least by allowing FMA operations to be scheduled back to back without intervening overhead), better utilization of the system's FMA circuitry (at least by eliminating the need to buffer intermediate results), reduced power consumption (at least by requiring fewer, tightly packed instructions operating back to back without intervening overhead), and reduced hardware cost (at least through reductions in routing cost and in the number of timing control elements (flip-flops)).

The matrix (tile) multiplication operation (i.e., TMUL) is given by Equation 1, where the dimensions of the A, B, and C matrices are given by C[M,N], A[M,K], and B[K,N]. In other words, the A matrix has M rows by K columns, the B matrix has K rows by N columns, and the C matrix has M rows by N columns.

    C[i][j] += Sigma (n = 0 to K-1) A[i][n] * B[n][j]    (Equation 1)

K, M, and N can each be any of 1, 2, 3, 4, 8, and 16, subject, of course, to the maximum available matrix (tile) sizes of the processor.
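A straightforward reference sketch of Equation 1 (the semantics only, not the accelerator implementation) is:

```python
def tmul_reference(A, B, C):
    # C is M x N, A is M x K, B is K x N; every C[i][j] accumulates the
    # dot product of row i of A with column j of B, per Equation 1.
    M, K = len(A), len(A[0])
    N = len(B[0])
    assert len(B) == K and len(C) == M and len(C[0]) == N
    for i in range(M):
        for j in range(N):
            for n in range(K):
                C[i][j] += A[i][n] * B[n][j]
    return C

C = tmul_reference([[1.0, 2.0]], [[3.0], [4.0]], [[10.0]])
assert C == [[21.0]]   # 10 + 1*3 + 2*4
```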
A basic, non-interleaved pipeline approach implies long latencies and relatively low performance. Under the basic non-interleaved pipeline approach, a fused multiply-accumulate (FMA) instruction waits to start until execution of the previous FMA instruction has completed. Such an approach is illustrated and described with reference to Figure 21A.

The disclosed embodiments, on the other hand, execute the matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction using an interleaved pipeline, by which the multiply operation of the next FMA occurs in parallel with the accumulate operation of the current FMA operation. For example, given two FMA operations, the disclosed embodiments perform (MULTIPLY 2) concurrently with (ACCUMULATE 1), and the product of MULTIPLY 2 is accumulated with the result of ACCUMULATE 1. Such an approach, as applied in the disclosed embodiments, is further illustrated and described with reference to Figures 21B, 22A-22B, and 23. Figures 24, 25A-25B, and 26A-26C illustrate and describe formats for the TILEFPFMA instruction.

The interleaved pipeline achieves a significant TMUL latency reduction and better memory utilization. Through this interleaved pipeline, improved performance and power savings are achieved.

As illustrated and described herein, an embodiment of a processor to execute the TILEFPFMA instruction includes: decode circuitry to decode an instruction that specifies locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and that specifies an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles; and execution circuitry to execute the decoded instruction as per the opcode. Each pipeline instance includes: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and element (K, N) of the second source matrix, and, concurrently, in an exponent-difference (EXPDIFF) stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; and, in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to element (M, N) of the destination matrix, with the next pipeline instance's addition incremented by one if it is determined that rounding is required, the product having been aligned before the accumulation by shifting its mantissa by the exponent difference; and, concurrently in the ADD-BYPASS stage, bypassing the accumulated sum to a subsequent instance of the pipeline.

When the previous FP value is not yet known, the exponent-difference operation works in parallel with the mantissa calculation of the previous FP value (that mantissa being computed by the ADD-BYPASS stage of the previous pipeline instance). This requires the exponent to be adjusted independently of the late-arriving mantissa. The disclosed embodiments address this requirement by not performing late mantissa adjustment at all.

The disclosed embodiments use 1) no rounding and 2) a fuzzy-J bit to eliminate the need for post-adjustment:

No rounding: no rounding means only determining whether rounding is required and, if it is, incrementing the next addition operation by one. In this way, the disclosed embodiments improve the timing of the mantissa calculation and remove much of the hardware required for rounding. Furthermore, since the rounding is not actually completed, adjustments to the mantissa and exponent are not required.

Fuzzy-J bit position: the fuzzy-J bit position as used in the disclosed embodiments is an internal FP format for single precision. The fuzzy-J format allows the J bit to be in one of two positions: position 23 or position 24. This format widens the mantissa by 1 bit but makes it possible to avoid post-adjustment according to the J bit position of the final result.

Each pipeline instance further includes the second, ADD-BYPASS stage, during which the product and the previous FP value are accumulated, the accumulated sum to be stored to element (M, N) of the destination matrix. Before the accumulation is performed, the product generated during the MULTIPLY stage is aligned by shifting its mantissa by the exponent difference. The pipeline instance is further, concurrently during the ADD-BYPASS stage, to bypass the accumulated sum for use by a subsequent instance of the pipeline.
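The exponent-difference alignment and deferred rounding described above can be sketched with a toy (mantissa, exponent) representation; this is a simplified behavioral model under assumed toy precision, not the hardware datapath:

```python
# Values are (mantissa, exponent) pairs with value = mantissa * 2**exponent.
# The smaller-exponent operand is shifted right before the add; instead of
# rounding the shifted-out bits separately, a carry-in of one is folded
# into the add when those bits would round up.

def align_and_add(x, y):
    (bm, be), (sm, se) = (x, y) if x[1] >= y[1] else (y, x)
    shift = be - se
    dropped = sm & ((1 << shift) - 1) if shift else 0
    sm >>= shift                               # align to the bigger exponent
    carry = 1 if shift and dropped >= (1 << (shift - 1)) else 0
    return bm + sm + carry, be                 # deferred round as a carry-in

m, e = align_and_add((3, 3), (1, 0))   # 24 + 1: the dropped bits round down
print(m * 2**e)                        # 24
m, e = align_and_add((3, 3), (7, 0))   # 24 + 7: the dropped bits round up
print(m * 2**e)                        # 32
```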
The format of the TILEFPFMA instruction is further illustrated and described with reference to Figures 24, 25A-25B, and 26A-26D.

Figures 21A-21B illustrate exemplary embodiments of floating-point multiply-accumulate pipelines. Figure 21A illustrates a conventional floating-point multiply-accumulate pipeline. As shown, pipeline 2100 performs 16 fused multiply-accumulates (FMAs) in 32 cycles. The disclosed embodiments, on the other hand, interleave the pipeline to perform those computations efficiently in roughly half the time. The interleaved pipeline of the disclosed embodiments is further illustrated and described with reference to Figures 21B, 22A-22B, and 23.

Figure 21B illustrates an interleaved matrix (tile) floating-point multiply-accumulate pipeline according to some embodiments. As shown, pipeline 2150 performs 16 fused multiply-accumulates (FMAs) in 16 cycles, roughly twice as fast as the non-interleaved pipeline of Figure 21A.

In operation, a processor executing the matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction is to decode, using decode circuitry, an instruction that specifies locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and that specifies an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles. The processor executing the TILEFPFMA instruction further includes execution circuitry to execute the decoded instruction as per the opcode.

Specifically, as further illustrated and described with reference to Figures 22B and 23, the execution circuitry executing the TILEFPFMA instruction is to launch (or, in other words, initiate, implement, and perform) K instances of the pipeline over K cycles, each pipeline instance including: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and the corresponding FP element (K, N) of the second source matrix; concurrently, in an exponent-difference stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to element (M, N) of the destination matrix, the product having been aligned before the accumulation by shifting its mantissa by the exponent difference; and, concurrently in the ADD-BYPASS stage, bypassing the accumulated sum for use by a subsequent instance of the pipeline.

Advantageously, as illustrated and described with reference to Figures 21B, 22A-22B, and 23, the interleaved pipeline of the disclosed embodiments performs the matrix multiplication operations called for by the TILEFPFMA instruction roughly twice as fast as the non-interleaved pipeline of Figure 21A. Specifically, as shown, 16 FMA operations complete in 16 + 1 cycles.
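The resulting schedule can be sketched as follows, assuming the two-stage MULTIPLY/ADD-BYPASS split described above; the timeline shows the multiply of FMA k overlapping the add of FMA k-1, so K chained FMAs drain in K + 1 cycles:

```python
def interleaved_schedule(K):
    # Maps cycle number -> the stages active in that cycle.
    timeline = {}
    for k in range(K):
        timeline.setdefault(k, []).append(f"multiply {k}")   # cycle k
        timeline.setdefault(k + 1, []).append(f"add {k}")    # cycle k + 1
    return timeline

schedule = interleaved_schedule(16)
print(len(schedule))   # 17 cycles for 16 chained FMAs (K + 1)
print(schedule[1])     # ['add 0', 'multiply 1'] run concurrently
```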
The execution circuitry of the disclosed embodiments is further illustrated and described with reference to Figures 21A, 22A-22B, 23, 28A-28B, and 29A-29B.

Exemplary execution

Figure 22A is a block diagram illustrating execution of a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction according to some embodiments. A processing system 2200 executing the matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction is to decode, using decode circuitry, an instruction that specifies locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and that specifies an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles. The processor is further to execute, using the execution circuitry, the decoded instruction as per the opcode.

As used herein, "launching" an instance of the pipeline means initiating execution of the pipeline, or starting execution of the pipeline, or simply executing the pipeline. The processor executing the TILEFPFMA instruction further includes execution circuitry 2206 to execute the instruction as per the opcode. The format of the TILEFPFMA instruction 2201 is further illustrated and described with reference to Figures 24, 25A-25B, and 26A-26D.

More specifically, each instance of the pipeline includes, in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and the corresponding FP element (K, N) of the second source matrix. Concurrently, the exponent-difference stage is responsible for determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix. In a second, ADD-BYPASS stage, the pipeline is to accumulate the product with the previous FP value and store the accumulated sum to element (M, N) of the destination matrix, the product having been aligned before the accumulation by shifting its mantissa by the exponent difference, and, concurrently in the ADD-BYPASS stage, to bypass the accumulated sum for use by a subsequent instance of the pipeline.

To illustrate execution of the TILEFPFMA instruction, Figure 22A illustrates an exemplary 4-row by 4-column first matrix (tile) 2202 and an exemplary 4-row by 4-column second matrix (tile) 2204. Also shown is execution circuitry 2206, illustrating the mathematical operation to be performed on each element (M, N) of the destination matrix (tile).

Execution circuitry to execute the TILEFPFMA instruction 2201 is further illustrated and described with reference to Figures 22B, 23, 28A-28B, and 29A-29B.

Figure 22B is a block diagram illustrating use of an interleaved pipeline to execute a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction according to some embodiments.
As shown, a processor executing the matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction is to decode, using decode circuitry, an instruction that specifies locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and that specifies an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles. Each instance of the pipeline includes, in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and the corresponding FP element (K, N) of the second source matrix. Concurrently, in the exponent-difference stage, the processor is to determine an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix. In a second, ADD-BYPASS stage, the processor is to accumulate the product with the previous FP value and store the accumulated sum to element (M, N) of the destination matrix, the product having been aligned before the accumulation by shifting its mantissa by the exponent difference, and, concurrently in the ADD-BYPASS stage, to bypass the accumulated sum for use by a subsequent instance of the pipeline.

The format of the TILEFPFMA instruction 2251 is further illustrated and described with reference to Figures 24, 25A-25B, and 26A-26D. The processor executing the TILEFPFMA instruction further includes decode circuitry (not shown) to decode the instruction and execution circuitry to execute the instruction as per the opcode.

Also shown is an interleaved pipeline 2250 for executing the TILEFPFMA instruction shown in Figure 22A. According to the disclosed embodiments, the execution circuitry is to launch K instances of the pipeline over K cycles, each instance of the pipeline including: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and the corresponding FP element (K, N) of the second source matrix; concurrently, in the exponent-difference stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to element (M, N) of the destination matrix, the product having been aligned before the accumulation by shifting its mantissa by the exponent difference; and, concurrently in the ADD-BYPASS stage, bypassing the accumulated sum for use by a subsequent instance of the pipeline.

Here, a first instance 2252 of the pipeline is launched at time t = 0. Pipeline instance 2252 is to fetch the specified elements of the A (first source) matrix, the B (second source) matrix, and the C (destination) matrix, and to operate on those specified elements.
Here, the exponent-difference and ADD-BYPASS pipeline stages of the executed first instance 2252 of the pipeline receive the previous FP value of element (M, N) of the destination matrix from the destination matrix location specified by the instruction and use that previous FP value, while the exponent-difference and ADD-BYPASS pipeline stages of subsequently executed instances of the pipeline receive the previous FP value of element (M, N) of the destination matrix as a bypass from the ADD-BYPASS stage of the immediately preceding instance of the pipeline. Second, third, and fourth instances of the pipeline are launched at 2254, 2256, and 2258. Arcs 2260, 2262, and 2264 show element (M, N) of the destination matrix being bypassed from the first, second, and third instances of the pipeline to their subsequent pipeline instances.

Advantageously, the processor executing the TILEFPFMA instruction as shown in Figure 22B is to complete execution of the K instances of the pipeline in K plus one cycles. The execution circuitry of the disclosed embodiments is further illustrated and described with reference to Figures 21B, 28A-28B, and 29A-29B.

Exemplary method(s) of execution

Figure 23 illustrates an embodiment of a processor executing a flow to process a matrix (tile) floating-point multiply-accumulate (TILEFPFMA) instruction. As shown, the processor is to execute instruction 2301, which specifies an opcode and locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix.

In some embodiments, the processor is to execute flow 2300. At 2303, the processor is to decode, using decode circuitry, an instruction specifying locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix. The instruction further specifies an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles. Each instance of the pipeline is shown at 2305 to include a first, MULTIPLY stage, during which a product of FP element (M, K) of the first source matrix and FP element (K, N) of the second source matrix is generated. Concurrently, in an exponent-difference stage, the processor is to determine an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix. The pipeline further includes a second, ADD-BYPASS stage, during which the product and the previous FP value are accumulated and stored to element (M, N) of the destination matrix, the product having been aligned with the previous FP value before the accumulation by shifting the product's mantissa by the exponent difference; and, concurrently in the ADD-BYPASS stage, the processor is to bypass the accumulated sum for use by a subsequent instance of the pipeline.

In some embodiments, at 2307, the processor is to schedule execution of the instruction.
Operation 2307 is optional, as indicated by its dashed border, insofar as it may occur at a different time, or not at all.

At 2309, the processor is to execute the instruction, using execution circuitry, as per the opcode.

In some embodiments, at 2311, the processor is to retire and commit results of the execution. Operation 2311 is optional, as indicated by its dashed border, insofar as it may occur at a different time, or not at all.

The execution circuitry is further illustrated and described with reference to Figures 3-14. In some embodiments, the execution circuitry is a matrix operations accelerator, such as that shown and described as accelerator 307 (Figure 3). In some embodiments, the execution circuitry is a matrix operations circuit, such as matrix operations circuits 405 (Figure 4), 505 (Figure 5), or 1213 (Figure 12) and 1327 (Figure 13).

Exemplary instruction format(s)

Figure 24 is a block diagram illustrating a format of a TILEFPFMA instruction according to some embodiments. As shown, TILEFPFMA instruction 2400 includes fields to specify an opcode 2402, a destination location 2404, a first source matrix (tile) location 2406, a second source matrix (tile) location 2408, and a third source matrix (tile) location 2410. In some embodiments, the specified third source matrix (tile) location also serves as the destination location, so the destination location is an optional field, as indicated by its dashed border.

TILEFPFMA instruction 2400 further includes several optional parameters to control the processor's behavior: M 2412, K 2414, and N 2416; element size 2418 (binary digit, nibble, byte, word, doubleword, or quadword); element format 2420 (packed or scalar single-precision or double-precision floating-point data and packed or scalar integer data); and mask 2422 (a multi-bit value with one bit per destination element, the bit controlling whether the destination element is to be updated or is masked from being updated). Masked destination elements are either zeroed or merged, as controlled by another instruction field or by a software-programmed control register.

Opcode 2402 is shown including an asterisk, which conveys that additional prefixes and/or suffixes may be added to specify instruction behavior. One or more of instruction modifiers 2412, 2414, 2416, 2418, 2420, and 2422 may be specified using prefixes or suffixes to opcode 2402.

In some embodiments, one or more of optional instruction modifiers 2412, 2414, 2416, 2418, 2420, and 2422 are encoded in an immediate field (not shown) optionally included with instruction 2400. In some embodiments, one or more of optional instruction modifiers 2412, 2414, 2416, 2418, 2420, and 2422 are specified via a configuration/status register (e.g., XTILECONFIG). In other words, when any one or more of optional modifiers 2412, 2414, 2416, 2418, 2420, and 2422 are not specified by the instruction, they sometimes use implicit parameters inherited from other parts of the tile architecture.
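For illustration only, the fields enumerated above can be modeled as data, together with the zeroing/merging mask behavior; the class and helper below are hypothetical and are not an encoder for any real instruction set:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TileFpFmaInstruction:
    opcode: str = "TILEFPFMA"
    src1: int = 0                         # first source matrix (tile) location
    src2: int = 0                         # second source matrix (tile) location
    src3: int = 0                         # third source; may also be destination
    dst: Optional[int] = None             # optional explicit destination
    m: Optional[int] = None               # optional M dimension
    k: Optional[int] = None               # optional K dimension
    n: Optional[int] = None               # optional N dimension
    element_size: Optional[str] = None    # e.g. "word", "doubleword"
    element_format: Optional[str] = None  # e.g. packed single-precision FP
    mask: Optional[List[int]] = None      # one bit per destination element

def apply_mask(result, previous, mask, zeroing):
    # Masked-off elements are either zeroed or keep (merge) their old value.
    return [r if m else (0 if zeroing else p)
            for r, p, m in zip(result, previous, mask)]

print(apply_mask([5, 6, 7], [1, 2, 3], [1, 0, 1], zeroing=True))   # [5, 0, 7]
print(apply_mask([5, 6, 7], [1, 2, 3], [1, 0, 1], zeroing=False))  # [5, 2, 7]
```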
Detailed exemplary systems, processors, and emulation

Detailed herein are examples of hardware, software, and so forth to execute the instructions described above. For example, what is described below details aspects of instruction execution, including various pipeline stages such as fetch, decode, schedule, execute, retire, etc.

Instruction sets

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., the opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., a mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source 1/destination and source 2); an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. Sets of SIMD extensions referred to as Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme have been released and/or published (see, e.g., the 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see the Advanced Vector Extensions Programming Reference, October 2014).

Exemplary instruction formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic vector friendly instruction format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

Figures 25A-25B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments. Figure 25A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments, while Figure 25B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments. Specifically, class A and class B instruction templates are defined for a generic vector friendly instruction format 2500, both of which include no-memory-access 2505 instruction templates and memory-access 2520 instruction templates.
The term "universal" in the context of vector-friendly instruction formats refers to instruction formats that are not tied to any specific instruction set.Although an embodiment will be described in which the vector-friendly instruction format supports the following cases: 64-byte vector operand length (or size) and 32-bit (4 bytes) or 64-bit (8 bytes) data element width (or size) ( And therefore, a 64-byte vector is composed of 16 double-word-sized elements, or alternatively composed of 8 four-word-sized elements); 64-byte vector operand length (or size) and 16 bits (2 bytes) ) Or 8-bit (1 byte) data element width (or size); 32-byte vector operand length (or size) and 32 bits (4 bytes), 64 bits (8 bytes), 16 bits (2 words) Section) or 8-bit (1 byte) data element width (or size); and 16-byte vector operand length (or size) and 32 bits (4 bytes), 64 bits (8 bytes), 16 bits ( 2 bytes), or 8-bit (1 byte) data element width (or size); but alternative embodiments may support larger, smaller, and/or different vector operand sizes (e.g., 256-byte vector operand ) And larger, smaller or different data element width (for example, 128-bit (16 byte) data element width).The type A instruction template in FIG. 25A includes: 1) In the instruction template of no memory access 2505, the instruction template of the fully rounding control type operation 2510 without memory access and the data transformation type operation 2515 without memory access are shown. Instruction template; and 2) in the instruction template of the memory access 2520, the instruction template of the timeliness of the memory access 2525 and the instruction template of the non-timeliness of the memory access 2530 are shown. The type B instruction template in FIG. 25B includes: 1) In the instruction template of no memory access 2505, the instruction template of the partial rounding control type operation 2512 of the write mask control without memory access and the write mask without memory access are shown. The instruction template of the vsize type operation 2517 controlled by the code; and 2) the instruction template of the write mask control 2527 of the memory access is shown in the instruction template of the memory access 2520.The general vector friendly instruction format 2500 includes the following fields listed below in the order illustrated in FIGS. 25A-25B.Format field 2540-a specific value (instruction format identifier value) in this field uniquely identifies the vector-friendly instruction format, and thus identifies that the instruction appears in the vector-friendly instruction format in the instruction stream. Therefore, this field is not required for instruction sets that only have a general vector-friendly instruction format, and in this sense, this field is optional.Basic operation field 2542-its content distinguishes different basic operations.Register index field 2544-its content directly or through address generation to specify the location of the source or destination operand in the register or in the memory. These fields include a sufficient number of bits to select N registers from the PxQ (eg, 32x512, 16x128, 32x1024, 64x1024) register file. 
The class A instruction templates in FIG. 25A include: 1) within the no memory access 2505 instruction templates, a no memory access, full round control type operation 2510 instruction template and a no memory access, data transform type operation 2515 instruction template; and 2) within the memory access 2520 instruction templates, a memory access, temporal 2525 instruction template and a memory access, non-temporal 2530 instruction template. The class B instruction templates in FIG. 25B include: 1) within the no memory access 2505 instruction templates, a no memory access, write mask control, partial round control type operation 2512 instruction template and a no memory access, write mask control, vsize type operation 2517 instruction template; and 2) within the memory access 2520 instruction templates, a memory access, write mask control 2527 instruction template.

The generic vector friendly instruction format 2500 includes the following fields listed below in the order illustrated in FIGS. 25A-25B.

Format field 2540 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus identifies occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 2542 - its content distinguishes different base operations.

Register index field 2544 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three source registers and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources, where one of these sources also acts as the destination; may support up to three sources, where one of these sources also acts as the destination; or may support up to two sources and one destination).

Modifier field 2546 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, it distinguishes between no memory access 2505 instruction templates and memory access 2520 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Extended operation field 2550 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment, this field is divided into a class field 2568, an alpha field 2552, and a beta field 2554. The extended operation field 2550 allows common groups of operations to be performed in a single instruction rather than in 2, 3, or 4 instructions.

Scale field 2560 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale × index + base).

Displacement field 2562A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale × index + base + displacement).

Displacement factor field 2562B (note that the juxtaposition of the displacement field 2562A directly over the displacement factor field 2562B indicates that one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N), where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale × index + base + scaled displacement). Redundant low-order bits are ignored, and hence the displacement factor field's content is multiplied by the total size of the memory operands (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 2574 (described later herein) and the data manipulation field 2554C. The displacement field 2562A and the displacement factor field 2562B are optional in the sense that they are not used for the no memory access 2505 instruction templates and/or different embodiments may implement only one of the two or neither.
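As a small illustration (written for this description, not processor logic), the address-generation forms just named reduce to the following calculation; the scaled-displacement variant shown later as disp8*N simply substitutes displacement = disp8 × N:

```c
#include <stdint.h>

/* Effective address per the scale/displacement fields described above:
 * base + (index << scale) + displacement.  Purely illustrative. */
static uint64_t effective_address(uint64_t base, uint64_t index,
                                  unsigned scale,   /* 0..3 => 2^scale */
                                  int64_t displacement)
{
    return base + (index << scale) + (uint64_t)displacement;
}
```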
Data element width field 2564 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 2570 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and the extended operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the extended operation); in one embodiment, an element of the destination is preserved where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the extended operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last); however, it is not necessary that the elements being modified be consecutive. Thus, the write mask field 2570 allows for partial vector operations, including loads, stores, arithmetic, logical, and so on. While embodiments are described in which the write mask field's 2570 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 2570 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's 2570 content to directly specify the masking to be performed.
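The merging/zeroing distinction above can be sketched in a few lines of C (an illustrative model only; the names and the 8-element width are assumptions made for this description):

```c
#include <stdint.h>

/* Merging- vs. zeroing-writemasking for an 8-element vector add. */
static void masked_add(int32_t dst[8], const int32_t a[8], const int32_t b[8],
                       uint8_t k,      /* one mask bit per element          */
                       int zeroing)    /* nonzero => zeroing-writemasking   */
{
    for (int i = 0; i < 8; i++) {
        if (k & (1u << i))
            dst[i] = a[i] + b[i];  /* element participates in the result    */
        else if (zeroing)
            dst[i] = 0;            /* zeroing: masked element cleared       */
        /* merging: masked element keeps its previous destination value     */
    }
}
```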
Immediate field 2572 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and is not present in instructions that do not use an immediate.

Class field 2568 - its content distinguishes between different classes of instructions. With reference to FIGS. 25A-25B, the contents of this field select between class A and class B instructions. In FIGS. 25A-25B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 2568A and class B 2568B for the class field 2568, respectively, in FIGS. 25A-25B).

Instruction templates of class A

In the case of the non-memory access 2505 instruction templates of class A, the alpha field 2552 is interpreted as an RS field 2552A whose content distinguishes which one of the different extended operation types is to be performed (e.g., round 2552A.1 and data transform 2552A.2 are respectively specified for the no memory access, round type operation 2510 and the no memory access, data transform type operation 2515 instruction templates), while the beta field 2554 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2505 instruction templates, the scale field 2560, the displacement field 2562A, and the displacement factor field 2562B are not present.

No memory access instruction templates - full round control type operation

In the no memory access, full round control type operation 2510 instruction template, the beta field 2554 is interpreted as a round control field 2554A whose content(s) provide static rounding. While in the described embodiments the round control field 2554A includes a suppress all floating-point exceptions (SAE) field 2556 and a round operation control field 2558, alternative embodiments may encode both of these concepts into the same field or have only one or the other of these concepts/fields (e.g., may have only the round operation control field 2558).

SAE field 2556 - its content distinguishes whether or not to disable exception event reporting; when the SAE field's 2556 content indicates that suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not invoke any floating-point exception handler.

Round operation control field 2558 - its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 2558 allows the rounding mode to be changed on a per-instruction basis. In an embodiment where the processor includes a control register for specifying rounding modes, the round operation control field's 2550 content overrides that register value.
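The four rounding operations named above can be modeled on a scalar as follows (an illustrative C sketch for this description; real hardware applies the selected mode inside the floating-point datapath rather than through library calls):

```c
#include <math.h>

/* Model of the round-up / round-down / round-toward-zero / round-to-nearest
 * choices selected by a round operation control field. */
static double apply_rounding(double x, int mode)
{
    switch (mode) {
    case 0:  return nearbyint(x); /* round-to-nearest (even, in the default
                                     floating-point environment)            */
    case 1:  return floor(x);     /* round down, toward negative infinity   */
    case 2:  return ceil(x);      /* round up, toward positive infinity     */
    default: return trunc(x);     /* round toward zero                      */
    }
}
```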
No memory access instruction templates - data transform type operation

In the no memory access, data transform type operation 2515 instruction template, the beta field 2554 is interpreted as a data transform field 2554B whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of the memory access 2520 instruction templates of class A, the alpha field 2552 is interpreted as an eviction hint field 2552B whose content distinguishes which one of the eviction hints is to be used (in FIG. 25A, temporal 2552B.1 and non-temporal 2552B.2 are respectively specified for the memory access, temporal 2525 instruction template and the memory access, non-temporal 2530 instruction template), while the beta field 2554 is interpreted as a data manipulation field 2554C whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 2520 instruction templates include the scale field 2560 and optionally the displacement field 2562A or the displacement factor field 2562B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data-element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory access instruction templates - temporal

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory access instruction templates - non-temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction templates of class B

In the case of the instruction templates of class B, the alpha field 2552 is interpreted as a write mask control (Z) field 2552C whose content distinguishes whether the write masking controlled by the write mask field 2570 should be merging or zeroing.

In the case of the non-memory access 2505 instruction templates of class B, part of the beta field 2554 is interpreted as an RL field 2557A whose content distinguishes which one of the different extended operation types is to be performed (e.g., round 2557A.1 and vector length (VSIZE) 2557A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 2512 instruction template and the no memory access, write mask control, VSIZE type operation 2517 instruction template), while the rest of the beta field 2554 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2505 instruction templates, the scale field 2560, the displacement field 2562A, and the displacement factor field 2562B are not present.

In the no memory access, write mask control, partial round control type operation 2512 instruction template, the rest of the beta field 2554 is interpreted as a round operation field 2559A, and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not invoke any floating-point exception handler).

Round operation control field 2559A - just as with the round operation control field 2558, its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 2559A allows the rounding mode to be changed on a per-instruction basis. In an embodiment where the processor includes a control register for specifying rounding modes, the round operation control field's 2550 content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 2517 instruction template, the rest of the beta field 2554 is interpreted as a vector length field 2559B whose content distinguishes which one of a number of data vector lengths is to be operated on (e.g., 128, 256, or 512 bytes).

In the case of a memory access 2520 instruction template of class B, part of the beta field 2554 is interpreted as a broadcast field 2557B whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 2554 is interpreted as the vector length field 2559B. The memory access 2520 instruction templates include the scale field 2560 and optionally the displacement field 2562A or the displacement factor field 2562B.

With regard to the generic vector friendly instruction format 2500, a full opcode field 2574 is shown including the format field 2540, the base operation field 2542, and the data element width field 2564.
While one embodiment is shown where the full opcode field 2574 includes all of these fields, in embodiments that do not support all of them the full opcode field 2574 includes less than all of these fields. The full opcode field 2574 provides the operation code (opcode).

The extended operation field 2550, the data element width field 2564, and the write mask field 2570 allow these features to be specified on a per-instruction basis in the generic vector friendly instruction format. The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high-performance general-purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general-purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general-purpose cores may be high-performance general-purpose cores with out-of-order execution and register renaming, intended for general-purpose computing, that support only class B. Another processor that does not have a separate graphics core may include one or more general-purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments. Programs written in a high-level language would be put (e.g., just-in-time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor that is currently executing the code.

Exemplary specific vector friendly instruction format

FIG. 26A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments. FIG. 26A shows a specific vector friendly instruction format 2600 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields.
The specific vector friendly instruction format 2600 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate field of the existing x86 instruction set with extensions. The fields from FIGS. 25A-25B into which the fields from FIG. 26A map are illustrated.

It should be understood that, although embodiments are described with reference to the specific vector friendly instruction format 2600 in the context of the generic vector friendly instruction format 2500 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 2600 except where stated otherwise. For example, the generic vector friendly instruction format 2500 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 2600 is shown as having fields of specific sizes. By way of specific example, while the data element width field 2564 is illustrated as a one-bit field in the specific vector friendly instruction format 2600, the invention is not so limited (that is, the generic vector friendly instruction format 2500 contemplates other sizes of the data element width field 2564).

The specific vector friendly instruction format 2600 includes the following fields listed below in the order illustrated in FIG. 26A.

EVEX prefix 2602 (bytes 0-3) - is encoded in a four-byte form.

Format field 2540 (EVEX byte 0, bits [7:0]) - the first byte (EVEX byte 0) is the format field 2540, and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format, in one embodiment).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 2605 (EVEX byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX byte 1, bit [7] - R), an EVEX.X bit field (EVEX byte 1, bit [6] - X), and an EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields and are encoded using 1's complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 2510 - this is the first part of the REX' field 2510 and is the EVEX.R' bit field (EVEX byte 1, bit [4] - R') that is used to encode either the upper 16 or the lower 16 registers of the extended 32 register set. In one embodiment, this bit, along with others as indicated below, is stored in bit-inverted format to distinguish it (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept the value of 11 in the MOD field within the MOD R/M field (described below); alternative embodiments do not store this bit and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.
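Because the R, X, B, and R' bits are stored inverted, decoding EVEX byte 1 amounts to a few shifts and complements. The following C sketch (illustrative only; the struct and function names are this description's, and the opcode map bits mmmm are described next) shows the extraction:

```c
#include <stdint.h>

/* Decode of EVEX byte 1 per the description above.  R, X, B, and R' are
 * stored in inverted (1's complement) form in the prefix. */
struct evex_byte1 {
    unsigned r, x, b, r_prime;  /* register-extension bits, un-inverted */
    unsigned mmmm;              /* opcode map (e.g., 0F, 0F 38, 0F 3A)  */
};

static struct evex_byte1 decode_evex_byte1(uint8_t p1)
{
    struct evex_byte1 f;
    f.r       = !((p1 >> 7) & 1u);  /* EVEX.R  (bit 7, inverted) */
    f.x       = !((p1 >> 6) & 1u);  /* EVEX.X  (bit 6, inverted) */
    f.b       = !((p1 >> 5) & 1u);  /* EVEX.B  (bit 5, inverted) */
    f.r_prime = !((p1 >> 4) & 1u);  /* EVEX.R' (bit 4, inverted) */
    f.mmmm    = p1 & 0x0Fu;         /* bits [3:0]                */
    return f;
}
```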
Opcode map field 2615 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3).

Data element width field 2564 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 2620 (EVEX byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1's complement) form, and is valid for instructions with two or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1's complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 2620 encodes the 4 low-order bits of the first source register specifier stored in inverted (1's complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 2568 class field (EVEX byte 2, bit [2] - U) - if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 2625 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime they are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so, without modification, the PLA can execute both the legacy format and the EVEX format of these legacy instructions). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 2552 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 2554 (EVEX byte 3, bits [6:4] - SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 2510 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX byte 3, bit [3] - V') that may be used to encode either the upper 16 or the lower 16 registers of the extended 32 register set. This bit is stored in bit-inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 2570 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers, as previously described. In one embodiment, the specific value EVEX.kkk = 000 has a special behavior implying that no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).
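Likewise, the 5-bit register specifier carried by EVEX.vvvv and EVEX.V' can be recovered as below (an illustrative sketch for this description; both fields are stored inverted, as described above):

```c
#include <stdint.h>

/* Recover V'VVVV, the 5-bit specifier selecting one of 32 registers, from
 * EVEX byte 2 (vvvv in bits [6:3]) and EVEX byte 3 (V' in bit [3]). */
static unsigned decode_vvvvv(uint8_t p2, uint8_t p3)
{
    unsigned vvvv    = (~(unsigned)p2 >> 3) & 0xFu; /* un-invert bits [6:3] */
    unsigned v_prime = (~(unsigned)p3 >> 3) & 0x1u; /* un-invert bit  [3]   */
    return (v_prime << 4) | vvvv;
}
```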
Real opcode field 2630 (byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M field 2640 (byte 5) includes MOD field 2642, Reg field 2644, and R/M field 2646. As previously described, the MOD field's 2642 content distinguishes between memory access and non-memory access operations. The role of the Reg field 2644 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of the R/M field 2646 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, index, base (SIB) byte (byte 6) - as previously described, the SIB's 2650 content is used for memory address generation. SIB.xxx 2654 and SIB.bbb 2656 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 2562A (bytes 7-10) - when the MOD field 2642 contains 10, bytes 7-10 are the displacement field 2562A, which works the same as the legacy 32-bit displacement (disp32) and at byte granularity.

Displacement factor field 2562B (byte 7) - when the MOD field 2642 contains 01, byte 7 is the displacement factor field 2562B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values: -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 2562B is a reinterpretation of disp8; when the displacement factor field 2562B is used, the actual displacement is determined by multiplying the content of the displacement factor field by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such compressed displacement assumes that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 2562B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 2562B is encoded the same way as the x86 instruction set 8-bit displacement (so there are no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). The immediate field 2572 operates as previously described.
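The disp8*N rule above is a one-line computation; the sketch below (illustrative C for this description) shows why a single signed byte then covers a range of ±128 × N bytes:

```c
#include <stdint.h>

/* disp8*N: the stored 8-bit displacement is sign extended and scaled by
 * the memory-operand size N before entering the address calculation. */
static int64_t compressed_displacement(int8_t disp8, unsigned n)
{
    return (int64_t)disp8 * (int64_t)n;  /* e.g., disp8 = 1, N = 64 => +64 */
}
```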
Full opcode field

FIG. 26B is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the full opcode field 2574 according to one embodiment. Specifically, the full opcode field 2574 includes the format field 2540, the base operation field 2542, and the data element width (W) field 2564. The base operation field 2542 includes the prefix encoding field 2625, the opcode map field 2615, and the real opcode field 2630.

Register index field

FIG. 26C is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the register index field 2544 according to one embodiment. Specifically, the register index field 2544 includes the REX 2605 field, the REX' 2610 field, the MODR/M.reg field 2644, the MODR/M.r/m field 2646, the VVVV field 2620, the xxx field 2654, and the bbb field 2656.

Extended operation field

FIG. 26D is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the extended operation field 2550 according to one embodiment. When the class (U) field 2568 contains 0, it signifies EVEX.U0 (class A 2568A); when it contains 1, it signifies EVEX.U1 (class B 2568B). When U = 0 and the MOD field 2642 contains 11 (signifying a no memory access operation), the alpha field 2552 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 2552A. When the rs field 2552A contains a 1 (round 2552A.1), the beta field 2554 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 2554A. The round control field 2554A includes a one-bit SAE field 2556 and a two-bit round operation field 2558. When the rs field 2552A contains a 0 (data transform 2552A.2), the beta field 2554 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 2554B. When U = 0 and the MOD field 2642 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 2552 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 2552B, and the beta field 2554 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 2554C.

When U = 1, the alpha field 2552 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 2552C. When U = 1 and the MOD field 2642 contains 11 (signifying a no memory access operation), part of the beta field 2554 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 2557A; when it contains a 1 (round 2557A.1), the rest of the beta field 2554 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 2559A, while when the RL field 2557A contains a 0 (VSIZE 2557A.2), the rest of the beta field 2554 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 2559B (EVEX byte 3, bits [6-5] - L1-0). When U = 1 and the MOD field 2642 contains 00, 01, or 10 (signifying a memory access operation), the beta field 2554 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 2559B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 2557B (EVEX byte 3, bit [4] - B).

Exemplary register architecture

FIG. 27 is a block diagram of a register architecture 2700 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 2710 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower-order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower-order 128 bits of the lower 16 zmm registers (the lower-order 128 bits of the ymm registers) are overlaid on registers xmm0-15.
The specific vector friendly instruction format 2600 operates on these overlaid register files, as follows:
- Instruction templates that do not include the vector length field 2559B, class A (FIG. 25A; U = 0), operations 2510, 2515, 2525, 2530: zmm registers (the vector length is 64 bytes).
- Instruction templates that do not include the vector length field 2559B, class B (FIG. 25B; U = 1), operation 2512: zmm registers (the vector length is 64 bytes).
- Instruction templates that do include the vector length field 2559B, class B (FIG. 25B; U = 1), operations 2517 and 2527: zmm, ymm, or xmm registers (the vector length is 64 bytes, 32 bytes, or 16 bytes), depending on the vector length field 2559B.

In other words, the vector length field 2559B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the preceding length, and instruction templates without the vector length field 2559B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 2600 operate on packed or scalar single/double-precision floating-point data and packed or scalar integer data. Scalar operations are operations performed on the lowest-order data element position in a zmm/ymm/xmm register; the higher-order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Write mask registers 2715 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternative embodiment, the write mask registers 2715 are 16 bits in size. As previously described, in one embodiment the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 2725 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating-point stack register file (x87 stack) 2745, on which is aliased the MMX packed integer flat register file 2750 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating-point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.
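The zmm/ymm/xmm aliasing described above can be pictured as a union over one storage location (a C sketch for this description; the architecture defines the aliasing, not this type):

```c
#include <stdint.h>

/* One vector register's storage: ymm is the low 256 bits of zmm, and xmm
 * is the low 128 bits of ymm, exactly as in the overlay described above. */
typedef union {
    uint8_t zmm[64];  /* full 512-bit register          */
    uint8_t ymm[32];  /* aliases the low-order 256 bits */
    uint8_t xmm[16];  /* aliases the low-order 128 bits */
} vector_register;
```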
Exemplary core architectures, processors, and computer architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general-purpose in-order core intended for general-purpose computing; 2) a high-performance general-purpose out-of-order core intended for general-purpose computing; and 3) a special-purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general-purpose in-order cores intended for general-purpose computing and/or one or more general-purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special-purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special-purpose logic or as special-purpose cores, such as integrated graphics and/or scientific (throughput) logic); and 4) a system on a chip that may include, on the same die, the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary core architectures

In-order and out-of-order core block diagram

FIG. 28A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments. FIG. 28B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments. The solid-lined boxes in FIGS. 28A-28B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed-lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 28A, a processor pipeline 2800 includes a fetch stage 2802, a length decode stage 2804, a decode stage 2806, an allocation stage 2808, a renaming stage 2810, a scheduling (also known as dispatch or issue) stage 2812, a register read/memory read stage 2814, an execute stage 2816, a write back/memory write stage 2818, an exception handling stage 2822, and a commit stage 2824.
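For reference while reading the stage-by-stage mapping later in this description, the pipeline 2800 stages can be listed as a simple enumeration (the names are this description's shorthand, not hardware signals):

```c
/* The eleven stages of pipeline 2800, in order. */
enum pipeline_2800_stage {
    FETCH_2802,
    LENGTH_DECODE_2804,
    DECODE_2806,
    ALLOCATION_2808,
    RENAMING_2810,
    SCHEDULING_2812,
    REGISTER_READ_MEMORY_READ_2814,
    EXECUTE_2816,
    WRITE_BACK_MEMORY_WRITE_2818,
    EXCEPTION_HANDLING_2822,
    COMMIT_2824
};
```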
FIG. 28B shows a processor core 2890 including a front-end unit 2830 coupled to an execution engine unit 2850, with both coupled to a memory unit 2870. The core 2890 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 2890 may be a special-purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general-purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.

The front-end unit 2830 includes a branch prediction unit 2832 coupled to an instruction cache unit 2834, which is coupled to an instruction translation lookaside buffer (TLB) 2836, which is coupled to an instruction fetch unit 2838, which is coupled to a decode unit 2840. The decode unit 2840 (or decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 2840 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, the core 2890 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 2840 or otherwise within the front-end unit 2830). The decode unit 2840 is coupled to a rename/allocator unit 2852 in the execution engine unit 2850.

The execution engine unit 2850 includes the rename/allocator unit 2852 coupled to a retirement unit 2854 and a set 2856 of one or more scheduler units. The scheduler unit(s) 2856 represents any number of different schedulers, including reservation stations, central instruction windows, etc. The scheduler unit(s) 2856 is coupled to the physical register file unit(s) 2858. Each of the physical register file unit(s) 2858 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, and status (e.g., an instruction pointer that is the address of the next instruction to be executed). In one embodiment, the physical register file unit(s) 2858 comprises a vector register unit, a write mask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file unit(s) 2858 is overlapped by the retirement unit 2854 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 2854 and the physical register file unit(s) 2858 are coupled to the execution cluster(s) 2860. The execution cluster(s) 2860 includes a set 2862 of one or more execution units and a set 2864 of one or more memory access units. The execution units 2862 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 2856, the physical register file unit(s) 2858, and the execution cluster(s) 2860 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit(s), and/or execution cluster; and, in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 2864).
It should also be understood that, where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 2864 is coupled to the memory unit 2870, which includes a data TLB unit 2872 coupled to a data cache unit 2874 coupled to a level 2 (L2) cache unit 2876. In one exemplary embodiment, the memory access units 2864 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 2872 in the memory unit 2870. The instruction cache unit 2834 is further coupled to the level 2 (L2) cache unit 2876 in the memory unit 2870. The L2 cache unit 2876 is coupled to one or more other levels of cache and eventually to main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 2800 as follows: 1) the instruction fetch 2838 performs the fetch and length decode stages 2802 and 2804; 2) the decode unit 2840 performs the decode stage 2806; 3) the rename/allocator unit 2852 performs the allocation stage 2808 and the renaming stage 2810; 4) the scheduler unit(s) 2856 performs the schedule stage 2812; 5) the physical register file unit(s) 2858 and the memory unit 2870 perform the register read/memory read stage 2814, and the execution cluster 2860 performs the execute stage 2816; 6) the memory unit 2870 and the physical register file unit(s) 2858 perform the write back/memory write stage 2818; 7) various units may be involved in the exception handling stage 2822; and 8) the retirement unit 2854 and the physical register file unit(s) 2858 perform the commit stage 2824.

The core 2890 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; and the ARM instruction set of ARM Holdings of Sunnyvale, California (with optional additional extensions such as NEON)), including the instruction(s) described herein. In one embodiment, the core 2890 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding with simultaneous multithreading thereafter, such as in hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 2834/2874 and a shared L2 cache unit 2876, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache or multiple levels of internal cache.
In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific exemplary in-order core architecture

FIGS. 29A-29B illustrate a block diagram of a more specific exemplary in-order core architecture, in which the core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

FIG. 29A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 2902 and its local subset 2904 of the level 2 (L2) cache, according to embodiments. In one embodiment, an instruction decoder 2900 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 2906 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 2908 and a vector unit 2910 use separate register sets (a scalar register set 2912 and a vector register set 2914, respectively), and data transferred between them is written to memory and then read back in from the level 1 (L1) cache 2906, alternative embodiments may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset 2904 of the L2 cache is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 2904 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 2904 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 2904 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bidirectional, to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring datapath is 1012 bits wide per direction.

FIG. 29B is an expanded view of part of the processor core in FIG. 29A according to embodiments. FIG. 29B includes an L1 data cache 2906A (part of the L1 cache 2906), as well as more detail regarding the vector unit 2910 and the vector registers 2914. Specifically, the vector unit 2910 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 2928) that executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 2920, numeric conversion with numeric convert units 2922A-B, and replication of the memory input with replication unit 2924. The write mask registers 2926 allow predicating the resulting vector writes.

FIG. 30 is a block diagram of a processor 3000 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to embodiments.
The solid-lined boxes in FIG. 30 illustrate a processor 3000 with a single core 3002A, a system agent 3010, and a set 3016 of one or more bus controller units, while the optional addition of the dashed-lined boxes illustrates an alternative processor 3000 with multiple cores 3002A-N, a set 3014 of one or more integrated memory controller units in the system agent unit 3010, and special-purpose logic 3008.

Thus, different implementations of the processor 3000 may include: 1) a CPU, with the special-purpose logic 3008 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 3002A-N being one or more general-purpose cores (e.g., general-purpose in-order cores, general-purpose out-of-order cores, or a combination of the two); 2) a coprocessor, with the cores 3002A-N being a large number of special-purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor, with the cores 3002A-N being a large number of general-purpose in-order cores. Thus, the processor 3000 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general-purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 3000 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set 3006 of one or more shared cache units, and external memory (not shown) coupled to the set 3014 of integrated memory controller units. The set 3006 of shared cache units may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 3012 interconnects the special-purpose logic 3008 (integrated graphics logic is an example of, and is also referred to herein as, special-purpose logic), the set 3006 of shared cache units, and the system agent unit 3010/integrated memory controller unit(s) 3014, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 3006 and the cores 3002A-N.

In some embodiments, one or more of the cores 3002A-N are capable of multithreading. The system agent 3010 includes those components coordinating and operating the cores 3002A-N. The system agent unit 3010 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 3002A-N and the special-purpose logic 3008. The display unit is for driving one or more externally connected displays.

The cores 3002A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 3002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary computer architectures

FIGS. 31-34 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 31, shown is a block diagram of a system 3100 in accordance with an embodiment of the present invention. The system 3100 may include one or more processors 3110, 3115, which are coupled to a controller hub 3120. In one embodiment, the controller hub 3120 includes a graphics memory controller hub (GMCH) 3190 and an input/output hub (IOH) 3150 (which may be on separate chips); the GMCH 3190 includes memory and graphics controllers, to which are coupled memory 3140 and a coprocessor 3145; and the IOH 3150 couples input/output (I/O) devices 3160 to the GMCH 3190. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 3140 and the coprocessor 3145 are coupled directly to the processor 3110, and the controller hub 3120 is in a single chip with the IOH 3150.

The optional nature of additional processors 3115 is denoted in FIG. 31 with broken lines. Each processor 3110, 3115 may include one or more of the processing cores described herein and may be some version of the processor 3000.

The memory 3140 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 3120 communicates with the processor(s) 3110, 3115 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 3195.

In one embodiment, the coprocessor 3145 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 3120 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 3110, 3115 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, and power consumption characteristics.

In one embodiment, the processor 3110 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 3110 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 3145. Accordingly, the processor 3110 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 3145. The coprocessor(s) 3145 accepts and executes the received coprocessor instructions.

Referring now to FIG. 32, shown is a block diagram of a first more specific exemplary system 3200 in accordance with an embodiment of the present invention.
32, the multiprocessor system 3200 is a point-to-point interconnect system and includes a first processor 3270 and a second processor 3280 coupled via a point-to-point interconnect 3250. Each of the processors 3270 and 3280 may be some version of the processor 3000. In one embodiment, the processors 3270 and 3280 are respectively the processors 3110 and 3115, while the coprocessor 3238 is the coprocessor 3145. In another embodiment, the processors 3270 and 3280 are respectively the processor 3110 and the coprocessor 3145.

The processors 3270 and 3280 are shown including integrated memory controller (IMC) units 3272 and 3282, respectively. The processor 3270 also includes, as part of its bus controller unit, point-to-point (P-P) interfaces 3276 and 3278; similarly, the second processor 3280 includes P-P interfaces 3286 and 3288. The processors 3270, 3280 may exchange information via a P-P interface 3250 using P-P interface circuits 3278, 3288. As shown in FIG. 32, the IMCs 3272 and 3282 couple the processors to respective memories, namely a memory 3232 and a memory 3234, which may be portions of main memory locally attached to the respective processors.

The processors 3270, 3280 may each exchange information with a chipset 3290 via individual P-P interfaces 3252, 3254 using point-to-point interface circuits 3276, 3294, 3286, 3298. The chipset 3290 may optionally exchange information with the coprocessor 3238 via a high-performance interface 3292. In one embodiment, the coprocessor 3238 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 3290 may be coupled to a first bus 3216 via an interface 3296. In one embodiment, the first bus 3216 may be a Peripheral Component Interconnect (PCI) bus or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 32, various I/O devices 3214 may be coupled to the first bus 3216, along with a bus bridge 3218 that couples the first bus 3216 to a second bus 3220. In one embodiment, one or more additional processors 3215, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, for example, graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to the first bus 3216. In one embodiment, the second bus 3220 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 3220 including, for example, a keyboard and/or mouse 3222, communication devices 3227, and a storage unit 3228 such as a disk drive or other mass storage device that may include instructions/code and data 3230. Further, an audio I/O 3224 may be coupled to the second bus 3220. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 32, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG.
33, shown is a block diagram of a second more specific exemplary system 3300 in accordance with an embodiment of the present invention. Like elements in FIGS. 32 and 33 bear like reference numerals, and certain aspects of FIG. 32 have been omitted from FIG. 33 in order to avoid obscuring other aspects of FIG. 33.

FIG. 33 illustrates that the processors 3270, 3280 may include integrated memory and I/O control logic ("CL") 3272 and 3282, respectively. Thus, the CL 3272, 3282 include integrated memory controller units and include I/O control logic. FIG. 33 illustrates that not only are the memories 3232, 3234 coupled to the CL 3272, 3282, but also that I/O devices 3314 are coupled to the control logic 3272, 3282. Legacy I/O devices 3315 are coupled to the chipset 3290.

Referring now to FIG. 34, shown is a block diagram of an SoC 3400 in accordance with an embodiment of the present invention. Similar elements in FIG. 30 bear like reference numerals. Also, dashed-lined boxes are optional features on more advanced SoCs. In FIG. 34, the interconnect unit(s) 3402 is coupled to: an application processor 3410 that includes a set of one or more cores 3002A-N, which include cache units 3004A-N, and shared cache unit(s) 3006; a system agent unit 3010; bus controller unit(s) 3016; integrated memory controller unit(s) 3014; a set 3420 of one or more coprocessors that may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 3430; a direct memory access (DMA) unit 3432; and a display unit 3440 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 3420 includes a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 3230 illustrated in FIG. 32, may be applied to input instructions to perform the functions described herein and to generate output information. The output information may be applied to one or more output devices in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language.
In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium that represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores", may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk, including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction into one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 35 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set, according to an embodiment. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 35 shows that a program in a high-level language 3502 may be compiled using an x86 compiler 3504 to generate x86 binary code 3506 that may be natively executed by a processor 3516 with at least one x86 instruction set core.
The processor 3516 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing 1) a substantial portion of the instruction set of the Intel x86 instruction set core, or 2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 3504 represents a compiler operable to generate x86 binary code 3506 (for example, object code) that can, with or without additional linkage processing, be executed on the processor 3516 with at least one x86 instruction set core. Similarly, FIG. 35 shows that the program in the high-level language 3502 may be compiled using an alternative instruction set compiler 3508 to generate alternative instruction set binary code 3510 that may be natively executed by a processor 3514 without at least one x86 instruction set core (for example, a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, California, and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, California). The instruction converter 3512 is used to convert the x86 binary code 3506 into code that may be natively executed by the processor 3514 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 3510, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 3512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 3506.

Further examples

Example 1 provides an exemplary processor, including: a decode circuit for decoding an instruction that specifies locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and that specifies an opcode indicating that an execution circuit is to launch K instances of a pipeline over K cycles for each floating-point (FP) element (M, N) of the destination matrix.
Each pipeline instance includes: in a first, multiply stage, generating a product of FP element (M, K) of the first source matrix and element (K, N) of the second source matrix; concurrently, in an exponent-difference stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; in a second, accumulate-bypass stage, accumulating the product with the previous FP value, storing the accumulated sum to element (M, N), and, if rounding is determined to be required, performing the increment in the next pipeline instance; wherein, before the accumulation, the product is aligned by shifting its mantissa by the exponent difference; and, concurrently in the accumulate-bypass stage, bypassing the accumulated sum to a subsequent instance of the pipeline; and an execution circuit for executing the decoded instruction as per the opcode.

Example 2 includes the substance of the exemplary processor of Example 1, wherein the execution circuit is to complete execution of the K pipeline instances in K plus one cycles.

Example 3 includes the substance of the exemplary processor of Example 1, wherein, during the multiply stage, the execution circuit is to round the generated product when necessary.

Example 4 includes the substance of the exemplary processor of Example 1, wherein, during the accumulate-bypass stage, the execution circuit is to saturate the accumulated sum when necessary.

Example 5 includes the substance of the exemplary processor of Example 1, wherein M is one of 1, 2, 3, 4, 8, and 16, N is one of 1, 2, 3, 4, 8, and 16, and K is one of 1, 2, 3, 4, 8, and 16.

Example 6 includes the substance of the exemplary processor of Example 1, wherein the first source matrix, the second source matrix, and the destination matrix are each located in one of: a set of vector registers of a register file, a set of tile registers, and a plurality of memory locations representing a matrix.

Example 7 includes the substance of the exemplary processor of Example 1, wherein the execution circuit saves state after the K pipeline instances have executed for each element (M, N) of the destination matrix and, in the case of a fault, uses the saved state to continue execution after recovering from the fault.

Example 8 includes the substance of the exemplary processor of Example 1, wherein the exponent-difference and accumulate-bypass pipeline stages of the first executed instance of the pipeline receive the previous FP value of element (M, N) of the destination matrix from the destination matrix location specified by the instruction, and the exponent-difference and accumulate-bypass pipeline stages of subsequently executed instances of the pipeline receive the previous FP value of element (M, N) of the destination matrix as a bypass from the accumulate-bypass stage of the immediately preceding instance of the pipeline.

Example 9 includes the substance of the exemplary processor of Example 1, wherein the instruction further specifies a multi-bit write mask, each bit of which is to either mask or allow writing to the corresponding element (M, N) of the destination matrix.

Example 10 includes the substance of the exemplary processor of Example 9, wherein each of the masked elements is to be either zeroed or
merged.

Example 11 provides an exemplary method to be performed by a processor, the method including: decoding, using a decode circuit, an instruction that specifies locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and that specifies an opcode instructing an execution circuit to launch K instances of a pipeline over K cycles for each floating-point (FP) element (M, N) of the destination matrix; and executing, using the execution circuit, the decoded instruction as per the opcode; wherein each pipeline instance includes: in a first, multiply stage, generating a product of FP element (M, K) of the first source matrix and the corresponding FP element (K, N) of the second source matrix and, concurrently, in an exponent-difference stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; and, in a second, accumulate-bypass stage, accumulating the product with the previous FP value and storing the accumulated sum to element (M, N) of the destination matrix, wherein, before the accumulation, the product is aligned by shifting its mantissa by the exponent difference, and, concurrently in the accumulate-bypass stage, bypassing the accumulated sum to a subsequent instance of the pipeline.

Example 12 includes the substance of the exemplary method of Example 11, wherein the execution circuit is to complete execution of the K pipeline instances in K plus one cycles.

Example 13 includes the substance of the exemplary method of Example 11, wherein, during the multiply stage, the execution circuit is to round the generated product when necessary.

Example 14 includes the substance of the exemplary method of Example 11, wherein, during the accumulate-bypass stage, the execution circuit is to saturate the accumulated sum when necessary.

Example 15 includes the substance of the exemplary method of Example 11, wherein M is one of 1, 2, 3, 4, 8, and 16, N is one of 1, 2, 3, 4, 8, and 16, and K is one of 1, 2, 3, 4, 8, and 16.

Example 16 includes the substance of the exemplary method of Example 11, wherein the first source matrix, the second source matrix, and the destination matrix are each located in one of: a set of vector registers of a register file, a set of tile registers, and a plurality of memory locations representing a matrix.

Example 17 includes the substance of the exemplary method of Example 11, wherein the execution circuit saves state after the K pipeline instances have executed for each element (M, N) of the destination matrix and, in the case of a fault, uses the saved state to continue execution after recovering from the fault.

Example 18 includes the substance of the exemplary method of Example 11, wherein the exponent-difference and accumulate-bypass pipeline stages of the first executed instance of the pipeline receive the previous FP value of element (M, N) of the destination matrix from the destination matrix location specified by the instruction, and the exponent-difference and accumulate-bypass pipeline stages of subsequently executed instances of the pipeline receive the previous FP value of element (M, N) of the destination matrix as a bypass from the accumulate-bypass stage of the immediately preceding instance of the pipeline.

Example 19
includes the substance of the exemplary method of Example 11, wherein the instruction further specifies a multi-bit write mask, each bit of which is to either mask or allow writing to the corresponding element (M, N) of the destination matrix.

Example 20 includes the substance of the exemplary method of Example 19, wherein each of the masked elements is to be either zeroed or merged.
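To ground the arithmetic of Examples 1-20, here is a minimal Python sketch, under stated assumptions, of what one pipeline instance computes: a multiply stage, a concurrent exponent-difference stage against the destination element's previous value, and accumulation after aligning the product's mantissa by the exponent difference. The helper names (fp_split, fma_pipeline_instance) and the 10-bit mantissa width are hypothetical illustrations; real hardware uses fixed-width formats and the rounding and saturation behavior of Examples 3 and 4, which this sketch only approximates.

import math

MANT_BITS = 10  # hypothetical mantissa width; real FP formats differ

def fp_split(x):
    """Decompose x into (sign, integer mantissa, exponent) with MANT_BITS bits."""
    if x == 0.0:
        return 1, 0, 0
    m, e = math.frexp(abs(x))              # x = m * 2**e, with 0.5 <= m < 1
    mant = int(round(m * (1 << MANT_BITS)))
    return (1 if x >= 0 else -1), mant, e - MANT_BITS

def fma_pipeline_instance(a, b, prev):
    """One pipeline instance: multiply stage, exponent-difference stage,
    then accumulation after aligning by the exponent difference."""
    # Multiply stage: product of the source elements.
    sp, mp, ep = fp_split(a * b)
    # Exponent-difference stage (concurrent in hardware): compare against
    # the previous FP value of the destination element.
    sd, md, ed = fp_split(prev)
    diff = ep - ed
    # Align by the exponent difference and accumulate at a common exponent.
    # (Hardware shifts the smaller mantissa right with rounding; shifting
    # the larger one left is exact with Python integers.)
    if diff >= 0:
        acc, e = sp * (mp << diff) + sd * md, ed
    else:
        acc, e = sp * mp + sd * (md << -diff), ep
    return math.ldexp(acc, e)   # accumulated sum, bypassed to the next instance

# Usage: accumulate K products into one destination element, chaining the
# bypassed sum exactly as Example 8 describes for subsequent instances.
A = [1.5, -2.25, 0.75]          # row M of the first source matrix (K = 3)
B = [4.0, 1.0, 8.0]             # column N of the second source matrix
dest = 0.0
for k in range(3):
    dest = fma_pipeline_instance(A[k], B[k], dest)
print(dest)                      # approximately sum(a*b for a, b in zip(A, B))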
A device includes a match element (160) that includes a first data input configured to receive a first result, wherein the first result is of an analysis performed on at least a portion of a data stream by an element of a state machine. The match element (160) also includes a second data input configured to receive a second result, wherein the second result is of an analysis performed on at least a portion of the data stream by another element of the state machine. The match element (160) further includes an output configured to selectively provide the first result or the second result.
CLAIMS

What is claimed is:

1. A device, comprising: a match element comprising: a first data input configured to receive a first result, wherein the first result is of an analysis performed on at least a portion of a data stream by an element of a state machine; a second data input configured to receive a second result, wherein the second result is of an analysis performed on at least a portion of the data stream by another element of the state machine; and an output configured to selectively provide the first result or the second result.

2. The device of claim 1, wherein the match element comprises: a first control input configured to receive a first control signal; and a second control input configured to receive a second control signal.

3. The device of claim 2, wherein the output being configured to selectively provide the first result or the second result comprises the output being configured to selectively provide the first result or the second result based on the first and second control signals.

4. The device of claim 3, wherein the match element comprises a third control input configured to receive an output enable signal.

5. The device of claim 4, wherein the output being configured to provide the first result or the second result based on the first and second control signals comprises the match element being configured to provide the first result or the second result based on the first and second control signals and the output enable signal.

6. The device of claim 4, wherein the match element comprises a 2-to-1 multiplexer coupled to the first data input, the second data input, the first control input, the second control input, the third control input, and the output.

7. The device of claim 1, wherein the output being configured to selectively provide the first result or the second result comprises the output being configured to selectively provide no output, the first result, or the second result.

8. The device of claim 2, wherein the match element comprises: a third data input configured to receive a third result, wherein the third result is of an analysis performed on at least a portion of the data stream by a third element of the state machine; a fourth data input configured to receive a fourth result, wherein the fourth result comprises a combination of results detected by a special purpose element of the state machine; and a second output configured to selectively provide the third result or the fourth result.

9. The device of claim 8, wherein the match element comprises: a third control input configured to receive a third control signal; and a fourth control input configured to receive a fourth control signal.

10. The device of claim 9, wherein the second output being configured to selectively provide the third result or the fourth result comprises the second output being configured to selectively provide the third result or the fourth result based on the third and fourth control signals.

11. The device of claim 10, wherein the match element comprises a fifth control input configured to receive an output enable signal.

12. The device of claim 11, wherein the match element being configured to provide the third result or the fourth result based on the third and fourth control signals comprises the match element being configured to provide the third result or the fourth result based on the third and fourth control signals and the output enable signal.

13.
The device of claim 11, wherein the match element comprises a 2-to-1 multiplexer coupled to the third data input, the fourth data input, the third control input, the fourth control input, the fifth control input, and the second output.

14. A device, comprising: a state machine comprising: a plurality of blocks each comprising: a plurality of rows, each comprising: a first element configured to provide a first result of an analysis performed on at least a portion of a data stream; a second element configured to provide a second result of an analysis performed on at least a portion of the data stream; and a match element comprising: a first input configured to receive the first result; a second input configured to receive the second result; and an output configured to selectively provide the first result or the second result.

15. The device of claim 14, wherein the output being configured to selectively provide the first result or the second result comprises the output being configured to selectively provide no output, the first result, or the second result.

16. The device of claim 14, wherein each of the plurality of rows comprises a plurality of row routing lines configured to be selectively coupled to the first element and the second element.

17. The device of claim 16, wherein each of the plurality of rows comprises a plurality of junction points configured to selectively couple selected row routing lines to each of the first element and the second element.

18. The device of claim 17, wherein at least one of the plurality of junction points is configured to selectively couple at least one of the row routing lines to the match element.

19. The device of claim 14, wherein each of the plurality of rows comprises a special purpose element configured to provide a special purpose result, wherein the special purpose result is based on a combination of results from analysis performed on at least a portion of the data stream.

20. The device of claim 19, wherein each of the plurality of rows comprises a third element configured to provide a third result of an analysis performed on at least a portion of the data stream.

21. The device of claim 20, wherein the match element further comprises: a third input configured to receive the third result; a fourth input configured to receive the special purpose result; and a second output configured to selectively provide the third result or the special purpose result.

22. A method, comprising: receiving at a first data input of a match element a first result of an analysis performed on at least a portion of a data stream by a first element of a state machine; receiving at a second data input of the match element a second result of an analysis performed on at least a portion of the data stream by a second element of the state machine; and selectively outputting from the match element the first result or the second result.

23. The method of claim 22, comprising: receiving a first control signal at the match element; and receiving a second control signal at the match element.

24. The method of claim 23, wherein selectively outputting from the match element the first result or the second result comprises selectively outputting from the match element no output, the first result, or the second result based on the first and second control signals.

25.
The method of claim 24, further comprising receiving an output enable signal at the match element, wherein selectively outputting from the match element no output, the first result, or the second result based on the first and second control signals comprises selectively outputting from the match element no output, the first result, or the second result based on the first and second control signals and the output enable signal.

26. The method of claim 25, comprising: receiving at the match element a third result of an analysis performed on at least a portion of the data stream by a third element of the state machine; receiving at the match element a fourth result, wherein the fourth result comprises a combination of results detected by a special purpose element of the state machine; and selectively outputting from the match element the third result or the fourth result.

27. The method of claim 26, comprising: receiving a third control signal at the match element; and receiving a fourth control signal at the match element; wherein selectively outputting from the match element the third result or the fourth result comprises selectively outputting from the match element the third result or the fourth result based on the third and fourth control signals.

28. The method of claim 27, further comprising receiving an output enable signal at the match element, wherein selectively outputting from the match element the third result or the fourth result based on the third and fourth control signals comprises selectively outputting from the match element the third result or the fourth result based on the third and fourth control signals and the output enable signal.

29. A device, comprising: a match element comprising multiple data inputs configured to receive indications that results of an analysis of a data stream have been detected in more than one state machine element.
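Before turning to the detailed description, a minimal behavioral sketch may help make claims 1-7 concrete. The Python function below models one half of the match element as a 2-to-1 multiplexer with an output enable: two data inputs carrying results from state machine elements, two control signals selecting between them, and the "no output" case of claim 7. The control encoding here is an assumption chosen for illustration; the actual select behavior is defined by the truth table of FIG. 12, described later.

def match_output(first_result: bool, second_result: bool,
                 select_first: bool, select_second: bool,
                 output_enable: bool) -> bool:
    """Hypothetical model of the match element's selective output (claims 1-7).

    Control encoding is assumed: exactly one asserted select signal routes
    the corresponding result; anything else, or a deasserted output enable,
    yields no output (output driven low)."""
    if not output_enable:
        return False                      # no output (claim 7)
    if select_first and not select_second:
        return first_result               # provide the first result
    if select_second and not select_first:
        return second_result              # provide the second result
    return False                          # unselected: no output

# Usage: route the second element's result through the match element.
print(match_output(False, True, select_first=False,
                   select_second=True, output_enable=True))   # True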
METHODS AND SYSTEMS FOR DATA ANALYSIS IN A STATE MACHINE

BACKGROUND

Field of Invention

[0001] Embodiments of the invention relate generally to electronic devices and, more specifically, in certain embodiments, to electronic devices with parallel finite state machines for pattern-recognition.

Description of Related Art

[0002] Complex pattern recognition can be inefficient to perform on a conventional von Neumann based computer. A biological brain, in particular a human brain, however, is adept at performing pattern recognition. Current research suggests that a human brain performs pattern recognition using a series of hierarchically organized neuron layers in the neocortex. Neurons in the lower layers of the hierarchy analyze "raw signals" from, for example, sensory organs, while neurons in higher layers analyze signal outputs from neurons in the lower levels. This hierarchical system in the neocortex, possibly in combination with other areas of the brain, accomplishes the complex pattern recognition that enables humans to perform high level functions such as spatial reasoning, conscious thought, and complex language.

[0003] In the field of computing, pattern recognition tasks are increasingly challenging. Ever larger volumes of data are transmitted between computers, and the number of patterns that users wish to identify is increasing. For example, spam or malware are often detected by searching for patterns in a data stream, e.g., particular phrases or pieces of code. The number of patterns increases with the variety of spam and malware, as new patterns may be implemented to search for new variants. Searching a data stream for each of these patterns can form a computing bottleneck. Often, as the data stream is received, it is searched for each pattern, one at a time. The delay before the system is ready to search the next portion of the data stream increases with the number of patterns. Thus, pattern recognition may slow the receipt of data.

[0004] Hardware has been designed to search a data stream for patterns, but this hardware often is unable to process adequate amounts of data in a given amount of time. Some devices configured to search a data stream do so by distributing the data stream among a plurality of circuits. The circuits each determine whether the data stream matches a portion of a pattern. Often, a large number of circuits operate in parallel, each searching the data stream at generally the same time. However, there has not been a system that effectively allows for performing pattern recognition in a manner more comparable to that of a biological brain. Development of such a system is desirable.

BRIEF DESCRIPTION OF DRAWINGS

[0005] FIG. 1 illustrates an example of a system having a state machine engine, according to various embodiments of the invention.

[0006] FIG. 2 illustrates an example of an FSM lattice of the state machine engine of FIG. 1, according to various embodiments of the invention.

[0007] FIG. 3 illustrates an example of a block of the FSM lattice of FIG. 2, according to various embodiments of the invention.

[0008] FIG. 4 illustrates an example of a row of the block of FIG. 3, according to various embodiments of the invention.

[0009] FIG. 5 illustrates an example of a Group of Two of the row of FIG. 4, according to various embodiments of the invention.

[0010] FIG. 6 illustrates an example of a finite state machine graph, according to various embodiments of the invention.

[0011] FIG.
7 illustrates an example of a two-level hierarchy implemented with FSM lattices, according to various embodiments of the invention.

[0012] FIG. 8 illustrates an example of a method for a compiler to convert source code into a binary file for programming of the FSM lattice of FIG. 2, according to various embodiments of the invention.

[0013] FIG. 9 illustrates a state machine engine, according to various embodiments of the invention.

[0014] FIG. 10 illustrates a second example of a row of the block of FIG. 3, according to various embodiments of the invention.

[0015] FIG. 11 illustrates an example of the match element of FIG. 10, according to various embodiments of the invention.

[0016] FIG. 12 illustrates a truth table corresponding to a multiplexer of FIG. 11, according to various embodiments of the invention.

DETAILED DESCRIPTION

[0017] Turning now to the figures, FIG. 1 illustrates an embodiment of a processor-based system, generally designated by reference numeral 10. The system 10 may be any of a variety of types such as a desktop computer, laptop computer, pager, cellular phone, personal organizer, portable audio player, control circuit, camera, etc. The system 10 may also be a network node, such as a router, a server, or a client (e.g., one of the previously-described types of computers). The system 10 may be some other sort of electronic device, such as a copier, a scanner, a printer, a game console, a television, a set-top video distribution or recording system, a cable box, a personal digital media player, a factory automation system, an automotive computer system, or a medical device. (The terms used to describe these various examples of systems, like many of the other terms used herein, may share some referents and, as such, should not be construed narrowly by virtue of the other items listed.)

[0018] In a typical processor-based device, such as the system 10, a processor 12, such as a microprocessor, controls the processing of system functions and requests in the system 10. Further, the processor 12 may comprise a plurality of processors that share system control. The processor 12 may be coupled directly or indirectly to each of the elements in the system 10, such that the processor 12 controls the system 10 by executing instructions that may be stored within the system 10 or external to the system 10.

[0019] In accordance with the embodiments described herein, the system 10 includes a state machine engine 14, which may operate under control of the processor 12. The state machine engine 14 may employ any one of a number of state machine architectures, including, but not limited to, Mealy architectures, Moore architectures, Finite State Machines (FSMs), Deterministic FSMs (DFSMs), Bit-Parallel State Machines (BPSMs), etc. Though a variety of architectures may be used, for discussion purposes, the application refers to FSMs. However, those skilled in the art will appreciate that the described techniques may be employed using any one of a variety of state machine architectures.

[0020] As discussed further below, the state machine engine 14 may include a number of (e.g., one or more) finite state machine (FSM) lattices. Each FSM lattice may include multiple FSMs that each receive and analyze the same data in parallel. Further, the FSM lattices may be arranged in groups (e.g., clusters), such that clusters of FSM lattices may analyze the same input data in parallel.
Further, clusters of FSM lattices of the state machine engine 14 may be arranged in a hierarchical structure wherein outputs from state machine lattices on a lower level of the hierarchical structure may be used as inputs to state machine lattices on a higher level. By cascading clusters of parallel FSM lattices of the state machine engine 14 in series through the hierarchical structure, increasingly complex patterns may be analyzed (e.g., evaluated, searched, etc.).

[0021] Further, based on the hierarchical parallel configuration of the state machine engine 14, the state machine engine 14 can be employed for pattern recognition in systems that utilize high processing speeds. For instance, embodiments described herein may be incorporated in systems with processing speeds of 1 GByte/sec. Accordingly, utilizing the state machine engine 14, data from high speed memory devices or other external devices may be rapidly analyzed for various patterns. The state machine engine 14 may analyze a data stream according to several criteria, and their respective search terms, at about the same time, e.g., during a single device cycle. Each of the FSM lattices within a cluster of FSMs on a level of the state machine engine 14 may receive the same search term from the data stream at about the same time, and each of the parallel FSM lattices may determine whether the term advances the state machine engine 14 to the next state in the processing criterion. The state machine engine 14 may analyze terms according to a relatively large number of criteria, e.g., more than 100, more than 110, or more than 10,000. Because the FSM lattices operate in parallel, they may apply the criteria to a data stream having a relatively high bandwidth, e.g., a data stream of greater than or generally equal to 1 GByte/sec, without slowing the data stream.

[0022] In one embodiment, the state machine engine 14 may be configured to recognize (e.g., detect) a great number of patterns in a data stream. For instance, the state machine engine 14 may be utilized to detect a pattern in one or more of a variety of types of data streams that a user or other entity might wish to analyze. For example, the state machine engine 14 may be configured to analyze a stream of data received over a network, such as packets received over the Internet or voice or data received over a cellular network. In one example, the state machine engine 14 may be configured to analyze a data stream for spam or malware. The data stream may be received as a serial data stream, in which the data is received in an order that has meaning, such as in a temporally, lexically, or semantically significant order. Alternatively, the data stream may be received in parallel or out of order and, then, converted into a serial data stream, e.g., by reordering packets received over the Internet. In some embodiments, the data stream may present terms serially, but the bits expressing each of the terms may be received in parallel. The data stream may be received from a source external to the system 10, or may be formed by interrogating a memory device, such as the memory 16, and forming the data stream from data stored in the memory 16.
In other examples, the state machine engine 14 may be configured to recognize a sequence of characters that spell a certain word, a sequence of genetic base pairs that specify a gene, a sequence of bits in a picture or video file that form a portion of an image, a sequence of bits in an executable file that form a part of a program, or a sequence of bits in an audio file that form a part of a song or a spoken phrase. The stream of data to be analyzed may include multiple bits of data in a binary format or other formats, e.g., base ten, ASCII, etc. The stream may encode the data with a single digit or multiple digits, e.g., several binary digits.

[0023] As will be appreciated, the system 10 may include memory 16. The memory 16 may include volatile memory, such as Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Synchronous DRAM (SDRAM), Double Data Rate DRAM (DDR SDRAM), DDR2 SDRAM, DDR3 SDRAM, etc. The memory 16 may also include nonvolatile memory, such as read-only memory (ROM), PC-RAM, silicon-oxide-nitride-oxide-silicon (SONOS) memory, metal-oxide-nitride-oxide-silicon (MONOS) memory, polysilicon floating gate based memory, and/or other types of flash memory of various architectures (e.g., NAND memory, NOR memory, etc.) to be used in conjunction with the volatile memory. The memory 16 may include one or more memory devices, such as DRAM devices, that may provide data to be analyzed by the state machine engine 14. Such devices may be referred to as or include solid state drives (SSDs), MultiMediaCards (MMCs), Secure Digital (SD) cards, CompactFlash (CF) cards, or any other suitable device. Further, it should be appreciated that such devices may couple to the system 10 via any suitable interface, such as Universal Serial Bus (USB), Peripheral Component Interconnect (PCI), PCI Express (PCI-E), Small Computer System Interface (SCSI), IEEE 1394 (Firewire), or any other suitable interface. To facilitate operation of the memory 16, such as the flash memory devices, the system 10 may include a memory controller (not illustrated). As will be appreciated, the memory controller may be an independent device or it may be integral with the processor 12. Additionally, the system 10 may include an external storage 18, such as a magnetic storage device. The external storage may also provide input data to the state machine engine 14.

[0024] The system 10 may include a number of additional elements. For instance, a compiler 20 may be used to program the state machine engine 14, as described in more detail with regard to FIG. 8. An input device 22 may also be coupled to the processor 12 to allow a user to input data into the system 10. For instance, an input device 22 may be used to input data into the memory 16 for later analysis by the state machine engine 14. The input device 22 may include buttons, switching elements, a keyboard, a light pen, a stylus, a mouse, and/or a voice recognition system, for instance. An output device 24, such as a display, may also be coupled to the processor 12. The display 24 may include an LCD, a CRT, LEDs, and/or an audio display, for example. The system 10 may also include a network interface device 26, such as a Network Interface Card (NIC), for interfacing with a network, such as the Internet. As will be appreciated, the system 10 may include many other components, depending on the application of the system 10.

[0025] FIGs. 2-5 illustrate an example of a FSM lattice 30. In an example, the FSM lattice 30 comprises an array of blocks 32.
As will be described, each block 32 may include a plurality of selectively couple-able hardware elements (e.g., programmable elements and/or special purpose elements) that correspond to a plurality of states in a FSM. Similar to a state in a FSM, a hardware element can analyze an input stream and activate a downstream hardware element, based on the input stream.

[0026] The programmable elements can be programmed to implement many different functions. For instance, the programmable elements may include state machine elements (SMEs) 34, 36 (shown in FIG. 5) that are hierarchically organized into rows 38 (shown in FIGs. 3 and 4) and blocks 32 (shown in FIGs. 2 and 3). To route signals between the hierarchically organized SMEs 34, 36, a hierarchy of programmable switching elements can be used, including inter-block switching elements 40 (shown in FIGs. 2 and 3), intra-block switching elements 42 (shown in FIGs. 3 and 4), and intra-row switching elements 44 (shown in FIG. 4).

[0027] As described below, the switching elements may include routing structures and buffers. A SME 34, 36 can correspond to a state of a FSM implemented by the FSM lattice 30. The SMEs 34, 36 can be coupled together by using the programmable switching elements as described below. Accordingly, a FSM can be implemented on the FSM lattice 30 by programming the SMEs 34, 36 to correspond to the functions of states and by selectively coupling together the SMEs 34, 36 to correspond to the transitions between states in the FSM.

[0028] FIG. 2 illustrates an overall view of an example of a FSM lattice 30. The FSM lattice 30 includes a plurality of blocks 32 that can be selectively coupled together with programmable inter-block switching elements 40. The inter-block switching elements 40 may include conductors 46 (e.g., wires, traces, etc.) and buffers 48 and 50. In an example, buffers 48 and 50 are included to control the connection and timing of signals to/from the inter-block switching elements 40. As described further below, the buffers 48 may be provided to buffer data being sent between blocks 32, while the buffers 50 may be provided to buffer data being sent between inter-block switching elements 40. Additionally, the blocks 32 can be selectively coupled to an input block 52 (e.g., a data input port) for receiving signals (e.g., data) and providing the data to the blocks 32. The blocks 32 can also be selectively coupled to an output block 54 (e.g., an output port) for providing signals from the blocks 32 to an external device (e.g., another FSM lattice 30). The FSM lattice 30 can also include a programming interface 56 to load a program (e.g., an image) onto the FSM lattice 30. The image can program (e.g., set) the state of the SMEs 34, 36. That is, the image can configure the SMEs 34, 36 to react in a certain way to a given input at the input block 52. For example, a SME 34, 36 can be set to output a high signal when the character 'a' is received at the input block 52.

[0029] In an example, the input block 52, the output block 54, and/or the programming interface 56 can be implemented as registers such that writing to or reading from the registers provides data to or from the respective elements. Accordingly, bits from the image stored in the registers corresponding to the programming interface 56 can be loaded on the SMEs 34, 36. Although FIG.
2 illustrates a certain number of conductors (e.g., wire, trace) between a block 32, input block 52, output block 54, and an inter-block switching element 40, it should be understood that in other examples, fewer or more conductors may be used.

[0030] FIG. 3 illustrates an example of a block 32. A block 32 can include a plurality of rows 38 that can be selectively coupled together with programmable intra-block switching elements 42. Additionally, a row 38 can be selectively coupled to another row 38 within another block 32 with the inter-block switching elements 40. A row 38 includes a plurality of SMEs 34, 36 organized into pairs of elements that are referred to herein as groups of two (GOTs) 60. In an example, a block 32 comprises sixteen (16) rows 38.

[0031] FIG. 4 illustrates an example of a row 38. A GOT 60 can be selectively coupled to other GOTs 60 and any other elements (e.g., a special purpose element 58) within the row 38 by programmable intra-row switching elements 44. A GOT 60 can also be coupled to other GOTs 60 in other rows 38 with the intra-block switching element 42, or other GOTs 60 in other blocks 32 with an inter-block switching element 40. In an example, a GOT 60 has a first and second input 62, 64, and an output 66. The first input 62 is coupled to a first SME 34 of the GOT 60 and the second input 64 is coupled to a second SME 36 of the GOT 60, as will be further illustrated with reference to FIG. 5.

[0032] In an example, the row 38 includes a first and second plurality of row interconnection conductors 68, 70. In an example, an input 62, 64 of a GOT 60 can be coupled to one or more row interconnection conductors 68, 70, and an output 66 can be coupled to one row interconnection conductor 68, 70. In an example, a first plurality of the row interconnection conductors 68 can be coupled to each SME 34, 36 of each GOT 60 within the row 38. A second plurality of the row interconnection conductors 70 can be coupled to only one SME 34, 36 of each GOT 60 within the row 38, but cannot be coupled to the other SME 34, 36 of the GOT 60. In an example, a first half of the second plurality of row interconnection conductors 70 can couple to a first half of the SMEs 34, 36 within a row 38 (one SME 34 from each GOT 60) and a second half of the second plurality of row interconnection conductors 70 can couple to a second half of the SMEs 34, 36 within a row 38 (the other SME 34, 36 from each GOT 60), as will be better illustrated with respect to FIG. 5. The limited connectivity between the second plurality of row interconnection conductors 70 and the SMEs 34, 36 is referred to herein as "parity". In an example, the row 38 can also include a special purpose element 58 such as a counter, a programmable Boolean logic element, look-up table, RAM, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a programmable processor (e.g., a microprocessor), or other element for performing a special purpose function.

[0033] In an example, the special purpose element 58 comprises a counter (also referred to herein as counter 58). In an example, the counter 58 comprises a 12-bit programmable down counter. The 12-bit programmable counter 58 has a counting input, a reset input, and a zero-count output. The counting input, when asserted, decrements the value of the counter 58 by one. The reset input, when asserted, causes the counter 58 to load an initial value from an associated register. For the 12-bit counter 58, up to a 12-bit number can be loaded in as the initial value.
When the value of the counter 58 is decremented to zero (0), the zero-count output is asserted. The counter 58 also has at least two modes, pulse and hold. When the counter 58 is set to pulse mode, the zero-count output is asserted during the clock cycle when the counter 58 decrements to zero, and at the next clock cycle the zero-count output is no longer asserted. When the counter 58 is set to hold mode, the zero-count output is asserted during the clock cycle when the counter 58 decrements to zero, and stays asserted until the counter 58 is reset by the reset input being asserted.

[0034] In another example, the special purpose element 58 comprises Boolean logic. In some examples, this Boolean logic can be used to extract information from terminal state SMEs (corresponding to terminal nodes of a FSM, as discussed later herein) in FSM lattice 30. The information extracted can be used to transfer state information to other FSM lattices 30 and/or to transfer programming information used to reprogram FSM lattice 30, or to reprogram another FSM lattice 30.

[0035] FIG. 5 illustrates an example of a GOT 60. The GOT 60 includes a first SME 34 and a second SME 36 having inputs 62, 64 and having their outputs 72, 74 coupled to an OR gate 76 and a 3-to-1 multiplexer 78. The 3-to-1 multiplexer 78 can be set to couple the output 66 of the GOT 60 to either the first SME 34, the second SME 36, or the OR gate 76. The OR gate 76 can be used to couple together both outputs 72, 74 to form the common output 66 of the GOT 60. In an example, the first and second SME 34, 36 exhibit parity, as discussed above, where the input 62 of the first SME 34 can be coupled to some of the row interconnect conductors 68 and the input 64 of the second SME 36 can be coupled to other row interconnect conductors 70. In an example, the two SMEs 34, 36 within a GOT 60 can be cascaded and/or looped back to themselves by setting either or both of switching elements 79. The SMEs 34, 36 can be cascaded by coupling the output 72, 74 of the SMEs 34, 36 to the input 62, 64 of the other SME 34, 36. The SMEs 34, 36 can be looped back to themselves by coupling the output 72, 74 to their own input 62, 64. Accordingly, the output 72 of the first SME 34 can be coupled to neither, one, or both of the input 62 of the first SME 34 and the input 64 of the second SME 36.

[0036] In an example, a state machine element 34, 36 comprises a plurality of memory cells 80, such as those often used in dynamic random access memory (DRAM), coupled in parallel to a detect line 82. One such memory cell 80 comprises a memory cell that can be set to a data state, such as one that corresponds to either a high or a low value (e.g., a 1 or 0). The output of the memory cell 80 is coupled to the detect line 82 and the input to the memory cell 80 receives signals based on data on the data stream line 84. In an example, an input on the data stream line 84 is decoded to select one of the memory cells 80. The selected memory cell 80 provides its stored data state as an output onto the detect line 82. For example, the data received at the input block 52 can be provided to a decoder (not shown) and the decoder can select one of the data stream lines 84. In an example, the decoder can convert an 8-bit ASCII character to the corresponding 1 of 256 data stream lines 84.

[0037] A memory cell 80, therefore, outputs a high signal to the detect line 82 when the memory cell 80 is set to a high value and the data on the data stream line 84 corresponds to the memory cell 80.
When the data on the data stream line 84 corresponds to the memory cell 80 and the memory cell 80 is set to a low value, the memory cell 80 outputs a low signal to the detect line 82. The outputs from the memory cells 80 on the detect line 82 are sensed by a detection cell 86.

[0038] In an example, the signal on an input line 62, 64 sets the respective detection cell 86 to either an active or inactive state. When set to the inactive state, the detection cell 86 outputs a low signal on the respective output 72, 74 regardless of the signal on the respective detect line 82. When set to an active state, the detection cell 86 outputs a high signal on the respective output line 72, 74 when a high signal is detected from one of the memory cells 80 of the respective SME 34, 36. When in the active state, the detection cell 86 outputs a low signal on the respective output line 72, 74 when the signals from all of the memory cells 80 of the respective SME 34, 36 are low.

[0039] In an example, an SME 34, 36 includes 256 memory cells 80 and each memory cell 80 is coupled to a different data stream line 84. Thus, an SME 34, 36 can be programmed to output a high signal when a selected one or more of the data stream lines 84 have a high signal thereon. For example, the SME 34 can have a first memory cell 80 (e.g., bit 0) set high and all other memory cells 80 (e.g., bits 1-255) set low. When the respective detection cell 86 is in the active state, the SME 34 outputs a high signal on the output 72 when the data stream line 84 corresponding to bit 0 has a high signal thereon. In other examples, the SME 34 can be set to output a high signal when one of multiple data stream lines 84 have a high signal thereon by setting the appropriate memory cells 80 to a high value.

[0040] In an example, a memory cell 80 can be set to a high or low value by reading bits from an associated register. Accordingly, the SMEs 34 can be programmed by storing an image created by the compiler 20 into the registers and loading the bits in the registers into associated memory cells 80. In an example, the image created by the compiler 20 includes a binary image of high and low (e.g., 1 and 0) bits. The image can program the FSM lattice 30 to operate as a FSM by cascading the SMEs 34, 36. For example, a first SME 34 can be set to an active state by setting the detection cell 86 to the active state. The first SME 34 can be set to output a high signal when the data stream line 84 corresponding to bit 0 has a high signal thereon. The second SME 36 can be initially set to an inactive state, but can be set to, when active, output a high signal when the data stream line 84 corresponding to bit 1 has a high signal thereon. The first SME 34 and the second SME 36 can be cascaded by setting the output 72 of the first SME 34 to couple to the input 64 of the second SME 36. Thus, when a high signal is sensed on the data stream line 84 corresponding to bit 0, the first SME 34 outputs a high signal on the output 72 and sets the detection cell 86 of the second SME 36 to an active state. When a high signal is sensed on the data stream line 84 corresponding to bit 1, the second SME 36 outputs a high signal on the output 74 to activate another SME 36 or for output from the FSM lattice 30.

[0041] In an example, a single FSM lattice 30 is implemented on a single physical device; however, in other examples, two or more FSM lattices 30 can be implemented on a single physical device (e.g., physical chip).
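To make the SME mechanism concrete, here is a minimal Python sketch, under stated assumptions, of the memory-cell/detect-line behavior described above: 256 memory cells addressed by the decoded input byte, a detect line that is high when the selected cell stores a high value, and a detection cell that gates the output on an active/inactive state. The class and method names are hypothetical illustrations, not identifiers from the patent, and the model collapses the analog sensing into simple Boolean logic.

class SME:
    """Sketch of one state machine element: 256 memory cells on a detect line.

    Hypothetical model of the behavior described for SMEs 34, 36; the real
    element is a DRAM-like cell array sensed by a detection cell 86."""
    def __init__(self, accept_bytes):
        # Memory cells 80: cell k is set high iff input byte k should match.
        self.cells = [0] * 256
        for b in accept_bytes:
            self.cells[b] = 1
        self.active = False        # detection cell 86 state

    def step(self, byte):
        """Decode the input byte to one data stream line and sense the
        detect line; output is high only while the detection cell is active."""
        detect = self.cells[byte] == 1
        return self.active and detect

# Usage: cascade two SMEs to match the two-byte sequence "ab", mirroring the
# bit-0/bit-1 cascading example in the description.
sme_a = SME([ord('a')])
sme_b = SME([ord('b')])
sme_a.active = True               # first SME starts active
for byte in b"xaby":
    out_a = sme_a.step(byte)
    out_b = sme_b.step(byte)
    sme_b.active = out_a          # output 72 of the first SME activates the second
    if out_b:
        print("matched 'ab' ending at byte", chr(byte))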
In an example, each FSM lattice 30 can include a distinct data input block 52, a distinct output block 54, a distinct programming interface 56, and a distinct set of programmable elements. Moreover, each set of programmable elements can react (e.g., output a high or low signal) to data at its corresponding data input block 52. For example, a first set of programmable elements corresponding to a first FSM lattice 30 can react to the data at a first data input block 52 corresponding to the first FSM lattice 30. A second set of programmable elements corresponding to a second FSM lattice 30 can react to the data at a second data input block 52 corresponding to the second FSM lattice 30. Accordingly, each FSM lattice 30 includes a set of programmable elements, wherein different sets of programmable elements can react to different input data. Similarly, each FSM lattice 30, and each corresponding set of programmable elements, can provide a distinct output. In some examples, an output block 54 from a first FSM lattice 30 can be coupled to an input block 52 of a second FSM lattice 30, such that input data for the second FSM lattice 30 can include the output data from the first FSM lattice 30 in a hierarchical arrangement of a series of FSM lattices 30. [0042] In an example, an image for loading onto the FSM lattice 30 comprises a plurality of bits of information for configuring the programmable elements, the programmable switching elements, and the special purpose elements within the FSM lattice 30. In an example, the image can be loaded onto the FSM lattice 30 to program the FSM lattice 30 to provide a desired output based on certain inputs. The output block 54 can provide outputs from the FSM lattice 30 based on the reaction of the programmable elements to data at the data input block 52. An output from the output block 54 can include a single bit indicating a match of a given pattern, a word comprising a plurality of bits indicating matches and non-matches to a plurality of patterns, and a state vector corresponding to the state of all or certain programmable elements at a given moment. As described, a number of FSM lattices 30 may be included in a state machine engine, such as state machine engine 14, to perform data analysis, such as pattern recognition (e.g., speech recognition, image recognition, etc.), signal processing, imaging, computer vision, cryptography, and others. [0043] FIG. 6 illustrates an example model of a finite state machine (FSM) that can be implemented by the FSM lattice 30. The FSM lattice 30 can be configured (e.g., programmed) as a physical implementation of a FSM. A FSM can be represented as a diagram 90 (e.g., directed graph, undirected graph, pseudograph), which contains one or more root nodes 92. In addition to the root nodes 92, the FSM can be made up of several standard nodes 94 and terminal nodes 96 that are connected to the root nodes 92 and other standard nodes 94 through one or more edges 98. A node 92, 94, 96 corresponds to a state in the FSM. The edges 98 correspond to the transitions between the states. [0044] Each of the nodes 92, 94, 96 can be in either an active or an inactive state. When in the inactive state, a node 92, 94, 96 does not react (e.g., respond) to input data. When in an active state, a node 92, 94, 96 can react to input data. 
An upstream node 92, 94 can react to the input data by activating a node 94, 96 that is downstream from the node when the input data matches criteria specified by an edge 98 between the upstream node 92, 94 and the downstream node 94, 96. For example, a first node 94 that specifies the character 'b' will activate a second node 94 connected to the first node 94 by an edge 98 when the first node 94 is active and the character 'b' is received as input data. As used herein, "upstream" refers to a relationship between one or more nodes, where a first node that is upstream of one or more other nodes (or upstream of itself in the case of a loop or feedback configuration) refers to the situation in which the first node can activate the one or more other nodes (or can activate itself in the case of a loop). Similarly, "downstream" refers to a relationship where a first node that is downstream of one or more other nodes (or downstream of itself in the case of a loop) can be activated by the one or more other nodes (or can be activated by itself in the case of a loop). Accordingly, the terms "upstream" and "downstream" are used herein to refer to relationships between one or more nodes, but these terms do not preclude the use of loops or other non-linear paths among the nodes. [0045] In the diagram 90, the root node 92 can be initially activated and can activate downstream nodes 94 when the input data matches an edge 98 from the root node 92. Nodes 94 can activate nodes 96 when the input data matches an edge 98 from the node 94. Nodes 94, 96 throughout the diagram 90 can be activated in this manner as the input data is received. A terminal node 96 corresponds to a match of a sequence of interest by the input data. Accordingly, activation of a terminal node 96 indicates that a sequence of interest has been received as the input data. In the context of the FSM lattice 30 implementing a pattern recognition function, arriving at a terminal node 96 can indicate that a specific pattern of interest has been detected in the input data. [0046] In an example, each root node 92, standard node 94, and terminal node 96 can correspond to a programmable element in the FSM lattice 30. Each edge 98 can correspond to connections between the programmable elements. Thus, a standard node 94 that transitions to (e.g., has an edge 98 connecting to) another standard node 94 or a terminal node 96 corresponds to a programmable element that transitions to (e.g., provides an output to) another programmable element. In some examples, the root node 92 does not have a corresponding programmable element. [0047] When the FSM lattice 30 is programmed, each of the programmable elements can also be in either an active or inactive state. A given programmable element, when inactive, does not react to the input data at a corresponding data input block 52. An active programmable element can react to the input data at the data input block 52, and can activate a downstream programmable element when the input data matches the setting of the programmable element. When a programmable element corresponds to a terminal node 96, the programmable element can be coupled to the output block 54 to provide an indication of a match to an external device. 
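To make the activation model above concrete, the following minimal Python sketch models programmable elements whose match criterion is a set of accepted byte values (mirroring an SME whose memory cells for those symbols are set high) and which stay active for a single data cycle unless re-activated. The sketch and all of its names are illustrative only and are not part of this specification.

```python
# Minimal sketch of the activation model described above. Class and
# variable names are illustrative, not from the specification.

class Element:
    def __init__(self, accept, terminal=False, start=False):
        self.accept = set(accept)   # symbols whose memory cells are "high"
        self.terminal = terminal    # corresponds to a terminal node 96
        self.start = start          # kept active by the root node 92
        self.active = start
        self.downstream = []        # elements this element can activate

def run(elements, data):
    """Feed the byte stream one symbol per data cycle and report the
    positions at which a terminal element reacts (a match)."""
    matches = []
    for pos, symbol in enumerate(data):
        firing = [e for e in elements if e.active and symbol in e.accept]
        # An element stays active for a single data cycle unless
        # re-activated by an upstream element (or held active by the root).
        for e in elements:
            e.active = e.start
        for e in firing:
            if e.terminal:
                matches.append(pos)
            for d in e.downstream:
                d.active = True
    return matches

# Example: recognize the character 'a' followed by 'b' in a data stream.
elem_a = Element(accept=b"a", start=True)
elem_b = Element(accept=b"b", terminal=True)
elem_a.downstream.append(elem_b)
print(run([elem_a, elem_b], b"xabxab"))  # -> [2, 5]
```

In this sketch, a reported position corresponds to the activation of a terminal node 96, i.e., the detection of a sequence of interest in the input data.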
[0048] An image loaded onto the FSM lattice 30 via the programming interface 56 can configure the programmable elements and special purpose elements, as well as the connections between the programmable elements and special purpose elements, such that a desired FSM is implemented through the sequential activation of nodes based on reactions to the data at the data input block 52. In an example, a programmable element remains active for a single data cycle (e.g., a single character, a set of characters, a single clock cycle) and then becomes inactive unless re-activated by an upstream programmable element. [0049] A terminal node 96 can be considered to store a compressed history of past events. For example, the one or more patterns of input data required to reach a terminal node 96 can be represented by the activation of that terminal node 96. In an example, the output provided by a terminal node 96 is binary, that is, the output indicates whether the pattern of interest has been matched or not. The ratio of terminal nodes 96 to standard nodes 94 in a diagram 90 may be quite small. In other words, although there may be a high complexity in the FSM, the output of the FSM may be small by comparison. [0050] In an example, the output of the FSM lattice 30 can comprise a state vector. The state vector comprises the state (e.g., activated or not activated) of programmable elements of the FSM lattice 30. In an example, the state vector includes the states for the programmable elements corresponding to terminal nodes 96. Thus, the output can include a collection of the indications provided by all terminal nodes 96 of a diagram 90. The state vector can be represented as a word, where the binary indication provided by each terminal node 96 comprises one bit of the word. This encoding of the terminal nodes 96 can provide an effective indication of the detection state (e.g., whether and what sequences of interest have been detected) for the FSM lattice 30. In another example, the state vector can include the state of all or a subset of the programmable elements whether or not the programmable elements correspond to a terminal node 96. [0051] As mentioned above, the FSM lattice 30 can be programmed to implement a pattern recognition function. For example, the FSM lattice 30 can be configured to recognize one or more data sequences (e.g., signatures, patterns) in the input data. When a data sequence of interest is recognized by the FSM lattice 30, an indication of that recognition can be provided at the output block 54. In an example, the pattern recognition can recognize a string of symbols (e.g., ASCII characters) to, for example, identify malware or other information in network data. [0052] FIG. 7 illustrates an example of a hierarchical structure 100, wherein two levels of FSM lattices 30 are coupled in series and used to analyze data. Specifically, in the illustrated embodiment, the hierarchical structure 100 includes a first FSM lattice 30A and a second FSM lattice 30B arranged in series. Each FSM lattice 30 includes a respective data input block 52 to receive data input, a programming interface block 56 to receive programming signals, and an output block 54. [0053] The first FSM lattice 30A is configured to receive input data, for example, raw data at a data input block. The first FSM lattice 30A reacts to the input data as described above and provides an output at an output block. The output from the first FSM lattice 30A is sent to a data input block of the second FSM lattice 30B. 
The second FSM lattice 30B can then react based on the output provided by the first FSM lattice 30A and provide a corresponding output signal 102 of the hierarchical structure 100. This hierarchical coupling of two FSM lattices 30A and 30B in series provides a means to transfer information regarding past events in a compressed word from a first FSM lattice 30A to a second FSM lattice 30B. The information transferred can effectively be a summary of complex events (e.g., sequences of interest) that were recorded by the first FSM lattice 30A. [0054] The two-level hierarchy 100 of FSM lattices 30A, 30B shown in FIG. 7 allows two independent programs to operate based on the same data stream. The two-stage hierarchy can be similar to visual recognition in a biological brain, which is modeled as different regions. Under this model, the regions are effectively different pattern recognition engines, each performing a similar computational function (pattern matching) but using different programs (signatures). By connecting multiple FSM lattices 30A, 30B together, increased knowledge about the data stream input may be obtained. [0055] The first level of the hierarchy (implemented by the first FSM lattice 30A) can, for example, perform processing directly on a raw data stream. That is, a raw data stream can be received at an input block 52 of the first FSM lattice 30A and the programmable elements of the first FSM lattice 30A can react to the raw data stream. The second level (implemented by the second FSM lattice 30B) of the hierarchy can process the output from the first level. That is, the second FSM lattice 30B receives the output from an output block 54 of the first FSM lattice 30A at an input block 52 of the second FSM lattice 30B and the programmable elements of the second FSM lattice 30B can react to the output of the first FSM lattice 30A. Accordingly, in this example, the second FSM lattice 30B does not receive the raw data stream as an input, but rather receives the indications of patterns of interest that are matched by the raw data stream as determined by the first FSM lattice 30A. The second FSM lattice 30B can implement a FSM that recognizes patterns in the output data stream from the first FSM lattice 30A. [0056] FIG. 8 illustrates an example of a method 110 for a compiler to convert source code into an image configured to program a FSM lattice, such as lattice 30, to implement a FSM. Method 110 includes parsing the source code into a syntax tree (block 112), converting the syntax tree into an automaton (block 114), optimizing the automaton (block 116), converting the automaton into a netlist (block 118), placing the netlist on hardware (block 120), routing the netlist (block 122), and publishing the resulting image (block 124). [0057] In an example, the compiler 20 includes an application programming interface (API) that allows software developers to create images for implementing FSMs on the FSM lattice 30. The compiler 20 provides methods to convert an input set of regular expressions in the source code into an image that is configured to program the FSM lattice 30. The compiler 20 can be implemented by instructions for a computer having a von Neumann architecture. These instructions can cause a processor 12 on the computer to implement the functions of the compiler 20. 
For example, the instructions, when executed by the processor 12, can cause the processor 12 to perform actions as described in blocks 112, 114, 116, 118, 120, 122, and 124 on source code that is accessible to the processor 12. [0058] In an example, the source code describes search strings for identifying patterns of symbols within a group of symbols. To describe the search strings, the source code can include a plurality of regular expressions (regexes). A regex can be a string for describing a symbol search pattern. Regexes are widely used in various computer domains, such as programming languages, text editors, network security, and others. In an example, the regular expressions supported by the compiler include criteria for the analysis of unstructured data. Unstructured data can include data that is free form and has no indexing applied to words within the data. Words can include any combination of bytes, printable and non-printable, within the data. In an example, the compiler can support multiple different source code languages for implementing regexes including Perl (e.g., Perl compatible regular expressions (PCRE)), PHP, Java, and .NET languages. [0059] At block 112, the compiler 20 can parse the source code to form an arrangement of relationally connected operators, where different types of operators correspond to different functions implemented by the source code (e.g., different functions implemented by regexes in the source code). Parsing source code can create a generic representation of the source code. In an example, the generic representation comprises an encoded representation of the regexes in the source code in the form of a tree graph known as a syntax tree. The examples described herein refer to the arrangement as a syntax tree (also known as an "abstract syntax tree"); in other examples, however, a concrete syntax tree or other arrangement can be used. [0060] Since, as mentioned above, the compiler 20 can support multiple languages of source code, parsing converts the source code, regardless of the language, into a non-language specific representation, e.g., a syntax tree. Thus, further processing (blocks 114, 116, 118, 120) by the compiler 20 can work from a common input structure regardless of the language of the source code. [0061] As noted above, the syntax tree includes a plurality of operators that are relationally connected. A syntax tree can include multiple different types of operators. That is, different operators can correspond to different functions implemented by the regexes in the source code. [0062] At block 114, the syntax tree is converted into an automaton. An automaton comprises a software model of a FSM and can accordingly be classified as deterministic or non-deterministic. A deterministic automaton has a single path of execution at a given time, while a non-deterministic automaton has multiple concurrent paths of execution. The automaton comprises a plurality of states. In order to convert the syntax tree into an automaton, the operators and relationships between the operators in the syntax tree are converted into states with transitions between the states. In an example, the automaton can be converted based partly on the hardware of the FSM lattice 30. [0063] In an example, input symbols for the automaton include the symbols of the alphabet, the numerals 0-9, and other printable characters. In an example, the input symbols are represented by the byte values 0 through 255 inclusive. 
In an example, an automaton can be represented as a directed graph where the nodes of the graph correspond to the set of states. In an example, a transition from state p to state q on an input symbol a, i.e., δ(p, a) = q, is shown by a directed connection from node p to node q. In an example, a reversal of an automaton produces a new automaton where each transition p→q on some symbol a is reversed to q→p on the same symbol. In a reversal, the start state becomes a final state and the final states become start states. In an example, the language recognized (e.g., matched) by an automaton is the set of all possible character strings which, when input sequentially into the automaton, will reach a final state. Each string in the language recognized by the automaton traces a path from the start state to one or more final states. [0064] At block 116, after the automaton is constructed, the automaton is optimized to, among other things, reduce its complexity and size. The automaton can be optimized by combining redundant states. [0065] At block 118, the optimized automaton is converted into a netlist. Converting the automaton into a netlist maps each state of the automaton to a hardware element (e.g., SMEs 34, 36, other elements) on the FSM lattice 30, and determines the connections between the hardware elements. [0066] At block 120, the netlist is placed to select a specific hardware element of the target device (e.g., SMEs 34, 36, special purpose elements 58) corresponding to each node of the netlist. In an example, placing selects each specific hardware element based on general input and output constraints of the FSM lattice 30. [0067] At block 122, the placed netlist is routed to determine the settings for the programmable switching elements (e.g., inter-block switching elements 40, intra-block switching elements 42, and intra-row switching elements 44) in order to couple the selected hardware elements together to achieve the connections described by the netlist. In an example, the settings for the programmable switching elements are determined by determining the specific conductors of the FSM lattice 30 that will be used to connect the selected hardware elements, and the settings for the programmable switching elements that couple those conductors. Routing can take into account more specific limitations of the connections between the hardware elements than placement at block 120 does. Accordingly, routing may adjust the location of some of the hardware elements as determined by the global placement in order to make appropriate connections given the actual limitations of the conductors on the FSM lattice 30. [0068] Once the netlist is placed and routed, the placed and routed netlist can be converted into a plurality of bits for programming a FSM lattice 30. The plurality of bits are referred to herein as an image. [0069] At block 124, an image is published by the compiler 20. The image comprises a plurality of bits for programming specific hardware elements of the FSM lattice 30. In embodiments where the image comprises a plurality of bits (e.g., 0 and 1), the image can be referred to as a binary image. The bits can be loaded onto the FSM lattice 30 to program the state of SMEs 34, 36, the special purpose elements 58, and the programmable switching elements such that the programmed FSM lattice 30 implements a FSM having the functionality described by the source code. Placement (block 120) and routing (block 122) can map specific hardware elements at specific locations in the FSM lattice 30 to specific states in the automaton. 
Accordingly, the bits in the image can program the specific hardware elements to implement the desired function(s). In an example, the image can be published by saving the machine code to a computer readable medium. In another example, the image can be published by displaying the image on a display device. In still another example, the image can be published by sending the image to another device, such as a programming device for loading the image onto the FSM lattice 30. In yet another example, the image can be published by loading the image onto a FSM lattice (e.g., the FSM lattice 30). [0070] In an example, an image can be loaded onto the FSM lattice 30 by either directly loading the bit values from the image to the SMEs 34, 36 and other hardware elements or by loading the image into one or more registers and then writing the bit values from the registers to the SMEs 34, 36 and other hardware elements. In an example, the hardware elements (e.g., SMEs 34, 36, special purpose elements 58, programmable switching elements 40, 42, 44) of the FSM lattice 30 are memory mapped such that a programming device and/or computer can load the image onto the FSM lattice 30 by writing the image to one or more memory addresses. [0071] Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, the code may be tangibly stored on one or more volatile or non-volatile computer-readable media during execution or at other times. These computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like. [0072] Referring now to FIG. 9, an embodiment of the state machine engine 14 is illustrated. As previously described, the state machine engine 14 is configured to receive data from a source, such as the memory 16 over a data bus. In the illustrated embodiment, data may be sent to the state machine engine 14 through a bus interface, such as a DDR3 bus interface 130. The DDR3 bus interface 130 may be capable of exchanging data at a rate greater than or equal to 1 GByte/sec. As will be appreciated, depending on the source of the data to be analyzed, the bus interface 130 may be any suitable bus interface for exchanging data between a data source and the state machine engine 14, such as a NAND Flash interface, PCI interface, etc. As previously described, the state machine engine 14 includes one or more FSM lattices 30 configured to analyze data. Each FSM lattice 30 may be divided into two half-lattices. In the illustrated embodiment, each half-lattice may include 24K SMEs (e.g., SMEs 34, 36), such that the lattice 30 includes 48K SMEs. The lattice 30 may comprise any desirable number of SMEs, arranged as previously described with regard to FIGS. 2-5. 
Further, while only one FSM lattice 30 is illustrated, the state machine engine 14 may include multiple FSM lattices 30, as previously described. [0073] Data to be analyzed may be received at the bus interface 130 and transmitted to the FSM lattice 30 through a number of buffers and buffer interfaces. In the illustrated embodiment, the data path includes data buffers 132, process buffers 134 and an inter-rank (IR) bus and process buffer interface 136. The data buffers 132 are configured to receive and temporarily store data to be analyzed. In one embodiment, there are two data buffers 132 (data buffer A and data buffer B). Data may be stored in one of the two data buffers 132, while data is being emptied from the other data buffer 132, for analysis by the FSM lattice 30. In the illustrated embodiment, the data buffers 132 may be 32 KBytes each. The IR bus and process buffer interface 136 may facilitate the transfer of data to the process buffer 134. The IR bus and process buffer interface 136 ensures that data is processed by the FSM lattice 30 in order. The IR bus and process buffer interface 136 may coordinate the exchange of data, timing information, packing instructions, etc. such that data is received and analyzed in the correct order. Generally, the IR bus and process buffer interface 136 allows the analyzing of multiple data sets in parallel through logical ranks of FSM lattices 30. [0074] In the illustrated embodiment, the state machine engine 14 also includes a decompressor 138 and a compressor 140 to aid in the transfer of large amounts of data through the state machine engine 14. The compressor 140 and decompressor 138 work in conjunction such that data can be compressed to minimize the data transfer times. By compressing the data to be analyzed, the bus utilization time may be minimized. Based on information provided by the compiler 20, a mask may be provided to the state machine engine 14 to provide information on which state machines are likely to be unused. The compressor 140 and decompressor 138 can also be configured to handle data of varying burst lengths. By padding compressed data and including an indicator as to when each compressed region ends, the compressor 140 may improve the overall processing speed through the state machine engine 14. The compressor 140 and decompressor 138 may also be used to compress and decompress match results data after analysis by the FSM lattice 30. [0075] As previously described, the output of the FSM lattice 30 can comprise a state vector. The state vector comprises the state (e.g., activated or not activated) of programmable elements of the FSM lattice 30. Each state vector may be temporarily stored in the state vector cache memory 142 for further hierarchical processing and analysis. That is, the state of each state machine may be stored, such that the final state may be used in further analysis, while freeing the state machines for reprogramming and/or further analysis of a new data set. Like a typical cache, the state vector cache memory 142 allows storage of information (here, state vectors) for quick retrieval and use, here by the FSM lattice 30, for instance. Additional buffers, such as the state vector memory buffer, the state vector intermediate input buffer 146, and the state vector intermediate output buffer 148, may be utilized in conjunction with the state vector cache memory 142 to accommodate rapid analysis and storage of state vectors, while adhering to packet transmission protocol through the state machine engine 14. 
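As an illustration of the data buffer A/data buffer B hand-off described above, the following sequential Python sketch fills one buffer while the other is drained for analysis; in the hardware the two roles proceed concurrently, and all function and callback names here are illustrative only.

```python
# Minimal, sequential sketch of the two-data-buffer ("ping-pong") scheme
# described above: new data lands in one buffer while the other is
# emptied for analysis. Names and the consume callback are illustrative.

BUF_SIZE = 32 * 1024  # 32 KBytes per buffer, per the illustrated embodiment

def buffer_stream(chunks, consume):
    bufs = [bytearray(), bytearray()]  # data buffer A and data buffer B
    fill = 0                           # index of the buffer being filled
    for chunk in chunks:
        bufs[fill].extend(chunk)
        if len(bufs[fill]) >= BUF_SIZE:
            full, fill = fill, 1 - fill   # swap the buffers' roles
            consume(bytes(bufs[full]))    # drain into the FSM lattice
            bufs[full].clear()
    if bufs[fill]:                     # flush any remainder at end of data
        consume(bytes(bufs[fill]))

# Example: count the bytes handed to the analysis side.
totals = []
buffer_stream([b"\x00" * 20000, b"\x01" * 20000],
              lambda b: totals.append(len(b)))
print(totals)  # -> [40000]
```

The sketch swaps at chunk granularity; the point is only that filling and draining alternate between the two buffers so the lattice is never starved while new data arrives.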
[0076] Once a result of interest is produced by the FSM lattice 30, match results may be stored in a match results memory 150. That is, a "match vector" indicating a match (e.g., detection of a pattern of interest) may be stored in the match results memory 150. The match result can then be sent to a match buffer 152 for transmission over the bus interface 130 to the processor 12, for example. As previously described, the match results may be compressed. [0077] Additional registers and buffers may be provided in the state machine engine 14, as well. For instance, the state machine engine 14 may include control and status registers 154. In addition, restore and program buffers 156 may be provided for use in programming the FSM lattice 30 initially, or restoring the state of the machines in the FSM lattice 30 during analysis. Similarly, save and repair map buffers 158 may also be provided for storage of save and repair maps for setup and usage. [0078] FIG. 10 illustrates a second example of a row 38 similar to that discussed above with respect to FIG. 4. The row 38 may include programmable intra-row switching elements 44 and row interconnection/interconnect conductors 68, 70 (which can also be referred to as "row routing lines", as described below). [0079] Row 38 of FIG. 10 may include eight GOTs 60, a special purpose element 58, inputs 62, inputs 64, outputs 66, a match element 160, a plurality of row routing lines 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, and 192 (collectively referred to hereafter as "row routing lines 162-192"), a special purpose element routing line 194, and a plurality of junction points 196. [0080] Furthermore, in addition to being coupled to the illustrated SMEs 34, 36 in FIG. 11, the local routing matrix 172 may be coupled to all pairs of SMEs 34, 36 for the GOTs 60 in a particular row 38. Accordingly, the local routing matrix 172 may include programmable intra-row switching elements 44 and row interconnection/interconnect conductors 68, 70 (which can also be referred to as "row routing lines", as described below). [0081] The GOTs 60 and the special purpose element 58 illustrated in FIG. 10 are substantially similar to the GOTs 60 and the special purpose element 58 previously discussed with respect to FIG. 4. Accordingly, each GOT 60 receives an input 62, which may be a unified enable input, to operate as an enable signal for a detection cell 86 of a SME 34. Likewise, each GOT 60 also receives an input 64, which may also be a unified enable input, to operate as an enable signal for a detection cell 86 of a SME 36. These unified enable inputs 62, 64 may activate the detection cells 86 of the SMEs 34, 36 to output a respective result of an analysis performed by the respective SME, for example, a match in an analyzed data stream from a single SME 34, which may be utilized in conjunction with results from other SMEs 34, 36 to, for example, search for a pattern in a data stream. For example, unified enable input 62 and unified enable input 64 allow for selective activation of the SMEs 34, 36 so that results generated by each of the active SMEs 34, 36 may be utilized as part of an overall broader analysis of a data stream. [0082] The result generated by an SME 34, 36 of a GOT 60 may be selectively provided from the GOT on output 66. 
In one embodiment, the possible outputs of the GOT 60 may include no output, an output of the first SME 34, i.e., output 72, an output of the second SME 36, i.e., output 74, or the output of the first SME 34 or the output of the second SME 36, i.e., output 72 or output 74. Thus, a GOT 60 may be programmed to output a selected result from a GOT 60. This programming may be accomplished, for example, based on an image loaded during an initial programming stage of the FSM lattice 30. Results from the GOTs 60 may be provided to a match element 160, which may operate to output a selected result generated from the row 38 for a given data stream search or a portion of a data stream search. [0083] Additionally, row 38 may include row routing lines 162-192 (which may also be referred to as row interconnection/interconnect conductors). In the present embodiment, there are sixteen row lines 162-192 that are selectively coupleable to eight GOTs 60 and to the special purpose element 58. However, it should be appreciated that fewer or more row routing lines may be utilized in conjunction with the row 38. [0084] Each of the row routing lines 162-192 may be utilized to provide enable signals for any of the SMEs 34, 36 of one or more GOTs 60 along inputs 62, 64. Accordingly, through use of these row routing lines 162-192, any particular detection cell 86 for any particular SME (e.g., SME 34) may be activated. This may be accomplished by selectively coupling (e.g., in accordance with a loaded image) the row routing lines 162-192 to unified enable inputs 62, 64 of the SMEs 34, 36. Moreover, to provide further flexibility in providing enable signals to the SMEs 34, 36, the row routing lines 162-192 may be divided up amongst two SMEs 34, 36 of a given GOT 60. For example, row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may be utilized to activate any of the SMEs 34, 36 in the row 38. For example, a GOT 60 may transmit an output 66 to the row routing line coupled thereto, for example, row routing line 162. This signal may be transmitted into the intra-block switch, where it may be routed, for example, on row routing line 164 to an additional GOT 60 in the row 38. Additionally, row routing lines 178, 182, 186, and 190 may activate SMEs 34 in row 38, for example, by receiving signals from intra-block switch 42, while row routing lines 180, 184, 188, and 192 may activate SMEs 36 in row 38 via, for example, signals received from the intra-block switch 42. In this manner, the overall number of row routing lines 162-192 may be reduced, while still allowing for overall flexibility and the ability to activate any detection cell 86 of any of the SMEs 34, 36 in a row 38. [0085] As illustrated in FIG. 10, each of the row routing lines 162-192 includes a plurality of junction points 196. These junction points 196 may, for example, include the intra-row switching elements 44 of FIG. 3, since the junction points 196 may be utilized to selectively couple any GOT 60 to any other GOT 60, or any GOT 60 to any other element (e.g., a special purpose element 58) within the row 38 (or, for that matter, within another row and/or another block). However, these connections may be limited by available junction points 196. For example, each of row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may be utilized to activate any of the SMEs 34, 36 in the row 38. 
However, each of row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 is also selectively coupleable to the output of a respective different one of the GOTs 60. For example, an output from any one of the GOTs 60 may only be provided from that GOT 60 on a respective one of the row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 coupleable thereto. Thus, in one embodiment, because row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 are coupleable to the outputs 66 of the GOTs 60, the row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may provide (e.g., drive-out) signals to the intra-block switch 42. In contrast, in one embodiment, row routing lines 178, 180, 182, 184, 186, 188, 190, and 192 may receive (e.g., be driven by) signals from the intra-block switch 42 that may be received from, for example, other rows 38 or blocks 32. [0086] In addition to row routing lines 162-192, the row 38 may include a special purpose element routing line 194 coupled to a special purpose element 58. Similar to row routing lines 162, 164, 166, 168, 170, 172, 174, and 176, the special purpose routing line 194 may provide (e.g., drive-out) signals to the intra-block switch 42. In one embodiment, the special purpose element routing line 194 may also be coupleable to the match element 160. For example, if the special purpose element 58 comprises a counter, an output of the counter may be provided along the special purpose routing line 194. Similarly, if the special purpose element 58 includes a Boolean logic element, such as a Boolean cell, an output of the Boolean logic element may be provided along the special purpose routing line 194. Through the use of these special purpose elements, repetitive searches (e.g., find an element ten times) or cascaded searches (e.g., find elements x, y, and z) may be simplified into a single output that can be provided along the special purpose routing line 194 to either or both of the intra-block switch 42 and the match element 160. [0087] A more detailed illustration of the match element 160 is presented in FIG. 11. As illustrated, the match element 160 may include four data inputs 198, 200, 202, and 204, two outputs, and six control inputs 210, 212, 214, 216, 218, and 220. Moreover, the match element 160 may include two 2-to-1 multiplexers 222, 224. While 2-to-1 multiplexers 222, 224 are illustrated, it should be noted that other configurations such as a 3-to-1 multiplexer, a 4-to-1 multiplexer, or other elements may be utilized in place of the 2-to-1 multiplexers 222, 224 as desired, for example, to allow for flexibility in routing/output configurations or as silicon space allows. [0088] In one embodiment, data input 198 of the match element 160 is coupled to row routing line 176, data input 200 is coupled to row routing line 174, data input 202 is coupled to special purpose routing line 194, and data input 204 is coupled to row routing line 168. Selection of these particular lines is illustrative only, and has been chosen to demonstrate flexibility in receiving output signals from the row 38. By choosing row routing line 168 and row routing line 176 as connecting to the match element 160, parity between the GOTs 60 can be established. 
For example, a result of a first analysis performed on at least a portion of a data stream by one GOT 60 in a first half of all the GOTs 60 (GOTs 60 zero through three) can be provided on routing line 168 to the match element 160, while a result of a second analysis performed on at least a portion of the data stream by another GOT 60 in a second half of all the GOTs 60 (GOTs 60 four through seven) can be provided on routing line 176 to the match element 160. Splitting the inputs 200, 204 this way can allow for reduced paths to provide results to the match element 160. Additionally, by receiving a result from the special purpose element 58 along special purpose routing line 194 at the match element 160, results of cascaded searches may be provided once to the match element 160. Finally, selection of row routing line 174 adds flexibility to the overall system of the row 38. However, as noted, these selections are merely illustrative. [0089] As illustrated, the data inputs 198, 200 of the match element 160 may be provided to the 2-to-1 multiplexer 222, while the data inputs 202, 204 of the match element 160 may be provided to the 2-to-1 multiplexer 224. The 2-to-1 multiplexers 222, 224 may each also receive control signals from control inputs 210, 212, 214, 216, 218, and 220, which may, for example, be programmed based on an image loaded during an initial programming stage of the FSM lattice 30. In one embodiment, the 2-to-1 multiplexer 222 may receive a select signal S0 from control input 210, a select signal S1 from control input 212, and an output enable signal from control input 214. Similarly, the 2-to-1 multiplexer 224 may receive a select signal S0 from control input 216, a select signal S1 from control input 218, and an output enable signal from control input 220. The select signals S0, S1 may be utilized to select which of the data inputs are to be provided to outputs 206 and 208, respectively, for transmitting results of a data search to, for example, output block 54. Furthermore, use of multiple select lines carrying the select signals S0, S1 may allow for each of the 2-to-1 multiplexers 222, 224 to be built without an inverter, thus reducing the area required to implement the 2-to-1 multiplexers 222, 224. However, in one embodiment, a single select line carrying a single select signal, e.g., S0, may be utilized. [0090] Additionally, the output enable signal from control input 214 may be a clocking signal or other enable signal that allows for outputs 206 and 208 to be provided only when the signals on data inputs 198, 200, 202, and 204 are stable. FIG. 13 illustrates a truth table 226 that sets forth an example of how the select signal S0 from control input 216 and the select signal S1 from control input 218 may programmably select the output 208 of the 2-to-1 multiplexer 224. [0091] As shown in FIG. 13, a truth table 226 corresponding to the output 208 of the match element 160 is illustrated. It should be noted that the output 208 represented in the truth table 226 assumes that the output enable signal from control input 220 has been received at the 2-to-1 multiplexer 224. As illustrated in the truth table 226, when both the select signal S0 from control input 216 and the select signal S1 from control input 218 are low (i.e., 0), the output 208 of the 2-to-1 multiplexer 224 will be low. For example, no result from the row 38 will be provided from the match element 160. 
When the select signal S0 from control input 216 is high (i.e., 1) and the select signal S1 from control input 218 is low, the output 208 of the 2-to-1 multiplexer 224 will be the result on the row routing line 168. Conversely, when the select signal S0 from control input 216 is low and the select signal S1 from control input 218 is high, the output 208 of the 2-to-1 multiplexer 224 will be the result on the special purpose routing line 194. Finally, the condition whereby both the select signal S0 from control input 216 and the select signal S1 from control input 218 are high is forbidden. Accordingly, such a state is avoided during the programming of the match element 160. In this manner, the match element 160 may programmably select no output, an output from a first data input 204 (the result on the row routing line 168), or an output from a second data input 202 (the result on the special purpose routing line 194). Furthermore, it should be noted that the match element 160 may operate in other programmable configurations not limited to the specific embodiment illustrated in FIG. 12. [0092] While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims.
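For reference, the output selection behavior set forth in the truth table 226 can be summarized by the following Python sketch, which encodes the four select-signal cases for output 208, including the forbidden S0 = S1 = 1 programming state. The function name and return convention are illustrative only and are not part of the specification.

```python
# Sketch of the output-208 selection behavior per truth table 226.
# Signal names mirror the description above; the function is illustrative.

def match_output_208(s0, s1, output_enable, row_result, special_result):
    """row_result is the signal on row routing line 168 (data input 204);
    special_result is the signal on special purpose routing line 194
    (data input 202)."""
    if not output_enable:        # output enable from control input 220
        return 0
    if s0 and s1:
        raise ValueError("S0 = S1 = 1 is a forbidden programming state")
    if s0:
        return row_result
    if s1:
        return special_result
    return 0                     # S0 = S1 = 0: no result from the row

assert match_output_208(1, 0, True, row_result=1, special_result=0) == 1
assert match_output_208(0, 1, True, row_result=0, special_result=1) == 1
assert match_output_208(0, 0, True, 1, 1) == 0
```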
A complementary metal oxide semiconductor (CMOS) static random access memory (SRAM) cell. A CMOS SRAM cell in accordance with an aspect of the present disclosure includes a bit line and a word line. Such a CMOS SRAM memory cell further includes a CMOS memory cell having at least a first p-channel device comprising a first channel material that differs from a substrate material of the CMOS memory cell, the first channel material having an intrinsic channel mobility greater than the intrinsic channel mobility of the substrate material, the first p-channel device coupling the CMOS memory cell to the bit line and the word line.
CLAIMSWHAT IS CLAIMED IS:1. A complementary metal oxide semiconductor (CMOS) static random access memory (SRAM) cell, comprising:a bit line;a word line; anda CMOS memory cell having at least a first p-channel device comprising a first channel material that differs from a substrate material of the CMOS memory cell, the first channel material having an intrinsic channel mobility greater than the intrinsic channel mobility of the substrate material, the first p-channel device coupling the CMOS memory cell to the bit line and the word line.2. The CMOS SRAM cell of claim 1, in which the first channel material comprises SiGe.3. The CMOS SRAM cell of claim 1, in which the first channel material comprises a III-V material.4. The CMOS SRAM cell of claim 1, comprising at least one of a six transistor (6T) SRAM cell, an eight transistor (8T) SRAM cell, and a ten transistor (10T) SRAM cell.5. The CMOS SRAM cell of claim 1, in which the CMOS SRAM cell is a planar device.6. The CMOS SRAM cell of claim 1, in which the CMOS SRAM cell is a FinFET device.7. The CMOS SRAM cell of claim 1, in which the CMOS SRAM cell is a gate-all-around nanowire device.8. The CMOS SRAM cell of claim 1, further comprising a bit line bar and a second p-channel device, in which the CMOS memory cell is coupled to the bit line bar by the second p-channel device.9. The CMOS SRAM cell of claim 8, in which the second p-channel device comprises a second channel material that differs from the substrate material of the CMOS memory cell, and in which the intrinsic channel mobility of the second channel material is greater than the intrinsic channel mobility of the substrate material of the CMOS memory cell.10. The CMOS SRAM cell of claim 1, integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.11. A complementary metal oxide semiconductor (CMOS) static random access memory (SRAM) cell, comprising:a CMOS memory cell having a bit line and a word line; andmeans for coupling the CMOS memory cell to the bit line and the word line, in which the means for coupling has an intrinsic channel mobility higher than the intrinsic channel mobility of a substrate material of the CMOS memory cell.12. The CMOS SRAM cell of claim 11, in which the coupling means comprises SiGe.13. The CMOS SRAM cell of claim 11, in which the coupling means comprises a III-V material.14. The CMOS SRAM cell of claim 11, comprising at least one of a six transistor (6T) SRAM cell, an eight transistor (8T) SRAM cell, and a ten transistor (10T) SRAM cell.15. The CMOS SRAM cell of claim 11, in which the CMOS SRAM cell is a planar device.16. The CMOS SRAM cell of claim 11, in which the CMOS SRAM cell is a FinFET device.17. The CMOS SRAM cell of claim 11, in which the CMOS SRAM cell is a gate-all-around nanowire device.18. The CMOS SRAM cell of claim 11, further comprising a bit line bar and a second means for coupling the CMOS memory cell to the bit line bar.19. The CMOS SRAM cell of claim 18, in which the intrinsic channel mobility of the second coupling means is greater than the intrinsic channel mobility of the substrate material of the CMOS memory cell.20. 
The CMOS SRAM cell of claim 11, integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.21. A method for making a complementary metal oxide semiconductor (CMOS) static random access memory (SRAM) cell, comprising:coupling a CMOS memory cell to a bit line with a first p-channel device; and coupling the CMOS memory cell to a word line with the first p-channel device, in which the first p-channel device comprises a channel material that differs from a substrate material, the channel material having an intrinsic channel mobility higher than the intrinsic channel mobility of the substrate material.22. The method of claim 21, further comprising coupling a second p-channel device between the CMOS memory cell and a bit line bar.23. The method of claim 22, in which the second p-channel device comprises a second channel material that differs from the substrate material, and in which the intrinsic channel mobility of the second channel material is greater than the intrinsic channel mobility of the substrate material of the CMOS memory cell.24. The method of claim 21, further comprising integrating the CMOS SRAM cell into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.
FIN FIELD-EFFECT TRANSISTOR STATIC RANDOM ACCESS MEMORY DEVICES WITH P-CHANNEL METAL-OXIDE-SEMICONDUCTOR PASSGATE TRANSISTORSBACKGROUNDField[0001] Aspects of the present disclosure relate to semiconductor devices, and more particularly to p-channel metal-oxide-semiconductor (PMOS) pass gate transistors in fin field-effect transistor (FinFET) static random access memory (SRAM) devices.Background[0002] The use of semiconductor materials for electronic devices is widespread. Many different materials, such as silicon (Si), gallium arsenide (GaAs), and other compound semiconductor materials may be used to create various types of devices, such as light emitting diodes, transistors, and solar cells, and may also be used to create integrated circuits including many individual devices.[0003] In semiconductor devices, memory is often used to configure the functions of logic blocks and the routing of interconnections between devices and circuits. For power and size considerations, SRAM may be used to allow for customization of circuit operation.[0004] SRAM memories may be fabricated from complementary metal-oxide-semiconductor (CMOS) circuits using field-effect transistor (FET) components. Recently, different structures for the transistors in CMOS have been introduced, where the transistor is a "fin" shaped (3D) structure. These structures are often referred to as "FinFET" structures.[0005] There are some associated problems with CMOS memory applications. The difference in charge carrier mobility in p-channel devices with respect to n-channel devices is heightened in faster CMOS memory applications.SUMMARY[0006] A complementary metal oxide semiconductor (CMOS) static random access memory (SRAM) cell in accordance with an aspect of the present disclosure includes a bit line and a word line. Such a CMOS SRAM memory cell further includes a CMOS memory cell having at least a first p-channel device comprising a first channel material that differs from a substrate material of the CMOS memory cell, the first channel material having an intrinsic channel mobility greater than the intrinsic channel mobility of the substrate material, the first p-channel device coupling the CMOS memory cell to the bit line and the word line.[0007] A complementary metal oxide semiconductor (CMOS) static random access memory (SRAM) cell in accordance with another aspect of the present disclosure includes a CMOS memory cell having a bit line and a word line. Such a CMOS SRAM memory cell further includes means for coupling the CMOS memory cell to the bit line and the word line, in which the means for coupling has an intrinsic channel mobility higher than the intrinsic channel mobility of a substrate material of the CMOS memory cell.[0008] A method for making a complementary metal oxide semiconductor (CMOS) static random access memory (SRAM) cell in accordance with an aspect of the present disclosure includes coupling a CMOS memory cell to a bit line with a first p-channel device. Such a method further includes coupling the CMOS memory cell to a word line with the first p-channel device, in which the first p-channel device comprises a channel material that differs from a substrate material, the channel material having an intrinsic channel mobility higher than the intrinsic channel mobility of the substrate material.[0009] This has outlined, rather broadly, the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. 
Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.BRIEF DESCRIPTION OF THE DRAWINGS[0010] For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.[0011] FIGURE 1 illustrates a perspective view of a semiconductor wafer in an aspect of the present disclosure.[0012] FIGURE 2 illustrates a cross-sectional view of a die in accordance with an aspect of the present disclosure.[0013] FIGURE 3 illustrates a cross-sectional view of a metal-oxide-semiconductor field-effect transistor (MOSFET) device in an aspect of the present disclosure.[0014] FIGURE 4 illustrates a transistor in accordance with an aspect of the present disclosure.[0015] FIGURES 5A-5C illustrate schematics of CMOS memory cells.[0016] FIGURE 6 illustrates a schematic of a CMOS memory cell in an aspect of the present disclosure.[0017] FIGURE 7A illustrates a cross-sectional view of a PMOS device in accordance with an aspect of the present disclosure.[0018] FIGURE 7B illustrates a top-down view of a CMOS memory cell in accordance with an aspect of the present disclosure.[0019] FIGURE 8 is a process flow diagram illustrating a method for fabricating a device on a semiconductor substrate according to an aspect of the present disclosure.[0020] FIGURE 9 is a block diagram showing an exemplary wireless communication system in which a configuration of the disclosure may be advantageously employed. [0021] FIGURE 10 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component according to one configuration.DETAILED DESCRIPTION[0022] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. It will be apparent, however, to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. 
As described herein, the use of the term "and/or" is intended to represent an "inclusive OR", and the use of the term "or" is intended to represent an "exclusive OR".[0023] Semiconductor fabrication processes are often divided into three parts: a front end of line (FEOL), a middle of line (MOL) and a back end of line (BEOL). Front end of line processes include wafer preparation, isolation, well formation, gate patterning, spacers, and dopant implantation. A middle of line process includes gate and terminal contact formation. Back end of line processes include forming interconnects and dielectric layers for coupling to the FEOL devices. These interconnects may be fabricated with a dual damascene process using plasma-enhanced chemical vapor deposition (PECVD) deposited interlayer dielectric (ILD) materials. Various materials may be used in FEOL, MOL, or BEOL processes to increase performance of the semiconductor devices.[0024] FIGURE 1 illustrates a perspective view of a semiconductor wafer in an aspect of the present disclosure. A wafer 100 may be a semiconductor wafer, or may be a substrate material with one or more layers of semiconductor material on a surface of the wafer 100. When the wafer 100 is a semiconductor material, it may be grown from a seed crystal using the Czochralski process, where the seed crystal is dipped into a molten bath of semiconductor material and slowly rotated and removed from the bath. The molten material then crystallizes onto the seed crystal in the orientation of the crystal. [0025] The wafer 100 may be a compound material, such as gallium arsenide (GaAs) or gallium nitride (GaN), a ternary material such as indium gallium arsenide (InGaAs), quaternary materials, or any material that can be a substrate material for other semiconductor materials. Although many of the materials may be crystalline in nature, polycrystalline or amorphous materials may also be used for the wafer 100.[0026] The wafer 100, or layers that are coupled to the wafer 100, may be supplied with materials that make the wafer 100 more conductive. For example, and not by way of limitation, a silicon wafer may have phosphorus or boron added to the wafer 100 to allow for electrical charge to flow in the wafer 100. These additives are referred to as dopants, and provide extra charge carriers (either electrons or holes) within the wafer 100 or portions of the wafer 100. By selecting the areas where the extra charge carriers are provided, which type of charge carriers are provided, and the amount (density) of additional charge carriers in the wafer 100, different types of electronic devices may be formed in or on the wafer 100.[0027] The wafer 100 has an orientation 102 that indicates the crystalline orientation of the wafer 100. The orientation 102 may be a flat edge of the wafer 100 as shown in FIGURE 1, or may be a notch or other indicia to illustrate the crystalline orientation of the wafer 100. The orientation 102 may indicate the Miller Indices for the planes of the crystal lattice in the wafer 100.[0028] The Miller Indices form a notation system of the crystallographic planes in crystal lattices. The lattice planes may be indicated by three integers h, k, and l, which are the Miller indices for a plane (hkl) in the crystal. Each index denotes a plane orthogonal to a direction (h, k, l) in the basis of the reciprocal lattice vectors. The integers are usually written in lowest terms (e.g., their greatest common divisor should be 1). 
Miller index (100) represents a plane orthogonal to direction h; index (010) represents a plane orthogonal to direction k; and index (001) represents a plane orthogonal to direction l. For some crystals, negative numbers are used (written as a bar over the index number) and for some crystals, such as gallium nitride, more than three numbers may be employed to adequately describe the different crystallographic planes.

[0029] Once the wafer 100 has been processed as desired, the wafer 100 is divided up along dicing lines 104. The dicing lines 104 indicate where the wafer 100 is to be broken apart or separated into pieces. The dicing lines 104 may define the outline of the various integrated circuits that have been fabricated on the wafer 100.

[0030] Once the dicing lines 104 are defined, the wafer 100 may be sawn or otherwise separated into pieces to form die 106. Each of the die 106 may be an integrated circuit with many devices or may be a single electronic device. The physical size of the die 106, which may also be referred to as a chip or a semiconductor chip, depends at least in part on the ability to separate the wafer 100 into certain sizes, as well as the number of individual devices that the die 106 is designed to contain.

[0031] Once the wafer 100 has been separated into one or more die 106, the die 106 may be mounted into packaging to allow access to the devices and/or integrated circuits fabricated on the die 106. Packaging may include single in-line packaging, dual in-line packaging, motherboard packaging, flip-chip packaging, indium dot/bump packaging, or other types of devices that provide access to the die 106. The die 106 may also be directly accessed through wire bonding, probes, or other connections without mounting the die 106 into a separate package.

[0032] FIGURE 2 illustrates a cross-sectional view of a die 106 in accordance with an aspect of the present disclosure. In the die 106, there may be a substrate 200, which may be a semiconductor material and/or may act as a mechanical support for electronic devices. The substrate 200 may be a doped semiconductor substrate, which has either electron (designated n-type) or hole (designated p-type) charge carriers present throughout the substrate 200. Subsequent doping of the substrate 200 with charge carrier ions/atoms may change the charge carrying capabilities of the substrate 200.

[0033] Within a substrate 200 (e.g., a semiconductor substrate), there may be wells 202 and 204, which may be the source and/or drain of a field-effect transistor (FET), or wells 202 and/or 204 may be fin structures of a fin structured FET (FinFET). Wells 202 and/or 204 may also be other devices (e.g., a resistor, a capacitor, a diode, or other electronic devices) depending on the structure and other characteristics of the wells 202 and/or 204 and the surrounding structure of the substrate 200.

[0034] The semiconductor substrate may also have wells 206 and 208. The well 208 may be completely within the well 206, and, in some cases, may form a bipolar junction transistor (BJT). The well 206 may also be used as an isolation well to isolate the well 208 from electric and/or magnetic fields within the die 106.

[0035] Layers 210 through 214 may be added to the die 106. The layer 210 may be, for example, an oxide or insulating layer that may isolate the wells 202-208 from each other or from other devices on the die 106. In such cases, the layer 210 may be silicon dioxide, a polymer, a dielectric, or another electrically insulating layer.
The layer 210 may also be an interconnection layer, in which case it may be a conductive material such as copper, tungsten, aluminum, an alloy, or other like conductive material.

[0036] The layer 212 may also be a dielectric or conductive layer, depending on the desired device characteristics and/or the materials of the layers 210 and 214. The layer 214 may be an encapsulating layer, which may protect the layers 210 and 212, as well as the wells 202-208 and the substrate 200, from external forces. For example, and not by way of limitation, the layer 214 may be a layer that protects the die 106 from mechanical damage, or the layer 214 may be a layer of material that protects the die 106 from electromagnetic or radiation damage.

[0037] Electronic devices designed on the die 106 may include many features or structural components. For example, the die 106 may be exposed to any number of methods to impart dopants into the substrate 200, the wells 202-208, and, if desired, the layers 210-214. For example, and not by way of limitation, the die 106 may be exposed to ion implantation, deposition of dopant atoms that are driven into a crystalline lattice through a diffusion process, chemical vapor deposition, epitaxial growth, or other methods. Through selective growth, material selection, and removal of portions of the layers 210-214, and through selective removal, material selection, and dopant concentration of the substrate 200 and the wells 202-208, many different structures and electronic devices may be formed within the scope of the present disclosure.

[0038] Further, the substrate 200, the wells 202-208, and the layers 210-214 may be selectively removed or added through various processes. Chemical wet etching, chemical mechanical planarization (CMP), plasma etching, photoresist masking, damascene processes, and other methods may create the structures and devices of the present disclosure.

[0039] FIGURE 3 illustrates a cross-sectional view of a metal-oxide-semiconductor field-effect transistor (MOSFET) device 300 in an aspect of the present disclosure. The MOSFET device 300 may have four terminals: a source 302, a gate 304, a drain 306, and a substrate 308. The source 302 and the drain 306 may be fabricated as the wells 202 and 204 in the substrate 308, or may be fabricated as areas above the substrate 308, or as part of other layers on the die 106 if desired. Such other structures may be a fin or other structure that protrudes from a surface of the substrate 308. Further, the substrate 308 may be the substrate 200 on the die 106, but the substrate 308 may also be one or more of the layers 210-214 that are coupled to the substrate 200.

[0040] The MOSFET device 300 is a unipolar device, as electrical current is produced by only one type of charge carrier (e.g., either electrons or holes) depending on the type of the MOSFET device 300. The MOSFET device 300 operates by controlling the amount of charge carriers in the channel 310 between the source 302 and the drain 306. A voltage Vsource 312 is applied to the source 302, a voltage Vgate 314 is applied to the gate 304, and a voltage Vdrain 316 is applied to the drain 306.
A separate voltage Vsubstrate 318 may also be applied to the substrate 308, although the voltage Vsubstrate 318 may be coupled to one of the voltage Vsource 312, the voltage Vgate 314, or the voltage Vdrain 316.

[0041] To control the charge carriers in the channel 310, the voltage Vgate 314 creates an electric field in the channel 310 when the gate 304 accumulates charges. The opposite charge to that accumulating on the gate 304 begins to accumulate in the channel 310. The gate insulator 320 insulates the charges accumulating on the gate 304 from the source 302, the drain 306, and the channel 310. The gate 304 and the channel 310, with the gate insulator 320 in between, create a capacitor, and as the voltage Vgate 314 increases, the charge carriers on the gate 304, acting as one plate of this capacitor, begin to accumulate. This accumulation of charges on the gate 304 attracts the opposite charge carriers into the channel 310. Eventually, enough charge carriers are accumulated in the channel 310 to provide an electrically conductive path between the source 302 and the drain 306. This condition may be referred to as opening the channel of the FET.

[0042] By changing the voltage Vsource 312 and the voltage Vdrain 316, and their relationship to the voltage Vgate 314, the amount of voltage applied to the gate 304 that opens the channel 310 may vary. For example, the voltage Vsource 312 is usually of a greater potential than that of the voltage Vdrain 316. Making the voltage differential between the voltage Vsource 312 and the voltage Vdrain 316 larger changes the amount of the voltage Vgate 314 used to open the channel 310. Further, a larger voltage differential will change the amount of electromotive force moving charge carriers through the channel 310, creating a larger current through the channel 310.

[0043] The gate insulator 320 material may be silicon oxide, or may be a dielectric or other material with a different dielectric constant (k) than silicon oxide. Further, the gate insulator 320 may be a combination of materials or different layers of materials. For example, the gate insulator 320 may be aluminum oxide, hafnium oxide, hafnium oxide nitride, zirconium oxide, or laminates and/or alloys of these materials. Other materials for the gate insulator 320 may be used without departing from the scope of the present disclosure.

[0044] By changing the material for the gate insulator 320, and the thickness of the gate insulator 320 (e.g., the distance between the gate 304 and the channel 310), the amount of charge on the gate 304 needed to open the channel 310 may vary. A symbol 322 showing the terminals of the MOSFET device 300 is also illustrated. For n-type MOSFETs (using electrons as charge carriers in the channel 310), an arrow is applied to the substrate 308 terminal in the symbol 322 pointing away from the gate 304 terminal. For p-type MOSFETs (using holes as charge carriers in the channel 310), an arrow is applied to the substrate 308 terminal in the symbol 322 pointing toward the gate 304 terminal.

[0045] The gate 304 may also be made of different materials. In some designs, the gate 304 is made from polycrystalline silicon, also referred to as polysilicon or poly, which is a conductive form of silicon.
Although referred to as "poly" or "polysilicon" herein, metals, alloys, or other electrically conductive materials are contemplated as appropriate materials for the gate 304 as described in the present disclosure.

[0046] In some MOSFET designs, a high-k value material may be desired in the gate insulator 320, and in such designs, other conductive materials may be employed. For example, and not by way of limitation, a "high-k metal gate" design may employ a metal, such as copper, for the gate 304 terminal. Although referred to as "metal," polycrystalline materials, alloys, or other electrically conductive materials are contemplated as appropriate materials for the gate 304 as described in the present disclosure.

[0047] Conductive interconnects (e.g., traces) can be used for interconnection to the MOSFET device 300, or for interconnection to other devices in a die 106 (e.g., a semiconductor die). These conductive interconnect traces may be in one or more of the layers 210-214, or may be in other layers of the die 106.

[0048] FIGURE 4 illustrates a transistor in accordance with an aspect of the present disclosure. A fin-structured FET (FinFET 400) operates in a similar fashion to the MOSFET device 300 described with respect to FIGURE 3. A fin 402 in a FinFET 400, however, is grown or otherwise coupled to the substrate 308. The fin 402 includes the source 302, the gate 304, and the drain 306. The gate 304 is coupled to the fin 402 through the gate insulator 320. In a FinFET structure, the physical size of the FinFET 400 may be smaller than the MOSFET device 300 structure shown in FIGURE 3. This reduction in physical size allows for more devices per unit area on the die 106.

[0049] FIGURE 5A illustrates a schematic of a CMOS memory cell 500. FIGURE 5A illustrates a six transistor (6T) cell (also known as a single port cell). In FIGURE 5A, pass gate transistors 502 and 504 are n-channel (NMOS) devices. A memory cell 506 includes a first p-channel pull-up transistor 508 and a second p-channel pull-up transistor 510, and also includes a first NMOS pull-down transistor 512 and a second NMOS pull-down transistor 514. The first p-channel pull-up transistor 508 and the second p-channel pull-up transistor 510 are coupled to a supply voltage (VDD) 516. In addition, the first NMOS pull-down transistor 512 and the second NMOS pull-down transistor 514 are coupled to ground 518.

[0050] The pass gate transistor 502 source and drain are coupled between the memory cell 506 and a bit line (BL) 520. The pass gate transistor 504 source and drain are coupled between the memory cell 506 and a bit line bar (BLB) 522. The gates of the pass gate transistors 502 and 504 are coupled to a word line (WL) 524.

[0051] To read the memory cell 506, the voltage on the word line 524 is raised, possibly to the level of the supply voltage 516. Raising the voltage of the word line 524 provides voltage to the gate of the pass gate transistor 502. This opens the channel in the pass gate transistor 502. Current flows from the bit line 520 through the pass gate transistor 502, and then through the first NMOS pull-down transistor 512 to ground 518. A current path 526 is shown to indicate the direction and path of the current flow through the CMOS memory cell 500 during a read operation.

[0052] FIGURE 5B illustrates an eight transistor (8T) (dual port) CMOS memory cell 528. In the CMOS memory cell 528, additional NMOS transistors 530 and 532 are employed for reading the memory cell 506.
To read the memory cell 506, the read bit line (RBL) 534 is set high, and the read word line 536 is also set high, possibly to VDD 516. This allows the current path 526 to be opened and the memory cell 506 to be read.

[0053] FIGURE 5C illustrates a ten transistor (10T) (three port) CMOS memory cell 538. In the CMOS memory cell 538, two additional NMOS transistors 540 and 542 are employed for reading the memory cell 506. To read the memory cell 506, the second read bit line (RBL2) 544 is set high, and the read word line 546 is also set high, possibly to VDD 516. This allows the current path 548 to be opened and the memory cell 506 to be read.

[0054] FIGURE 6 illustrates a schematic of a CMOS memory cell 600 in an aspect of the present disclosure. In FIGURE 6, p-channel (PMOS) devices are used as a first PMOS pass gate device 602 and a second PMOS pass gate device 604 for the CMOS memory cell 600. The first PMOS pass gate device 602 and the second PMOS pass gate device 604 are shown as transistors in FIGURE 6, but may be other devices. When a read operation is performed on the CMOS memory cell 600, a voltage on the word line 524 is reduced instead of increased. The voltage on the word line 524 may be reduced to zero volts. Further, voltages on the bit line 520 and bit line bar 522 are also reduced, and may also be reduced to zero volts. These voltage conditions open the channel in the first PMOS pass gate device 602. Current flows from the bit line 520 through the first PMOS pass gate device 602, and then through the first p-channel pull-up transistor 508 to the supply voltage (VDD) 516. The present disclosure contemplates employing PMOS devices for the pass gate devices 602 and/or 604, as well as, alternatively or collectively, employing PMOS devices within the scope of the present disclosure for the transistors 530, 532, 540, and/or 542.

[0055] FIGURE 7A illustrates a cross-sectional view of a PMOS device in accordance with an aspect of the present disclosure. A PMOS MOSFET device 700 includes a source 702, a gate 704, a drain 706, and a semiconductor substrate 708. Although shown as a planar device, the PMOS MOSFET device 700 may be a FinFET device or a gate-all-around nanowire device without departing from the scope of the present disclosure.

[0056] In the PMOS MOSFET device 700, electrical current through the channel is produced by holes, and as such the source 702 and the drain 706 are materials that are missing a valence electron in the atomic outer shell. In a silicon-based PMOS device, the source 702 and drain 706 may be doped silicon, where the dopant(s) are from Group III of the periodic table (i.e., boron, aluminum, gallium, indium, and/or thallium). In other semiconductor material systems, the material used either as a dopant or as the underlying material may be from other periodic table groups.

[0057] In the PMOS MOSFET device 700, the source 702 and/or the drain 706 may include stressor geometries and/or stressor materials to increase the charge carrier mobility in the channel 710. For example, and not by way of limitation, in a semiconductor substrate 708 composed of silicon, silicon germanium (SiGe) may be a material in the source 702 and/or drain 706 to provide stress on the channel 710. The difference in the lattice geometries, as well as the difference in atomic size and atomic bond length between SiGe and silicon, provides a compressive stress on the channel 710.
The stress on the channel 710 increases the hole mobility through the channel 710.

[0058] As shown in FIGURE 7A, the source 702 and/or the drain 706 may also have irregular shapes, such as saw tooth shapes, grooves, curved shapes, or other shapes or portions of the source 702 and/or drain 706 that lie underneath the gate 704. Such stressor regions 712 help increase the stress on the channel 710.

[0059] In an aspect of the present disclosure, the channel 710 may also include different materials to increase the stress in the channel 710. For example, SiGe may also be in the channel 710 to provide additional stress throughout the channel 710, which would further increase the hole mobility in the PMOS MOSFET device 700. The stressor regions 712 and different materials in the channel 710, source 702, and/or drain 706 increase the carrier mobility through the PMOS MOSFET device 700 over that of a channel 710 composed of silicon (e.g., in a silicon-based MOSFET device). In other words, the channel 710 may have a material, geometry, or other property that has an intrinsic channel mobility greater than an intrinsic channel mobility of the semiconductor substrate 708.

[0060] Because NMOS devices and PMOS devices have different charge carrier mobility, different materials may be used for PMOS devices than for NMOS devices. One of the materials in PMOS devices is silicon germanium (SiGe), but other materials, such as Group III-Group V (III-V) binary materials, Group II-Group VI (II-VI) materials, or other materials having a channel mobility higher than that of silicon, may be employed in the p-channel device portions of CMOS devices.

[0061] By increasing the channel 710 charge carrier mobility of the PMOS MOSFET device 700, when used as the first PMOS pass gate device 602 and/or the second PMOS pass gate device 604, or as the first p-channel pull-up transistor 508 and/or the second p-channel pull-up transistor 510, the carrier mobility through the PMOS portions of a CMOS device is increased. As such, the speed through the CMOS memory cell 600 for a read operation is increased. Similar speed increases are realized for write operations, because the current is flowing through devices having a carrier mobility greater than that of the silicon NMOS devices in the CMOS memory cells.

[0062] Because these improvements are at the bit cell level, the overall static random access memory (SRAM) bit cell/array performance and reliability are improved. These improvements will be applicable regardless of scaling of the devices, because the materials are not as affected by lithography as other speed improvement techniques.

[0063] Although SiGe is described in FIGURES 5, 6, and 7A, any other semiconductor material composition having a higher carrier mobility than that of silicon may realize the improvements and structures of the present disclosure. Having greater carrier mobility through multiple devices within the CMOS memory cell 600 increases the read/write speeds and improves cell write margins over NMOS pass gate devices. This technique also improves FinFET performance in small geometries (e.g., below 14 nanometers), where SRAM performance tends to degrade due to supply voltage scaling and higher current variations.

[0064] For example, and not by way of limitation, a SiGe PMOS pull-up (PU) transistor (e.g., the first p-channel pull-up transistor 508 and/or the second p-channel pull-up transistor 510) in the CMOS memory cell 600 improves the minimum read voltage (Read Vmin) of the SRAM bit cell by approximately 10%.
A SiGe PMOS pass gate (PG) transistor 602/604 improves the SRAM read performance and write margin (WRM) (e.g., by approximately 20% and 40%, respectively).

[0065] SiGe channel PMOS pass gate transistors 602/604 also offer a built-in guard band against negative bias temperature instability (NBTI) degradation. NBTI severely degrades the CMOS memory cell 600 read stability (e.g., minimum read voltage, Vmin) over time. This reliability improvement is based on a reduced interaction between channel carriers and defects in the gate dielectric in the pass gate and pull-up transistors. These performance enhancements may be realized in any CMOS SRAM memory cell, such as a 6T SRAM cell, an 8T SRAM cell, and a 10T SRAM cell. Further, the SRAM cell may be a planar device, a FinFET device, or a gate-all-around nanowire device.

[0066] FIGURE 7B illustrates a top-down view of a CMOS memory cell in accordance with an aspect of the present disclosure. The CMOS memory cell 500 includes an n-well 714 and an n-well 716. The PMOS MOSFET device 700 may be included within the n-wells 714 and 716. In the n-well 714, devices (e.g., the first PMOS pass gate device 602 and the first p-channel pull-up transistor 508) are coupled to the bit line 520, the supply voltage 516 (e.g., VDD), and the word line 524. In the n-well 716, devices (e.g., the second PMOS pass gate device 604 and the second p-channel pull-up transistor 510) are coupled to the bit line bar 522, the supply voltage 516, and the word line 524. The CMOS memory cell 500 also includes the first NMOS pull-down transistor 512 and the second NMOS pull-down transistor 514, coupled to Vss (e.g., ground 518) and to the n-wells 714 and 716, as shown in FIGURE 6.

[0067] FIGURE 8 is a process flow diagram illustrating a method 800 for fabricating a device on a semiconductor substrate according to an aspect of the present disclosure. In block 802, a CMOS memory cell is coupled to a bit line with a first p-channel device. In block 804, the CMOS memory cell is coupled to a word line with the first p-channel device. The first p-channel device includes a first channel material that differs from a substrate material of the CMOS memory cell. The first channel material has an intrinsic channel mobility greater than the intrinsic channel mobility of the substrate material. In addition, the first p-channel device couples the CMOS memory cell to the bit line and the word line, for example, as shown in FIGURE 6.

[0068] According to a further aspect of the present disclosure, a complementary metal oxide semiconductor (CMOS) static random access memory (SRAM) cell is described. In one configuration, the CMOS SRAM cell includes a CMOS memory cell. The CMOS memory cell may be, for example, the memory cell 506 as shown in FIGURE 5. The CMOS SRAM cell also includes a bit line and a word line. The bit line may be the bit line 520 and the word line may be the word line 524 as shown in FIGURE 5. The CMOS SRAM cell also includes means for coupling the CMOS memory cell to the bit line and the word line. The means for coupling has an intrinsic channel mobility greater than the intrinsic channel mobility of a substrate of the CMOS memory cell. The coupling means may be, for example, the first PMOS pass gate device 602 as shown in FIGURE 6.
In another aspect, the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means.

[0069] FIGURE 9 is a block diagram showing an exemplary wireless communication system 900 in which an aspect of the disclosure may be advantageously employed. For purposes of illustration, FIGURE 9 shows three remote units 920, 930, and 950 and two base stations 940. It will be recognized that wireless communication systems may have many more remote units and base stations. Remote units 920, 930, and 950 include IC devices 925A, 925C, and 925B that include the disclosed PMOS transistors. It will be recognized that other devices may also include the disclosed PMOS transistors, such as the base stations, switching devices, and network equipment. FIGURE 9 shows forward link signals 980 from the base stations 940 to the remote units 920, 930, and 950 and reverse link signals 990 from the remote units 920, 930, and 950 to the base stations 940.

[0070] In FIGURE 9, remote unit 920 is shown as a mobile telephone, remote unit 930 is shown as a portable computer, and remote unit 950 is shown as a fixed-location remote unit in a wireless local loop system. For example, a remote unit may be a mobile phone, a hand-held personal communication systems (PCS) unit, a portable data unit such as a personal data assistant, a GPS-enabled device, a navigation device, a set top box, a music player, a video player, an entertainment unit, a fixed-location data unit such as meter reading equipment, or other devices that store or retrieve data or computer instructions, or combinations thereof. Although FIGURE 9 illustrates remote units according to the aspects of the disclosure, the disclosure is not limited to these exemplary illustrated units. Aspects of the disclosure may be suitably employed in many devices that include the disclosed PMOS transistors.

[0071] FIGURE 10 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, such as the devices disclosed above. A design workstation 1000 includes a hard disk 1002 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 1000 also includes a display 1004 to facilitate design of a circuit 1006 or a semiconductor component 1008 such as a PMOS transistor of the present disclosure. A storage medium 1010 is provided for tangibly storing the design of the circuit 1006 or the semiconductor component 1008. The design of the circuit 1006 or the semiconductor component 1008 may be stored on the storage medium 1010 in a file format such as GDSII or GERBER. The storage medium 1010 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. Furthermore, the design workstation 1000 includes a drive apparatus 1012 for accepting input from or writing output to the storage medium 1010.

[0072] Data recorded on the storage medium 1010 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography. The data may further include logic verification data such as timing diagrams or net circuits associated with logic simulations. Providing data on the storage medium 1010 facilitates the design of the circuit 1006 or the semiconductor component 1008 by decreasing the number of processes for designing semiconductor wafers.
[0073] For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. A machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory and executed by a processor unit. Memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to types of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to a particular type of memory or number of memories, or type of media upon which memory is stored.

[0074] If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or other media that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0075] In addition to storage on computer readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims.

[0076] Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the technology of the disclosure as defined by the appended claims. For example, relational terms, such as "above" and "below", are used with respect to a substrate or electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. Additionally, if oriented sideways, above and below may refer to sides of a substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular configurations of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding configurations described herein may be utilized according to the present disclosure.
Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

[0077] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0078] The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

[0079] The steps of a method or algorithm described in connection with the disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0080] In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0081] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Apparatus and methods are presented for a touch user interface using an image sensor. A method for processing image-based input commands for a user interface includes receiving image frames from a sensor, determining when the sensor enters a cover state, determining, from subsequent image frames, when the sensor enters a de-cover state, analyzing information based upon the subsequent image frames to interpret a user command, and issuing the user command to a user interface. An apparatus having an image-based user interface includes an image sensor, and a processor connected to a memory, where the processor is configured with logic to receive image frames from the image sensor, to determine when the image sensor enters a cover state, to determine, from subsequent image frames, when the image sensor enters a de-cover state, to analyze information based upon the subsequent image frames to interpret a user command, and to issue the user command to a user interface.
CLAIMS

1. A method for processing image-based input commands for a user interface, comprising: receiving image frames from a sensor; determining when the sensor enters a cover state; determining, from subsequent image frames, when the sensor enters a de-cover state; analyzing information based upon the subsequent image frames to interpret a user command; and issuing the user command to a user interface.

2. The method according to claim 1, further comprising: subdividing an image frame into tiles; computing a metric for each tile; and performing a count of the tiles which have a predetermined value for the metric.

3. The method according to claim 2, further comprising: performing the method of claim 2 on subsequently received frames until the count exceeds a predetermined number.

4. The method according to claim 3, wherein when the count exceeds the predetermined number, further comprising: storing a reference metric for each tile; subdividing the subsequent frames into tiles; computing a metric for each tile; and computing at least one trail value for tiles having metrics exceeding predetermined values.

5. The method according to claim 4, further comprising performing the method of claim 4 on subsequently received frames until all tiles have a corresponding trail value.

6. The method according to claim 5, further comprising: interpreting the user command as "select" or "enter" if a predetermined number of image frames are processed before all of the tiles have a corresponding trail value.

7. The method according to claim 1, further comprising: computing a gradient of a de-cover map; determining the direction of movement based upon the gradient; and issuing a command to the user interface based upon the direction.

8. The method according to claim 7, further comprising: determining if the gradient exceeds a predetermined value; determining if a predetermined number of trail values exceed a predetermined value; and interpreting the user command as a "select" or "enter" based upon the gradient and trail determination.

9. The method according to claim 2, wherein the metric includes an average of the luminance and a standard deviation of the luminance.

10. The method according to claim 1, wherein the sensor is a camera.

11. The method according to claim 10, wherein the user command is entered by placing a finger over the camera.

12. The method according to claim 11, wherein a series of gestures is interpreted as a command associated with the camera's control parameters.

13. The method according to claim 1, wherein the image frames received from the sensor are substantially based upon infrared radiation.

14. An apparatus having an image-based user interface, comprising: an image sensor; and a processor connected to a memory, wherein the processor is configured with logic to: receive image frames from the image sensor; determine when the image sensor enters a cover state; determine, from subsequent image frames, when the image sensor enters a de-cover state; analyze information based upon the subsequent image frames to interpret a user command; and issue the user command to a user interface.

15. The apparatus according to claim 14, wherein the processor is further configured with logic to: subdivide an image frame into tiles; compute a metric for each tile; and perform a count of the tiles which have a predetermined value for the metric.
16. The apparatus according to claim 15, wherein the processor is further configured with logic to: perform the logic of claim 15 on subsequently received frames until the count exceeds a predetermined number.

17. The apparatus according to claim 16, wherein the processor is further configured with logic to: store a reference metric for each tile; subdivide the subsequent frames into tiles; compute a metric for each tile; and compute at least one trail value for tiles having metrics exceeding predetermined values.

18. The apparatus according to claim 17, wherein the processor is further configured with logic to perform the logic of claim 17 on subsequently received frames until all tiles have a corresponding trail value.

19. The apparatus according to claim 18, wherein the processor is further configured with logic to interpret the user command as "select" or "enter" if a predetermined number of image frames are processed before all of the tiles have a corresponding trail value.

20. The apparatus according to claim 14, wherein the processor is further configured with logic to: compute a gradient of a de-cover map; determine the direction of movement based upon the gradient; and issue a command to the user interface based upon the direction.

21. The apparatus according to claim 20, wherein the processor is further configured with logic to: determine if the gradient exceeds a predetermined value; determine if a predetermined number of trail values exceed a predetermined value; and interpret the user command as a "select" or "enter" based upon the gradient and trail determination.

22. The apparatus according to claim 15, wherein the metric includes an average of the luminance and a standard deviation of the luminance.

23. The apparatus according to claim 14, wherein the sensor is a camera and the user command is entered by placing a finger over the camera.

24. The apparatus according to claim 23, wherein the camera is recessed from a body of the apparatus so the finger does not come in physical contact with the camera.

25. A mobile device having an image-based touch user interface, comprising: a camera; and a processor connected to a memory, wherein the processor comprises logic configured to: receive an image frame from the camera; subdivide the image frame into tiles; compute a metric for each tile; perform a count of the tiles which have a predetermined value for the metric; determine a de-cover map based upon trail values from subsequent image frames; compute a gradient of the de-cover map; determine the direction of movement based upon the gradient; and issue a command to the user interface based upon the direction.

26. An apparatus for processing image-based input commands for a user interface, comprising: means for receiving image frames from a sensor; means for determining when the sensor enters a cover state; means for determining, from subsequent image frames, when the sensor enters a de-cover state; means for analyzing information based upon the subsequent image frames to interpret a user command; and means for issuing the user command to a user interface.

27. The apparatus according to claim 26, further comprising: means for subdividing an image frame into tiles; means for computing a metric for each tile; and means for performing a count of the tiles which have a predetermined value for the metric.

28. The apparatus according to claim 27, further comprising: means for processing subsequently received frames until the count exceeds a predetermined number.
29. The apparatus according to claim 28, wherein when the count exceeds the predetermined number, further comprising: means for storing a reference metric for each tile; means for subdividing the subsequent frames into tiles; means for computing a metric for each tile; and means for computing at least one trail value for tiles having metrics exceeding predetermined values.

30. A computer-readable medium including program code stored thereon, which, when executed by a machine, causes the machine to perform operations for processing image-based input commands for a user interface, the computer-readable medium comprising: program code to receive image frames from a sensor; program code to determine when the sensor enters a cover state; program code to determine, from subsequent image frames, when the sensor enters a de-cover state; program code to analyze information based upon the subsequent image frames to interpret a user command; and program code to issue the user command to a user interface.

31. The computer-readable medium according to claim 30, further comprising: program code to subdivide the image frames into tiles; program code to compute a metric for each tile; and program code to perform a count of the tiles which have a predetermined value for the metric.

32. The computer-readable medium according to claim 31, further comprising: program code to process subsequently received frames until the count exceeds a predetermined number.

33. The computer-readable medium according to claim 32, wherein when the count exceeds the predetermined number, further comprising: program code to store a reference metric for each tile; program code to subdivide the subsequent frames into tiles; program code to compute a metric for each tile; and program code to compute at least one trail value for tiles having metrics exceeding predetermined values.
APPARATUS AND METHODS FOR A TOUCH USER INTERFACE USING AN IMAGE SENSOR

FIELD OF DISCLOSURE

[0001] The embodiments of the disclosure relate generally to image sensor based interfaces, and more specifically, to mobile devices having interfaces which utilize an image sensor for receiving user commands.

BACKGROUND

[0002] As mobile devices have increased in power and sophistication, user interface developers are facing the challenges of exploiting the devices' expanding capabilities while simultaneously improving their ease of use.

[0003] Touch screens have increased in popularity as a user interface for mobile devices due to recent advances in multi-touch functionality and their intuitive approach, which simplifies complex user interface navigation. Touch screens also may have the advantage of maximizing the screen size of the mobile device because real keyboards and/or other physical cursor control interfaces can be omitted. However, touch screens may be associated with a number of operational drawbacks, such as the lack of tactile feedback of virtual keyboards and other controls, screen occlusion by the user's finger, and/or smudging of the surface of the display during use. Moreover, touch screen displays are typically more expensive to develop and manufacture than their non-touch counterparts.

[0004] Given the aforementioned drawbacks of touch-screen displays, some users prefer using a physical keypad along with a smaller display on their mobile devices. In conjunction with such physical user interfaces, other conventional approaches have been suggested for bringing the intuitive nature of touch screen capabilities for use in existing and future mobile devices. These approaches can leverage the integrated digital cameras which are commonly included with many mobile devices.

[0005] Some of these conventional approaches suggest using MPEG motion vector algorithms to determine how a user is moving a hand in front of the camera. Other systems may estimate the orientation (e.g., tilt) of the mobile device using the integrated camera for determining user input. These approaches may involve algorithms operating in real-time to ensure the user interface is sufficiently responsive. Accordingly, they may be computationally intensive and can burden the mobile device's on-board processor(s) and/or utilize specialized hardware. The conventional approaches may therefore adversely impact cost and increase the power consumption of the mobile device.

[0006] In addition, these conventional approaches may require the user to perform exaggerated hand and/or arm motions in front of the camera, which may undesirably draw attention to the user and/or induce fatigue over time. Also, these algorithms may present challenges for determining how to designate selection points and/or performing relative navigation tasks (e.g., resetting selection points when sliding/dragging/etc., objects in the user interface a distance which exceeds a single user motion). Moreover, such techniques may require a user to keep a hand steady or still to properly make selections and/or to avoid unintentionally selecting an item.

[0007] Accordingly, it would be desirable to provide a touch user interface navigation technique for existing and future camera phones, which can avoid the aforementioned drawbacks and be implemented in a cost-effective manner.

SUMMARY

[0008] Exemplary embodiments of the invention are directed to apparatus and methods for a touch user interface using an image sensor.
[0009] In one embodiment, a method for processing image-based input commands for a user interface is presented. The method may include receiving image frames from a sensor, determining when the sensor enters a cover state, determining, from subsequent image frames, when the sensor enters a de-cover state, analyzing information based upon the subsequent image frames to interpret a user command, and issuing the user command to a user interface.

[0010] In another embodiment, an apparatus having an image-based user interface is presented. The apparatus may include an image sensor, and a processor connected to a memory, where the processor is configured with logic to receive image frames from the image sensor, determine when the image sensor enters a cover state, determine, from subsequent image frames, when the image sensor enters a de-cover state, analyze information based upon the subsequent image frames to interpret a user command, and issue the user command to a user interface.

[0011] Another embodiment of the invention can include a mobile device having an image-based touch user interface, including a camera; and a processor connected to a memory. The processor includes logic configured to receive an image frame from the camera; to subdivide the image frame into tiles; to compute a metric for each tile; to perform a count of the tiles which have a predetermined value for the metric; to determine a de-cover map based upon trail values from subsequent image frames; to compute a gradient of the de-cover map; to determine the direction of movement based upon the gradient; and to issue a command to the user interface based upon the direction.

[0012] Another embodiment of the invention can include an apparatus for processing image-based input commands for a user interface, including means for receiving image frames from a sensor; means for determining when the sensor enters a cover state; means for determining, from subsequent image frames, when the sensor enters a de-cover state; means for analyzing information based upon the subsequent image frames to interpret a user command; and means for issuing the user command to a user interface.

[0013] Another embodiment of the invention can include a computer-readable medium including program code stored thereon, which, when executed by a machine, causes the machine to perform operations for processing image-based input commands for a user interface. The computer-readable medium includes program code to receive image frames from a sensor; program code to determine when the sensor enters a cover state; program code to determine, from subsequent image frames, when the sensor enters a de-cover state; program code to analyze information based upon the subsequent image frames to interpret a user command; and program code to issue the user command to a user interface.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] The accompanying drawings are presented to aid in the description of embodiments of the invention and are provided solely for illustration of the embodiments and not limitation thereof.

[0015] Figs. 1A-1D are diagrams showing an overview of the operation of an exemplary mobile device having an image-based touch user interface.

[0016] Fig. 2 is a block diagram showing an exemplary configuration of a mobile device having an image-based touch user interface.

[0017] Fig. 3 is a flowchart depicting an exemplary top-level process associated with the image-based touch user interface.

[0018] Fig. 4 is a flowchart depicting an exemplary process for determining a cover state associated with the image-based touch user interface.
[0019] Fig. 5 is a flowchart depicting an exemplary process for determining a de-cover state associated with the image-based touch user interface.

[0020] Fig. 6 is a flowchart depicting an exemplary process for determining a user command associated with the image-based touch user interface.

DETAILED DESCRIPTION

[0021] Aspects of the invention are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternate embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.

[0022] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments of the invention" does not require that all embodiments of the invention include the discussed feature, advantage, or mode of operation.

[0023] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes", and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0024] Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, "logic configured to" perform the described action.

[0025] Figs. 1A-1D are diagrams showing an overview of the operation of an exemplary mobile device 100 having an image-based touch user interface (IBTUI). Fig. 1A shows an exemplary mobile device 100 as a flip-phone (shown with the top portion of the phone cut away). The back surface 105 of the device includes the outer portion of an image sensor 110 which may be continuously collecting image frames 115.
During operation, the IBTUI may track the path of an object exiting the field of view of the sensor 110 after the sensor has been covered. Once tracking is complete, a command may be interpreted by the IBTUI based upon the nature of the tracked motion.

[0026] As shown in Fig. 1B, a user may initiate command entry by initially placing a finger 120 over the image sensor 110. This action may substantially cover or fully cover the image sensor 110 so as to produce one or more image frames 125 having low luminance values. This places the IBTUI in a "cover state," and signals the IBTUI to track the motion of the finger 120 as it leaves the field of view of the image sensor 110. As shown in Fig. 1C, the finger 120 is leaving the field of view of the image sensor 110 by moving towards the left of the page. A series of image frames 130 may be produced having luminance variations corresponding to this movement. The image frames may be processed to interpret the movement as a command. In this instance, the movement of the finger may be interpreted as a command to produce a corresponding movement of a cursor in the mobile device's graphical user interface (GUI). Fig. 1D shows a movement of the user's finger 120 going towards the bottom of the page, and a series of image frames 135 being produced having corresponding luminance variations. This movement may produce a command moving the cursor in the mobile device's GUI downward. As will be described below, other movements and/or gestures may be interpreted as different commands.

[0027] While the mobile device 100 is shown as a camera flip-phone, other embodiments of the invention may be directed to any type of device, as will be described in more detail below.

[0028] Fig. 2 is a block diagram showing an exemplary configuration 210 of a mobile device 200 having an image-based touch user interface (IBTUI). The mobile device 200 may have a platform 210 that can exchange data and/or commands over a network. The platform 210 can include a transceiver 215 (which may further include a transmitter and a receiver, which are not explicitly shown) operably coupled to a processor 220, or other controller, microprocessor, ASIC, logic circuit, or any other data processing device. The processor 220 may execute programs stored in the memory 225 of the mobile device 200. One program which may execute thereon can be associated with the image-based touch user interface, which may provide inputs to the graphical user interface of the mobile device 200. The memory 225 may store executable modules (e.g., the IBTUI, GUI, etc.), image frames, and other data structures, including those associated with the operation of the IBTUI. The memory 225 can be comprised of read-only and/or random-access memory (RAM and ROM), EEPROM, flash cards, or any memory common to such platforms. The image sensor 230 may be functionally coupled to the processor 220 and may typically be sensitive to visible light. Other embodiments of the invention may feature an image sensor 230 which is also capable of exploiting other wavelengths so the IBTUI may operate in the absence of visible light. An optical component of the image sensor associated with the outer surface of the mobile device 200 (e.g., a clear cover protecting a camera lens) may be mounted in a recessed manner. With this arrangement, the user's finger may not actually come into physical contact with the image sensor, thus preventing the user's finger from introducing foreign objects (e.g., dirt, grease, etc.)
into the image sensor's optical path, or otherwise damaging (e.g., scratching) the image sensor. Accordingly, the image-based touch user interface does not require actual touching of the image sensor. [0029] The image sensor 230 may be a camera that records image frames at a periodic rate (e.g., 30 frames/second) and may use conventional digital video formats. When accepting user input via the IBTUI, the image sensor 230 may continuously provide image frames for IBTUI processing. For example, the image sensor 230 may be providing image frames to the processor 220 when the user is displaying a "Contacts" screen, in order to accept input from the user's finger for use in cursor movement and/or selection within the screen. When the image sensor 230 is not providing image frames for the IBTUI, the image sensor may serve to provide pictures and/or videos. Additionally, the image sensor 230 may collect, and the IBTUI may utilize, image frames without any conventional processing associated with improving the aesthetic qualities of the image frames. For example, when the image sensor is being used for the IBTUI, the image frames may not have any white balance, color balance, auto-focus, image sharpening, etc. performed. Omitting such processing will reduce the computational burden placed on the mobile device 200 when using the IBTUI, and may further enhance battery life. [0030] The various logic elements for providing commands can be embodied in discrete elements, software modules executed on a processor, or any combination of software and hardware to achieve the functionality disclosed herein. For example, the processor 220 and the memory 225 may be used cooperatively to load, store and execute the various functions disclosed herein, and thus the logic to perform these functions may be distributed over various elements. Alternatively, the functionality could be incorporated into one discrete component (e.g., in embedded memory in the processor 220). Therefore, the features of the mobile device 200 in Fig. 2 are to be considered merely illustrative and embodiments of the invention are not limited to the illustrated features or arrangement. [0031] Moreover, embodiments of the invention may be used in conjunction with any device and are not limited to the illustrated embodiments. For example, devices can include cellular telephones, access terminals, personal digital assistants, music players, radios, GPS receivers, laptop computers, kiosks, and the like. [0032] Fig. 3 is a flowchart depicting an exemplary top-level process 300 associated with the image-based touch user interface (IBTUI). The process 300 may begin when the mobile device 200 is initially turned on or power cycled, and the processor 220 begins initialization of a variety of processes for device operation (310). This may include the initialization of the graphical user interface and the hardware and software/firmware/logic components associated with receiving and processing image frames from the image sensor 230. The image frames may be presented in a conventional video format (e.g., 30 frames/sec with each frame having 240x320 pixels) and use a luminance-chrominance color space (YCrCb). The frames may also be presented in a quasi-video format having a reduced frame rate and/or lower spatial sampling within each image frame. Additionally, the image frames may forgo pre-processing to enhance color, white balance, sharpness, and/or improve other aesthetic qualities.
[0033] The process may then begin analyzing images generated by image sensor 230 to determine if the image sensor is in a cover state. As defined herein, the cover state occurs when the image sensor 230 is covered by an object (typically the user's finger). This analysis may be performed on the luminance channel of the image frames, and may include computing one or more metrics based upon average brightness and/or detail (315). These metrics may be statistical in nature, and will be described in more detail below. The process may then make a determination as to whether the image sensor is in a cover state by performing a threshold comparison using the metrics computed in Block 315 (320). If the determination indicates the image sensor 230 is in the cover state, the process proceeds to Block 325; otherwise, the analysis in Block 315 continues until the cover state is reached. Details of Blocks 315 and 320 are presented below in the description of Fig. 4. [0034] When it is determined that the image sensor has entered the cover state, the process 300 then begins analyzing subsequent image frames to determine when the image sensor transitions to the "de-cover state" (325 and 330). As used herein, the de-cover state is defined as the state when the user's finger has moved off the image sensor to the extent where its motion may be reliably tracked. This may be determined by computing luminance and/or detail metrics and their changes over time. During this process, a de-cover map may be produced to store the computed metrics and their temporal variations. Once the de-cover map is complete, the process may proceed to Block 335 where the de-cover map is analyzed. Details of Blocks 325 and 330 are presented below in the description of Fig. 5. [0035] In Block 335, the de-cover map is analyzed to determine how the finger moved off of the image sensor 230. By analyzing the spatial variations within the de-cover map, the direction of finger movement may be determined (335). This information may be used to interpret a command which may in turn be provided to the mobile device's graphical user interface. Details of Blocks 335 and 340 are presented below in the description of Fig. 6. [0036] Accordingly, an embodiment of the invention may include a method 300 for processing image-based input commands for a user interface. The method may include receiving image frames from a sensor and determining when the sensor enters a cover state (315, 320), determining, from subsequent image frames, when the sensor enters a de-cover state (325, 330), analyzing information based upon the subsequent image frames to interpret a user command (335), and issuing the user command to a user interface (340). [0037] Moreover, another embodiment of the invention may include an apparatus 200 having an image-based user interface. The apparatus may include an image sensor 230 and a processor 220 connected to a memory 225. The processor may be configured with logic to receive image frames from a sensor and determine when the sensor enters a cover state (315, 320), determine, from subsequent image frames, when the sensor enters a de-cover state (325, 330), analyze information based upon the subsequent image frames to interpret a user command (335), and issue the user command to a user interface (340).
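Before turning to the details of Figs. 4 through 6, the overall flow of Blocks 315 through 340 may be summarized as a simple loop. The following Python sketch is illustrative only and is not from the source; the callables is_covered, build_decover_map, and interpret_map are hypothetical stand-ins for the processing detailed below.

from typing import Any, Callable, Iterator

def ibtui_cycle(frames: Iterator[Any],
                is_covered: Callable[[Any], bool],
                build_decover_map: Callable[[Any, Iterator[Any]], Any],
                interpret_map: Callable[[Any], str]) -> str:
    """One cover -> de-cover -> command cycle of process 300."""
    # Blocks 315/320: consume frames until the sensor is covered.
    frame = next(frames)
    while not is_covered(frame):
        frame = next(frames)
    # Blocks 325/330: the covered frame supplies the reference tile
    # metrics; subsequent frames are folded into the de-cover map as the
    # finger lifts off the sensor.
    decover_map = build_decover_map(frame, frames)
    # Blocks 335/340: interpret the map and return the user command.
    return interpret_map(decover_map)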
[0038] Fig. 4 is a flowchart depicting an exemplary process 400 for determining a cover state associated with the image-based touch user interface. The process 400 may start out by receiving image frame i from the image sensor 230 (410). The image frame may then be subdivided into n x m tiles (e.g., each tile may include approximately 60 x 80 pixels and n = m = 4 for a 240x320 portrait preview frame). Luminance and/or detail metrics may then be computed for each tile by processor 220 for pixels from the luminance channel of the image (420). The luminance metric for each tile may be computed by determining the average luminance within each tile. The detail metric may be computed by determining the standard deviation of each tile. The standard deviation (std) may be approximated by the following equation for quick execution by processor 220:

std = ( Σ_{val=0}^{255} hist(val) · |avg − val| ) / 255

where: val is an intensity value which may be taken on by 8-bit pixels; hist(val) is the histogram of the luminance values, i.e., the number of pixels in the tile having a luminance of val; and avg is the previously computed average luminance value. [0039] Note that the above equation assumes the luminance pixels are stored using 8-bit integers, but the equation may be modified to accommodate other data types, and embodiments of the invention are not limited to the aforementioned equation or data type. [0040] Once the luminance and/or detail metrics are computed for all of the tiles in image frame i, process 400 may proceed by counting the number of tiles which satisfy the predetermined threshold value(s) (425). For example, in one embodiment, the number of tiles having an average value less than 30 and a std value less than 100 may be used to establish a count. This count number is then tested to determine if it exceeds a threshold value (430). The threshold value is predetermined, and it may be set to some fraction of the total number of tiles in the image frame (e.g., the predetermined threshold number may be set to 0.95*n*m). If the count number fails to exceed the threshold, the frame count is incremented and the next image frame is received for cover state determination processing (435, 410). Once it is determined that the count number exceeds the predetermined threshold number, the image sensor 230 is determined to be in a cover state. The processing may then proceed to the de-cover process 500, as described below. [0041] Fig. 5 is a flowchart depicting an exemplary process 500 for determining a de-cover state associated with the image-based touch user interface. Initially, the metrics of the tile values corresponding to the image frame when the cover state was detected are stored (510). For example, the average luminance and the std may be stored for each tile. These metrics may be stored in a data structure, referred to herein as the reference tile metrics, which may take the form of a multi-dimensional matrix. [0042] The next image frame is then received from image sensor 230, and is subdivided into n x m tiles as described above (515, 520). The process 500 may then compute luminance and/or detail metrics for each tile in a manner as described above for the cover state determination (525). Once processor 220 computes the metric(s) for each tile, each tile is examined and a trail value may be assigned thereto when the current tile's metric exceeds the corresponding reference tile's metric by a predetermined amount. This comparison process may be performed between each tile in the current image frame j, and the previously stored reference tile metrics which were associated with image frame i.
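To make the tile metrics concrete, the following Python sketch computes the average-luminance and detail metrics for each tile and applies the counting test of Blocks 425 and 430. It is a minimal illustration, not from the source: the division by 255 follows the equation as reconstructed above, and the function names, parameter defaults, and use of NumPy are all assumptions.

import numpy as np

def tile_metrics(tile: np.ndarray) -> tuple[float, float]:
    """Block 420: average luminance and histogram-based detail (std)
    metrics for one tile of 8-bit luminance pixels."""
    avg = float(tile.mean())
    hist = np.bincount(tile.ravel(), minlength=256)
    vals = np.arange(256)
    # Reconstructed approximation: std = sum(hist(val) * |avg - val|) / 255.
    std = float((hist * np.abs(avg - vals)).sum()) / 255.0
    return avg, std

def is_cover_state(frame: np.ndarray, n: int = 4, m: int = 4,
                   avg_thresh: float = 30.0, std_thresh: float = 100.0,
                   frac: float = 0.95) -> bool:
    """Blocks 425/430: report a cover state when enough of the n x m
    tiles are both dark (low avg) and featureless (low std)."""
    tile_h, tile_w = frame.shape[0] // n, frame.shape[1] // m
    count = 0
    for i in range(n):
        for j in range(m):
            tile = frame[i * tile_h:(i + 1) * tile_h,
                         j * tile_w:(j + 1) * tile_w]
            avg, std = tile_metrics(tile)
            if avg < avg_thresh and std < std_thresh:
                count += 1
    return count >= frac * n * m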
The comparison between current and reference tiles may be based upon predetermined threshold values. For example, a trail value for a tile may be computed when a given tile's average luminance exceeds the corresponding reference tile's average luminance by 30 levels, and/or when the given tile's std exceeds the std of the reference tile by 90 levels. [0043] Each trail value is associated with a tile, and may therefore be stored in an n x m data structure. The trail value may be computed using the following equation:

trail(x, y) = 100 · j − ( avg(x, y) − refTile(x, y) ) / T

where: j is the current frame number, which corresponds to time; avg(x, y) is the average luminance value of the tile in position x, y in frame j; refTile(x, y) is the average luminance value of the reference tile in position x, y; and T is a threshold value (e.g., 30). [0044] In general, the trail value indicates when a specific tile was uncovered. The larger the trail value, the later in time that particular tile was uncovered. However, the trail values contain information about both time and the amount of luminance gained, in order to "break ties" as to when various tiles were uncovered. The time component of the trail value may be encoded by frame number j, and may only take on integer amounts. In order to provide greater granularity to the time component of the trail values, the time information (j) may be modified by the difference between the average luminance of the current tile and its corresponding reference. If this difference is large, it implies the tile in question was uncovered sooner, and thus an amount is deducted from the time information (this amount being the scaled difference). Each trail value may be stored in a two-dimensional structure called the de-cover map. The de-cover map may have n x m entries, each one corresponding to a tile position in the current image frame j. For example, a tile with a trail value of 292 may be uncovered after a tile with trail 192 (a tile in the 200's range was de-covered after a tile in the 100's range). "Ties" between tiles having been uncovered during the same image frame may be "broken" based on the light level gained. For example, assume a tile with a trail of 299 (frame j = 3, still a bit dark) was de-covered after a tile with trail 292 (frame j = 3, but got much brighter relative to its stored covered luma value). [0045] Once a trail value has been computed for each tile position (each x, y over the n x m tiles), the de-cover map is complete. A test may be performed to determine if a predetermined number of the tiles have an associated trail value, which may indicate a de-cover state (535). The predetermined number of tiles may, for example, be a majority of the tiles within the frame. If the determination is true, the process continues on to process 600. Otherwise, a test is performed to determine if a threshold number of frames have been processed (540, 545). If the number of frames exceeds the threshold, it implies the user is holding the finger on the image sensor, and the command may be interpreted as a "select" or "enter." (For example, the number of frames passed may be set to correspond to a two second time period.) Select/enter commands may be analogous to a mouse click or a key press of an "Enter" key on a keyboard, and may be used to select an object or enter a value in the GUI of the mobile device 200.
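The per-frame update of the de-cover map can then be sketched as follows. This is a minimal illustration based on the trail equation as reconstructed above; it assumes the map is initialized to zeros, shows only the average-luminance criterion (the std criterion would be analogous), and takes the n x m tile averages as precomputed, e.g., by tile_metrics above.

import numpy as np

def update_decover_map(frame_j: int, tiles_avg: np.ndarray,
                       ref_avg: np.ndarray, trail: np.ndarray,
                       T: float = 30.0) -> None:
    """Blocks 515-530: assign trail values to newly uncovered tiles.

    tiles_avg : n x m average luminance of the current frame's tiles
    ref_avg   : n x m averages stored as the reference tile metrics
    trail     : n x m de-cover map, zero where no trail value exists yet
    """
    gain = tiles_avg - ref_avg
    # A tile earns a trail value once it brightens by more than T levels
    # over its covered reference (only the luminance criterion is shown).
    newly_uncovered = (trail == 0) & (gain > T)
    # trail = 100*j - gain/T: the frame number carries the coarse time,
    # and the scaled luminance gain deducts a small amount to break ties
    # among tiles uncovered during the same frame.
    trail[newly_uncovered] = 100.0 * frame_j - gain[newly_uncovered] / T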
[0046] Fig. 6 is a flowchart depicting an exemplary process 600 for determining a user command associated with the image-based touch user interface. The de-cover map may be thought of as a 3-D surface, where the x, y indices are the tile position, and the z values are the trail values which correspond to the time of the finger motion as it de-covered the lens. In Block 610, a gradient of the de-cover map may be computed to determine the de-cover map's steepest ascent. The gradient and the trail values in the de-cover map may be compared to thresholds as a secondary or alternative method to determine if the user wishes to enter a "select/enter" command into the GUI (615 and 630). In this instance, if the trail values are high and the gradient is low (indicating the trail values are uniform), the user may have been steadily holding a finger on the image sensor to indicate a "select/enter" command. [0047] From the gradient of the de-cover map, the direction of finger movement may be determined (620) in a number of different ways. For example, the processor 220 may find which of the n rows and m columns in the n x m de-cover map is the strongest. For example, consider the following de-cover map:

480 480 560 570
470 480 480 560
460 470 480 480
440 460 470 470

[0048] The largest column is the last one, and the largest row is the first one, so the algorithm would tell the system to move the cursor to the upper right since that direction has the largest density of trails. Once the direction of movement is determined, a command may be issued to the user interface indicative of the direction. Typically these commands would be made in a format that the device drivers of the mobile device 200 may readily accept. [0049] In other embodiments, unique user gestures may be provided to the IBTUI for other commands. For example, multiple finger motions may be used to control unique features of the mobile device 200, or may be used as "shortcuts" for commands which may take multiple steps. For example, two down motions of the finger passed over the image sensor may be used to control parameters of the image sensor for taking photographs (i.e., control auto-focus, auto-exposure, hand-jitter reduction, etc.) in specific situations (e.g., setting short exposures for scanning barcodes). [0050] As mentioned above, the image sensor 230 may be sensitive to other wavelengths besides those corresponding to visible light. For example, the image sensor 230 may be sensitive to infrared radiation so it may be used in low light situations. Such embodiments may utilize a sensor having the ability to disengage an IR blocking filter. Other sensors may utilize an IR radiation source (e.g., an IR LED) which may be activated when the amount of visible light is below a useable threshold. [0051] Moreover, the de-cover map may be extended to multi-dimensional structures having one dimension corresponding to time. Such data structures may be visualized as a three-dimensional rectangular volume, with the x-y dimensions corresponding to position, and the z dimension corresponding to time. In this data "volume," each data element may correspond to a trail value at point x, y in the image frame at time t.
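Returning to the direction determination of Block 620, the row/column search of paragraph [0047] might be sketched as below. The interpretation of "strongest" as the largest sum of trail values, and the eight-way direction labels, are assumptions for illustration; applied to the 4 x 4 example map above, the sketch reports motion toward the upper right.

import numpy as np

def interpret_direction(trail: np.ndarray) -> str:
    """Block 620: find the strongest row and column of the de-cover map
    and combine them into one of eight cursor directions."""
    row_strength = trail.sum(axis=1)   # one value per row
    col_strength = trail.sum(axis=0)   # one value per column
    n, m = trail.shape
    vert = "up" if row_strength.argmax() < n / 2 else "down"
    horiz = "left" if col_strength.argmax() < m / 2 else "right"
    return f"{vert}-{horiz}"

# The 4 x 4 example map above: the first row and the last column are the
# strongest, so the interpreted motion is toward the upper right.
example = np.array([[480, 480, 560, 570],
                    [470, 480, 480, 560],
                    [460, 470, 480, 480],
                    [440, 460, 470, 470]])
print(interpret_direction(example))    # -> "up-right"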
[0052] Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [0053] Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the invention. [0054] The methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. [0055] Accordingly, an embodiment of the invention can include a computer-readable medium embodying a method for an image-based touch user interface in accordance with the functions, steps and/or actions described herein. Therefore, embodiments of the invention can include a computer-readable medium including program code stored thereon, which, when executed by a machine, causes the machine to perform operations for processing image-based input commands for a user interface. The computer-readable medium may include program code to receive image frames from a sensor; program code to determine when the sensor enters a cover state; program code to determine, from subsequent image frames, when the sensor enters a de-cover state; program code to analyze information based upon the subsequent image frames to interpret a user command; and program code to issue the user command to a user interface. Accordingly, the invention is not limited to the illustrated examples, and any means for performing the functionality described herein are included in embodiments of the invention. [0056] While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
A method of forming a first and second transistor. The method provides a semiconductor surface (20). The method also forms a gate dielectric (30) adjacent the semiconductor surface. Further, the method forms a first transistor gate electrode (902) comprising a metal portion (402) in a fixed relationship with respect to the gate dielectric. Still further, the method forms a second transistor gate electrode (901) comprising a silicide (701) of the metal portion in a fixed relationship with respect to the gate dielectric.
1. A method of forming a plurality of transistors including a first and second transistor, comprising the steps of: providing a semiconductor surface; forming a gate dielectric adjacent the semiconductor surface; forming a first transistor gate electrode comprising a metal portion in a fixed relationship with respect to the gate dielectric; and forming a second transistor gate electrode comprising a silicide of the metal portion in a fixed relationship with respect to the gate dielectric.
2. The method of claim 1 wherein the metal portion comprises cobalt.
3. The method of claim 2 wherein the silicide of the metal portion comprises cobalt monosilicide.
4. The method of claim 2 wherein the silicide of the metal portion comprises cobalt disilicide.
5. The method of claim 1 wherein the step of forming the first transistor gate electrode comprises the steps of forming a metal layer adjacent the gate dielectric, wherein the metal layer comprises a first metal portion.
6. The method of claim 5 wherein the step of forming the second transistor gate electrode comprises the steps of: forming a silicon layer adjacent the metal layer; patterning and etching the silicon layer to create a silicon portion from the silicon layer and adjacent a second metal portion of the metal layer; and annealing the silicon portion and the second metal portion.
7. The method of claim 6 wherein the annealing step comprises annealing the silicon portion and the second metal portion at a temperature sufficient to convert the silicon portion and the second metal portion into a metal monosilicide portion.
8. The method of claim 6 wherein the annealing step comprises annealing the silicon portion and the second metal portion at a temperature sufficient to convert the silicon portion and the second metal portion into a metal disilicide portion.
9. The method of claim 6 and further comprising, at a same time: patterning a first photoresist portion and etching the first metal portion in response to the first photoresist portion; and patterning a second photoresist portion and etching the annealed silicon portion and second metal portion in response to the second photoresist portion.
10. The method of claim 9 and further comprising the step of removing the first and second photoresist portions.
11. The method of claim 6 and further comprising the steps of: forming a clad layer in a fixed relationship to the first metal portion and in a fixed relationship to the annealed silicon portion and the second metal portion; and at a same time, the steps of: patterning a first photoresist portion and etching the clad layer and the first metal portion in response to the first photoresist portion; and patterning a second photoresist portion and etching the clad layer and the annealed silicon portion and second metal portion in response to the second photoresist portion.
12. The method of claim 11 and further comprising the step of removing the first and second photoresist portions.
13. The method of claim 11 wherein the step of forming a clad layer comprises forming a conductive layer.
14. The method of claim 11 wherein the step of forming a clad layer comprises forming a metal layer.
15. The method of claim 11 wherein the step of forming a clad layer comprises forming a layer comprising a material selected from the group consisting of tantalum, titanium, and tungsten.
16. The method of claim 11 wherein the step of forming a clad layer comprises forming a layer comprising a material selected from the group consisting of tantalum nitride, titanium nitride, and tungsten nitride.
17. The method of claim 6 wherein the step of annealing the silicon portion and the second metal portion forms a silicide portion and leaves an unreacted silicon portion, and further comprising the step of removing the unreacted silicon portion.
18. The method of claim 6 wherein the silicon layer comprises a polysilicon layer.
19. The method of claim 6 wherein the silicon layer comprises an amorphous silicon layer.
20. The method of claim 6 wherein the step of forming a silicon layer comprises sputtering the silicon layer.
21. The method of claim 6 wherein the step of forming a silicon layer comprises forming the silicon layer with a plasma-enhanced chemical vapor deposition.
22. The method of claim 6 wherein the step of forming a silicon layer comprises forming the silicon layer with a thermal chemical vapor deposition.
23. The method of claim 6 wherein the step of etching the silicon layer comprises performing a dry etch on the silicon layer.
24. The method of claim 6 wherein the step of etching the silicon layer comprises performing a wet etch on the silicon layer.
25. The method of claim 1 wherein the semiconductor surface is a surface of a semiconductor substrate.
26. The method of claim 1 and further comprising the steps of: forming a first source/drain region and a second source/drain region in a fixed relationship to the first transistor gate electrode; and forming a first source/drain region and a second source/drain region in a fixed relationship to the second transistor gate electrode.
27. An integrated circuit comprising a plurality of transistors including a first and second transistor, the integrated circuit comprising: a semiconductor surface; a gate dielectric adjacent the semiconductor surface; a first transistor gate electrode comprising a metal portion in a fixed relationship with respect to the gate dielectric; and a second transistor gate electrode comprising a silicide of the metal portion in a fixed relationship with respect to the gate dielectric.
28. The integrated circuit of claim 27 wherein the metal portion comprises cobalt.
29. The integrated circuit of claim 28 wherein the silicide of the metal portion comprises cobalt monosilicide.
30. The integrated circuit of claim 28 wherein the silicide of the metal portion comprises cobalt disilicide.
31. The integrated circuit of claim 27: wherein the first transistor gate electrode further comprises a first clad portion in a fixed relationship with respect to the metal portion; and wherein the second transistor gate electrode further comprises a second clad portion in a fixed relationship with respect to the silicide of the metal portion.
32. The integrated circuit of claim 31 wherein the first clad portion and the second clad portion comprise conductors.
33. The integrated circuit of claim 31 wherein the first clad portion and the second clad portion comprise metals.
34. The integrated circuit of claim 31 wherein the first clad portion and the second clad portion comprise a material selected from the group consisting of tantalum, titanium, and tungsten.
35. The integrated circuit of claim 31 wherein the first clad portion and the second clad portion comprise a material selected from the group consisting of tantalum nitride, titanium nitride, and tungsten nitride.
36. The integrated circuit of claim 27 and further comprising: a first source/drain region and a second source/drain region in a fixed relationship to the first transistor gate electrode; and a first source/drain region and a second source/drain region in a fixed relationship to the second transistor gate electrode.
BACKGROUND OF THE INVENTION
The present embodiments relate to semiconductor transistor fabrication and are more particularly directed to complementary transistors.
Integrated circuit technology continues to advance at a rapid pace, with many circuit technologies being implemented using semiconductor fabrication processes. With the advancement of semiconductor circuit fabrication, consideration is given to various aspects, including maximizing efficiency, lowering manufacturing cost, and increasing performance. With these goals in mind, one area relating to the preferred embodiments is the continuing trend of reducing the thickness of transistor gate dielectrics. For example, in the past the gate dielectric layer thickness was on the order of 100 Angstroms, but more recently that thickness has been reduced considerably, with a more current goal being on the order of 20 Angstroms. Indeed, still thinner gate dielectric layers will be sought in the foreseeable future. This goal reduces device size and facilitates improved device performance.
While the above demonstrates the desirability and trend toward thinner gate dielectrics, such an approach also presents a considerable drawback. Specifically, overlying the thin gate dielectric is a polycrystalline silicon ("polysilicon") gate layer, and it is known in the art that polysilicon naturally includes a depletion region at the interface between the polysilicon gate and the gate dielectric. Typically, the depletion region manifests itself as the electrical equivalent of an approximately 3 Angstrom thick insulator and, as such, the region in effect provides an insulating effect rather than the conducting effect present in the remainder of the polysilicon gate conductor. Using the preceding numeric example, therefore, for a 100 Angstrom thick gate dielectric, the overlying effective 3 Angstrom thick polysilicon depletion region may be thought to effectively increase the overall insulation between the gate and the underlying transistor channel from 100 Angstroms to 103 Angstroms; that is, the depletion region increases the insulating thickness by three percent. As such, for previous, thicker gate insulators, the effect of the polysilicon depletion region may be considered to have a negligible impact on the gate dielectric. In contrast, however, for a 20 Angstrom thick gate dielectric, the polysilicon gate conductor depletion region may be thought to increase the gate insulator to 23 Angstroms, thereby representing an increase on the order of 15 percent. This increased percentage significantly reduces the benefits otherwise provided by the thinner gate dielectric.
By way of further background, one approach in general to avoiding the depletion region phenomenon of polysilicon transistor gates is to use metal as an alternative material for the transistor gate, since metal does not present a considerable depletion region, if any. Prior to the more recent use of polysilicon gates, metal gates were fairly common. The present inventors note, however, a previously identified drawback of such metal gates, which indeed led to the avoidance of such metals in contemporary devices. Specifically, each metal has a corresponding so-called work function, and in the transistor art each transistor also has a corresponding preferred value for the work function of its gate electrode. However, the desired work function value differs for different transistor types.
For example, based on present day threshold voltage channel doping, a p-channel MOS transistor ("PMOS") is optimized when the gate electrode has a work function on the order of 5 eV, while an n-channel MOS transistor ("NMOS") is optimized when the gate electrode has a work function on the order of 4 eV. The problem with previously-used metal gates arose with the development of CMOS circuits which, by definition, include both PMOS and NMOS transistors. Specifically, because a metal gate provides only a single work function, it could not be selected to provide the two different desired work functions of the PMOS and NMOS devices. Instead, at best a metal could be selected to be between the desired work functions of a PMOS and an NMOS transistor, which is sometimes referred to as the "midgap" between these devices (i.e., on the order of 4.5 eV for the preceding examples). This inability to match different work functions led to the use of polysilicon gates, whereby the polysilicon gates of the NMOS devices could be doped in a first manner in view of the desired work function for NMOS transistors and the polysilicon gates of the PMOS devices could be doped in a second manner in view of the desired work function for PMOS transistors.
In view of the above, there arises a need to address the limitations and drawbacks of the prior art, as is achieved by the preferred embodiments described below.
BRIEF SUMMARY OF THE INVENTION
In the preferred embodiment, there is a method of forming a first and second transistor. The method provides a semiconductor surface. The method also forms a gate dielectric adjacent the semiconductor surface. Further, the method forms a first transistor gate electrode comprising a metal portion in a fixed relationship with respect to the gate dielectric. Still further, the method forms a second transistor gate electrode comprising a silicide of the metal portion in a fixed relationship with respect to the gate dielectric. Other aspects are also disclosed and claimed.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
Figure 1 illustrates a cross-sectional view of a semiconductor structure according to the preferred embodiment after a first set of fabrication steps.
Figure 2 illustrates a cross-sectional view of the structure from Figure 1 after additional processing steps.
Figure 3 illustrates a cross-sectional view of the structure from Figure 2 after additional processing steps.
Figure 4a illustrates a cross-sectional view of the structure from Figure 3 after additional processing steps.
Figure 4b illustrates a cross-sectional view of the structure from Figure 3 after an alternative set of additional processing steps.
DETAILED DESCRIPTION OF THE INVENTION
Figure 1 illustrates a cross-sectional view of a semiconductor structure shown generally at 10. By way of introduction, structure 10 represents a first set of steps in accordance with the preferred embodiment, with the following discussion and additional Figures illustrating various additional steps used to form complementary PMOS and NMOS transistors. Moreover, while the set of transistors ultimately illustrated depicts only a single PMOS and a single NMOS transistor, one skilled in the art will readily appreciate that the present inventive teachings may apply to numerous transistors of both types in a circuit.
Also by way of introduction, note that the various layers shown in the cross-section of Figure 1 and later Figures are not shown to scale, both to simplify the following discussion and because varying thicknesses may be employed, as may be ascertained by one skilled in the art and in view of various guidance given below.
Turning to Figure 1 in greater detail, the preferred embodiment includes a semiconductor surface formed preferably by a substrate 20, where substrate 20 is preferably formed from silicon. A dielectric layer 30 is formed over substrate 20, where the material for dielectric layer 30 is preferably chosen so that portions of dielectric layer 30 later function as gate dielectrics for complementary PMOS and NMOS transistors. A metal layer 40 is formed over dielectric layer 30, where in the preferred embodiment metal layer 40 is formed from cobalt. Finally, a silicon layer 50 is formed over metal layer 40. In the preferred embodiment, silicon layer 50 may be polysilicon or amorphous silicon. As between these alternatives, preferably the selection is such that, at the stage in the process represented by Figure 1, the chosen silicon in layer 50 does not react with the underlying metal layer 40. Given this consideration, in most applications amorphous silicon is more preferred because it may be formed over metal layer 40 at lower temperatures than those typically required to form polysilicon. For example, an amorphous silicon layer 50 may be formed at temperatures lower than 500°C. Note also that silicon layer 50 may be formed according to various alternative techniques, again where the choice of such an alternative is preferably directed to ensuring no reaction between silicon layer 50 and metal layer 40. For example, a sputter technique may be used because it may be carried out at low temperatures such as room temperature, although from a manufacturing standpoint such a technique may prove relatively complex. As an alternative, a plasma-enhanced chemical vapor deposition ("CVD") may be used because it too uses a relatively low temperature; while this temperature may be above room temperature, the CVD may prove more easily implemented as compared to the sputter technique. Lastly, a thermal CVD process may be used, but caution should be taken to ensure that any temperature constraint of that process does not cause a reaction between metal layer 40 and silicon layer 50.
Figure 2 illustrates a cross-sectional view of structure 10 from Figure 1 after additional processing steps. In Figure 2, a portion of silicon layer 50 is removed, thereby leaving a remaining portion 501 of silicon. In the preferred embodiment, this selective removal is achieved by first forming a photoresist layer over silicon layer 50 and then patterning that photoresist layer to leave a photoresist portion 60 of the photoresist layer as shown in Figure 2. Thereafter, an etch selective to the metal of metal layer 40 is performed, that is, one which stops when it reaches metal layer 40. In the preferred embodiment, therefore, the silicon etch performed in connection with Figure 2 is selective to cobalt. The selective etch removes the area of silicon layer 50 that is not covered by photoresist portion 60, thereby leaving a remaining silicon portion 501 under photoresist portion 60.
In the preferred embodiment the silicon etch is a dry process, although alternatively a wet etch may be implemented, because this etch does not define critical dimensions; a later etch is performed to provide the various device boundaries, as further appreciated below.
Figure 3 illustrates a cross-sectional view of structure 10 from Figure 2 after two additional processing steps, and each of these steps is discussed below.
As a first step reflected in Figure 3, photoresist portion 60 (see Figure 2) is removed, and this removal step may be accomplished in various manners. For example, either an oxygen or hydrogen ash may be used, although an oxygen approach is less favored because it may pose a risk of oxidation of the metal (e.g., cobalt) in the exposed area of metal layer 40. As another example, a solvent may be chosen, in which case the particular solvent should be selected so as not to damage either the silicon in portion 501 underlying photoresist portion 60 or the metal in the exposed area of metal layer 40. In any event, once photoresist portion 60 is removed, silicon portion 501 is exposed.
As a second step reflected in Figure 3, an anneal step is performed, preferably after photoresist portion 60 is removed. The anneal step may be achieved using various temperatures and times; by way of example, a rapid thermal processing ("RTP") operation may be used whereby a relatively short anneal is performed at temperatures of 500°C or greater. In the preferred embodiment, the anneal causes silicon portion 501 to react with the aligned portion of the metal underlying it in metal layer 40, and this reaction thereby forms a metal silicide portion 70. The actual temperature used in the anneal step may determine the precise type of silicide in silicide portion 70. For example, at temperatures on the order of 500 to 600°C and using the preferred metal of cobalt for metal layer 40, silicide portion 70 is likely to form a cobalt monosilicide. As another example, at temperatures on the order of 700 to 800°C and again using the preferred metal of cobalt for metal layer 40, silicide portion 70 is likely to form a cobalt disilicide. Indeed, in various applications, the cobalt disilicide result may be desirable because it has a lower resistivity as compared to cobalt monosilicide. In addition, note also from Figure 2 and the result in Figure 3 that the thickness of silicon layer 50, which determines the thickness of silicon portion 501, also may affect the extent to which the metal in metal layer 40 is converted to a silicide. Further in this regard, in an alternative method approach, the thickness of silicon layer 50 may be selected so as to achieve a desired extent and thickness of silicide in silicide portion 70. Such thickness control may prove difficult, however, so still another approach within the inventive scope is to choose the thickness of silicon layer 50 to be considerably large, with the goal that not all of the silicon will be consumed during the anneal step. Using this approach, after the anneal a portion of structure 10 is etched, such as through use of a blanket silicon etch, so as to remove the unconsumed silicon. With the unconsumed silicon removed, silicide portion 70 remains as shown in Figure 3. Lastly, note that the preferred anneal step does not materially affect a portion 401 of metal layer 40 to the right of Figure 3 because it was not in contact with any silicon as shown in Figure 2.
Thus, portion 401 remains as the material originally used for metal layer 40 (e.g., cobalt).
Figure 4a illustrates a cross-sectional view of structure 10 from Figure 3 after additional processing steps. First, a photoresist layer is formed and patterned over structure 10 to thereby form two photoresist portions 801 and 802 overlying the metal silicide and metal materials, respectively, underlying those portions. Second, an etch is performed down to dielectric layer 30. The resulting structures following this etch are therefore shown in Figure 4a and include two gate electrodes 901 and 902. With respect to gate electrode 901, it includes a silicide portion 701 which remains from the etch of silicide portion 70, and a portion of dielectric layer 30 separates silicide portion 701 from substrate 20; thus, this portion serves as a gate insulator 301. With respect to gate electrode 902, it includes a metal portion 402 which remains from the etch of metal portion 401, and a portion of dielectric layer 30 separates metal portion 402 from substrate 20; thus, this portion serves as a gate insulator 302.
Various additional observations now may be made with respect to the resulting structure in Figure 4a. As a first observation, gate electrodes 901 and 902 provide structures from which two different transistors may be formed, where the gate of each respective transistor has a different work function because each electrode includes a different material. For example, an NMOS transistor may be formed with respect to gate electrode 901, which thereby implements a gate having the work function of the metal silicide of silicide portion 701, while a PMOS transistor may be formed with respect to electrode 902, which thereby implements a gate having the work function of the metal of metal portion 402. To further illustrate these aspects and as a second observation, photoresist portions 801 and 802 may be thereafter removed and insulating sidewalls may be formed with respect to the gate materials and their underlying gate insulators; such sidewalls 951 through 954 are shown with dashed lines in Figure 4a. Additionally, various additional transistor aspects, as readily ascertainable by one skilled in the art, are not shown but may be implemented with respect to the gate electrodes, including but not limited to n-wells or p-wells, source/drain regions, channel implants, isolation oxides, and the like. Moreover, some of these regions may be formed prior to the formation of the gate electrodes, such as the formation of isolation regions to later define boundaries for source/drain implants and a well of a given conductivity type such as an n-well for a PMOS transistor, while others of these regions may be formed after the formation of the gate electrodes, such as the formation of the source/drain regions. As a final observation, the preferred methodology as illustrated in Figures 1 through 4a demonstrates still another benefit arising with respect to the formation of gate insulators 301 and 302. Specifically, from the above, note that the etch down to dielectric layer 30 does not reach those portions of that layer that serve as gate insulators 301 and 302. Thus, the material properties of these gate insulators 301 and 302 are not affected by a direct exposure of these regions to the etch chemistry.
Figure 4b illustrates a cross-sectional view of an alternative structure 10' which is created with alternative processing steps following Figure 3.
Structure 10' shares various attributes with structure 10 of Figure 4a and, thus, like reference numerals are used in Figure 4b with respect to these attributes. As an alteration, however, prior to forming a photoresist layer, an additional clad layer is formed; photoresist portions 803 and 804 are then formed from the photoresist layer and an etch down to dielectric layer 30 is performed. As a result, two gate electrodes 903 and 904 are formed, but each gate electrode includes an additional clad layer portion 1001 and 1002, respectively. The inclusion of the additional clad layer and the resulting portions 1001 and 1002 may be used for various purposes. For example, if the approach of Figure 4a would result in gate electrodes of an insufficient thickness, then the use of an additional clad layer to form portions 1001 and 1002 thereby increases the height of gate electrodes 903 and 904 as opposed to gate electrodes 901 and 902. As another example, if a lower sheet resistance is desired than that achieved by the approach of Figure 4a, then the Figure 4b approach may be implemented where the material for the clad layer forming portions 1001 and 1002 is selected to alter the sheet resistance. For example, various materials may be considered to reduce the overall sheet resistance of portions 1001 and 1002, such as various conductive layers including metals, and indeed preferably refractory metals, such as tungsten, tantalum, titanium, tungsten nitride, tantalum nitride, and titanium nitride, and also others as may be ascertained by one skilled in the art. In any event, once the etch to form gate electrodes 903 and 904 is complete, photoresist portions 803 and 804 may be removed and insulating sidewalls 955 through 958 may be formed, as shown with dashed lines in Figure 4b. Lastly, various other benefits realized by structure 10 of Figure 4a are also realized by structure 10' of Figure 4b.
From the above, it may be appreciated that the above embodiments provide a set of transistor gates where one gate is formed from a metal and the other gate is formed from a corresponding metal-silicide. In the preferred embodiment, the metal is cobalt while the metal-silicide is either cobalt monosilicide or cobalt disilicide. Given these resulting structures, the preferred embodiment produces various benefits. For example, each transistor gate has a different work function, and indeed the metal gate structure proves useful as a PMOS gate electrode while the metal-silicide gate structure proves useful as an NMOS gate electrode. As another example, transistors may be formed using these resulting structures along with relatively thin gate dielectrics, and the overlying metal or metal-silicide gate will not include a substantial depletion region as is the case for contemporary polysilicon gate transistors. As still another example, while cobalt has been shown as a preferred metal, other metals may be used. As yet another example, the preferred embodiment contemplates the additional variations described above. Thus, all of these examples further demonstrate that while the present embodiments have been described in detail, various substitutions, modifications or alterations could be made to the descriptions set forth above without departing from the inventive scope, which is defined by the following claims.
Providing trusted time in a computing platform, while still supporting privacy, may be accomplished by having a trusted time device provide the trusted time to an application executing on the computing platform. The trusted time device may be reset by determining if a stored value has been set, and if not, waiting a period of time, generating a new random number, and storing the new random number. The trusted time random number is reset to zero whenever electrical power is first applied to the trusted time device upon power up of the computing platform, and whenever a battery powering the trusted time device is removed and reconnected. By limiting random number storage and waiting the specified period of time, attacks on the computing platform to determine the trusted time may be minimized, while preventing the computing platform from being uniquely identified.
1. A method of supporting privacy for a computing platform having a trusted time device to provide trusted time to an application executing on the computing platform, comprising: resetting the trusted time device by determining if a stored value indicates whether power to the trusted time device has not been removed since a last power disruption to the trusted time device and that the trusted time device is to be initialized, and if not, waiting a period of time after determining that the stored value has not been set, generating a new random number after waiting the period of time, and storing the new random number; and initializing the computing platform after storing the new random number by obtaining a current trusted time from a trusted time source, the trusted time source being external to the computing platform, obtaining the stored random number, storing an application offset value equal to the current trusted time minus a counter value into a secure storage within the computing system, and storing the random number into the secure storage.
2. The method of claim 1, further comprising resetting the trusted time device whenever electrical power is applied at power up of the computing platform.
3. The method of claim 1, wherein at least a portion of the trusted time device is powered by a battery within the trusted time device.
4. The method of claim 1, wherein the period of time comprises a fixed amount of time.
5. The method of claim 1, wherein the period of time comprises a variable amount of time.
6. The method of claim 1, further comprising obtaining the current trusted time for use by an application program executing on the computing platform at a point in time after computing platform initialization by obtaining the random number and the application offset value from the secure storage, comparing the random number from the secure storage to current contents within the trusted time device, and when the random number from the secure storage matches the current contents, setting the current trusted time to the application offset plus the counter value from the trusted time device.
7. An article comprising: a non-transitory computer readable storage medium containing instructions, which when executed, result in supporting privacy for a computing platform having a trusted time device to provide trusted time to an application executing on the computing platform by resetting the trusted time device by determining if a stored value has been set to indicate power to the trusted time device has not been removed since a last power disruption to the trusted time device and that the trusted time device is to be initialized, and if not, waiting a period of time after determining that the stored value has not been set, generating a new random number after waiting the period of time, and storing the new random number; and initializing the computing platform after storing the new random number by obtaining a current trusted time from a trusted time source, the trusted time source being external to the computing platform, storing an application offset value equal to the current trusted time minus a counter value into a secure storage within the computing system, and storing the random number into the secure storage.
8. The article of claim 7, further comprising instructions for resetting the trusted time device whenever electrical power is applied at power up of the computing platform.
9. The article of claim 8, wherein the period of time comprises a fixed amount of time.
10. A computing platform comprising: a trusted time device to provide trusted time to an application to be executed on the computing platform; a random number generator; and a secure storage, wherein the trusted time device is capable of being reset by determining if a stored value has been set to indicate power to the trusted time device has not been removed since a last power disruption to the trusted time device and that the trusted time device is to be initialized, and if not, waiting a period of time after determining that the stored value has not been set, generating a new random number after waiting the period of time by the random number generator, and storing the new random number; and wherein the computing platform is to be initialized after storing the new random number by obtaining a current trusted time from a trusted time source, the trusted time source being external to the computing platform, storing an application offset value equal to the current trusted time minus a counter value into the secure storage within the computing system, and storing the random number into the secure storage.
11. The trusted time device of claim 10, wherein the trusted time device is reset whenever electrical power is applied at power up of the computing platform.
12. The trusted time device of claim 11, further comprising a battery to power at least a portion of the trusted time device, wherein the battery is within the trusted time device, and wherein the trusted time device is incorporated into an input/output controller hub of the computing platform.
13. The trusted time device of claim 10, wherein the period of time comprises a fixed amount of time.
14. The trusted time device of claim 10, wherein the trusted time device does not uniquely identify the computing platform.
BACKGROUND
1. FIELD
The present invention relates generally to computer security and, more specifically, to providing time in a computing platform that is trusted by executing applications.
2. DESCRIPTION
Obtaining a value for time that can be trusted in a computing platform is desirable. For example, trusted time may be used in conjunction with other processing to improve the robustness of content protection mechanisms to assure that premium content is available for the digital home. It may be used in a content protection environment to assure that the computing platform owner downloads a revocation list of compromised keys on a periodic basis. It may also be used to provide a secure way to enable content to be purchased for access during a temporary time window. However, if the time value can be modified by an unscrupulous user without detection by the computing platform, then computer security and content protection systems may be compromised.
Existing solutions to providing trusted time require a battery contained in a tamper resistant hardware module that cannot be easily removed by the user (such as described in Trusted Platform Module (TPM) Main Part 1 Design Principles, Specification Version 1.2, Revision 81, November 23, 2004, pp. 93-98, available from the Trusted Computing Group). This may be problematic for continued operation of some computer systems as they age and the battery needs replacement. If the user cannot change the battery without disrupting system operation, frustration with the system may ensue.
Therefore, a better mechanism to provide a trusted time value in a computing platform would be useful.
BRIEF DESCRIPTION OF THE DRAWINGS
The features and advantages of the present invention will become apparent from the following detailed description of the present invention in which:
Figure 1 is a diagram of a trusted time architecture according to an embodiment of the present invention;
Figure 2 is a flow diagram illustrating resetting a trusted time device according to an embodiment of the present invention;
Figure 3 is a flow diagram illustrating application initialization according to an embodiment of the present invention; and
Figure 4 is a flow diagram illustrating obtaining the current trusted time by an application according to an embodiment of the present invention.
DETAILED DESCRIPTION
An embodiment of the present invention is a method and apparatus for providing trusted time in a computing platform. One security requirement is that the user must not be able to modify the trusted time. In one embodiment, a battery may be used that provides electrical power to a small group of trusted time circuits. An initial connection to a trusted time source may be used to initialize the trusted time. This achieves the property that, as long as the trusted time circuits are powered up, trusted time will be provided in the computing platform. If power to the trusted time circuits is ever removed, then the absence of power will be detected, thus requiring a connection to the trusted time source to reinitialize the trusted time mechanism.
Reference in the specification to "one embodiment" or "an embodiment" of the present invention means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention.
Thus, appearances of the phrase "in one embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment.

Figure 1 is a diagram of a trusted time architecture according to an embodiment of the present invention. A computing platform 100 includes an executing application program 102 and trusted time device 104. The computing platform may be any system having a processor for executing instructions, such as a personal computer, a server, a laptop or handheld computer, a personal digital assistant (PDA), a cell phone, a set-top box, and so on. Well known details of components of such a system have been omitted from Figure 1 for clarity. Application 102 may be any computer program for providing some functionality to a user of the computing platform that uses trusted time for some processing. The application wants to use trusted time and has functionality to store information securely so that the information cannot be easily modified by the user without detection.

The application communicates with a trusted time source 106 for obtaining an initial trusted time. Trusted time source 106 comprises a source external to the computing platform that can provide a trusted time value. The trusted time source may be communicatively coupled with the application in any way (e.g., via a network such as the Internet, or an intranet).

In embodiments of the present invention, the application wants to trust the time available on the computing platform, even if the user is an adversary. Further, the mechanism for providing trusted time should be privacy-friendly. That is, the trusted time mechanism should not uniquely identify the computing platform, which might raise privacy concerns. In the present invention, trusted time device 104 provides trusted time unless power is removed. If power is removed, the application can detect this event.

Trusted time device 104 comprises a hardware computing device which contains trusted time battery well 110 powered by trusted time battery 108, and other circuits (not shown) that are not powered by the trusted time battery. In one embodiment, the trusted time device may be integrated into the Input/Output (I/O) Controller Hub (ICH) of a computer system's chipset. Trusted time device 104 includes battery 108, which comprises a conventional replaceable power source to provide small amounts of electrical power for a very long time. In one embodiment, the battery is the same as the Real Time Clock battery existing in many computing platforms. Trusted time battery well 110 comprises a small set of circuits that are powered by battery 108, and remain powered up as long as the trusted time battery 108 is operational and not removed.

The trusted time battery well includes at least three other components. Crystal 112 comprises a circuit that produces a clock pulse at a constant and known frequency. In one embodiment, the crystal may be outside of the trusted time battery well. Trusted time (TT) Random (Rand) register 114 comprises a register to store a random number. TT Counter register 116 comprises a register that, in one embodiment, increments by one with a fixed frequency. The frequency may be once for each tick of crystal 112, or once per second. In one embodiment, the size of the TT Counter may be set such that the computing platform could operate for 20 years, for example, before rolling over the counter. For the TT Rand and TT Counter registers, when power is first provided to these registers, they are set to all zeros.
If power is ever removed and then restored, the registers are set to all zeros.

Trusted time device 104 also includes a random number generator (RNG) 118. The RNG comprises a circuit that provides a random number as needed.

Computing platform 100 also includes secure storage 120. Secure storage is a component used to store data in a secure manner that is not easily tampered with by the user or any other party. In one embodiment, secure storage 120 comprises a trusted platform module (TPM) as described by specifications available from the Trusted Computing Group. In another embodiment, secure storage may be provided using known tamper resistant software techniques. Application 102 has the ability to securely store at least two values: Application Random value (Rand) 122, and Application Offset 124.

Figure 2 is a flow diagram illustrating resetting 200 a trusted time device according to an embodiment of the present invention. Resetting may be performed whenever power is applied to the computing platform (i.e., at power on of the system). At block 202, the computing platform checks TT Rand 114 to determine if the value currently stored in the TT Rand register is zero. If TT Rand is not zero, this means that the battery has continually powered the trusted time device since the last time the battery was replaced, and the computing platform may proceed with initialization processing at block 208. If TT Rand is zero, this means that battery 108 has been disconnected and reconnected. The computing platform then waits a period of time at block 204. In one embodiment, this period of time is a fixed amount of one minute. In other embodiments, the period of time may be a different fixed amount of time, such as 30 seconds, two minutes, three minutes, and so on. In still further embodiments, the period of time may be variable over successive resets. At block 206, a new TT Rand may be generated using random number generator (RNG) 118 and stored in TT Rand 114 before continuing with initialization processing at block 208.

Because the value of TT Rand is a random number, there is a potential concern that it could be used to identify the computing platform. In an embodiment of the present invention, this is solved by carefully picking the size of TT Rand, and by modifying the behavior of the population of TT Rand. First, the size of TT Rand may be chosen small enough such that it will not be a unique identifier of the computing platform. Second, the only time that TT Rand will be populated anew upon a system reset is after power to the trusted time battery well 110 has been removed (i.e., the battery has been disconnected). The time delay prior to repopulating the TT Rand register will only occur when the battery has been disconnected, and not during typical resets of the computing platform. Thus, a substantial delay between reset and the repopulation of TT Rand with a new random value may be imposed for the case when TT Rand is all zeros.

Taking these requirements into account, in one embodiment, the TT Rand register comprises a 16-bit register. If a one minute time delay is used at block 204, then it would take an expected number of 2^16 trials (taking approximately 45 days) of continual attempts in a brute force attack before the value of TT Rand matches the value of App Rand 122 stored in secure storage 120.
But since there are hundreds of millions of computing platforms in service worldwide, 2^16 (65,536) is a small enough number that it would not be construed as a unique identifier of the computing platform, thereby supporting user privacy. Other sizes for the TT Rand register (e.g., 20 bits) and the time delay may be used depending on system implementation requirements without departing from the present invention.

Figure 3 is a flow diagram illustrating application initialization 300 according to an embodiment of the present invention. At block 302, the application contacts a trusted time source 106 to obtain the current trusted time. In one embodiment, this may be accomplished in a secure manner by the application sending a nonce to the trusted time source, the trusted time source digitally signing the current trusted time and the nonce with its private key, and the trusted time source sending the signed current trusted time and nonce to the application. If the application has the public key of the trusted time source, the application can verify the signed current trusted time and nonce, and check to make sure the received nonce matches the nonce sent to the trusted time source. At block 304, the application obtains values for TT Rand and TT Counter from the appropriate registers 114, 116 in the battery well of the trusted time device 104. Next, at block 306, the application optionally converts the TT Counter obtained from the trusted time device and current trusted time obtained from the trusted time source to application time units, if necessary. At block 308, the application sets the application offset to the current trusted time minus TT Counter. This acts as a baseline value for later measurement of elapsed time since initialization. At block 310, the application sets the application random value to TT Rand 114. At block 312, the application securely stores the application offset 124 and application random value 122 in secure storage 120.

Figure 4 is a flow diagram illustrating obtaining the current trusted time by an application 400 according to an embodiment of the present invention. These actions may be performed when an executing application needs to locally access the current trusted time during application processing subsequent to initialization. At block 402, the application obtains application random value 122 and application offset 124 from secure storage 120. At block 404, the application obtains TT Rand 114 and TT Counter 116 from the battery well of the trusted time device 104. At block 406, if TT Rand does not match the application random value, then an error may be reported at block 408, and application initialization (Figure 3) may be performed again, as represented at block 410. If TT Rand does match the application random value, then the application optionally converts TT Counter to application time units (if necessary) at block 412, and sets the current trusted time to the application offset + TT Counter at block 414. The current trusted time may then be used by the application for further processing.
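The reset, initialization, and time-read flows of Figures 2 through 4 amount to a few comparisons and additions. The following Python sketch models them under stated assumptions: the register and secure-storage objects, the one-minute delay constant, and all names here are hypothetical stand-ins for illustration, not the patented implementation.

```python
import secrets
import time

RESET_DELAY_SECONDS = 60  # assumed fixed delay of block 204 (one minute)
TT_RAND_BITS = 16         # assumed register width discussed above

class TrustedTimeDevice:
    """Hypothetical model of battery well registers 114 and 116."""
    def __init__(self):
        self.tt_rand = 0     # TT Rand register: all zeros after a power loss
        self.tt_counter = 0  # TT Counter register: assumed to count seconds

secure_storage = {}  # stand-in for secure storage 120 (e.g., a TPM)

def reset_device(dev: TrustedTimeDevice) -> None:
    """Figure 2: repopulate TT Rand only after a battery disconnect."""
    if dev.tt_rand != 0:
        return  # power was never lost; keep the existing TT Rand (block 202)
    time.sleep(RESET_DELAY_SECONDS)                    # block 204
    dev.tt_rand = secrets.randbits(TT_RAND_BITS) or 1  # block 206; avoid the reserved zero value

def initialize_app(dev: TrustedTimeDevice, current_trusted_time: int) -> None:
    """Figure 3: baseline the offset against the external trusted time source."""
    secure_storage["app_offset"] = current_trusted_time - dev.tt_counter  # block 308
    secure_storage["app_rand"] = dev.tt_rand                              # block 310

def get_trusted_time(dev: TrustedTimeDevice) -> int:
    """Figure 4: offset + counter, unless an intervening power loss is detected."""
    if dev.tt_rand != secure_storage["app_rand"]:  # blocks 406-410
        raise RuntimeError("TT Rand mismatch: power was lost; re-initialize")
    return secure_storage["app_offset"] + dev.tt_counter  # block 414
```

The `or 1` guard reflects the convention above that an all-zeros TT Rand denotes a power loss; whether hardware would reserve that value in exactly this way is an assumption of the sketch.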
The techniques may be implemented in hardware, software, or a combination of the two. The techniques may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, set top boxes, cellular telephones and pagers, and other electronic devices, that each include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code is applied to the data entered using the input device to perform the functions described and to generate output information. The output information may be applied to one or more output devices. One of ordinary skill in the art may appreciate that the invention can be practiced with various computer system configurations, including multiprocessor systems, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks may be performed by remote processing devices that are linked through a communications network.

Each program may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. However, programs may be implemented in assembly or machine language, if desired. In any case, the language may be compiled or interpreted.

Program instructions may be used to cause a general-purpose or special-purpose processing system that is programmed with the instructions to perform the operations described herein. Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. The methods described herein may be provided as a computer program product that may include a machine accessible medium having stored thereon instructions that may be used to program a processing system or other electronic device to perform the methods. The term "machine accessible medium" used herein shall include any medium that is capable of storing or encoding a sequence of instructions for execution by a machine and that cause the machine to perform any one of the methods described herein. The term "machine accessible medium" shall accordingly include, but not be limited to, solid-state memories, optical and magnetic disks, and a carrier wave that encodes a data signal. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, logic, and so on) as taking an action or causing a result. Such expressions are merely a shorthand way of stating that execution of the software by a processing system causes the processor to perform an action or produce a result.1. A method of supporting privacy for a computing platform having a trusted time device to provide trusted time to an application executing on the computing platform comprising:resetting the trusted time device by determining if a value in a register has been set, and if not, waiting a period of time, generating a new random number, and storing the new random number in the register.2. The method of clause 1, further comprising resetting the trusted time device whenever electrical power is applied at power up of the computing platform.3.
The method of clause 1, further comprising setting the register to zero whenever electrical power is first provided to the trusted time device, and whenever electrical power is removed and then restored to the trusted time device.4. The method of clause 1, wherein at least a portion of the trusted time device is powered by a battery.5. The method of clause 1, further comprising proceeding with initializing the computing platform after storing the new random number in the register.6. The method of clause 1, wherein the period of time comprises a variable amount of time.7. An article comprising: a machine accessible medium containing instructions, which when executed, result in supporting privacy for a computing platform having a trusted time device to provide trusted time to an application executing on the computing platform by resetting the trusted time device by determining if a value in a register has been set, and if not, waiting a period of time, generating a new random number, and storing the new random number in the register.8. The article of clause 7, further comprising instructions for resetting the trusted time device whenever electrical power is applied at power up of the computing platform.9. The article of clause 8, further comprising instructions for proceeding with initializing the computing platform after storing the new random number in the register.10. A trusted time device to provide trusted time to an application executing on a computing platform without uniquely identifying the computing platform comprising:a random number generator; anda register;wherein the trusted time device is capable of being reset by determining if a value in the register has been set, and if not, waiting a period of time, generating a new random number by the random number generator, and storing the new random number in the register.11. The trusted time device of clause 10, wherein the trusted time device is reset whenever electrical power is applied at power up of the computing platform.12. The trusted time device of clause 10, wherein the trusted time device is capable of resetting the register to zero whenever electrical power is first provided to the trusted time device, and whenever electrical power is removed and then restored to the trusted time device.13. The trusted time device of clause 10, further comprising a battery to power at least a portion of the trusted time device, and wherein the trusted time device is incorporated into an input/output controller hub of the computing platform.14. The trusted time device of clause 10, wherein the computing platform is initialized after storing the new random number in the register of the trusted time device.15. The trusted time device of clause 10, wherein the register comprises less than or equal to 20 bits.
The present application provides an electronic system, comprising: a data source configured to generate a data signal and a timing signal; a data destination; means for analyzing the timing signal from the data source, wherein the timing signal analyzing means identifies a data valid window (DVW) in the data signal according to the timing signal; means for capturing data in the DVW in the data signal from the data source; means for transferring the captured data to the data destination; and means for adjusting the data capturing means according to the timing signal analyzing means.
An electronic system, comprising:a data source configured to generate a data signal and a timing signal;a data destination;means for analyzing the timing signal from the data source, wherein the timing signal analyzing means identifies a data valid window (DVW) in the data signal according to the timing signal;means for capturing data in the DVW in the data signal from the data source; means for transferring the captured data to the data destination; and means for adjusting the data capturing means according to the timing signal analyzing means.An electronic system according to claim 1, wherein the data capturing means comprises a delay circuit configured to generate delay clock signals for a nominal edge and a nominal midpoint of a DVW in the timing signal.An electronic system according to claim 2, wherein the timing signal analyzing means comprises a compare circuit configured to generate a comparison signal according to a difference between a latched signal corresponding to the nominal edge and a latched signal corresponding to the nominal midpoint.An electronic system according to claim 2, wherein the delay circuit comprises a multi-tap delay line.An electronic system according to claim 2, wherein the delay circuit is configured to generate the plurality of delay clock signals in conjunction with a free-running clock signal.A data transfer system for transferring data from a data source to a data destination, comprising:a sampler configured to sample a timing signal from the data source at a plurality of times; anda compare circuit configured to analyze the samples from the sampler to identify a leading edge, a trailing edge, and a midpoint of a data valid window (DVW) in the timing signal.A data transfer system according to claim 6, wherein the compare circuit is further configured to adjust the plurality of times at which the sampler is configured to sample the timing signal.A data transfer system according to claim 6, wherein the compare circuit is further configured to adjust at least one of the plurality of times at which the sampler is configured to sample the data signal to correspond to at least one of the identified leading edge, trailing edge, and midpoint of the DVW.A data transfer system according to claim 6, wherein:the sampler is configured to sample the timing signal at a nominal leading edge, a nominal trailing edge, and a nominal midpoint of the DVW; andthe compare circuit is configured to compare the samples from the nominal leading edge and the nominal trailing edge to the sample from the nominal midpoint.A data transfer system according to claim 6, wherein the sampler comprises a multi-tap delay line.A data transfer system according to claim 6, wherein the sampler is configured to sample a timing signal from the data source at a plurality of times in conjunction with a free-running clock signal.A method of transferring data from a data source to a data destination, comprising:sampling a timing signal from the data source;identifying a data valid window (DVW) in a data signal according to the sampled timing signal; andcapturing the data in the identified DVW.A method of transferring data according to claim 12, wherein sampling the timing signal comprises sampling at a nominal leading edge and a nominal trailing edge of a DVW in the timing signal.A method of transferring data according to claim 13, wherein identifying the DVW further comprises sampling at a nominal midpoint of the DVW in the timing signal and comparing the nominal midpoint sample to the nominal leading
edge sample and the nominal trailing edge sample.A method of transferring data according to claim 12, wherein capturing the data comprises capturing data at an approximate midpoint of the identified DVW.A method of transferring data according to claim 12, further comprising adjusting the sampling of the timing signal according to the identified DVW in the data signal.A method of transferring data according to claim 15, wherein the data source is a memory, comprising the further steps of:requesting the timing signal from the memory;and the step of identifying a data valid window comprises identifying at least one of a leading edge and a trailing edge of a data valid window (DVW) in the timing signal;and the step of capturing the data comprises: calculating an approximate midpoint of the DVW based on the at least one of the leading edge and the trailing edge; receiving a data signal from the memory; and capturing a datum from the data signal at an approximate midpoint of a DVW of the data signal corresponding to the approximate midpoint of the DVW of the timing signal.A method of reading data according to claim 17, wherein sampling the timing signal comprises sampling the timing signal at a nominal midpoint and at least one of a nominal leading edge and a nominal trailing edge of the DVW of the timing signal.A method of reading data according to claim 18, comprising comparing the nominal midpoint sample to the at least one of the nominal leading edge and the nominal trailing edge sample.A method of reading data according to claim 17, comprising adjusting the sampling of the timing signal according to the identified at least one of the leading edge and the trailing edge of the data valid window (DVW) in the timing signal.
Field of the Invention

The invention relates generally to memory devices, methods, and systems, and more particularly, to timing for memory accesses.

Background of the Invention

Many electronic systems and virtually every computer include a memory to store information. For temporary storage, many systems use random access memory (RAM) for high access speed and low cost. Several types of RAM and other memory devices have been and continue to be developed as computers and other electronic systems evolve.

To store and retrieve information using a memory, data is asserted on multiple data lines by a data source device. In a purely synchronous system, data output and capture are referenced to a common free-running system clock. The maximum data rate for such a system, however, is reached when the sum of output access time and flight time approaches the bit time (the reciprocal of the data rate). Although generating delayed clocks for early data launch and/or late data capture allows for increased data rates, such techniques do not account for movement of the data valid window (DVW, or data eye) relative to any fixed clock signal, for example due to changes in temperature, voltage, or loading.

Many memories, such as various double data rate synchronous dynamic RAM (DDR SDRAM), operate in conjunction with a data strobe to perform the memory access when data on the data lines is most likely to be valid. Data strobes are non-free-running signals driven by the device that is driving the data signals (the memory controller for WRITE operations, the memory for READ operations). For READ operations, the data strobe signals are edge-aligned with the data signals such that all data and the data strobes are to be asserted by the memory using the same internal clock signal. Consequently, the data signals and the data strobe signals are generated at nominally the same time.

A typical memory, however, does not generate data strobes in the middle of the DVW. Consequently, an external system reading the memory typically delays reading the data lines until valid data is present on the data lines. The memory controller is typically configured to delay the received strobe to the center of the DVW. Many memory systems synchronize memory accesses using delay locked loop (DLL) circuits to generate an appropriate delay following the data strobe. DLL circuits, however, consume considerable area in an already crowded integrated circuit. Using strobes and DLL circuits also presents difficulties in testing components for quality control. Further, many systems use memory controllers that control several different and independent memory modules.

In addition, to insert appropriate delays for each of the memory modules, memory controllers often include slave DLL circuits dedicated to each memory module and a master DLL circuit for controlling operation of the slave DLL circuits. Each additional DLL circuit requires additional area in the integrated circuit, thus tending to increase the size, cost, power consumption, and complexity of the memory system. The problems are exacerbated by the addition of multiple master DLL circuits, each associated with one or more bytes on a bus.

Summary of the Invention

A memory system and method according to various aspects of the present invention includes a memory and an adaptive timing system for controlling access to the memory. The adaptive timing system captures data in a data valid window (DVW) in a data signal.
In one embodiment, the adaptive timing system includes a delay circuit for sampling the data signal at a midpoint of the DVW. The adaptive timing system may also include an identifying circuit for identifying whether the midpoint of the DVW corresponds to an actual midpoint of the DVW and adjusting the delay circuit accordingly. Brief Description of the Drawings The present invention is illustrated by way of example and not limitation in the accompanying figures, in which like references indicate similar elements, and in which:Figure 1 is a block diagram of an electronic system according to various aspects of the present invention;Figure 2 is a block diagram of a memory system;Figure 3 represents signal waveforms for a clock signal, a complementary clock signal, and a plurality of data signals;Figure 4 is a block diagram of an adaptive timing system;Figure 5 is a flow diagram of a calibration process; andFigure 6 is a flow diagram of a timing adjustment process.Elements and connections in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. Detailed Description of Exemplary Embodiments The subject matter of the present invention is particularly suited for use in connection with electronic systems using memory components, such as SDRAMs. As a result, the preferred exemplary embodiment of the present invention is described in that context. It should be recognized, however, that such description is not a limitation on the use or applicability of the present invention, but is instead provided to enable a full and complete description of an exemplary embodiment.Referring to Figure 1, an electronic system 100 according to various aspects of the present invention may include a processor 102, a memory system 104, and a data source and/or destination 106. The electronic system 100 comprises a system using a memory, such as a conventional personal computer system. The electronic system 100 may comprise, however, any suitable electronic system, such as a communication system, computing system, entertainment system, control system, portable electronic device, audio component, or factory control system, and the various components may differ according to the particular system and environment. The processor 102 generally controls operation of the electronic system, and may comprise any appropriate processor or controller, such as an Intel, Texas Instruments, or Advanced Micro Devices microprocessor. The data sources and/or destinations 106 may comprise any suitable components in the electronic system 100 for sending and/or receiving data, including conventional peripherals such as a hard drive, optical storage system, tape storage system, printer, display, keyboard, tracking device, or the like. The data source/destination 106 is an illustrative component that may be primarily a data source (such as a keyboard or sensor), a data destination (such as a display or speaker), or both (such as a hard drive or transceiver).The memory system 104 comprises a storage system for storing data. The memory system 104 may comprise any appropriate memory system for storing data and transferring data between the memory system 104 and the data source/destination 106 or processor 102. 
Referring to Figure 2, in the present embodiment, the memory system 104 includes one or more memory modules 210A, B and a memory controller 212. The memory modules 210 may comprise any system for storing data, such as a conventional ROM, SRAM, DRAM, SDRAM, or any other suitable storage system. In the present embodiment, the memory modules 210 comprise DDR SDRAMs from Micron, such as Micron MT46V64M4 256Mb DDR SDRAMs.

The memory controller 212 controls access to, including data transfers to and from, the memory module 210, and may perform further functions and operations as well. Data may be exchanged between the memory system 104 and the data source/destination 106 along a set of n data lines according to any appropriate method or technique. In the present embodiment, a conventional data transfer process transfers data by capturing data in a data valid window (DVW) of a data signal. For example, referring to Figure 3, in a source synchronous system according to the present embodiment, data is suitably asserted on the data lines upon the crossing of a clock signal (CK) and a complementary clock signal (CK#). A first period of time (tAC(MAX)) passes before all of the data bits (DQs) are valid, which defines a leading edge 310 of the DVW 300. The data bits remain valid during the DVW 300 until a second period of time (tAC(MIN)) before the next clock signal crossing, which defines the trailing edge 312 of the DVW 300. The duration of the DVW 300 may change, for example due to load, temperature, and/or voltage variations. Similarly, the positions of the leading and trailing edges 310, 312 of the DVW 300 may change relative to the clock signals.

The memory controller 212, among other things, controls the timing of access operations to the memory modules 210, such as to enhance the capture of accurate data. To optimize data capture, the memory controller 212 of the present embodiment captures data at the approximate midpoint of the DVW 300. The memory controller 212 further suitably identifies changes in the duration and relative position of the DVW 300.

Referring again to Figures 2 and 3, in the present embodiment, the memory controller 212 includes an adaptive timing system 214 for controlling access to the memory modules 210. Generally, the adaptive timing system 214 controls the time at which the data is latched for transfer to or from the memory modules 210. The timing is suitably controlled to latch data at a time when the asserted data is most likely to be valid. Accordingly, the adaptive timing system 214 identifies the location of the DVW 300 in the data signal. In addition, the adaptive timing system 214 may track changes in the DVW 300.

The DVW 300 and changes in its characteristics may be identified in any suitable manner. For example, the adaptive timing system 214 suitably identifies and tracks changes in the leading and trailing edges 310, 312 of the DVW 300. By identifying the leading and trailing edges 310, 312 of the DVW 300 and changes in positions of the respective edges 310, 312, the midpoint of the DVW 300 may be approximated and the optimal access time may be adjusted. Further, by oversampling and tracking multiple points in a timing signal, other characteristics, such as the rate at which the midpoint and the respective edges 310, 312 change, may be tracked as well.

In addition, the memory controller 212 may use different operating characteristics for different memory modules 210.
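To make the window arithmetic concrete, the following sketch computes the DVW edges and midpoint from the quantities just described; the parameter names and example numbers are illustrative assumptions, not values from any particular device.

```python
def dvw(crossing_interval_ns: float, t_ac_max_ns: float, t_ac_min_ns: float):
    """Edges and midpoint of the DVW, measured from a clock crossing."""
    leading_edge = t_ac_max_ns                          # data valid tAC(MAX) after the crossing (edge 310)
    trailing_edge = crossing_interval_ns - t_ac_min_ns  # valid until tAC(MIN) before the next crossing (edge 312)
    midpoint = (leading_edge + trailing_edge) / 2
    return leading_edge, trailing_edge, midpoint

# Illustrative only: crossings every 5 ns with tAC(MAX) = 0.7 ns and tAC(MIN) = 0.6 ns
# give a DVW from 0.7 ns to 4.4 ns, so data is best captured near 2.55 ns.
print(dvw(5.0, 0.7, 0.6))  # (0.7, 4.4, 2.55)
```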
For example, a first module 210A near a heat source may heat up and change its DVW 300 faster than another memory module 210B. The memory controller 212 suitably uses different DVW 300 characteristics for each module 210A, B, such as different midpoints and DVW edges 310, 312. Further, the memory controller 212 may include multiple adaptive timing systems 214. For example, multiple adaptive timing systems 214 are suitably dedicated to each bit, nibble, byte, or other set of data presented on the data lines.

To identify the leading and trailing edges 310, 312 of the DVW 300, the adaptive timing system 214 of one embodiment compares signal values at nominal leading and trailing edges 310, 312 of the DVW 300 to a signal value at a nominal midpoint. If the adaptive timing system 214 samples a toggling signal at the approximate actual midpoint of the DVW 300, then the samples at the nominal leading and trailing edges 310, 312 of the DVW 300 tend to be substantially identical to the sample at the approximate actual midpoint of the DVW 300. Samples beyond the leading and trailing edges 310, 312, however, tend to differ from the samples within the DVW 300.

Referring to Figure 4, in the present embodiment, the adaptive timing system 214 includes a delay circuit 410, a plurality of latch circuits 412, and at least one compare circuit 414. Generally, the delay circuit 410 asserts multiple delay clock signals at different times with respect to a timing signal and/or data signal. The latch circuit 412 receives the timing signal and/or data signal from the data source 106 and delay clock signals from the delay circuit 410 to latch data at the time of the delay clock signal, and provides the latched signal to the compare circuit 414 and the data destination 106. The compare circuit 414 receives latched signals from the latch circuits 412 sampled at different times, compares the latched signals to identify differences among them, and may adjust the timing of the delay clock signals generated by the delay circuit 410 accordingly.

In particular, the delay circuit 410 of the present embodiment asserts multiple signals at different times. The delay circuit 410 may comprise any appropriate system for generating signals at different times, such as a programmable multi-tap delay line. The delays programmed into the taps may correspond to any appropriate intervals and any appropriate DVW 300 size. For example, the delay circuit 410 may comprise a three-tap delay line having a center tap corresponding to the nominal approximate midpoint of the DVW 300. The other two taps suitably correspond to a setup guardband and a hold guardband, respectively, on either side of the DVW 300 nominal midpoint. The delay circuit 410 also receives an internal clock signal 416, for example a general free-running memory controller 212 clock signal, that suitably operates at a higher frequency than the data signal to facilitate multiple sampling of the timing and/or data signal in the DVW 300.

The guardband intervals are suitably separated from the DVW 300 nominal midpoint by any duration selected to identify variation in the DVW 300 characteristics and to correspond to a desired DVW 300 duration. In the present embodiment, the guardbands are set at approximately, or slightly less than, half the expected duration of the DVW 300 from the nominal midpoint.
Consequently, the first tap corresponds to a delay immediately after the leading edge 310 of the DVW 300 (the nominal leading edge), and the third tap similarly corresponds to a delay immediately before the trailing edge 312 of the DVW 300 (the nominal trailing edge). The delay associated with each tap may be adjustably programmed, such as to correspond to an adjusted midpoint of the DVW 300 as it moves, for example due to temperature and/or voltage variations.

The latch circuit 412 receives data from the data source 106 and latches input data at its output upon receipt of a delay clock signal from the delay circuit 410. The latch circuit 412 may comprise any suitable system for asserting and holding data upon receipt of a delay clock signal. In the present embodiment, each output of the delay circuit 410 is connected to a corresponding latch circuit 412. Each latch circuit 412 comprises a circuit for latching an input value at an output upon assertion of a latch signal. Each latch circuit 412 may comprise a circuit having a data input, a clock input for the latch signal, and an output, such as a flip-flop. The data input is connected to the data source 106, for example via a buffer 418. In the present embodiment, the data source 106 is the memory module 210. The clock input is connected to the corresponding tap outputs of the delay circuit 410, and the latch circuit output is connected to the compare circuit 414. The output of the center latch circuit is also connected to the data destination 106. When the various taps of the delay circuit 410 assert their respective delay clock signals, each latch circuit 412 is activated to capture the input data received by the latch circuit 412 when the delay clock signal is asserted. Thus, each latch circuit 412 captures data received from the data source 106 at different times, such as the midpoint and the leading and trailing edges 310, 312 of the timing and/or data signal.

The compare circuit 414 receives latched data from at least two of the latch circuits 412 and compares the data to generate an output signal. The compare circuit 414 may comprise any system for determining whether signals are substantially identical or different. In the present embodiment, the compare circuit 414 comprises a conventional compare circuit receiving input signals from the center latch circuit 412B and one of the other latch circuits 412A, C. The compare circuit 414 compares the signals and determines whether a difference between the signals exceeds a selected threshold. If so, the compare circuit 414 generates a first comparison signal (such as a logic HIGH signal); if not, the compare circuit 414 generates a second comparison signal (such as a logic LOW signal).

The memory system 104 is suitably configured to respond to the signals from the compare circuit 414 in any appropriate manner, such as to determine whether and how much to adjust the delays associated with one or more of the delay circuit 410 taps. By responding to the compare circuit 414 signals, the delay circuit 410 may adjust the delays associated with the delay circuit 410 taps to a desired position relative to the data signal. When a compare circuit 414 indicates that the signals received from the latch circuits 412 are substantially identical, then the signal near the nominal edge (leading edge 310 or trailing edge 312) matches the signal at the nominal midpoint. Therefore, the signal at the nominal edge is within the DVW 300.
If the signals do not substantially match, then the signal associated with the nominal edge is outside the DVW 300, thus indicating a change in the DVW 300. Accordingly, the delays for the various delay circuit 410 taps may be adjusted to shift the center tap to the approximate midpoint of the DVW 300.

In the present embodiment, the memory controller 212 adjusts the delays associated with the three delay taps in accordance with any appropriate method or algorithm. For example, when the compare circuit 414 indicates that the DVW 300 has moved, the delay associated with each tap may be changed to shift the delays associated with the various taps to move the nominal approximate midpoint closer to the actual midpoint of the DVW 300. The delays associated with the outer taps may be similarly adjusted to place the nominal approximate edges associated with the outer taps closer to the actual leading and trailing edges 310, 312 of the DVW 300. For example, one or more cycles or half-cycles of the memory controller 212 clock may be added to or subtracted from the current delay values of the various taps.

The adjustments to the delay circuit may be made in any appropriate manner. For example, the particular technique for adjusting the delays may be selected to decrease the effects of noise or other short term effects on the system. In one embodiment, the memory controller may require two or more consecutive indications from the compare circuit 414 that the DVW 300 has moved. Further, the memory controller may have adjustment limits so that the delays associated with the taps may be adjusted a limited number of times during a particular time interval or up to a limited magnitude of adjustment. The type and value of such limits may be selected according to any criteria for a particular system or application.

The memory system 104 may initially calibrate the adaptive timing system 214. Calibration provides initial values for the nominal midpoint and leading and trailing edges 310, 312. The initial values may be provided in any appropriate manner, such as by using preselected default values or testing for DVW 300 information. For example, referring to Figure 5, for a calibration process of the present embodiment, the memory controller 212 initially requests a known timing signal from the relevant memory module 210 (step 510). The timing signal may be any suitable signal, such as a predetermined timing signal, a conventional strobe signal, a WRITE and READ operation to generate a known signal, or the data signal itself. In one embodiment, the timing signal is a toggling signal alternating between binary high and low signals.

When the timing signal is asserted, the memory controller 212 samples the timing signal at several points in the timing signal (step 512), for example using the adaptive timing circuit. In the present embodiment, the memory controller suitably samples the timing signal over several points within one or more cycles of the timing signal to conduct a sweep of the timing signal. The samples may then be analyzed to identify the approximate leading and trailing edges 310, 312 of the signal's DVW 300 (steps 514, 516) and calculate the approximate midpoint relative to the free-running clock (step 518). For example, the memory controller 212 may identify a first and a last sample following a data strobe that achieve a threshold value known to be in the timing signal. A sketch of this sweep, and of the tap adjustment it drives, appears below.
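The following Python sketch models the calibration sweep of Figure 5 and the edge-versus-midpoint tap adjustment described below in connection with Figure 6; the sample format, threshold handling, and step size are assumptions of the illustration rather than the circuit's actual behavior.

```python
def calibrate(sweep, threshold):
    """Figure 5 sketch (steps 512-518): locate the DVW from a sweep of
    (delay_ns, sampled_value) pairs taken across a known toggling timing signal."""
    valid = [delay for delay, value in sweep if value >= threshold]  # samples at the expected level
    if not valid:
        raise ValueError("timing signal never observed during the sweep")
    leading, trailing = min(valid), max(valid)        # first and last qualifying samples
    return leading, trailing, (leading + trailing) / 2  # edges and midpoint for the taps

def adjust_taps(taps, lead_matches_mid, trail_matches_mid, step_ns=0.5):
    """Figure 6 sketch: shift all three tap delays when a nominal edge sample
    no longer matches the nominal midpoint sample."""
    if not lead_matches_mid:   # DVW moved later: increase the delays (step 616)
        return [t + step_ns for t in taps]
    if not trail_matches_mid:  # DVW moved earlier: decrease the delays (step 622)
        return [t - step_ns for t in taps]
    return taps                # both nominal edges still inside the DVW; no change
```

A real controller would also apply the hysteresis (consecutive indications) and adjustment limits discussed above before committing a shift.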
The delay circuit 410 is then suitably programmed to place the center tap delay at the approximate midpoint of the DVW 300 and the outer taps near the approximate leading and trailing edges 310, 312 (step 520). The memory system 104 may then proceed with normal operation, using the center tap as the latch circuit signal to capture data. The calibration process may be repeated at any time, such as at periodic intervals.After the memory system 104 has been calibrated, the system may be adjusted at any desired time. While the memory system 104 operates, the adaptive timing system 214 may check the DVW 300 to determine whether the midpoint of the DVW 300 has drifted. The adaptive timing system 214 may check the DVW 300 at any time, for example continuously, at periodic intervals, or upon expiration of a timer. Further, the adaptive timing system 214 may adjust the nominal midpoint and leading and trailing edges 310, 312 in the event of drift. If the memory controller 212 operates with multiple memory modules 210 or sections, the adaptive timing system 214 may perform an adjustment process for each memory module 210A, B or section of memory.For example, as the memory module 210 heats up, the DVW 300 may move. The memory system 104 may be configured to occasionally check the DVW 300, such as in accordance with a thermal and/or voltage time constant of the system. For example, the memory controller 212 may provide a CALIBRATE command to the memory to request the timing signal at regular intervals no longer than the thermal and/or voltage time constant. In another embodiment, the memory controller 212 may include a time constant timer to trigger the adjustment process. If the memory controller 212 reads a toggling pattern (such as using the data signal) in normal operation sufficient to verify the characteristics of the DVW 300, the time constant timer may be reset. If the time constant timer expires, the adjustment process may then be initiated. Thus, the adaptive timing system 214 may continuously sample the strobes on READ operations and update the delay circuit 410 opportunistically when no READ operations are occurring. Consequently, the full adjustment process is performed only when a sufficient pattern has not been received and the time constant timer has expired.Referring to Figure 6, the memory controller 212 of the present embodiment performs a timing adjustment process by receiving the timing signal, which may be any appropriate signal for identifying shifts in the DVW 300, such as a predetermined signal generated by the memory module 210, the conventional strobe signal, or the data signal itself. When the timing signal is received, the delay circuit 410 taps generate signals that cause the latch circuits 412 to capture the signal at various times (step 610), such as at the nominal leading and trailing edges 310, 312 and midpoint. The output signals from the latch circuits 412 are provided to the compare circuits 414 that compare the various signals to determine whether the leading and/or trailing edges 310, 312 of the data signal have shifted. For example, the compare circuit 414A may compare the leading edge 310 data to the midpoint (step 612). If the data are the same (step 614), then the nominal leading edge 310 is still within the DVW 300, and no adjustment is necessary. If the data are not the same, then the DVW 300 has moved. 
Accordingly, the nominal leading and trailing edges 310, 312 and midpoint may be increased by a selected amount (step 616) or according to any selected criteria or algorithm.

Similarly, the compare circuit 414B may compare the trailing edge 312 data to the midpoint data (step 618). If the data are the same (step 620), then the nominal trailing edge 312 is still within the DVW 300, and no adjustment is necessary. If the data are not the same, then the DVW 300 has moved. Accordingly, the nominal leading and trailing edges 310, 312 and midpoint may be decreased by a selected amount (step 622) or according to any selected criteria or algorithm. Thus, the delay circuit 410 is suitably programmed to shift the various delays associated with the taps so that the center tap is repositioned to an adjusted midpoint and adjusted leading and trailing edges 310, 312.

The present embodiment is described in conjunction with a delay circuit 410 having three taps, one for the nominal midpoint and two for the nominal leading and trailing edges 310, 312 of the DVW 300. Additional taps may be provided, however, to collect data about other portions of the data signal. For example, additional taps may be assigned to intervals between the midpoint and the edges 310, 312 of the DVW 300 and may be similarly connected to compare circuits 414. The data collected by latch circuits 412 connected to the additional taps may be used to identify changes in the DVW 300 as well as the rate at which the changes in the DVW 300 are occurring.

Benefits, other advantages, and solutions to problems have been described with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The terms "comprises," "comprising," or any other variation, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.

In the foregoing specification, the invention has been described with reference to specific embodiments. However, various modifications and changes can be made without departing from the scope of the present invention as set forth in the claims. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention as claimed.
Embodiments of the disclosure are drawn to apparatuses and methods for testing the resistance of through silicon vias (TSVs) which may be used, for example, to couple multiple memory dies of a semiconductor memory device. A force amplifier may selectively provide a known current along a mesh wiring structure and through the TSV to be tested. The force amplifier may be positioned on a vacant area of the memory device, while the mesh wiring structure may be positioned in an area beneath the TSVs of the layers of the device. A chopper instrumentation amplifier may be selectively coupled to the TSV to be tested to amplify a voltage across the TSV generated by the current passing through the TSV. The chopper instrumentation amplifier may be capable of determining small resistance values of the TSV.
CLAIMSWhat is claimed is:1. An apparatus comprising:an interface (IF) die; andat least one memory die, wherein the at least one memory die is stacked over the IF die through at least one through silicon/substrate via (TSV) so that current flows through the at least one TSV between the IF die and the at least one memory die;wherein the IF die comprises:a first conductive line through which current flows, andan amplifier coupled to the at least one TSV and configured to output a signal related to a voltage across the at least one TSV,wherein the at least one memory die comprises a second conductive line through which the current flows; andwherein the first conductive line comprises a mesh wiring structure.2. The apparatus of claim 1, wherein the second conductive line comprises a mesh wiring structure.3. The apparatus of claim 1, wherein the at least one memory die is stacked over the IF die further through an additional TSV,wherein the second conductive line is coupled between the at least one TSV and the additional TSV; andwherein the IF die further comprises a current source coupled between the first conductive line and the additional TSV.4. The apparatus of claim 1, wherein the at least one memory die further comprises a first power line coupled to the second conductive line; andwherein the IF die further comprises a current source and a second power line, the current source coupled between the first conductive line and the second conductive line.5. The apparatus of claim 1, wherein the amplifier comprises a chopper instrumentation amplifier.6. The apparatus of claim 1, wherein the mesh wiring structure comprises a first wiring layer and a second wiring layer.7. An apparatus comprising:a memory die comprising a through silicon/substrate via (TSV) block comprising a plurality of TSVs, the memory die comprising a current supply circuit coupled to a first side of the TSV; andan IF die coupled to the memory die, the IF die comprising:a force amplifier coupled to a second side of the TSV; and a chopper instrumentation amplifier comprising a first input coupled to the first side of the TSV and a second input coupled to the second side of the TSV.8. The apparatus of claim 7, wherein the IF die is positioned underneath the memory die, and wherein the force amplifier and the chopper instrumentation amplifier are positioned on an area of the IF die outside an area underneath the TSV block.9. The apparatus of claim 7, wherein the force amplifier is coupled to the second side of the TSV along a mesh wiring structure.10. The apparatus of claim 9, wherein the mesh wiring structure is positioned in an area of the IF die underneath the TSV block.11. The apparatus of claim 9, wherein the mesh wiring structure comprises conductive lines in a first wiring layer and conductive lines in a second wiring layer, wherein the conductive lines in the second wiring layer are positioned underneath an area of the TSV block including the current supply circuit.12. The apparatus of claim 7, wherein the chopper instrumentation amplifier is coupled to a first power supply voltage and the current supply circuit is coupled to a second power supply voltage.13. The apparatus of claim 12, wherein the first power supply voltage is greater than the second power supply voltage.14. The apparatus of claim 7, wherein the memory die comprises a first TSV buffer circuit coupled to the first side of the TSV.15.
The apparatus of claim 14, wherein the memory die comprises a shift register coupled to the first TSV buffer circuit to control selective coupling of the TSV to the current supply circuit and the chopper instrumentation amplifier.16. The apparatus of claim 14, wherein the IF die comprises a second TSV buffer circuit coupled to the second side of the TSV.17. The apparatus of claim 14, wherein the chopper instrumentation amplifier is coupled to a first power supply voltage, and the first TSV buffer circuit selectively couples a second power supply voltage to the TSV, wherein the first power supply voltage is greater than the second power supply voltage.18. The apparatus of claim 7, wherein the current supply circuit comprises a transistor to selectively couple the first side of the TSV to a power supply voltage.19. An apparatus comprising:an amplifier;a plurality of through silicon vias (TSVs); anda mesh wiring structure comprising a grid of conductive elements, wherein the amplifier is coupled to the TSVs via the mesh wiring structure.20. The apparatus of claim 19, wherein the grid of conductive elements comprises a plurality of conductive lines in a first metal layer, and a plurality of conductive lines in a second metal layer.21. The apparatus of claim 19, further comprising a second mesh wiring structure coupled to the TSVs, wherein the mesh wiring structure is coupled to a first side of the TSVs and the second mesh wiring structure is coupled to a second side of the TSVs.22. The apparatus of claim 19, wherein the amplifier comprises a chopper instrumentation amplifier.
APPARATUSES AND METHODS FOR HIGH SENSITIVITY TSV RESISTANCE MEASUREMENT CIRCUIT

CROSS-REFERENCE TO RELATED APPLICATION(S)

[001] This application claims priority to U.S. Application No. 16/121,377 filed September 4, 2018, which is incorporated herein by reference, in its entirety, for any purpose.

BACKGROUND

[002] Semiconductor devices may be used for a variety of applications, such as semiconductor memory devices used to store and retrieve information in computer systems. Modern semiconductor devices may contain multiple chips (or dies) which are stacked on top of one another. In order to provide communication between the layers of the stack, it may be necessary to provide conductive elements, such as through silicon/substrate vias (TSVs), to couple the layers. Test circuitry may be provided in the semiconductor device to determine the resistance of the TSVs in order to test for, for example, manufacturing defects, damage, etc.

[003] Figure 1 shows a prior art memory device 100. The memory device 100 is an example of a semiconductor device which includes a plurality of layers 106a-d. The memory device 100 may be a dynamic random access memory device (DRAM). In particular, the memory device 100 may be high bandwidth memory (HBM) in which multiple memory dies are stacked on top of one another as each layer 106. The memory device 100 may store data across a plurality of memory cells (not shown) which may be contained in each layer 106. The layers 106a-d may all be physically identical to each other. The layers 106a-d may be stacked on top of one another. Although the example memory device 100 of Figure 1 shows four layers 106a-d, it is to be understood that more or fewer layers 106 could be provided in other examples.

[004] The layers 106a-d may be DRAM core dies of the memory device 100. The bottom layer 106a may be stacked on an interface die (IF die 108), which may include an amplifier 104 and an amplifier 102. The amplifier 104 may be generally referred to as a force amplifier and the amplifier 102 may be generally referred to as an instrumentation amplifier. Each of the layers 106a-d and the IF die 108 may be coupled together with through silicon/substrate vias (TSVs). In particular, each of the layers 106a-d may be coupled to an adjacent layer 106a-d (and the IF die 108) by one or more force TSVs 112, sense TSVs 114, and/or signal TSVs 110 (e.g., an upper surface of layer 106b may be coupled to layer 106c and a lower surface of layer 106b may be coupled to layer 106a). The lowest layer 106a may have a lower surface coupled to an upper surface of the IF die 108.

[005] The layers 106a-d are stacked on top of the IF die 108. The IF die 108 includes an integrated I/O circuit (not shown) of the memory device 100. The IF die 108 may receive commands from outside the memory device 100 and provide them to the layers 106a-d along the signal TSVs 110, and may receive data from the layers 106a-d along the signal TSVs 110 and provide it outside the memory device 100. The IF die 108 may also include test circuitry, such as the instrumentation amplifier 102 and the force amplifier 104. The test circuitry may be used to measure resistance along TSVs of the memory device 100.

[006] Each layer 106 is coupled to adjacent layers 106 (and to the IF die 108) by TSVs. The TSVs may be conductive elements which extend through the thickness of the layer 106a-d and may be coupled to a TSV running through an adjacent layer 106a-d.
It may be necessary to test the TSVs to ensure they have sufficient conductance (e.g., low resistance) to couple the layers 106 to each other (and to the IF die 108). In particular, the memory device 100, as shown, includes three types of TSVs: signal TSVs 110, force TSVs 112, and sense TSVs 114. Each of the types of TSV may be physically identical to each other. The TSVs may be organized in columns, with the TSVs aligned vertically (e.g., along a normal to a surface of each of the layers 106). Each column of TSVs may be coupled together in series to form a conductive path from the IF die 108 through the layers 106a-d. Each TSV may include a conductive path which runs from a top surface of the layer 106a-d to a bottom surface of the layer 106a-d. Each TSV may include an upper portion positioned on an upper surface of the layer 106a-d and a lower portion positioned on a lower surface of the layer 106a-d. The upper portion of a given TSV may couple to a corresponding lower portion of a TSV in a next layer up, and the lower portion of the TSV may couple to a corresponding upper portion of a TSV in a next layer down in the memory device 100. Because each of the layers 106a-d may be physically identical, the top layer 106d (e.g., layer 3) may have upper portions of TSVs that are not coupled to any corresponding lower portions. Since the IF die 108 is the bottom layer of the memory device 100, the IF die 108 may only have TSV upper portions on an upper surface thereof to couple with the TSVs along the bottom surface of layer 106a, and may not have TSVs along a bottom surface of the IF die 108.

[007] The IF die 108 includes test circuitry for measuring the resistance of the TSVs. A force amplifier 104 is coupled to the force TSVs 112. The force amplifier 104 may provide a current through the force TSVs 112. The force amplifier 104 includes a reference voltage Vref coupled to a differential amplifier. The differential amplifier provides a voltage along a line Force+ to the force TSVs 112 coupling the IF die 108 to the lowest layer 106 (e.g., layer 0). The differential amplifier also has an inverting input which is coupled to a feedback line Force-. The feedback line Force- is selectively coupled in parallel to four signal TSVs 110 coupled between the lowest layer 106a and the IF die 108. The feedback line Force- is also coupled to ground via a resistor R0.

[008] The force TSVs 112 are arranged in a column through the memory device 100 and are coupled in series. In each layer 106, a force line Force<i>, where i is the number of the layer 106 (e.g., 0-3), is selectively coupled from between the upper and lower portions of the force TSV 112 of that layer 106a-d to the lower portion of four signal TSVs 110 in parallel along the bottom of that layer 106a-d. A transistor Force MOS may act as a switch between Force<i> and the signal TSV 110. The lower portion of each of the four signal TSVs 110 is also selectively coupled, in parallel, to a sense line Sense<i>, where i is the number of the layer 106a-d. A transistor Sense MOS may act as a switch between the signal TSV 110 and Sense<i>. The sense line Sense<i> is coupled to the column of sense TSVs 114. The sense TSVs 114 are coupled in series to a positive sense line Sense+, which is provided as an input to the instrumentation amplifier 102.

[009] The signal TSVs 110 are coupled together in series such that each signal TSV 110 is coupled in series with the signal TSVs 110 of other layers 106a-d in a column.
The IF die 108 may include an upper portion of a signal TSV (coupled to the signal TSVs 110 of the layers 106a-d) which is selectively coupled between the feedback line Force- and a negative sense line Sense-. A similar transistor Sense MOS and transistor Force MOS act as switches to selectively couple the lower portion of the signal TSV 110 to Sense- and Force-, respectively. Four columns of signal TSVs 110 are selectively coupled in parallel to the negative sense line Sense-, which is provided to a second input of the instrumentation amplifier 102. The instrumentation amplifier 102 is a differential amplifier that amplifies a difference between voltages on Sense+ and Sense-. Since Sense+ is selectively coupled to Force<n> through a lower portion of each signal TSV 110, and Sense- is selectively coupled to Force- through an upper portion of each signal TSV 110, the difference between Sense+ and Sense- may be used to determine the resistance of the signal TSV 110, as will be described in regard to Figure 2.

[010] In an example operation, the resistance of the signal TSV 110 from the top layer 106 (e.g., layer 3) may be determined. The transistors Sense MOS and Force MOS of a particular column in top layer 106d may be activated to couple Force<3> and Sense<3> to the signal TSV 110. The transistors Sense MOS and Force MOS in the IF die 108 along the corresponding column of signal TSVs 110 may also be activated to couple that column of signal TSVs 110 to Force- and Sense-. All of the other transistors Sense MOS and Force MOS may remain deactivated. The selective activation of the transistors may create a circuit path, the operation of which will be described in Figure 2.

[011] Figure 2 shows a prior art simplified circuit path 200 of the memory device 100 of Figure 1. The circuit path 200 illustrates the operation of the force amplifier 104 and instrumentation amplifier 102 of Figure 1 for measuring the resistance of a selected signal TSV 210. Figure 2 shows only the circuit path 200 through a single measured signal TSV 210 where Sense MOS and Force MOS have been activated; however, it is to be understood that the actual circuit path 200 may be coupled to multiple signal TSVs 210 in parallel and multiple layers of TSVs in series along different layers of the memory device (e.g., as in memory device 100 of Figure 1) that have deactivated Sense and Force MOSs. The circuit path 200 may be selectively moved between different signal TSVs 210 by activation of switches, such as the Sense MOS and Force MOS of Figure 1.

[012] The force amplifier 204 provides a voltage Vfa to establish a reference current Iref. The reference current Iref passes through the force TSV 212 and through the signal TSV 210. The force amplifier 204 may use feedback to keep Iref constant. The reference current Iref generates a voltage Vx across the signal TSV 210. The instrumentation amplifier 202 amplifies the differential voltage Vx across one or more of the signal TSVs 210 in a column. The voltage Vx may be coupled to the instrumentation amplifier 202 via sense TSV(s) 214. Since the current Iref is known and the voltage Vx is measured, the resistance may be determined.

[013] As an example operation, if a constant current Iref of 200 µA is used, and the amplification of the instrumentation amplifier 202 is 25 fold, then the output voltage from the instrumentation amplifier 202 is proportional to the signal TSV 210 resistance at a scale of 200 Ω/V. If the output from the instrumentation amplifier 202 is used to judge a pass (low enough resistance of the signal TSV 210) or failure of the circuit with a cutoff point of 0.5 V, then the circuit path 200 is sensitive to resistances of 100 Ω or more.
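To make the arithmetic of paragraph [013] concrete, the following is a minimal sketch (not part of the disclosure) of the four-wire measurement math; all variable names are illustrative.

    # Worked example of the Kelvin (four-wire) TSV measurement described above.
    # Values follow paragraph [013].

    IREF = 200e-6   # forced reference current, 200 uA
    GAIN = 25       # instrumentation amplifier gain
    CUTOFF_V = 0.5  # pass/fail threshold at the amplifier output, volts

    def tsv_output_voltage(r_tsv_ohms: float) -> float:
        """Amplifier output for a TSV of the given resistance: Vout = GAIN * Iref * R."""
        return GAIN * IREF * r_tsv_ohms

    # Output scale: ohms of TSV resistance per volt of amplifier output.
    scale_ohms_per_volt = 1.0 / (GAIN * IREF)
    print(scale_ohms_per_volt)              # 200.0 ohms/V

    # Smallest resistance that trips the 0.5 V cutoff.
    print(CUTOFF_V * scale_ohms_per_volt)   # 100.0 ohms
    print(tsv_output_voltage(100.0))        # 0.5 V, right at the cutoff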
[014] The circuit path 200 may include a resistor Rfl_core to model the wire resistance between the force TSV 212 and the upper portion of the signal TSV 210 and a resistor Rfl_if to model the wire resistance between the lower portion of the signal TSV 210 and the force amplifier 204. With these resistances, along with the reference voltage Vref input to the force amplifier 204, the input voltages to the instrumentation amplifier 202 may be calculated by equations 1 and 2 below:

Vin(-) ≈ Vref + Rfl_if * Iref    Eqn. 1
Vin(+) ≈ Vref + Rfl_if * Iref + Vx    Eqn. 2

[015] From this, an output voltage Vfa of the force amplifier 204 may be calculated by equation 3 below:

Vfa ≈ Vref + Rfl_if * Iref + Vx + Rfl_core * Iref ≈ Vref + 2 * Rfl * Iref < Vdd    Eqn. 3

[016] Equation 3 assumes that Rfl_core is nearly equal to Rfl_if and that Rfl_core = Rfl_if = Rfl. Vdd is the power supply voltage provided to the amplifiers. In order to maintain a linear relationship between the input and the output of the amplifiers 202, 204 without distorting the amplified waveform, the outputs must be kept below the power supply voltage Vdd. From the above it may be seen that Rfl has an upper limit Rflmax, which may be calculated by equation 4, below:

Rflmax ≈ (Vdd - Vref) / (2 * Iref)    Eqn. 4

[017] Similarly, assuming that the signal TSV 210 to be measured is close to the force TSV 212, then the lower limit of Rfl, Rflmin, may be calculated by equation 5 below and may be shown to be roughly 0.

Rflmin ≈ 0    Eqn. 5

[018] As well as a signal difference between the input voltages Vin(+) and Vin(-) of the instrumentation amplifier 202, there may be a common mode potential Vcom. Assuming that Vx is small enough, Vcom may be calculated by equation 6 below:

Vcom ≈ Vref + Rfl * Iref    Eqn. 6

[019] By substituting equations 4 and 5 into equation 6, a maximum common voltage Vcom_max and minimum common voltage Vcom_min may be calculated by equations 7 and 8, respectively, below:

Vcom_max ≈ (Vdd + Vref) / 2    Eqn. 7
Vcom_min ≈ Vref    Eqn. 8

[020] The common voltage Vcom needs to lie in a range (e.g., between Vcom_min and Vcom_max) such that the instrumentation amplifier 202 provides a linear output over a full range of the power supply voltages (e.g., 0V to Vdd) provided to the instrumentation amplifier 202.

[021] The instrumentation amplifier 202 (which may be the same as the instrumentation amplifier 102 of Figure 1) has two stages, each of which has a gain, A1 and A2, respectively. The gain of these stages may be determined by the values of the resistors R1-R4 which are coupled between the component differential amplifiers of the instrumentation amplifier 202. In particular, the gains A1 and A2 may be calculated by equations 9 and 10, respectively, below:

A1 = 1 + 2 * (R2/R1)    Eqn. 9
A2 = R4/R3    Eqn. 10

[022] If we assume that the differential input voltage Vx produced by flowing current Iref through the signal TSV 210 is split evenly between the two input terminals, then the voltage of the two input terminals may be calculated by equations 11 and 12, below:

Vin(+) ≈ Vcom + Vx/2    Eqn. 11
Vin(-) ≈ Vcom - Vx/2    Eqn. 12

[023] From equations 11 and 12, the highest and lowest voltages Va and Vb within the instrumentation amplifier 202 can be calculated by equations 13 and 14, below:

Va ≈ Vcom + A1 * Vx/2    Eqn. 13
Vb ≈ Vcom - A1 * Vx/2    Eqn. 14
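Equations 10-14 above were reconstructed from the standard two-stage (three-op-amp) instrumentation amplifier topology implied by the surrounding text. The following minimal numeric check (illustrative values only, not from the disclosure) exercises them:

    # Numeric check of the internal-node voltages Va and Vb (Eqns. 11-14).
    # A1 and A2 follow Eqns. 9-10 for assumed resistor ratios.

    VDD = 1.2          # assumed power supply voltage, volts
    R1, R2, R3, R4 = 10e3, 20e3, 10e3, 10e3
    A1 = 1 + 2 * (R2 / R1)   # first-stage differential gain (Eqn. 9) -> 5.0
    A2 = R4 / R3             # second-stage gain (Eqn. 10) -> 1.0

    Vcom = 0.6         # common mode potential at the inputs
    Vx = 2e-3          # differential voltage across the measured TSV

    Va = Vcom + A1 * Vx / 2   # highest internal voltage (Eqn. 13)
    Vb = Vcom - A1 * Vx / 2   # lowest internal voltage (Eqn. 14)

    # Linearity requires the internal nodes and the amplified difference to
    # stay within the rails (see Eqn. 15): A2 * (Va - Vb) < VDD.
    assert 0 < Vb < Va < VDD
    assert A2 * (Va - Vb) < VDD
    print(Va, Vb, A2 * (Va - Vb))   # 0.605, 0.595, 0.01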
[024] From these equations, in order to maintain the linearity of the output voltage of the instrumentation amplifier 202, three conditions must be met. As expressed by equation 15, below, the gain of the second stage applied to the difference between the highest and lowest internal voltages Va and Vb must not be greater than the power supply voltage Vdd. From equation 15, together with the requirement that the internal voltages remain within the supply rails, the maximum and minimum common voltages Vcom_max and Vcom_min may be determined with equations 16 and 17, respectively, below:

A2 * (Va - Vb) < Vdd    Eqn. 15
Vcom_max ≈ Vdd * (1 - 1/(2 * A2))    Eqn. 16
Vcom_min ≈ Vdd / (2 * A2)    Eqn. 17

[025] Figure 3 shows a prior art TSV block of an IF die 300. The IF die 300 is shown as both a block diagram representation 316 and a layout image representation 317. The IF die 300 may include a TSV block 318 and a TSV test block 319 which may include the test circuitry (e.g., the instrumentation amplifiers 102 and 202 and the force amplifiers 104 and 204 of Figures 1 and 2). The TSV test block 319 may be located underneath the TSV block 318. Multiple TSV test blocks 319 may be arranged underneath the TSV block 318. The TSV test blocks 319 may repeat at intervals based on how many individual TSVs each TSV test block 319 is capable of testing.

[026] It may be necessary to locate the TSV test block 319 directly underneath the TSV blocks 318 in order to reduce the resistance of the current force wiring by reducing the length of the current force wiring (e.g., to minimize Rfl_core and Rfl_if). As determined in equations 4 and 5, there are maximum (and minimum) limits to the allowable current force resistance to maintain linearity of the output. As may be seen from Figure 3, the TSV test blocks 319 may be bulky components which may increase the area of the TSV block of the IF die 300 (and in turn, the memory device 100 of Figure 1). It may be desirable to provide a high sensitivity test circuit to determine the resistance of the TSVs which may also occupy less area compared to conventional TSV test blocks.

SUMMARY

[027] In at least one aspect, the present disclosure relates to an apparatus which includes an interface (IF) die and at least one memory die. The at least one memory die may be stacked over the IF die through at least one through silicon/substrate via (TSV) so that current flows through the at least one TSV between the IF die and the at least one memory die. The IF die may include a first conductive line through which current flows and an amplifier coupled to the at least one TSV which may output a signal related to a voltage across the at least one TSV. The at least one memory die may include a second conductive line through which the current flows. The first conductive line may include a mesh wiring structure.

[028] The second conductive line may include a mesh wiring structure. The at least one memory die may be stacked over the IF die further through an additional TSV. The second conductive line may be coupled between the at least one TSV and the additional TSV. The IF die may further include a current source coupled between the first conductive line and the additional TSV.

[029] The at least one memory die may further include a first power line coupled to the second conductive line. The IF die may further include a current source and a second power line, the current source coupled between the first conductive line and the second conductive line. The amplifier may include a chopper instrumentation amplifier. The mesh wiring structure may include a first wiring layer and a second wiring layer.

[030] In at least one aspect, the present disclosure relates to an apparatus which includes a memory die and an IF die coupled to the memory die.
The memory die includes a through silicon/substrate via (TSV) block comprising a plurality of TSVs, the memory die including a current supply circuit coupled to a first side of the TSV. The IF die includes a force amplifier coupled to a second side of the TSV and a chopper instrumentation amplifier. The chopper instrumentation amplifier includes a first input coupled to the first side of the TSV and a second input coupled to the second side of the TSV.

[031] The IF die may be positioned underneath the memory die, and the force amplifier and the chopper instrumentation amplifier may be positioned on an area of the IF die outside an area underneath the TSV block. The force amplifier may be coupled to the second side of the TSV along a mesh wiring structure. The mesh wiring structure may be positioned in an area of the IF die underneath the TSV block. The mesh wiring structure may include conductive lines in a first wiring layer and conductive lines in a second wiring layer. The conductive lines in the second wiring layer may be positioned underneath an area of the TSV block including the current supply circuit.

[032] The chopper instrumentation amplifier may be coupled to a first power supply voltage and the current supply circuit may be coupled to a second power supply voltage. The first power supply voltage may be greater than the second power supply voltage.

[033] The memory die may include a first TSV buffer circuit coupled to the first side of the TSV. The memory die may include a shift register coupled to the first TSV buffer circuit to control selective coupling of the TSV to the current supply circuit and the chopper instrumentation amplifier. The IF die may include a second TSV buffer circuit coupled to the second side of the TSV. The chopper instrumentation amplifier may be coupled to a first power supply voltage, and the first TSV buffer circuit may selectively couple a second power supply voltage to the TSV. The first power supply voltage may be greater than the second power supply voltage. The current supply circuit may include a transistor to selectively couple the first side of the TSV to a power supply voltage.

[034] In at least one aspect, the present disclosure may relate to an apparatus which includes an amplifier, a plurality of through silicon vias (TSVs), and a mesh wiring structure. The mesh wiring structure includes a grid of conductive elements. The amplifier is coupled to the TSVs via the mesh wiring structure.

[035] The grid of conductive elements may include a plurality of conductive lines in a first metal layer, and a plurality of conductive lines in a second metal layer. The apparatus may include a second mesh wiring structure coupled to the TSVs. The mesh wiring structure may be coupled to a first side of the TSVs and the second mesh wiring structure may be coupled to a second side of the TSVs. The amplifier may include a chopper instrumentation amplifier.

BRIEF DESCRIPTION OF THE DRAWINGS

[036] FIG. 1 is a schematic diagram of a prior art memory device.
[037] FIG. 2 is a schematic diagram of a prior art circuit path.
[038] FIG. 3 is a block diagram of a prior art TSV block of an IF die.
[039] FIG. 4 is an operating characteristics diagram for the prior art instrumentation amplifier of Figure 1.
[040] FIG. 5 is a schematic diagram of a memory device according to an embodiment of the present disclosure.
[041] FIG. 6 is a schematic diagram of a TSV buffer circuit according to an embodiment of the present disclosure.
[042] FIG. 7 is a schematic diagram of a chopper instrumentation amplifier according to an embodiment of the present disclosure.
[043] FIG. 8 is a schematic diagram of a non-chopper amplifier according to an embodiment of the present disclosure.
[044] FIG. 9 is a schematic diagram of a chopper amplifier according to an embodiment of the present disclosure.
[045] FIG. 10 is a schematic diagram of a mesh wiring structure according to an embodiment of the present disclosure.
[046] FIG. 11 is a schematic diagram of a mesh wiring structure according to an embodiment of the present disclosure.
[047] FIG. 12 is a schematic diagram of a portion of the mesh wiring structure of Figure 11 according to an embodiment of the present disclosure.
[048] FIG. 13 is a schematic diagram of a mesh wiring structure according to an embodiment of the present disclosure.
[049] FIG. 14 is a schematic diagram of a portion of the mesh wiring structure of Figure 13 according to an embodiment of the present disclosure.
[050] FIG. 15 is a schematic diagram of an IF die according to an embodiment of the present disclosure.
[051] FIG. 16 is a schematic diagram of an IF die according to an embodiment of the present disclosure.
[052] FIG. 17 is a schematic diagram of an IF die according to an embodiment of the present disclosure.

DETAILED DESCRIPTION

[053] The following description of certain embodiments is merely exemplary in nature and is in no way intended to limit the disclosure or its applications or uses. In the following detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings which form a part hereof, and which show by way of illustration specific embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the disclosure. Moreover, for the purpose of clarity, detailed descriptions of certain features will not be discussed when they would be apparent to those with skill in the art so as not to obscure the description of embodiments of the disclosure. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the disclosure is defined only by the appended claims.

[054] Layers of a semiconductor device, such as a memory device, may be coupled together by conductive elements, such as through silicon vias (TSVs). The TSVs may need to be tested in order to ensure adequate coupling (e.g., a low enough resistance) between the layers. Test circuitry may be provided in an interface die that the layers of the semiconductor device are stacked on. Since the test circuitry (e.g., the instrumentation amplifier) may be a bulky component, it may be desirable to move the test circuits to vacant areas of the memory chip. Further, it may be desirable to provide a high gain amplifier such that even small changes in resistance of the TSV can be measured.

[055] Figure 4 is an operating characteristics diagram 400 for the instrumentation amplifier 102 of Figure 1 and 202 of Figure 2. The diagram 400 shows the output VOUT of the instrumentation amplifier along the x-axis and the common voltage Vcom at the two inputs (e.g., the common voltage between Vin(+) and Vin(-) of Figure 1) along the y-axis.
Both axes extend from 0V to the power supply voltage VDD.

[056] The diagram 400 shows lines to represent the allowable Vcom_max and Vcom_min to maintain linearity (e.g., following equations 16 and 17 above) for two different conditions of the second gain A2. The solid line represents a situation in which A2 = 1. The dashed lines represent a situation in which A2 > 1. From the diagram 400, it can be seen that when the output voltage is increased to VDD, the only allowable common voltage Vcom is VDD/2 when the gain of the second stage is 1. However, if the gain is increased, then even when the output voltage Vout is VDD, the allowable Vcom values are between VDD * (1 - 1/(2 * A2)) and VDD/(2 * A2). Accordingly, an amplifier may be provided which allows for a high gain as well as a broad range of allowable Vcom values.
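As a numerical illustration of this common-mode window (a sketch consistent with equations 16 and 17; the supply value is assumed, not from the disclosure):

    # Allowable input common-mode window versus second-stage gain A2
    # (Eqns. 16-17): Vcom_min = VDD/(2*A2), Vcom_max = VDD*(1 - 1/(2*A2)).

    VDD = 1.2  # assumed supply voltage, volts

    def vcom_window(a2: float) -> tuple:
        """Return (Vcom_min, Vcom_max) for a given second-stage gain A2."""
        return VDD / (2 * a2), VDD * (1 - 1 / (2 * a2))

    for a2 in (1.0, 2.0, 5.0, 25.0):
        lo, hi = vcom_window(a2)
        print(f"A2={a2:5.1f}: {lo:.3f} V .. {hi:.3f} V")
    # A2=1 collapses the window to a single point (VDD/2 = 0.6 V);
    # larger A2 widens the window toward the rails, as the diagram 400 shows.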
[057] Figure 5 shows a memory device 500 according to an embodiment of the present disclosure. The memory device 500 may include layers 506a-d which are stacked on top of an IF die 508. The IF die 508 includes an amplifier 502 and an amplifier 504. In some embodiments of the disclosure, the amplifier 502 is a chopper instrumentation amplifier and will be referred to as such, and the amplifier 504 may be referred to as a force amplifier. The layers 506a-d are coupled to each other and to the IF die 508 with TSVs which may include sense TSVs 512, power TSVs 520a-b, and signal TSVs 510. The signal TSVs 510 may be coupled to TSV buffer circuits 522. Each of the power TSVs 520a-b is coupled to a current supply circuit 521. Each layer 506a-d and the IF die 508 may include shift registers 523, which may be used to control the TSV buffer circuits 522 and/or the current supply circuits 521.

[058] While reference will be made to the memory device 500 with regard to certain descriptions of orientation (e.g., upper and lower portions, top and bottom surfaces, etc.), it is to be understood that these are intended for descriptive purposes only, and only imply relative placement of components in the memory device 500. The memory device 500 may assume any spatial orientation.

[059] The memory device 500 may be a dynamic random access memory device (DRAM), in which memory dies are stacked together in layers 506a-d. Each of the layers 506a-d may be a die of the memory, such as a memory core die. In some embodiments of the disclosure, the DRAM device is a high bandwidth memory (HBM). Each of the layers 506a-d may include internal circuitry (not shown) such as memory cells. The IF die 508 may contain interface components, such as an I/O circuit of the memory device 500. The signal TSVs 510 may selectively couple the internal circuitry of the layers 506a-d to the interface components of the IF die 508. Although a particular configuration of memory device 500 is shown in Figure 5, it is to be understood that various arrangements may be made. For example, more or fewer layers may be provided in a different arrangement of the memory device 500. Similarly, although a memory device 500 is described, it is to be understood that the TSVs and TSV resistance testing methods and apparatuses of the present disclosure may be used with any number of integrated circuit devices involving stacked elements.

[060] Similar to the TSVs described in regard to Figure 1, the TSVs 510, 512, and 520a-b may each be composed of an upper portion and a lower portion. The upper portion may be positioned along an upper surface of each of the layers 506a-d, while the lower portion may be positioned along a lower surface of each of the layers 506a-d and the IF die 508. The upper and lower portions may be coupled together by a conductive path running through the layer 506a-d. The upper and lower portions of a given TSV may be contacts along the upper and lower surfaces, respectively, of the layers 506a-d. The TSVs may be arranged in columns such that when the layers 506a-d are stacked on each other and the IF die 508, the upper and lower portions of each TSV are coupled to lower and upper portions, respectively, of corresponding TSVs in adjacent layers to form a conductive path between the layers 506a-d and the IF die 508. Each layer is coupled via the TSVs to the adjacent layers (e.g., layer 506c is coupled to layer 506d and to layer 506b). The bottom layer 506a is coupled to the IF die 508. Each of the layers 506a-d may be physically identical to one another. Thus, the top layer (e.g., layer 506d) may include upper portions of TSVs that are not coupled to any corresponding lower portions.

[061] The IF die 508 may contain test circuitry for determining the resistance of the signal TSVs 510. The test circuitry may include a force amplifier 504 and a chopper instrumentation amplifier 502. The force amplifier 504 may provide a current Iref along a line Force_if to the TSV buffer circuits 522b of the IF die 508. The force amplifier 504 may include a differential amplifier (e.g., an operational amplifier), which includes two inputs and an output. The first input is coupled to a reference voltage Vref. The force amplifier may provide an output to the gate of a transistor. This transistor may be of an N-channel type. The source of the transistor may be coupled to ground through a feedback resistor R0. The second input of the differential amplifier may be coupled between the source of the transistor and R0. A drain of the transistor may be coupled to the line Force_if. In some embodiments, the line Force_if may be a mesh wiring structure, as described in Figure 10.

[062] The TSV buffer circuits 522b may be coupled to the upper portions of the signal TSVs 510 positioned along the lower surface of the IF die 508. One (or more) buffer circuits of the TSV buffer circuits 522b may be selectively activated to couple the line Force_if to one (or more) columns of signal TSVs 510. The TSV buffer circuits 522b may also be coupled to a negative input Sense- of the chopper instrumentation amplifier (CIA) 502.

[063] The power TSVs 520a-b may be a pair of columns of TSVs positioned on either side of a group of columns of signal TSVs 510. As shown in the example layout of Figure 5, the power TSVs 520a-b are positioned on either side of a group of four columns of signal TSVs 510. More or fewer signal TSVs may be grouped between the power TSVs in other examples. The power TSVs 520a-b may selectively provide the power supply voltage Vdd along a line Force<i> in each layer 506a-d, where i is the number of the layer (e.g., 0-3). Force<i> may be selectively coupled to the power TSVs 520a-b via a current supply circuit (FD) 521.

[064] The lower portion of each of the signal TSVs 510 may be coupled to a TSV buffer circuit 522a, which may selectively couple the lower portion of the signal TSV 510 to Force<n>. The TSV buffer circuits 522a may also selectively couple the lower portion of the signal TSVs 510 to a line Sense<i>, where i is the number of the layer 506a-d.
Each of the layers' 506a-d sense lines Sense<i> is coupled to the sense TSVs 512. The sense TSVs 512 provide a positive signal Sense+ to the IF die 508.

[065] The CIA 502 amplifies a difference between the voltage on Sense+ and the voltage on Sense-. When the current supply circuit 521 and the TSV buffer circuits 522a-b are active, the reference current Iref may flow from the layer 506a-d with the activated current supply circuit 521 through the column of signal TSVs 510 coupled to the activated TSV buffer circuits 522a-b. A voltage difference may be generated across the signal TSV 510 that current is flowing through, which may be coupled to Sense+ and Sense- and amplified by the CIA 502. The CIA 502 may provide an output voltage OUT based on the difference between Sense+ and Sense-. The components and operation of the CIA 502 are described in more detail in regard to Figures 7-9.

[066] Each of the layers 506a-d and the IF die 508 may include shift registers 523. The shift registers 523 may be used to control the selective coupling of the signal and power TSVs 510, 520a-b by controlling the switches of the TSV buffer circuits 522a-b and the current supply circuits 521. The IF die 508 may have the same shift registers 523 as the layers 506a-d, or may have more or fewer shift registers 523. For example, as shown, the IF die 508 does not have the shift register FDCtrl_core since there are no current supply circuits 521 in the IF die 508.

[067] The TSVs (e.g., signal TSVs 510, sense TSVs 512, and power TSVs 520a-b) may be arranged together in a TSV block. The TSV block may be an area of the layers 506a-d (and IF die 508) where one or more of the TSVs are located. Corresponding TSVs may be arranged in columns such that, for example, the signal TSV 510 in layer 506d is vertically aligned with corresponding signal TSVs in layers 506a-c and IF die 508. Accordingly, the TSV block may be a volume within the memory device 500 formed by the TSV blocks in each of the layers 506a-d when they are stacked on top of one another. The IF die 508 may have an area corresponding to the TSV block(s) of the layers 506a-d, as indicated by the dotted lines in the IF die 508. The area in the IF die 508 may be bounded by a perimeter of the vertically aligned TSVs of the stacked layers. The CIA 502 and the force amplifier 504 may be located outside the area of the IF die 508 which is underneath the TSV block(s) of the stacked layers 506a-d (e.g., outside the dotted lines of the IF die 508 of Figure 5). In some embodiments, the CIA 502 and/or force amplifier 504 may be located in regions of the IF die 508 which do not contain other components of the IF die 508, which may generally be referred to as vacant regions of the IF die 508.

[068] In an example TSV resistance measurement operation, the FDCtrl_core shift register 523 may activate the current supply circuit 521 of the layer 506a-d that the selected TSV 510 is in. The FDCtrl_core shift register 523 may provide a signal to a transistor of the current supply circuit 521 to couple the Force<i> line to a power supply voltage Vdd. The XSR_core and YSR_core shift registers 523 of the layer 506a-d containing the selected TSV 510 and the IF die 508 may activate the TSV buffer circuits 522a-b along the column containing the selected TSV 510. XSR_core and YSR_core may provide signals X and Y, respectively, to the TSV buffer circuits 522a-b. The signals X and Y may activate transistors Force MOS and Sense MOS of the TSV buffer circuits 522a-b, which may couple the signal TSV 510 to the force amplifier 504 and the CIA 502. The force amplifier 504 may provide a constant current Iref based on the reference voltage Vref and feedback resistor R0 (Iref = Vref/R0) coupled to the force amplifier 504. The current Iref may flow along the vertical column of signal TSVs 510 between the layer 506a-d containing the selected signal TSV 510 and the IF die 508. The current Iref may generate a voltage drop Vx across the selected TSV 510 based on the resistance of the TSV 510. The CIA 502 may amplify a voltage difference (e.g., Vx) across the selected TSV(s) 510 (e.g., the voltage difference between Sense+ and Sense-). Since the current Iref is known, the resistance of the selected TSV 510 may be determined based on the measured voltage Vx (e.g., by Ohm's law).
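The measurement sequence of paragraph [068] can be summarized compactly; the following is a minimal sketch of the control flow only, with the shift-register and buffer activity reduced to comments, and all names and values illustrative rather than taken from the disclosure:

    # Sketch of the TSV resistance measurement flow of paragraph [068].

    IREF = 200e-6   # constant current from the force amplifier (Iref = Vref/R0)
    CIA_GAIN = 125  # assumed overall gain of the chopper instrumentation amplifier

    def measure_tsv_resistance(r_tsv_ohms: float) -> float:
        """Model one measurement: force Iref, amplify Vx, recover R by Ohm's law."""
        # 1. FDCtrl_core activates the current supply circuit of the selected
        #    layer, coupling Force<i> to Vdd.
        # 2. XSR_core/YSR_core drive X and Y high so the TSV buffer circuits
        #    couple the selected column to Force_if and to Sense+/Sense-.
        vx = IREF * r_tsv_ohms          # voltage drop across the selected TSV
        out = CIA_GAIN * vx             # CIA output: amplified (Sense+ - Sense-)
        return (out / CIA_GAIN) / IREF  # measured resistance

    print(measure_tsv_resistance(10.0))  # -> 10.0 ohms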
[069] Figure 6 shows a TSV buffer circuit 600 according to an embodiment of the present disclosure. The TSV buffer circuit 600 may be used, in some embodiments, to implement the TSV buffer circuits 522a-b of Figure 5. The TSV buffer circuit may include a TSV 610, a transistor Sense MOS 624, a transistor Force MOS 625, a NAND gate 626, and a buffer circuit 627. Each of the transistors Sense MOS 624 and Force MOS 625 may be P-channel type transistors. The source of the Sense MOS 624 is coupled to the TSV 610, and the drain of the Sense MOS 624 is coupled to Sense<i>. The drain of the Force MOS 625 is coupled to Force<i> and the source of the Force MOS 625 is coupled to the TSV 610. The gates of the Sense MOS 624 and Force MOS 625 are coupled to the output of the NAND gate 626. The inputs of the NAND gate 626 are control signals X and Y, which may be provided by shift registers (e.g., shift registers 523 of Figure 5). The TSV 610 is also coupled to internal circuitry of the memory device via buffer circuit 627, which may be a tri-state buffer. The buffer circuit 627 is coupled to a buffer control signal BufOff.

[070] The buffer circuit 627 may be used to selectively couple the TSV 610 to the internal circuitry (e.g., memory cells) of the layer. The signal BufOff may be used to deactivate the buffer circuit 627 to prevent coupling between the TSV 610 and the internal circuits. The signal BufOff may cause the buffer circuit 627 to enter a high impedance state. The signal BufOff may be provided by logic within the memory layer (e.g., by shift register 523 of Figure 5) or provided by external components (e.g., a memory controller) coupled to the memory device.

[071] The NAND gate 626 may be used to selectively couple the TSV 610 to Force<i> and Sense<i> via the Force MOS 625 and the Sense MOS 624, respectively. The NAND gate 626 may receive control signals X and Y from the shift registers (e.g., shift registers 523 of Figure 5) of that layer. The control signal X may be provided by the shift register XSR, and the control signal Y may be provided by the shift register YSR. The NAND gate 626 will output a low voltage (e.g., a low logic level, 0V) when both of the control signals X and Y are a high voltage (e.g., a high logic level, Vdd). The output of the NAND gate 626 is provided to the gates of the transistors 624, 625, which may become conductive when the voltage on their gates is a low voltage. Thus, the TSV 610 may only be coupled to Force<i> and Sense<i> when the control signals X and Y are both at a high level.
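The selection implemented by the NAND gate 626 and the P-channel switches can be expressed as a small truth-table sketch (illustrative only, not a netlist from the disclosure):

    # Selection logic of the TSV buffer circuit 600: the NAND output drives the
    # gates of P-channel transistors, which conduct when the gate is low.

    def tsv_coupled(x: bool, y: bool) -> bool:
        """TSV 610 is coupled to Force<i>/Sense<i> only when X and Y are both high."""
        nand_out = not (x and y)   # NAND gate 626
        return not nand_out        # P-channel MOS conducts on a low gate voltage

    for x in (False, True):
        for y in (False, True):
            print(x, y, tsv_coupled(x, y))
    # Only (True, True) couples the TSV, so crossing one X shift-register line
    # with one Y shift-register line selects exactly one column, as in [068].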
[072] FIG. 7 shows a chopper instrumentation amplifier according to an embodiment of the present disclosure. The CIA 700 may, in some embodiments, be an implementation of the chopper instrumentation amplifier 502 of Figure 5. The chopper instrumentation amplifier (CIA) 700 may receive a differential input INp and INn and provide an output OUT. The input INp may be coupled to Sense+ and the input INn may be coupled to Sense-. The chopper instrumentation amplifier 700 may include a first amplifier 730, a second amplifier 731, a third amplifier 732, and a fourth amplifier 733. The chopper instrumentation amplifier 700 also may include a buffer circuit 734 which couples a bias voltage Vbiascom to the second, third, and fourth amplifiers 731-733. The first, second, and third amplifiers 730-732 may be chopper amplifiers, while the fourth amplifier 733 may be a non-chopper amplifier.

[073] The first amplifier 730 receives input voltages INp and INn and provides a differential output LP1p and LP1n to a first intermediate node. The second amplifier 731 is coupled to the first intermediate node and provides an output LP2out to a second intermediate node. The third amplifier 732 is coupled to the second intermediate node and provides an output LP3out to a third intermediate node. The fourth amplifier 733 is coupled to the third intermediate node and provides the output voltage OUT.

[074] Each of the amplifiers 730-733 may include one or more sub-amplifiers. The fourth amplifier 733 and the buffer circuit 734 may include symmetric operation-point self-biased amplifiers (SOS amplifiers). The SOS amplifiers are differential amplifiers similar to op-amps, and will be described in more detail in Figure 8. The first, second, and third amplifiers 730-732 may include chopper-switch SOS amplifiers (CSOS amplifiers) which may generally be similar to the SOS amplifiers, but include chopper circuits. The CSOS amplifiers are described in more detail in Figure 9. The CSOS amplifiers may function similarly to the SOS amplifiers; however, the chopper circuits may be used to cancel an input offset voltage input to the CSOS amplifier. The CIA 700 may also include low-pass filters (LPFs), which may remove the rectangular wave superimposed by the chopper circuits of the CSOS amplifiers.

[075] The first amplifier 730 is coupled to the input voltages INp and INn. The input voltage INp is coupled to the non-inverting input of a CSOS amplifier CSOS1, while the input voltage INn is coupled to the non-inverting input of an amplifier CSOS2. The outputs of CSOS1 and CSOS2 may be coupled together by resistors R21, R1, and R22 which are coupled in series between the outputs. The inverting input of CSOS1 may be coupled between resistors R21 and R1, while the inverting input of CSOS2 may be coupled between R1 and R22. The output of CSOS1 is coupled in series to LPF1 and to the non-inverting input of CSOS3. The output of CSOS2 is coupled in series to LPF2 and to the non-inverting input of CSOS4. CSOS3 provides an output voltage LP1p, which is coupled to the inverting input of CSOS3 and provided as an output of the first amplifier 730.
CSOS4 provides an output voltage LP1n, which is coupled to the inverting input of CSOS4 and provided as an output voltage of the first amplifier 730.

[076] The first amplifier 730 amplifies the differential input potential (e.g., the difference between INp and INn) with a gain A1d that can be calculated by equation 18, below:

A1d = 1 + 2 * (R2/R1)    Eqn. 18

[077] The resistors R21 and R22 are assumed to have equal resistances to each other, which is equal to R2. The first amplifier has a common mode amplification A1c (e.g., amplification of a voltage shared across INp and INn) of unity (e.g., A1c = 1). Because the amplifiers CSOS1 and CSOS2 include chopper circuits, the input offset voltages on CSOS1 and CSOS2 are canceled. However, a rectangular wave with an amplitude equal to the input offset voltage multiplied by A1d (see equation 18) is superimposed on the outputs of CSOS1 and CSOS2. The low pass filters LPF1 and LPF2 may be used to reduce or remove the rectangular wave from the outputs of CSOS1 and CSOS2. The amplifiers CSOS3 and CSOS4 are used as voltage followers in order to boost the current along outputs LP1p and LP1n. Accordingly, the rectangular wave which uses the input offset voltage of CSOS3 and CSOS4 as part of its amplitude remains superimposed on the outputs LP1n and LP1p, but this rectangular wave is not amplified by the first amplifier, and so may have a negligible impact on the second amplifier 731.

[078] The second amplifier 731 receives the outputs LP1p and LP1n provided by the first amplifier 730 as inputs. The second amplifier 731 also receives the bias voltage Vbiascom provided by buffer circuit 734. The buffer circuit 734 may act as a voltage follower, and may be an SOS amplifier, SOS2. A non-inverting input of SOS2 is coupled to the voltage Vbiascom, and the output Vbiascom is coupled to the inverting input of SOS2. The input LP1p is coupled to Vbiascom through two resistors R31 and R41 coupled in series. A non-inverting input of an amplifier CSOS5 is coupled between R31 and R41. The input LP1n is coupled to a low-pass filter LPF3 via two resistors R32 and R42 in series. The inverting input of CSOS5 is coupled between R32 and R42. The output of CSOS5 is coupled between R42 and LPF3. LPF3 provides a filtered output to a non-inverting input of amplifier CSOS6, which outputs a voltage LP2out which is coupled back to the inverting input of CSOS6. The second amplifier 731 provides the voltage Vbiascom as a first output voltage and LP2out as a second output voltage.

[079] The second amplifier 731 amplifies the differential between inputs LP1p and LP1n by a gain A2d given by equation 19 below:

A2d = R4/R3    Eqn. 19

[080] The resistors R41 and R42 have an equal value of R4, and the resistors R31 and R32 have an equal value of R3. If there is no differential voltage between the inputs (e.g., LP1p - LP1n = 0), then the output of the chopper amplifier CSOS5 is the bias voltage Vbiascom. In that scenario, the input offset voltage of CSOS5 is amplified by a gain of A2os, which is given by equation 20, below:

A2os = 1 + R4/R3    Eqn. 20

[081] The amplified input offset voltage is superimposed on the output of CSOS5 as a rectangular wave. The low-pass filter LPF3 may remove the amplified input offset voltage. The amplifier CSOS6 may be used as a voltage follower to drive current on the output. The rectangular wave applied by the chopper circuit of CSOS6 is superimposed on the output voltage LP2out. However, since this wave was not amplified, its effect on the third amplifier 732 may be negligible.
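As an illustration of how the stage-gain equations reconstructed here (Eqns. 18-20, and Eqns. 21-24 below) combine, the following sketch picks resistor ratios that realize the 5 / 5 / 5 / 1 gain distribution described later in paragraph [093]; the resistor values themselves are assumptions:

    # Stage gains of the CIA 700 from resistor ratios. Resistor values are
    # assumed, chosen to realize 5 * 5 * 5 * 1 = 125 as in paragraph [093].

    R1, R2 = 10e3, 20e3   # first stage:  A1d = 1 + 2*R2/R1 = 5
    R3, R4 = 10e3, 50e3   # second stage: A2d = R4/R3 = 5
    R5, R6 = 10e3, 50e3   # third stage:  A3d = R6/R5 = 5
    R7, R8 = 10e3, 10e3   # fourth stage: A4d = R8/R7 = 1

    A1d = 1 + 2 * R2 / R1
    A2d, A3d, A4d = R4 / R3, R6 / R5, R8 / R7
    A2os = 1 + R4 / R3    # offset gain of CSOS5 when the differential input is zero

    print(A1d * A2d * A3d * A4d)   # overall differential gain A = 125.0
    print(A2os)                    # 6.0 (this component is removed by LPF3)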
[082] The third amplifier 732 receives the outputs of the second amplifier 731, Vbiascom and LP2out, as inputs. The third amplifier 732 may couple Vbiascom to LP2out with two resistors R61 and R51 coupled in series. A chopper amplifier CSOS7 may have a non-inverting input which is coupled between R61 and R51. The amplifier CSOS7 may provide an output which is coupled to a low-pass filter LPF4 and also coupled to Vbiascom via two resistors R62 and R52 which are coupled in series. An inverting input of CSOS7 is coupled between resistors R52 and R62. The filtered output of the low-pass filter LPF4 is coupled to a non-inverting input of a chopper amplifier CSOS8, which provides an output voltage LP3out. The inverting input of CSOS8 is coupled to the output LP3out. The third amplifier 732 provides the bias voltage Vbiascom and LP3out as outputs.

[083] The third amplifier 732 may be generally similar to the second amplifier 731, except that the third amplifier 732 has a chopper amplifier CSOS7 in which both the inverting and non-inverting inputs are coupled to the bias voltage Vbiascom (via resistors R52 and R61, respectively). The input of the chopper amplifier CSOS7 is the differential between Vbiascom and LP2out, and the output is expressed by the differential gain A3d given by equation 21 below:

A3d = R6/R5    Eqn. 21

[084] In the third amplifier 732, the resistors R61 and R62 may have values equal to R6, while the resistors R51 and R52 may have values equal to R5. The common mode voltage that is output by the third amplifier may be Vbiascom. The chopper amplifier CSOS7 provides an output with a superimposed rectangular wave with an amplitude determined by the input offset voltage of CSOS7 and amplified by a gain of A3os given by equation 22 below:

A3os = 1 + R6/R5    Eqn. 22

[085] In order to remove this amplified rectangular wave, the output of chopper amplifier CSOS7 is provided to the low-pass filter LPF4, which strips the rectangular wave from the amplified signal. The low-pass filter LPF4 then provides the stripped amplified signal to a second chopper amplifier CSOS8, which acts as a voltage follower. The chopper amplifier CSOS8 also applies a rectangular wave on the output signal; however, it is not amplified, and so may have a negligible impact on the fourth amplifier 733.

[086] The fourth amplifier 733 receives the outputs of the third amplifier, Vbiascom and LP3out, as inputs. The fourth amplifier 733 may couple the input voltage LP3out to ground through resistors R72 and R82 coupled in series. A non-inverting input of an amplifier SOS1 is coupled between the resistors R72 and R82. The amplifier SOS1 provides an output OUT, which is coupled to the input Vbiascom via two resistors R81 and R71 which are coupled in series. An inverting input of the amplifier is coupled between the resistors R71 and R81. The output voltage OUT is provided as the output of the fourth amplifier 733.

[087] The fourth amplifier 733 may have a differential gain of A4d which is applied to the potential difference between LP3out and Vbiascom, and given by equation 23 below:

A4d = R8/R7    Eqn. 23

[088] In the fourth amplifier 733, the resistors R71 and R72 may have a value equal to R7, and the resistors R81 and R82 may have a value of R8. The fourth amplifier may provide a common mode voltage of 0V.
The amplifier SOS1 may not include a chopper circuit and may increase the input offset by a gain of A4os, given by equation 24 below:

A4os = 1 + R8/R7    Eqn. 24

[089] The amplified offset voltage provided by the amplifier SOS1 may remain as an error of the output voltage OUT. The superimposed (unamplified) rectangular wave that was superimposed on LP3out by voltage follower CSOS8 of the third amplifier 732 may remain on the output voltage OUT. However, the superimposed rectangular wave may be small in magnitude compared to the signal on the output voltage OUT, and may appear as a 'ripple' on the signal.

[090] Accordingly, overall the amplifier 700 receives a differential input INp and INn and amplifies it by an overall differential gain A to provide the output voltage OUT. The overall differential gain A may be found by multiplying the differential gains of each of the amplifiers 730-733, and is given by equations 25 and 26, below:

A = A1d * A2d * A3d * A4d    Eqn. 25
A = (1 + 2 * (R2/R1)) * (R4/R3) * (R6/R5) * (R8/R7)    Eqn. 26

[091] From equation 26, it may be seen that the overall amplification of the differential signal is determined by the resistor values R1-R8 which are chosen. These values may be selected based on the desired application and the desired operating characteristics of the amplifier. In some embodiments, the gain of each of the stages (amplifiers 730-733) may be kept low. By keeping the gain of each stage low, the amplitude of the rectangular wave resulting from the input offset voltage is also kept relatively low. This may prevent the waveforms of the amplifier 700 from reaching the rail voltages (e.g., Vdd and Vss) and distorting the waveform. The low pass filters (e.g., LPF1-LPF4) and the voltage followers (e.g., CSOS3, CSOS4, CSOS6, and CSOS8) may reduce the output ripple imposed by the first three stages 730-732 to an amount equivalent to the input offset of each voltage follower. This may reduce a risk of the waveform being deformed by subsequent stages of the chopper instrumentation amplifier 700.

[092] The allowable common mode voltage (e.g., the range of common mode voltages that do not lead to any distortion of the waveform due to clipping on the rail voltages) of the chopper instrumentation amplifier 700 may span almost the full range of the rail voltages (e.g., from approximately Vdd to approximately ground). Low gain at the first stage 730 may be dispersed by low gain of the subsequent stages 731-733.

[093] In one example, the total gain A may be about 125. Higher or lower total gains are possible in other examples. In the example where the total gain A is 125, the resistor values R1-R8 may be set such that A1d = 5, A2d = 5, A3d = 5, and A4d = 1. Other gains and other distributions of gains between the amplifiers 730-733 may be used in other examples. The gain of the final stage may be set to unity (e.g., A4d = 1), which allows error voltages to be close to 0V, since the output waveform and its deformation are not increased by the fourth amplifier 733. In this example, the input common mode voltage may be between Vdd * (1/50) and Vdd * (1 - 1/50). Thus, in this configuration the range of allowable common mode voltages is between 2% of Vdd and 98% of Vdd. In other examples, the gain of the final stage (e.g., fourth amplifier 733) may be greater than unity.

[094] As an example application of the chopper instrumentation amplifier 700, the chopper instrumentation amplifier 700 may be used to amplify a voltage Vx from TSV testing. The chopper instrumentation amplifier 700 may be configured as previously described such that the gains of amplifiers 730-732 are 5, and the gain of amplifier 733 is 1. If a maximum input offset of the CIA 700 is assumed to be 10 mV, then the input offset which appears at the output is 20 mV (10 mV * A4os = 10 mV * 2). The 20 mV offset is the result of the input offset of amplifier SOS1 of the fourth amplifier 733. The offset error equivalent can be found to be 160 µV (20 mV / 125) when the offset is converted to the input. Only a rectangular wave with an amplitude of at most 10 mV is superimposed, so that the output ripple containing the error resulting from that superimposition is an offset of 240 µV (30 mV / 125) converted to the input. In the TSV testing scenario, a current of 200 µA may be driven through the TSV. If the TSV has a resistance of 10 Ω, then the voltage Vx is 2 mV. The error caused by the 240 µV of offset is thus 1.2 Ω. Thus, the CIA 700 may be sensitive enough to measure small (e.g., ~10 Ω) resistances in a TSV.
[095] Although a specific implementation of the chopper instrumentation amplifier 700 was described in Figure 7, it is to be understood that the configuration of the amplifier 700 may be changed in other implementations. For example, the third amplifier 732 may be omitted from the chopper instrumentation amplifier 700. In that example, the second amplifier 731 may provide the voltages Vbiascom and LP2out to the resistors R71 and R72 (respectively) of the fourth amplifier 733.

[096] Figure 8 shows a non-chopper amplifier (an SOS amplifier) according to an embodiment of the present disclosure. The SOS amplifier 800 may, in some embodiments, be used as the amplifiers SOS1 and SOS2 of Figure 7. The SOS amplifier 800 may function as a differential amplifier and amplify a potential difference across two inputs INp and INn. The SOS amplifier 800 may provide an output voltage OUT which is based on the inputs INp and INn. The SOS amplifier 800 includes a main amplifier 834, a sub amplifier 836, and an output stage 838. The main amplifier 834 is coupled to the input voltages INp and INn and provides an output to the output stage 838, which provides the output voltage OUT. The main amplifier 834 may be coupled to the sub amplifier 836, which may provide feedback to regulate the voltages on the main amplifier 834. While Figure 8 may show a specific implementation of the SOS amplifier 800, it is to be understood that variations may be made to the layout and components of the SOS amplifier 800 without departing from the present disclosure.

[097] The main amplifier 834 includes transistors MN1, MN2, MP1, MP2, and MP5 as well as a capacitor C1. The gate of transistor MP1 is coupled to INp while the gate of transistor MP2 is coupled to INn. The sources of the transistors MP1 and MP2 may be coupled together by a tail voltage Tail. Tail is coupled to a source voltage Vdd through capacitor C1 and to the drain of transistor MP5. The drain of transistor MP1 is coupled to a voltage ODn, and the drain of transistor MP2 is coupled to a voltage ODp. The gates of transistors MN2 and MN1 are coupled together, and are coupled to the voltage ODp. The drain of transistor MN2 is coupled to the voltage ODp and the voltage ODn is coupled to the drain of the transistor MN1. The sources of both transistors MN1 and MN2 are coupled to ground. The voltages ODn and ODp are provided as outputs of the main amplifier 834 and as inputs of the sub amplifier 836. The voltage ODn is additionally provided as an input to the output stage 838.

[098] The sub amplifier 836 may maintain a relationship of the voltages ODp and ODn within the main amplifier 834.
The sub amplifier includes transistors MN3, MN4, MP3, and MP4 as well as a capacitor C2. The gates of transistors MN3 and MN4 are coupled to input voltages ODp and ODn. The sources of transistors MN3 and MN4 are both coupled to ground, while their drains are coupled to voltages CPp and CPn, respectively. The voltages CPp and CPn are coupled to the drains of transistors MP3 and MP4, respectively. The sources of the transistors MP3 and MP4 are coupled to a source voltage Vdd. The gates of the transistors MP3 and MP4 are coupled together, are coupled to the source voltage Vdd through capacitor C2, and are coupled to the voltage CPn. The voltage CPp is also coupled to the gate of transistor MP5 of the main amplifier 834.

[099] The sub amplifier 836 may control the gate bias of transistor MP5 in order to keep the voltages of ODp and ODn equal to each other. Keeping the voltages ODp and ODn equal also leads to the voltages CPp and CPn being equal. Thus, the transistor pairs MN1 and MN2 and MP1 and MP2 operate at the same operating point. In this manner the system offset voltage is minimized. In some examples, the system offset may be ±50 µV or less, even over a wide range of input voltages in a state where device dispersion does not exist.

[0100] The output stage 838 is provided the voltage ODn as an input, and amplifies it to an output voltage OUT. The output stage 838 may include transistors MN5, MP6, MP7, and MP8, resistors R1 and R2, and capacitors C3, C4, and C5. The input voltage ODn is coupled to the gate of transistor MN5, which has a source coupled to ground, and a drain coupled to the output voltage OUT. The input voltage ODn is also coupled to the output voltage OUT via a resistor R1 and capacitor C3 coupled in series. The output voltage OUT is also coupled to ground via a capacitor C4 and coupled to ground via a resistor R2 and capacitor C5 coupled in series. A series of transistors MP8 to MP6 is coupled such that a drain of MP8 is coupled to the source of MP7 and a drain of MP7 is coupled to a source of MP6. The drain of MP6 is coupled to the output voltage OUT. The source of MP8 is coupled to the source voltage Vdd. The gates of MP8 and MP7 are coupled to ground. The gate of MP6 is coupled between the resistor R1 and the capacitor C3.

[0101] Figure 9 shows a chopper amplifier (a CSOS amplifier) according to embodiments of the present disclosure. The chopper amplifier 900 may be used, in some embodiments, to implement the chopper amplifiers CSOS1-CSOS8 of the chopper instrumentation amplifier 700 of Figure 7. The CSOS amplifier 900 includes a main amplifier 934, a sub amplifier 936, and an output stage 938. In addition, the CSOS amplifier 900 also includes chopper circuits 940a-c. The chopper amplifier 900 may be generally similar to the SOS amplifier 800 of Figure 8 except for the addition of chopper circuits 940a-c between the input and the main amplifier 934 and within the main amplifier 934 and sub amplifier 936. For the sake of brevity, features and components that were previously described in regards to Figure 8 will not be described again.

[0102] The CSOS amplifier 900 includes a first chopper circuit 940a which is positioned between the input voltages INp and INn and the main amplifier 934. The first chopper circuit 940a receives the inputs INp and INn and provides outputs INpx and INnx to the respective gates of transistors MP1 and MP2 of the main amplifier 934. The second chopper circuit 940b is inserted in the main amplifier 934 and is coupled to voltages ODn and ODp as inputs.
The second chopper circuit provides a voltage ODnx which is coupled to the gates of transistors MN1 and MN2, and also provides a voltage ODpx which is provided to the output stage 938 as an output of the main amplifier 934. The third chopper circuit 940c is coupled to the sub amplifier 936 and receives the voltages CPp and CPn as inputs. The third chopper circuit 940c provides an output voltage CPnx which is coupled to the gates of transistors MP3 and MP4 and also coupled to a source voltage Vdd via capacitor C2. The third chopper circuit 940c also provides a voltage CPpx which is coupled to the gate of transistor MP5 of the main amplifier 934.

[0103] The chopper circuits 940a-c may receive a first and second input, and provide a first and second output. The chopper circuits 940a-c may vary in time such that they alternate between providing the first input voltage as the first output and the second input voltage as the second output, and providing the first input voltage as the second output and the second input voltage as the first output. The coupling of the inputs to the outputs may vary in time at a rate based on clock signals φ1 and φ2. The clock signals φ1 and φ2 may be provided from outside the CSOS amplifier 900. Because the CSOS amplifier 900 is a differential amplifier, the use of the chopper circuits to alternate inputs may reduce or cancel the input voltage offset by making it appear on both input terminals of the CSOS amplifier 900.

[0104] The first chopper circuit 940a may act as a modulation switch, while the second and third chopper circuits 940b, 940c may act as demodulation switches. The chopper circuits 940a-c may each receive the clock signals φ1 and φ2. The clock signals may have a frequency which may be used by the chopper circuit to up-convert the input offset and the input signal to the odd-order frequencies of the clock signal. The demodulation switches may then remove these frequency components by down-converting the voltages that have been amplified by the CSOS amplifier 900. The demodulation chopper circuits 940b and 940c are coupled to ODn and ODp and to CPp and CPn, respectively. As with the amplifier 800 of Figure 8, the voltages ODn and ODp are kept equal to each other, and the voltages CPp and CPn are kept equal to each other. In this manner, if the input offset is zero, the voltages do not change before and after the chopper circuit is activated by the clock signals. By keeping the voltages equal at the chopper circuits 940b and 940c, a transient output error from switching of the chopper circuit is reduced.
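To illustrate the chopper principle of paragraphs [0103]-[0104] numerically, the following is a minimal discrete-time sketch (illustrative only) in which an input offset is modulated to the chopping frequency and then removed by averaging, which stands in here for the low-pass filters of Figure 7:

    # Minimal discrete-time illustration of chopper offset cancellation:
    # the signal is modulated, amplified with an offset, demodulated, and
    # low-pass filtered (here: a simple average over whole chopping periods).

    GAIN = 5.0        # amplifier gain
    OFFSET = 10e-3    # amplifier input offset voltage, V
    VIN = 2e-3        # DC input signal, V (e.g., Vx across a TSV)

    def chopped_output(n_samples: int = 1000) -> float:
        acc = 0.0
        for k in range(n_samples):
            chop = 1.0 if k % 2 == 0 else -1.0   # chopper clock (phi1/phi2)
            v = chop * VIN                       # modulation switch
            v = GAIN * (v + OFFSET)              # the offset adds before the gain
            v = chop * v                         # demodulation switch
            acc += v                             # averaging ~ low-pass filter
        return acc / n_samples

    print(chopped_output())        # ~= GAIN * VIN = 0.010 V; the offset averages out
    print(GAIN * (VIN + OFFSET))   # 0.060 V: what an un-chopped stage would give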
The reduced resistance of the wiring structure may allow the amplifiers to be placed further away from the TSVs while maintaining the operating characteristics of the test circuit. In some embodiments, the wiring structure may be a mesh wiring structure, where multiple TSV buffer circuits (e.g., TSV buffer circuits 522a-b of Figure 5) are coupled together by a grid of conductive elements which is also coupled to one or more of the amplifiers of the IF die. Figures 10-14 show example mesh wiring structures, one or more of which may be used to couple the amplifiers of Figure 5 (and/or Figures 15-17) to the TSV buffer circuits. [0107] Figure 10 shows an example mesh wiring structure 1000 according to an embodiment of the present disclosure. The mesh wiring structure 1000 may be a part of a memory device (e.g., memory device 500 of Figure 5) and may be used to couple a current from the force amplifier 1004 to signal TSV regions 1046. The mesh wiring structure 1000 may be part of the IF die (e.g., IF die 508 of Figure 5). Figure 10 may be a representation of a 'top down' view of the memory device (e.g., looking toward a layer of the memory device, as opposed to a cross-section of the layers of the memory device). The mesh wiring structure 1000 includes a first wiring layer (horizontal solid lines of the mesh) 1040 and a second wiring layer (vertical dotted lines of the mesh) 1042. The mesh wiring structure is shown overlaid on a layer of signal TSV regions 1046, which are depicted as open boxes, and current supply circuits 1021, which are shown as shaded boxes. [0108] The mesh wiring structure 1000 includes conductive lines in a first wiring layer 1040 and conductive lines in a second wiring layer 1042. The conductive lines in the first and second wiring layers 1040, 1042 may be coupled together by via holes 1044. The conductive lines may be arranged in a grid pattern, with the conductive lines in the first layer 1040 being roughly parallel to each other, the conductive lines in the second layer 1042 being roughly parallel to each other, and the lines in the first wiring layer 1040 being roughly perpendicular to the lines in the second wiring layer 1042. The via holes 1044 may be placed at the intersections of the lines in the first and second wiring layers 1040, 1042 to couple the first wiring layer 1040 to the second wiring layer 1042. The lines in the first wiring layer 1040 may correspond to the Force<i> wiring of Figure 6. The first wiring layer 1040 may be formed from a first metal layer of the die. For example, in some embodiments, the first wiring layer is an M3 metal layer of the memory device. The lines of the second wiring layer 1042 may align with the current supply circuits (FD) 1021 as part of a power source TSV area of the memory die. The second wiring layer 1042 may be formed from a second metal layer of the die. For example, in some embodiments, the second wiring layer 1042 may be an M2 metal layer of the memory device. In some embodiments, the via holes 1044 may be the Via3 between the M3 and M2 metal layers of the memory device. [0109] The force amplifier 1004 may be located outside (e.g., not underneath) the area of the TSV block. The force amplifier 1004 may provide a current (e.g., Force_if) to the mesh wiring structure 1000. The current Force_if may be coupled along a line of the first wiring layer 1040 and may be coupled through via holes 1044 to the lines of the second wiring layer 1042 and through additional via holes 1044 to the other lines of the first wiring layer 1040.
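As an illustrative aside, the grid just described can be modeled as a square resistor network to see why a mesh has a lower worst-case resistance than a single wire: each wire segment between via-hole intersections is one resistor, and nodal analysis gives the effective resistance from the amplifier tap to the farthest node. This is a minimal sketch under assumed values; the grid size and per-segment resistance are illustrative, not taken from the disclosure.

```python
# Minimal resistor-grid model of a mesh wiring structure (illustrative values).
import numpy as np

rows, cols = 8, 8            # via-hole intersections (like via holes 1044)
r_seg = 0.5                  # ohms per wire segment between intersections (assumed)
n = rows * cols

def idx(r, c):
    return r * cols + c

G = np.zeros((n, n))         # conductance (Laplacian) matrix of the grid
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((0, 1), (1, 0)):      # right and down neighbors
            rr, cc = r + dr, c + dc
            if rr < rows and cc < cols:
                i, j, g = idx(r, c), idx(rr, cc), 1.0 / r_seg
                G[i, i] += g; G[j, j] += g
                G[i, j] -= g; G[j, i] -= g

# Inject 1 A at the amplifier tap (one corner) and pull it out at the farthest
# node; ground the tap node so the singular Laplacian becomes solvable.
inj = np.zeros(n)
tap, far = idx(0, 0), idx(rows - 1, cols - 1)
inj[tap], inj[far] = 1.0, -1.0
v = np.zeros(n)
v[1:] = np.linalg.solve(G[1:, 1:], inj[1:])   # node 0 (the tap) held at 0 V

r_mesh = v[tap] - v[far]                      # effective tap-to-far resistance
r_single = ((rows - 1) + (cols - 1)) * r_seg  # one snaking wire, no mesh
print(f"mesh: {r_mesh:.2f} ohm vs single line: {r_single:.2f} ohm")
```

With these assumed values the corner-to-corner mesh resistance comes out several times lower than the single-line route, which is the intuition behind the low-resistance discussion that follows.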
The lines of the first wiring layer 1040 may be coupled (not shown in Figure 10) to the signal TSV regions 1046. As described herein, the signal TSV regions 1046 include both the signal TSVs (e.g., signal TSV 510 of Figure 5 or TSV 610 of Figure 6) and the TSV buffer circuits (e.g., TSV buffer circuits 522 of Figure 5 or TSV buffer circuit 600 of Figure 6). [0110] The mesh wiring structure 1000 may have a relatively low resistance between the force amplifier 1004 and any given point of the mesh wiring structure 1000. The reduced resistance may allow the force amplifier 1004 (and the CIA, not shown) to be placed further from the signal TSVs whose resistance they are measuring. The force amplifier 1004 (and the CIA) may be located in a vacant region of the memory device chip, and may not be located underneath the TSV blocks of the memory device. The force amplifier 1004 may be located in a region of the chip away from a center of the chip. The placement of the force amplifier 1004 outside the TSV block footprint may make it possible to neglect the chip size overhead of the TSV resistance measurement circuit. [0111] An example may be considered using a memory device with a layout and components as described in Figures 5-10. The chopper instrumentation amplifier may obtain an equivalent input offset of 240 µV and allow an input common mode voltage of 2% of Vdd to 98% of Vdd. The force amplifier may provide a constant current on the IF die of the memory device while a current is supplied by current supply circuits on a core die of the memory device. This may reduce the resistance between the current source and the signal TSVs (Rfl_core) since the current supply circuits may be adjacent to the signal TSVs. The memory device may also have a low resistance between the signal TSV and the force amplifier (Rfl_if) due to the mesh wiring structure. The Rfl_if between the farthest point marked in Figure 10 and the force amplifier may be about 278 mΩ. [0112] Considering the characteristics of this example for testing the resistance of a TSV, in one example the current Iref may be about 800 µA, while the gain of the chopper instrumentation amplifier is about 125. From this, an output scale of 10 Ω/V can be determined, which, if the test circuitry is sensitive to 0.5 V, allows for rejection of TSVs with 5 Ω or more of resistance. The input conversion offset of 240 µV yields an offset of 30 mV at the output, which translates to an error of 0.3 Ω. The input common mode voltage lies at about 0.7 V, with a voltage drop of Vdd − 0.24 V − the FD device drop, where the FD device drop is the voltage drop of the current supply circuit. In some embodiments, it may be possible to further increase the current (Iref), and thus measure a smaller resistance, if the current supply capacity (MOS size) of the current supply circuit is sufficient. [0113] Figures 11-14 show mesh wiring structures (1100-1400, respectively) in accordance with embodiments of the present disclosure. In the embodiments of Figures 11-14, the force amplifiers 1104-1404 may provide current along both the core and IF sides of the TSVs. The mesh wiring structures 1100, 1200 of Figures 11-12 may be used to couple the force amplifier 1104, 1204 to the upper portion of a signal TSV in the IF die, while the mesh wiring structures 1300, 1400 of Figures 13-14 may be used to couple force TSVs in each layer of the memory dies to the lower portion of signal TSVs in each layer of the memory dies.
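Before continuing with Figures 11-14, the arithmetic of the example in paragraphs [0111] and [0112] above can be restated as a short script so the output scale and error terms are easy to check. The numeric values are the ones given in the text; the script itself is merely illustrative.

```python
# Worked numbers from the example above (paragraphs [0111]-[0112]).
iref = 800e-6          # force current: 800 uA
gain = 125             # chopper instrumentation amplifier gain

ohms_per_volt = 1.0 / (iref * gain)
print(ohms_per_volt)                 # -> 10.0 ohm per volt of amplifier output

v_detect = 0.5                       # output sensitivity of the test circuitry
print(ohms_per_volt * v_detect)      # -> 5.0 ohm rejection threshold

v_os_in = 240e-6                     # equivalent input offset
v_os_out = v_os_in * gain            # -> 0.03 V offset at the output
print(v_os_out, ohms_per_volt * v_os_out)   # -> 0.03 V and 0.3 ohm error
```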
Thus, the memory device utilizing the mesh wiring structures 1100-1400 may generally be similar to the memory device 100 of Figure 1, except that the mesh wiring structure 1100/1200 may replace the Force_if line in the IF die 108 and/or the mesh wiring structures 1300/1400 may replace one or more of the Force<i> lines in the layers 106a-d. [0114] Figure 11 shows a schematic diagram of a mesh wiring structure 1100 according to an embodiment of the present disclosure. The mesh wiring structure 1100 may generally be similar in layout to the mesh wiring structure 1000 of Figure 10. The mesh wiring structure 1100 includes conductive lines in a first wiring layer 1140 and conductive lines in a second wiring layer 1142 arranged in a grid. The intersections of the conductive lines in the first and second wiring layers 1140, 1142 may be coupled by via holes (not shown). In contrast to the mesh wiring structure 1000 of Figure 10, in the mesh wiring structure 1100 the first wiring layer 1140 may be the M2 metal layer while the second wiring layer 1142 may be the M3 metal layer. The mesh wiring structure 1100 may be part of the IF die and may lie underneath the TSV block. [0115] Figure 12 is a schematic diagram of a portion 1200 of the mesh wiring structure 1100 of Figure 11 according to an embodiment of the present disclosure. The view of Figure 12 may be enlarged compared to the view of Figure 11. The portion 1200 includes conductive lines in the first and second wiring layers 1140, 1142 coupled to the force amplifier 1104. The conductive lines may be overlaid on TSV regions 1246. The TSV regions 1246 (also shown as an inset) may include the lower portion of the signal TSV 1210 and the TSV buffer circuit 1222. The TSV buffer circuit 1222 includes transistors Sense MOS and Force MOS, which have sources coupled to the signal TSV 1210. Sense MOS has a drain which is coupled to a signal line (not shown). Force MOS has a drain which couples to the conductive lines in the first wiring layer 1140. In this manner the force amplifier 1104 may be selectively coupled to the signal TSVs 1210 along the mesh wiring structure 1100. [0116] Figure 13 shows a schematic diagram of a mesh wiring structure 1300 according to an embodiment of the present disclosure. The mesh wiring structure 1300 may generally be similar to the mesh wiring structure 1100 of Figure 11, except that the mesh wiring structure 1300 is included in the layers of the memory die rather than in the IF die. Accordingly, the conductive lines in the first wiring layer 1340 may be coupled to a force TSV 1312 (which in turn is coupled to the force amplifier). [0117] Figure 14 is a schematic diagram of a portion 1400 of the mesh wiring structure 1300 of Figure 13 according to an embodiment of the present disclosure. The portion 1400 may generally be similar to the portion 1200 of Figure 12. The TSV portion 1446 may include a lower portion of a signal TSV 1410 coupled to the TSV buffer circuit 1422. The TSV buffer circuit may include transistors Force MOS and Sense MOS which have drains coupled to the signal TSV 1410. The source of Sense MOS may be coupled to a sense line of the layer of the memory die (not shown), while the source of the Force MOS may be coupled to the conductive lines of the first wiring layer 1340. In this manner the force TSV 1412 may be selectively coupled to the signal TSV 1410 through the mesh wiring structure 1300. [0118] Figure 15 shows a memory device 1500 in accordance with an embodiment of the present disclosure.
The memory device 1500 may be generally similar to the memory device 500 of Figure 5. The memory device 1500 includes a plurality of layers 1506a-d stacked on top of an IF die 1508. The layers 1506a-d and the IF die 1508 are coupled via TSVs including signal TSVs 1510, sense TSVs 1512, and power TSVs 1520a-b. The signal TSVs 1510 are coupled to TSV buffer circuits 1522a-b, and the power TSVs 1520a-b are coupled to current supply circuits 1521. Each layer 1506a-d and the IF die 1508 includes shift register circuits 1523 to control the TSV buffer circuits 1522a-b and the current supply circuits 1521. The IF die 1508 includes a force amplifier 1504 and a chopper instrumentation amplifier 1502. In the interest of brevity, features and components similar to those described with respect to the memory device 500 will not be repeated here. [0119] The memory device 1500 includes a chopper instrumentation amplifier 1502 which is coupled to a separate power supply voltage from the rest of the components of the memory device 1500. In some embodiments, the CIA 1502 may be coupled to a power supply voltage Vpp which is different from the power supply voltage Vdd that the force amplifier 1504 and the current supply circuits 1521 are coupled to. The power supply voltage Vpp may be higher than Vdd. The chopper instrumentation amplifier 1502 may share a common ground voltage (e.g., 0 V) with the other components of the memory device 1500. [0120] The increased power supply voltage (Vpp > Vdd) of the CIA 1502 may eliminate the restriction on the upper limit of the input common mode voltage Vcom_max (see equation 16, above). The transistor size of the current supply circuit 1521 may be increased (relative to the current supply circuit 521 of Figure 5) to increase a power supply capacity. This may cause the common mode voltage to be Vdd when the resistance of a signal TSV 1510 near the power supply circuit 1521 is measured. Since the common mode voltage is Vdd, but the upper limit of the common mode voltage is dependent on Vpp > Vdd, the upper limit will not be exceeded in this example. [0121] Figure 16 shows a memory device 1600 in accordance with an embodiment of the present disclosure. The memory device 1600 may generally be similar to the memory device 1500 of Figure 15. For the sake of brevity, components and features similar to those described previously are not described again. [0122] Similar to the memory device 1500 of Figure 15, the memory device 1600 includes a CIA 1602 which is coupled to a separate power supply voltage Vpp. The power supply voltage Vpp may be higher than the power supply voltage Vdd provided to the other components of the memory device 1600. As described with respect to Figure 15, increasing the power supply voltage of the CIA 1602 to Vpp may allow for increased power in the current supply circuits 1621, up to Vdd. [0123] In the memory device 1600, the current supply circuits 1621 may directly couple the voltage Vdd to the force lines Force<i> in each of the layers 1606a-d. Since there is no transistor as part of the current supply circuit 1621, there is no need to increase the size of a transistor to accommodate the increased power, and layout of the components in each layer 1606a-d may be easier. [0124] Figure 17 shows a memory device 1700 in accordance with an embodiment of the present disclosure. The memory device 1700 may generally be similar to the memory device 1600 of Figure 16.
For the sake of brevity, components and features similar to those described previously are not described again. [0125] As with the memory device 1600, the CIA 1702 is coupled to a power supply voltage Vpp. Unlike the memory device 1600, in the memory device 1700 the power supply circuits have been eliminated, and thus the lines Force<i> have also been eliminated from the memory device 1700. [0126] In the memory device 1700, the TSV buffer circuits 1722a of the layers 1706a-d are coupled directly to the power supply voltage Vdd. In particular, a force MOS of the TSV buffer circuit 1722a has a source which is coupled to Vdd and a drain which is coupled to the lower portion of a signal TSV 1710. The elimination of the Force<i> wiring may be useful in cases where the wiring tracks in the area around the TSVs are short. [0127] Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods. [0128] Finally, the above discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
A method of thermally testing whether an integrated circuit die is attached to a die pad is provided. Heat is applied from an external heat source to a first side of the die pad. The die is attached to a second side of the die pad. The second side of the die pad is opposite the first side of the die pad. A temperature of the die is measured at a first die location on the die. The die pad is located between the external heat source and the first die location. This method may be performed using a thermal test apparatus having a socket and the external heat source. The socket is adapted to receive and retain one or more leads of a die package. The apparatus may further include a mechanism adapted to move the heat source toward the first side of the die pad.
1. A method of thermally testing an integrated circuit die attached to a die pad, the method comprising:applying heat from an external heat source to a first side of the die pad, wherein the die is attached to a second side of the die pad, the second side of the die pad being opposite the first side of the die pad; andmeasuring a temperature of the die at a first die location on the die, wherein the die pad is located between the external heat source and the first die location.2. The method of claim 1, wherein the applying heat comprises touching the first side of the die pad with a heated object.3. The method of claim 1, wherein the applying heat comprises shining a laser beam on the first side of the die pad.4. The method of claim 1, wherein the applying heat comprises bringing a heated object proximate to the first side of the die pad.5. The method of claim 1, wherein the external heat source is a resistive heater, and wherein the applying heat comprises generating heat in the resistive heater using electricity.6. The method of claim 5, wherein the resistive heater is in physical contact with the first side of the die pad.7. The method of claim 1, wherein the applying heat is performed intermittently.8. The method of claim 1, wherein the temperature measuring is performed by measuring a leakage of current in a diode on the die.9. The method of claim 1, wherein the temperature measuring is performed by measuring a change in impedance of a device on the die.10. The method of claim 1, wherein the temperature measuring is performed by measuring a change in a property of a component on the die.11. The method of claim 1, wherein the die is a production die.12. The method of claim 1, wherein the die is a test die.13. A method of thermally testing an integrated circuit die attached to a die pad, the method comprising:applying heat from an external heat source to a first die pad location on a first side of the die pad, wherein the die is attached to a second side of the die pad, the second side of the die pad being opposite the first side of the die pad;measuring a first temperature of the die at a first die location on the die resulting from the applying heat to the first die pad location, wherein the die pad is located between the external heat source and the first die location;applying heat from the external heat source to a second die pad location on the first side of the die pad; andmeasuring a second temperature of the die at a second die location on the die resulting from the applying heat to the second die pad location, wherein the die pad is located between the external heat source and the second die location.14. The method of claim 13, wherein the second die location is the same as the first die location.15. The method of claim 13, wherein the second die location is different from the first die location.16. The method of claim 13, wherein the second die pad location is the same as the first die pad location.17. The method of claim 13, wherein the second die pad location is different from the first die pad location.18. 
A thermal testing apparatus comprising:a socket adapted to receive and retain one or more leads of a die package, wherein the die package further comprises a die and a die pad, the die pad having a first side and a second side, the first side of the die pad facing an opposite direction than the second side of the die pad, and wherein the die is attached to the second side of the die pad;a heat source being distinct and external relative to the die package, the heat source being adapted to apply heat to the first side of the die pad while the leads of the die package are retained by the socket; anda testing apparatus coupled to the die for testing the temperature of the die.19. The thermal testing apparatus of claim 18, wherein the heat source is an object adapted to be heated during use of the apparatus.20. The thermal testing apparatus of claim 18, further comprising a mechanism adapted to move the heat source toward the first side of the die pad.21. The thermal testing apparatus of claim 18, wherein the heat source is a laser emitter device.22. The thermal testing apparatus of claim 18, wherein the heat source is a resistive heater adapted to generate heat using a resistance to electricity.
TECHNICAL FIELD

The present invention relates generally to thermal testing of integrated circuit chips and packages. In one aspect it relates more particularly to thermally testing a die attachment to a die pad or a lead frame.

BACKGROUND

Heat dissipation from an integrated circuit (IC) die or chip during operation is typically an important issue, especially as the density of IC devices on a die continues to increase. Also, many devices now have combinations of high-power transistors and low-power transistors formed on a same die. Such high-power transistors tend to produce more heat than low-power transistors. Further, more system-on-chip configurations are being used. Thus, there are often a wide variety of IC devices on a same die. Some of the IC devices can handle and/or put out much more heat than nearby or neighboring devices on the same die. Hence, the reliability and effectiveness of heat dissipation for a packaged IC chip may greatly affect the reliability and/or performance of an IC chip during operation.

FIG. 1 shows a cross-section view of a typical die package 20 attached to a printed circuit board 22. Many IC chips 24 are housed in a package 20 having a die pad 26 (or die paddle) with an exposed side 28 (i.e., side 28 of the die pad 26 not being covered by the package plastic 30), as shown in FIG. 1 for example. Often a die pad 26 is an integral part of the lead frame structure (see e.g., lead frame 32 in FIG. 1). Generally, a die pad also may include any component that provides a thermal extension of the die pad 26 (e.g., a heat spreader or a slug). In such packaging configurations having an exposed die pad 26, the die 24 is usually adhered directly to the die pad 26 (see e.g., FIG. 1). An exposed die pad 26 is sometimes adhered to a printed circuit board (PCB) 22 to dissipate heat to the PCB 22 (see e.g., FIG. 1). Having the die 24 adhered to the die pad 26 increases the amount of heat transferred from the die 24 to the die pad 26. When a die 24 is not properly adhered to a die pad 26 or when part or all of the die 24 is not adhered to the die pad 26, the amount of heat transferred to the die pad 26 may be significantly reduced and less efficient. This is especially true when the die pad 26 is intended to be along the primary thermal path for heat dissipation from the die 24.

Thermal tests may be performed to determine whether the heat from a die 24 is being dissipated efficiently or sufficiently. In a package configuration where the die pad 26 is one of the primary heat sinks for transferring heat from a die 24 (see e.g., FIG. 1), the results of a thermal test may indicate whether a die 24 is sufficiently adhered to the die pad 26. In the past, thermal-impedance tests were performed by generating heat with the circuitry of the die 24. For example, a K-factor die or a production die that has high-power devices (e.g., motor drivers) was used to generate heat on the die 24. K-factor dies are typically used specifically for testing, and often include temperature sensing elements and resistor networks that cover most of the die surface. In production dies, the temperature of the die 24 may be derived from measuring the leakage of any parasitic diode or other silicon-based IC device because the leakage often has a linear relationship to the temperature of the die 24.
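As an illustrative aside, the roughly linear leakage-versus-temperature relationship just described can be captured in a small calibration function. This is a minimal sketch; the reference leakage, reference temperature, and slope are hypothetical calibration constants, not values from this document, and would come from characterizing the specific die.

```python
# Minimal sketch: infer die temperature from parasitic-diode leakage,
# assuming an approximately linear relationship over the range of interest.
def leakage_to_temperature(i_leak_na,
                           i_ref_na=10.0,        # leakage at t_ref_c (assumed)
                           t_ref_c=25.0,         # calibration temperature, deg C
                           slope_na_per_c=0.8):  # leakage change per deg C (assumed)
    """Convert a measured leakage (nA) to an estimated die temperature (deg C)."""
    return t_ref_c + (i_leak_na - i_ref_na) / slope_na_per_c

print(leakage_to_temperature(26.0))   # e.g., 26 nA -> 45.0 deg C
```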
Hence, certain output pins may be used as temperature sensors based upon the inherent behavior of the devices as temperature varies.

In these prior thermal testing methods (using a production die or a special testing die), the heat is generated on the die surface by internal components formed in the die 24 and the temperature is sensed (directly or indirectly) by internal components of the die 24, which are essentially at the same location (i.e., on the die 24). Using such tests, the die 24 is often driven with a relatively high power to generate enough heat for the test. Then, the amount of heat remaining on the die surface is used as an indication of the amount of heat dissipated from the die 24, presumably via the die pad 26 as a primary thermal path in some cases. However, there are many possible heat paths for dissipating heat from the die 24, other than via the die pad 26 (e.g., through leads 32, through package plastic 30). Thus, such prior testing methods may not accurately test the heat path between the die 24 and the die pad 26.

SUMMARY OF THE INVENTION

The problems and needs outlined above may be addressed by embodiments of the present invention. In accordance with one aspect of the present invention, a method of thermally testing whether an integrated circuit die is attached to a die pad is provided. This method includes the following steps described in this paragraph. The order of the steps may vary: the steps may be sequential, may overlap, may be performed in parallel, and combinations thereof. Heat is applied from an external heat source to a first side of the die pad. The die is attached to a second side of the die pad. The second side of the die pad is opposite the first side of the die pad. A temperature of the die is measured at a first die location on the die. The die pad is located between the external heat source and the first die location.

This paragraph describes some variations or alternatives for the method described in the immediately preceding paragraph, any of which may be applied in any suitable combination. The applying of heat may be by touching the first side of the die pad with a heated object, shining a laser beam on the first side of the die pad, bringing a heated object proximate to the first side of the die pad, or combinations thereof. The external heat source may be a resistive heater, and the applying of heat may include generating heat in the resistive heater using electricity. The resistive heater may be in physical contact with the first side of the die pad. The applying of heat may be performed intermittently. The measuring of temperature may be performed by measuring a leakage of current in a parasitic diode on the die. The measuring of temperature may be performed by measuring a change in impedance of a device on the die. The measuring of temperature may be performed by measuring a change in a property of a device that is part of an integrated circuit on the die. The die may be a production die. The die may be a test die.

In accordance with another aspect of the present invention, a method of thermally testing whether an integrated circuit die is attached to a die pad is provided. This method includes the following steps described in this paragraph. The order of the steps may vary: the steps may be sequential, may overlap, may be performed in parallel, and combinations thereof. Heat from an external heat source is applied to a first die pad location on a first side of the die pad. The die is attached to a second side of the die pad.
The second side of the die pad is opposite the first side of the die pad. A first temperature of the die is measured at a first die location on the die resulting from the applying heat to the first die pad location. The die pad is located between the external heat source and the first die location. Heat from the external heat source is applied to a second die pad location on the first side of the die pad. A second temperature of the die is measured at a second die location on the die resulting from the applying heat to the second die pad location. The die pad is located between the external heat source and the second die location. The second die location may be the same as or different from the first die location. The second die pad location may be the same as or different from the first die pad location.

In accordance with another aspect of the present invention, a thermal testing apparatus is provided, which includes a socket and a heat source. The socket is adapted to receive and retain one or more leads of a die package. The die package further includes a die and a die pad. The die pad has a first side and a second side. The first side of the die pad faces an opposite direction from the second side of the die pad. The die is attached to the second side of the die pad. The heat source is distinct and external relative to the die package. The heat source is adapted to apply heat to the first side of the die pad while the lead(s) of the die package are retained by the socket. The heat source may be an object adapted to be heated during use of the apparatus, for example. The apparatus may further include a mechanism adapted to move the heat source toward the first side of the die pad. The heat source may be a laser emitter device. The heat source may be a resistive heater adapted to generate heat using a resistance to electricity.

The foregoing has outlined rather broadly the features of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures or processes for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

The following is a brief description of the drawings, which illustrate exemplary embodiments of the present invention and in which:

FIG. 1 is a cross-section view of a typical die package attached to a printed circuit board;

FIG. 2 is a cross-section view of a die package being tested using a method and apparatus of a first illustrative embodiment of the present invention;

FIG. 3 is a cross-section view of a die package being tested using a method and apparatus of a second illustrative embodiment of the present invention; and
FIG. 4 is a cross-section view of a die package being tested using a method and apparatus of a third illustrative embodiment of the present invention.

DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

Referring now to the drawings, wherein like reference numbers are used herein to designate like or similar elements throughout the various views, illustrative embodiments of the present invention are shown and described. The figures are not necessarily drawn to scale, and in some instances the drawings have been exaggerated and/or simplified in places for illustrative purposes only. One of ordinary skill in the art will appreciate the many possible applications and variations of the present invention based on the following illustrative embodiments of the present invention.

Thermal testing of a packaged IC chip is often performed for package characterization. For example, a new package configuration and/or new package materials (e.g., mold compound, lead frame) may be tested to determine heat dissipation and/or other characteristics of a package. Often such tests are performed with test dies to determine how much heat may be dissipated from a package. This information may then be used to determine which production dies may be used with the tested package.

Sometimes, however, it is desirable to test a production part that has been returned by a customer to determine the source of failure. Many production parts cannot dissipate enough power on the die to get a meaningful measurement to evaluate the die attach interface. Because some devices are low power devices (e.g., CMOS logic devices), there may be no way to drive them at the power level needed to develop a desired temperature on the die surface for performing a prior thermal test. Also, sometimes a returned part is not functioning electrically or there is a question about the electrical functionality, which may preclude generating heat on the die surface using the IC circuitry on the die.

As discussed briefly above regarding FIG. 1, some IC chip packages 20 have a die pad 26 with one of its sides 28 exposed. In such packages 20, the die pad 26 may be used as a primary heat transfer path from a die 24 to a PCB 22, another heat sink, and/or the surrounding environment, during operation of the IC chip 24.

In prior thermal testing methods (e.g., thermal-impedance tests), heating the packaged die 24 is performed by internally heating the die 24 using circuits or devices formed on the die 24, and the temperature measuring is performed on the surface of the die 24. Measuring how hot the die surface gets with a given amount of power may indicate how much heat is dissipated to the PCB 22. But in such tests, the heat is measured at the same surface where the heat is generated, even though the heat path of interest is typically from the die 24 to the PCB 22 via the die pad 26. It is often desirable to know about the adhesion or die attach interface between the die 24 and die pad 26 when the die pad 26 is intended to be a primary heat sink for the die 24. In such prior tests, it often takes a relatively large amount of power going through the circuitry of the die 24 to generate enough heat to effectively measure the heat transfer at the interface between the die 24 and the die pad 26.

Other tests may use an acoustic test to evaluate the die attach interface using acoustic transmission (assuming that the acoustic transmission correlates with thermal transmission).
It is often preferred to perform a thermal transmission test when the interest is in the thermal dissipation performance from the die 24 through the die pad 26.

Generally, an embodiment of the present invention provides a method of thermally testing whether an integrated circuit chip or die is attached to a die pad using an external heat source. Illustrative embodiments of the present invention are shown in FIGS. 2-4, in cross-section views. In a preferred embodiment of the present invention, heat is applied to a first side 28 of the die pad 26 (e.g., the exposed side of the die pad 26), where the die 24 is attached to a second side 34 of the die pad 26 (the second side 34 being opposite the first side 28 of the die pad 26). The external heat source 35 used to apply the heat to the first side 28 of the die pad 26 may vary for embodiments of the present invention, and the external heat source 35 may be any suitable way of generating heat on the first side 28 of the die pad 26. Preferably, the temperature measurements are derived from measuring leakage(s) of any parasitic diode or other temperature-affected structures or devices (e.g., silicon-based IC devices, resistance in metal wire) on the die 24 (as in prior methods). However, other ways of obtaining temperature measurements of the die 24 may be used as well in an embodiment of the present invention.

Thus, by applying heat to the first side 28 of the die pad 26 and measuring the temperature on the die surface, the die attachment interface 38 and the die pad 26 are located along the thermal path between the external heat source 35 and the measurement location (e.g., on die 24). This provides a much more accurate thermal indication of whether the die 24 is sufficiently attached to the die pad 26 than prior methods. It is a more direct method of measuring the heat flow through the die attachment interface 38 because the die attachment interface 38 is directly along the thermal path between the heat source and the measurement location. Also, using a testing method of the present invention requires much less power and heat to obtain sensitive and meaningful measurements of the die attachment interface 38 than the prior thermal methods discussed above.

In a first embodiment of the present invention, the external heat source 35 is a heated object 36 or heated mass. As illustrated in FIG. 2, the heat may be applied to the first side 28 of the die pad 26 by touching the heated object 36 to the first side 28 of the die pad 26. Alternatively, the heated object 36 may not actually touch or physically contact the first side 28 of the die pad 26 during a thermal test. It may be sufficient to place the heated object 36 near or proximate to the first side 28 of the die pad 26. Also, it may not be practical or possible to touch the heated object 36 to the first side 28 of the die pad 26 in some cases. As yet another alternative, one or more intermediate materials (not shown) may be placed between the heated object 36 and the first side 28 of the die pad 26. In such case, the heated object 36 may press against the first side of the die pad 26 with the intermediate material(s) located therebetween.

The heated object 36 may be made from any of a variety of suitable materials, including (but not limited to): metal, ceramic, and combinations thereof, for example. Also, the size and shape of the heated object 36 may vary to provide a suitable shape and configuration for testing one, several, or many package configurations.
The heated object 36 may be heated by any suitable source of energy, including (but not limited to): fuel, electricity, steam, fluid, and combinations thereof, for example.

In a preferred thermal testing machine 40 for a first embodiment, the thermal testing machine 40 may include sockets (not shown) adapted to receive and retain one or more leads 32 of a die package 20. The sockets may be electrically connected to appropriate leads 32 of a die package 20 for obtaining leakage measurements from the IC devices therein, for example, while also retaining the die package 20 during testing. In such case the thermal testing machine 40 may be adapted to move the heated object 36 towards the first side 28 of the die pad 26 (see e.g., FIG. 2). It may be preferred to apply heat continuously for a period of time or intermittently (at regular or irregular intervals).

In the first embodiment illustrated in FIG. 2, the heating area where the tip 42 of the heated object 36 touches or comes close to the first side 28 of the die pad 26 is about the same as the die-to-die pad interface area. In a variation of the first embodiment (not shown), the tip 42 of the heated object 36 may have a smaller heating area. For example, the tip 42 of the heated object 36 may have a heating area that is much smaller (e.g., about 5-25%) than the die-to-die pad interface area. In such cases, the heated object 36 may provide localized heating of the first side 28 of the die pad 26. The application location of the heating may be moved or scanned across the first side 28 of the die pad 26 (in any suitable pattern) to map out delamination regions for the die attachment interface 38, for example (see the sketch following this passage).

In a second embodiment of the present invention, a beam 44 of light (e.g., a laser) or heat may be projected onto the first side 28 of the die pad 26 for the heating, as illustrated in FIG. 3. The beam 44 of light or heat may have any suitable beam cross-section area. For beam cross-section areas that are much smaller than the die pad 26 (see e.g., FIG. 3), the beam 44 may be moved or scanned about the die pad 26 (in any suitable pattern) for testing the entire die attach interface 38. For some packages, there may be one or a few known locations where delamination is more likely to occur. In such cases, it may be sufficient to test one or a few locations on the die pad 26 to assess the die attachment interface 38. An advantage of using a laser is that the heat may be pulsed or applied intermittently with more precision than with a heated object 36. Applying heat with a laser or a focused beam of heat also may allow a large temperature differential, relatively quickly. Such heating and rapid changes in temperature differential may be used to cycle or intermittently heat the die pad 26. This may provide for transient or dynamic temperature measurements. However, an advantage of a heated object 36 is that it may be easier to apply heat uniformly or substantially uniformly across the entire surface of, or a majority of, the first side 28 of the die pad 26. The heated object 36 may also be used to provide dynamic or impulse testing (e.g., intermittently touching the die pad 26 with the heated object 36).

An advantage of performing a thermal test with heat applied dynamically or intermittently is that the test may be performed more quickly and/or more accurately by observing the transient or dynamic heating.
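As an illustrative aside before continuing with the advantages of dynamic heating, the localized-heating scan mentioned above can be sketched as a simple raster loop that records a per-spot heat-up time; slow or timed-out spots suggest delamination beneath them. The apply_heat_at and time_to_threshold_s hooks into the test machine, and the timeout, are hypothetical.

```python
# Minimal sketch of rastering a small heat spot over the die pad to map
# delamination regions. Hooks into the test machine are hypothetical.
def map_die_attach(scan_points, apply_heat_at, time_to_threshold_s,
                   timeout_s=0.05):
    delamination_map = {}
    for x, y in scan_points:
        apply_heat_at(x, y)                 # localized heating at this spot
        t = time_to_threshold_s(timeout_s)  # seconds to see a rise; None on timeout
        delamination_map[(x, y)] = t if t is not None else float("inf")
    return delamination_map

# Example scan grid: a coarse 4 x 4 raster (positions in mm, illustrative).
grid = [(i * 1.0, j * 1.0) for i in range(4) for j in range(4)]
```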
Also, by applying the heat momentarily rather than for a long period of time, larger temperature differentials may be obtained and larger heat power applied for a short period of time without damaging the components inside the packaged chip. A larger temperature differential may allow for more accurate measurements and inspection.

A third embodiment of the present invention is illustrated in FIG. 4. In the third embodiment of the present invention, a resistive heating element 50 may be applied to or placed near the first side 28 of the die pad 26.

An embodiment or method of the present invention is advantageous for exposed die pad packages. However, an embodiment or method of the present invention may also be used to thermally test a package (not shown) where the first side 28 of the die pad 26 is not exposed. Hence, a layer of package plastic 30 may be located between the external heat source 35 and the first side 28 of the die pad 26 in such case, for example. However, in such packages where the die pad 26 is enclosed in plastic, the primary heat dissipation path is typically through the leads 32. Thus, the external heat source 35 may be applied to the leads 32 rather than the first side 28 of the die pad 26 in such cases to provide a more direct thermal path for testing (as another alternative embodiment).

Still another advantage of a method or embodiment of the present invention is that most or all production chips may be tested. Production chips may be tested prior to completing the packaging (e.g., not yet encapsulated in plastic 30) or after completing the package 20, and prior to shipping the product. For example, a thermal test method of the present invention may be used for quality control testing during production. If the thermal test is a dynamic test, the die pad 26 may be heated for a short duration (e.g., a few milliseconds) or until the heat is detected, to determine whether there is delamination between the die 24 and the die pad 26. By heating the die pad 26 for only a short period of time, it is unlikely to damage the circuitry on the die 24. In a method of the present invention, the heat may be applied until a certain temperature is measured. Then, the length of time that the heat was applied may be used as an indication of whether there is delamination. Hence, a short heat-up time would mean that the thermal path between the first side 28 of the die pad 26 and the die 24 is good, which would indicate that there is probably no delamination. In contrast, a longer heat-up time would indicate a poor thermal path between the first side 28 of the die pad 26 and the die 24, which may be due to delamination at the die attachment interface 38. If a certain heat level is not detected within a predetermined period of time, the heat may be discontinued to prevent damage to the circuitry on the die 24. Also, as a dynamic test, cool-down time performance may be used as a measurement index. The rate of temperature change (positive and/or negative) indicates the thermal capacity of the measurement point with respect to the heat source.

Another advantage of a thermal testing method of the present invention is that it is faster than prior thermal testing methods. Yet another advantage of an embodiment of the present invention may be that a die attachment interface 38 may be thermally tested without exceeding 100 degrees Celsius in the die 24.
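The dynamic heat-up-time test just described can likewise be sketched as a small control loop: pulse the heater, time how long the on-die sensor takes to reach a threshold, and stop early to protect the die. This is a minimal sketch, assuming hypothetical heater and sensor hooks; the threshold, cutoff, and pass/fail times are illustrative, not values from this document.

```python
# Minimal sketch of the dynamic (heat-up-time) die attach test.
import time

def dynamic_die_attach_test(heater_on, heater_off, read_die_temp_c,
                            threshold_c=45.0,   # rise that proves the thermal path
                            cutoff_s=0.010,     # give up before risking damage
                            good_max_s=0.004):  # short heat-up = good attach
    heater_on()
    start = time.monotonic()
    try:
        while time.monotonic() - start < cutoff_s:
            if read_die_temp_c() >= threshold_c:
                elapsed = time.monotonic() - start
                return "good" if elapsed <= good_max_s else "suspect"
        return "likely delaminated"   # heat never arrived within the cutoff
    finally:
        heater_off()                  # always remove heat to protect the die
```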
In prior thermal testing methods where the heat was generated using components on the die 24, the temperature of the die 24 would often exceed 100 degrees Celsius to provide a great enough temperature differential at the die attachment interface 38 for getting a meaningful measurement. But generating such high temperatures on the die 24 often creates failures in the device or the thermal path under evaluation (e.g., the die attachment interface 38) while measurements are being obtained. Sometimes exceeding 100 degrees Celsius in the die 24 causes trapped water vapor or air to burst or break chip components or layers to relieve the pressure, which causes damage to the die 24.

Using an external heat source 35 in accordance with an embodiment of the present invention, the heat source temperature may be much greater than 100 degrees Celsius while still providing a large temperature differential across the die attachment interface 38 and while not exceeding 100 degrees Celsius inside the package 20. The highest temperature is located on the outside and at the die pad 26, rather than inside the die 24, during thermal testing using an embodiment of the present invention. Thus, a more sensitive thermal test may be performed across the die attachment interface 38 without introducing internal damage to the die 24.

Although embodiments of the present invention and at least some of its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
Slave initiated interrupts for a communication bus are disclosed. In one aspect, the communication bus is a radio frequency front end (RFFE) bus, and a slave is allowed to indicate to a master on the RFFE bus that the slave has an interrupt condition. On receipt of a slave initiated interrupt, the master may initiate a polling sequence to determine which of a plurality of slaves associated with the RFFE bus initiated the interrupt and process the interrupt accordingly. Continuing the exemplary aspect, the slave may indicate the interrupt condition to the master by driving a clock line of the RFFE bus to a non-idle state. The master may detect this manipulation of the clock line and initiate the polling sequence.
1. A method for detecting an interrupt from a slave device on a radio frequency front end (RFFE) bus, the method comprising: maintaining a clock line in the RFFE bus at a logic low when the RFFE bus is idle; detecting, with a detection circuit at a master device associated with the RFFE bus, a logic high on the clock line when the RFFE bus is idle; and initiating an interrupt query from the master device.
2. The method of claim 1, wherein initiating the interrupt query comprises polling slave devices associated with the RFFE bus.
3. The method of claim 1, wherein initiating the interrupt query comprises performing a weighted polling of slave devices associated with the RFFE bus.
4. The method of claim 1, wherein initiating the interrupt query comprises polling only slave devices associated with the RFFE bus that are authorized to provide an interrupt.
5. The method of claim 1, further comprising driving the clock line with a clock signal when the RFFE bus is not idle.
6. The method of claim 1, wherein initiating the interrupt query comprises initiating the interrupt query before a time period for polling has elapsed.
7. The method of claim 1, wherein initiating the interrupt query comprises one of: polling slave devices having even addresses and then polling slave devices having odd addresses; or polling slave devices having odd addresses and then polling slave devices having even addresses.
8. The method of claim 1, wherein initiating the interrupt query comprises using a lookup table to define an order in which to poll slave devices.
9. The method of claim 1, wherein initiating the interrupt query comprises polling slave devices in ascending address order.
10. The method of claim 1, wherein initiating the interrupt query comprises polling slave devices in descending address order.
11. A master device comprising: a radio frequency front end (RFFE) interface configured to be coupled to an RFFE bus; a clock source coupled to the RFFE interface; a transceiver coupled to the RFFE interface; and a detection circuit coupled to the RFFE interface and configured to: detect when a clock line of the RFFE bus is pulled high by a slave device associated with the RFFE bus; and initiate an interrupt query through the transceiver.
12. The master device of claim 11, wherein the master device is integrated into an integrated circuit (IC).
13. The master device of claim 11, wherein the master device is integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a smart phone; a tablet device; a phablet; a server; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; and a portable digital video player.
14. A method for a slave device to signal an interrupt on a radio frequency front end (RFFE) bus, the method comprising: detecting, at a slave device coupled to the RFFE bus, an interrupt condition in the slave device; driving, at the slave device, a clock line of the RFFE bus from an idle state to a modified state to indicate the interrupt condition at the slave device to a master device; and subsequently responding to an interrupt query from the master device.
15. The method of claim 14, wherein
driving the clock line to the modified state comprises pulling the clock line to a logic high.
16. The method of claim 14, wherein driving the clock line comprises waiting for the clock line to be idle before driving the clock line.
17. A slave device comprising: a radio frequency front end (RFFE) interface configured to be coupled to an RFFE bus; a transceiver coupled to the RFFE interface; and an interrupt circuit coupled to the RFFE interface and configured to: receive an indication that the slave device has an interrupt condition; and drive a clock line in the RFFE bus from an idle state to a modified state.
SLAVE INITIATED INTERRUPTS FOR A COMMUNICATION BUS

PRIORITY APPLICATION

This application claims priority to U.S. Patent Application Serial No. 15/220,077, filed on Jul. 26, 2016, entitled "SLAVE INITIATED INTERRUPTS FOR A COMMUNICATION BUS," which is hereby incorporated by reference in its entirety.

BACKGROUND

I. Field of the Disclosure

The techniques of the present disclosure generally relate to interrupt signaling on a communication bus.

II. Background

Computing devices are becoming more common in contemporary society. Mobile phones are among the more common computing devices. While these devices may initially have appeared as simple devices that allow audio communications over the Public Land Mobile Network (PLMN) to the Public Switched Telephone Network (PSTN), they have evolved into smartphones that support a full multimedia experience and multiple wireless protocols. Even within cellular wireless protocols, mobile phone radios have evolved into highly complex, multi-band, and multi-standard designs that often have multiple radio frequency (RF) signal chains. Each component in the RF signal chain must be in the desired configuration at any given time or the system will fail. Therefore, accurate timing, triggering, and speed are all necessary.

As further explained on the MIPI Alliance website, the "MIPI Alliance Specification for RF Front End Control Interface (RFFE)" was developed to provide a common and widely adopted method for controlling RF front end equipment. There are a variety of front-end devices, including power amplifiers (PAs), low noise amplifiers (LNAs), filters, switches, power management modules, antenna tuners, and sensors. These functions can be located in separate devices or integrated into a single device, depending on the application. The trend in mobile radio communications is toward complex multi-radio systems that include several parallel transceivers. This implies a leap in the complexity of RF front-end design. Therefore, the RFFE bus must be able to operate efficiently in a variety of configurations, from the simplest configuration of one master device and one slave device to a multi-master configuration with potentially dozens of slave devices.

In devices with an RFFE bus, the RFFE protocol specifies that the master device periodically polls the slave devices on the RFFE bus to determine whether a slave device has an interrupt condition. Exemplary slave devices include antenna switches and low noise amplifiers. In a typical implementation, this poll occurs every millisecond. Cellular protocols are becoming more and more strict about latency, and if the master device waits an entire millisecond before polling the antenna switch, the mobile device may not comply with a particular cellular protocol. If polling occurs more frequently, however, polling may result in unwanted power draw because numerous polling cycles lead to negative acknowledgments from the slave devices. Accordingly, cellular protocol compliance and power savings can be achieved through better interrupt techniques for the RFFE bus.

SUMMARY OF THE DISCLOSURE

Aspects disclosed in the detailed description include slave device initiated interrupts for a communication bus. In an exemplary aspect, the communication bus is a radio frequency front end (RFFE) bus, and the slave device is allowed to indicate to the master device on the RFFE bus that the slave device has an interrupt condition.
Upon receiving an interrupt initiated by the slave device, the master device can initiate a polling sequence to determine which of the plurality of slave devices associated with the RFFE bus initiated the interrupt and process the interrupt accordingly. Continuing with the exemplary aspect, the slave device can indicate an interrupt condition to the master device by driving the clock line of the RFFE bus to a non-idle state. The master device can detect this manipulation of the clock line and initiate a polling sequence. By relying on the slave device to initiate an indication of the interrupt, the polling can begin before the periodic polling activity, which in turn can reduce latency and allow for increasingly strict cellular protocols. In addition, power savings can be achieved since unneeded periodic polling or increased polling cycles can be eliminated.

In this regard, in one aspect, a method for detecting an interrupt from a slave device on an RFFE bus is disclosed. The method includes maintaining the clock line within the RFFE bus at a logic low when the RFFE bus is idle. The method also includes detecting a logic high on the clock line with a detection circuit at the master device associated with the RFFE bus when the RFFE bus is idle. The method also includes initiating an interrupt query from the master device.

In another aspect, a master device is disclosed. The master device includes an RFFE interface configured to couple to an RFFE bus. The master device also includes a clock source coupled to the RFFE interface. The master device also includes a transceiver coupled to the RFFE interface. The master device also includes a detection circuit coupled to the RFFE interface. The detection circuit is configured to detect when the clock line of the RFFE bus is pulled high by a slave device associated with the RFFE bus. The detection circuit is also configured to initiate an interrupt query through the transceiver.

In another aspect, a method for a slave device to signal an interrupt on an RFFE bus is disclosed. The method includes detecting an interrupt condition within the slave device at a slave device coupled to the RFFE bus. The method also includes driving, at the slave device, a clock line of the RFFE bus from an idle state to a modified state to indicate to the master device an interrupt condition at the slave device. The method also includes subsequently responding to an interrupt query from the master device.

In another aspect, a slave device is disclosed. The slave device includes an RFFE interface configured to be coupled to an RFFE bus. The slave device also includes a transceiver coupled to the RFFE interface. The slave device also includes an interrupt circuit coupled to the RFFE interface. The interrupt circuit is configured to receive an indication that the slave device has an interrupt condition.
The interrupt circuit is also configured to drive a clock line in the RFFE bus from an idle state to a modified state.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a system-level block diagram of an exemplary mobile terminal configured to communicate based on an architecture defined by the MIPI Alliance (MIPI);
FIG. 2 is a simplified block diagram of master and slave devices on a radio frequency front end (RFFE) bus that can be used for slave device initiated interrupts, in accordance with an exemplary aspect of the disclosure;
FIG. 3 is a flow diagram illustrating an exemplary process performed by a slave device for initiating an interrupt on an RFFE bus; and
FIG. 4 is a flow diagram illustrating an exemplary process performed by a master device for detecting a slave-initiated interrupt on an RFFE bus.

Detailed Description
Several exemplary aspects of the present disclosure are now described with reference to the drawings. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.
Aspects disclosed in the detailed description include slave device initiated interrupts for a communication bus. In an exemplary aspect, the communication bus is a radio frequency front end (RFFE) bus, and a slave device is allowed to indicate to the master device on the RFFE bus that the slave device has an interrupt condition. Upon receiving an interrupt initiated by the slave device, the master device can initiate a polling sequence to determine which of the plurality of slave devices associated with the RFFE bus initiated the interrupt and process the interrupt accordingly. Continuing with the exemplary aspect, the slave device can indicate an interrupt condition to the master device by driving the clock line of the RFFE bus to a non-idle state. The master device can detect this manipulation of the clock line and initiate a polling sequence. By relying on the slave device to initiate an indication of the interrupt, the polling can begin before the periodic polling activity, which in turn can reduce latency and allow compliance with increasingly strict cellular protocols. In addition, power savings can be achieved since unneeded periodic polling or additional polling cycles can be eliminated.
Before discussing exemplary aspects of slave-initiated interrupts for a communication bus that include particular aspects of the present disclosure, a brief overview of a mobile terminal configured based on an architecture defined by the MIPI Alliance (MIPI) is first provided with reference to FIG. 1. A discussion of certain exemplary aspects of interrupts initiated by slave devices for a communication bus then begins with reference to FIG. 2.
In this regard, FIG. 1 is a system-level block diagram of an exemplary mobile terminal 100, such as a smart phone, mobile computing device, tablet device, and the like. While it is specifically contemplated that a mobile terminal can benefit from various exemplary aspects of the present disclosure, it should be appreciated that the present disclosure is not limited thereto and can be useful in any system having a communication bus shared by master and slave devices. For illustrative purposes, it is assumed that the RFFE bus 102 within the mobile terminal 100 is one of a plurality of communication buses configured to support slave device initiated interrupts in accordance with the present disclosure.
With continued reference to FIG.
1, mobile terminal 100 includes an application processor 104 (sometimes referred to as a host) that communicates with mass storage element 106 via a universal flash storage (UFS) bus 108. Application processor 104 may be further coupled to display 110 via a display serial interface (DSI) bus 112 and to camera 114 via a camera serial interface (CSI) bus 116. Various audio components, such as microphone 118, speaker 120, and audio codec 122, may be coupled to application processor 104 via a serial low-power inter-chip multimedia bus (SLIMbus) 124. Additionally, these audio components can communicate with one another via a SOUNDWIRE™ bus 126. Modem 128 can also be coupled to SLIMbus 124. Modem 128 may be further coupled to application processor 104 via a peripheral component interconnect (PCI) or PCI Express (PCIe) bus 130 and/or a system power management interface (SPMI) bus 132.
With continued reference to FIG. 1, SPMI bus 132 can also be coupled to a wireless local area network integrated circuit (WLAN IC) 134, a power management integrated circuit (PMIC) 136, a companion integrated circuit (sometimes referred to as a bridge chip) 138, and a radio frequency integrated circuit (RFIC) 140. It should be appreciated that separate PCI buses 142 and 144 can also couple application processor 104 to companion integrated circuit 138 and WLAN IC 134. Application processor 104 may also be further coupled to sensor 146 via sensor bus 148. Modem 128 and RFIC 140 can communicate using bus 150.
With continued reference to FIG. 1, and of particular interest to the present disclosure, RFIC 140 can be coupled to one or more RFFE elements (such as antenna tuner 152, switch 154, and power amplifier 156) via RFFE bus 102. Additionally, RFIC 140 can be coupled to an envelope tracking power supply (ETPS) 158 via bus 160, and ETPS 158 can be in communication with power amplifier 156. Collectively, the RFFE elements (including RFIC 140) may be considered an RFFE system 162.
There is at least one master device and typically at least one slave device within the RFFE system 162. The RFFE protocol contemplates a master device with up to fifteen slave devices. In the absence of the present disclosure, the master device will periodically poll the slave devices to see if any slave device has an interrupt condition that needs to be processed. The period between polling events increases the latency of the system. Furthermore, if the master device polls and there are no interrupt conditions, power may have been consumed unnecessarily. While there may be devices that are not concerned with power consumption (because these devices may be connected to wall outlets and continuous power supplies), other devices, such as battery-powered mobile terminals, attempt to limit power consumption as much as possible to extend battery life. To alleviate such latency and power consumption, exemplary aspects of the present disclosure allow a slave device to initiate an interrupt indication on the RFFE bus 102 to the master device. In this regard, as better illustrated in FIG. 2, both the master device and the slave device have been modified.
FIG. 2 illustrates the RFFE system 162 of FIG. 1 having a master device 200 and a slave device 202 communicatively coupled by the RFFE bus 102. In various exemplary aspects, a typical master device is a modem baseband processor or a modem radio frequency integrated circuit that primarily includes digital logic.
Additionally, typical slave devices may include antenna tuners, power amplifiers, low noise amplifiers, and the like. Although only one slave device is illustrated, it should be appreciated that up to fifteen slave devices can be coupled to the RFFE bus 102. It should also be appreciated that there may be multiple master devices in the RFFE system, and, as determined by the bus arbitration mechanism, the slave devices may be controlled by multiple master devices. The RFFE bus 102 is a two-wire bus having a data line 204 and a clock line 206. The master device 200 can include a transceiver 208 that transmits and receives data on the data line 204. The master device 200 can further include a clock source 210 that selectively provides a clock signal 212 on the clock line 206. The master device 200 can further include a detection circuit 214 that detects a signal on the clock line 206. The master device 200 can further include an interface 216 that is configured to be coupled to the RFFE bus 102. Similarly, slave device 202 can include a transceiver 218 that transmits and receives data on data line 204. The slave device 202 can further include a delay-locked loop (DLL) 220 that receives the clock signal 212 and generates a local clock signal for the slave device 202. The slave device 202 can also have an interrupt circuit 222 that is designed to provide an interrupt signal 224 on the clock line 206, as explained in more detail below. Slave device 202 is coupled to RFFE bus 102 via interface 226. According to the RFFE protocol, data line 204 and clock line 206 remain at logic low when lines 204 and 206 are idle. When the interrupt circuit 222 detects that the slave device 202 has an interrupt condition, the interrupt circuit 222 pulls the clock line 206 to a logic high with the interrupt signal 224. In an exemplary aspect, an interrupt condition may occur when the slave device detects an error condition in data received at the slave device, or when the slave device desires support from the master device, such as an update of a configuration register to change a low noise amplifier (LNA) gain (for example, the slave device determines the required signal level and raises an interrupt condition so that the master device will issue a change in the LNA gain), and the like. Detection circuit 214 detects a logic high from interrupt signal 224 and determines that one of the slave devices (e.g., slave device 202) has an interrupt condition, and may then initiate polling of the slave devices to determine which slave device has the interrupt condition.
In this regard, FIG. 3 illustrates a process 300 whereby slave device 202 initiates an interrupt instead of reacting to interrupt polling from master device 200. Initially, the slave device 202 detects an interrupt condition within the slave device 202 (block 302). For example, if the slave device 202 is an antenna switch, the interrupt condition may be an error condition in the nominal data transmission or a changed RF condition that needs to be resolved by the master device. The slave device 202 verifies that the clock line 206 of the RFFE bus 102 is idle (block 304). Once the clock line 206 is idle, slave device 202 uses interrupt circuit 222 to drive clock line 206 from the idle state to the modified state (block 306). In an exemplary aspect, the idle state is a logic low and the modified state is a logic high.
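The slave-side sequence of blocks 302-306 can be summarized in a short behavioral sketch in C. The helper functions and names below are illustrative assumptions for a hypothetical platform, not part of the RFFE specification or the disclosed hardware.

#include <stdbool.h>

#define LINE_IDLE_LOW   0   /* idle state: logic low */
#define LINE_DRIVE_HIGH 1   /* modified state: logic high */

/* Assumed platform hooks; purely illustrative. */
extern bool slave_has_interrupt_condition(void);   /* block 302 */
extern bool clock_line_is_idle(void);              /* block 304 */
extern void drive_clock_line(int level);           /* block 306 */

void slave_signal_interrupt(void)
{
    if (!slave_has_interrupt_condition())   /* block 302: detect the condition */
        return;
    while (!clock_line_is_idle())           /* block 304: wait for an idle bus */
        ;
    drive_clock_line(LINE_DRIVE_HIGH);      /* block 306: idle -> modified state */
    /* The slave then awaits the master's interrupt inquiry (block 308)
       and answers with its identity and interrupt cause (block 310). */
}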
After signaling the interrupt condition to the master device in this manner, the master device 200 will begin an interrupt inquiry, which will cause the slave device 202 to receive the interrupt inquiry from the master device 200 (block 308). The slave device 202 will then respond to the interrupt inquiry (block 310), indicating the slave device identity and the nature of the interrupt so that the interrupt can be properly handled by the master device 200.
While FIG. 3 is arranged to illustrate the process 300 for the slave device 202, FIG. 4 provides a flow diagram of a process 400 for the master device 200. In this regard, the master device 200 performs normal operations (block 402). When operation reaches an idle interval, the master device 200 places the clock line 206 in an idle state (block 404). As noted above, in an exemplary aspect, the idle state of clock line 206 is a logic low. When the slave device 202 has an interrupt condition, the slave device 202 pulls the clock line 206 to the modified state, and the detection circuit 214 detects that the clock line 206 has been pulled to the modified state (block 406). In an exemplary aspect, the modified state is a logic high. The detection circuit 214 reports the interrupt initiated by the slave device, and the master device 200 initiates an interrupt inquiry (block 408).
The master device 200 can perform the interrupt inquiry in many different forms. In an exemplary aspect, the interrupt inquiry is a simple polling of the slave devices on the RFFE bus 102. This polling can be done in ascending or descending order of slave address. In yet another exemplary aspect, the polling may traverse odd addresses followed by even addresses, or vice versa, such that the even addresses are polled first, followed by the odd addresses. In yet another exemplary aspect, the master device 200 can know that only a subset of the slave devices associated with the RFFE bus 102 are authorized to request an interrupt, and the master device 200 can poll only those authorized slave devices. In yet another exemplary aspect, the master device 200 can have a lookup table that indicates the order in which the slave devices are polled. In yet another exemplary aspect, the master device 200 can poll the slave devices using a weighted sequence, wherein a slave device that is more likely to have an interrupt is polled before a slave device that is less likely to have an interrupt. The weighting can also be based on quality of service requirements. For example, some slave devices 202 may have a higher priority in obtaining service. As a specific example, the antenna tuner can be serviced prior to the antenna switch. Such service weighting and ordering can have discernible and detectable effects on radio quality and thus on user experience.
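One simple way to realize such a lookup-table or weighted inquiry order is sketched below in C. The table contents and the helper function are invented for illustration; the RFFE protocol does not mandate any particular inquiry order.

/* Hypothetical poll table: slave addresses ordered so that devices more
   likely to interrupt, or with higher quality-of-service priority (e.g.,
   an antenna tuner before an antenna switch), are inquired first. Only
   slaves authorized to request interrupts need appear here. */
static const unsigned char poll_order[] = { 0x3, 0x1, 0x7, 0x2 };
#define NUM_POLLED (sizeof poll_order / sizeof poll_order[0])

extern int poll_slave(unsigned char addr);  /* 1 if the slave claims the interrupt */

int find_interrupting_slave(void)
{
    for (unsigned i = 0; i < NUM_POLLED; i++)
        if (poll_slave(poll_order[i]))
            return poll_order[i];  /* identified: the master can service it */
    return -1;                     /* no slave claimed the interrupt */
}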
A slave device-initiated interrupt for a communication bus in accordance with aspects disclosed herein may be provided in or integrated into any processor-based device. Examples include, without limitation: a set top box, an entertainment unit, a navigation device, a communication device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a smart phone, a tablet device, a phablet, a server, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, and an automobile.
Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, as instructions stored in memory or another computer readable medium and executed by a processor or other processing device, or as a combination of both. By way of example, the master and slave devices described herein can be used in any circuit, hardware component, integrated circuit (IC), or IC chip. The memory disclosed herein can be of any type and size and can be configured to store any type of information as desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends on the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
The aspects disclosed herein may be embodied in hardware and in instructions stored in hardware, and may reside, for example, in random access memory (RAM), flash memory, read only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station.
In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.
It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowcharts may be subject to numerous different modifications, as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combinations thereof.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In one embodiment, the present invention includes a cache memory including cache lines that each have a tag field including a state portion to store a cache coherency state of data stored in the line and a weight portion to store a weight corresponding to a relative importance of the data. In various implementations, the weight can be based on the cache coherency state and a recency of usage of the data. Other embodiments are described and claimed.
1. An apparatus comprising:
a cache memory including a plurality of cache lines each having a data field to store data and a tag field including a state portion to store a cache coherency state of the corresponding data and a weight portion to store a weight corresponding to a relative importance of the corresponding data.
2. The apparatus of claim 1, wherein the weight is based at least in part on the cache coherency state.
3. The apparatus of either of claims 1 and 2, wherein the weight is further based on recency of usage of the corresponding data.
4. The apparatus of any preceding claim, wherein the weight is further based on attribute information associated with the corresponding data.
5. The apparatus of any preceding claim, wherein the tag field further includes a weight field to store the weight and an attribute field to store the attribute information.
6. The apparatus of any preceding claim, further comprising a cache controller to select a cache line of a plurality of cache lines for replacement based on the corresponding weights of the cache lines.
7. The apparatus of claim 6, wherein the cache controller is to update a weight for a cache line when the corresponding data is accessed, wherein the update is to increment the weight to a value corresponding to the cache coherency state of the cache line and to decrement a weight of at least one other cache line.
8. The apparatus of any preceding claim, wherein if at least one cache line of the plurality of cache lines has a weight corresponding to a minimum weight, no decrement of the weight of the at least one other cache line occurs.
9. The apparatus of any preceding claim, wherein the updated weight is less than a maximum weight value, and at least one other cache line of a set including the cache line that was less recently accessed has a greater weight than the updated weight.
10. The apparatus of any preceding claim, wherein the cache memory comprises an adaptive shared cache memory comprising a plurality of banks each to be associated with a corresponding core and to provide private cache storage and shared cache storage.
11. The apparatus of any preceding claim, wherein a first cache line in a shared state is to be given a higher weight than a second cache line in the shared state when a single copy of the first cache line is present in the adaptive shared cache memory and a plurality of copies of the second cache line is present in the adaptive shared cache memory, and a third cache line in a modified state is to be given a higher weight than the second cache line and a lower weight than the first cache line.
12. A method comprising:
selecting a line of a plurality of lines of a set of a cache memory having a lowest weight as a victim, wherein each line has a weight corresponding to a criticality of the line, the criticality based at least in part on an importance of data stored in the line and an access recency of the line;
fetching data responsive to a request and storing the data in the selected line; and
determining a weight for the selected line based on the importance of the data, and storing the weight in a weight field of the line.
13. The method of claim 12, further comprising, upon a cache hit to the line, restoring the weight if a cache coherency state of the line does not change; and/or
further comprising determining if any lines of the set have a weight of zero; and/or
further comprising decrementing non-accessed lines of the set if none of the lines have a zero weight, and not decrementing non-accessed lines of the set if at least one of the lines has a zero weight; and/or
further comprising receiving attribute information with the request, the attribute information indicating the importance of the data; and/or
further comprising determining the weight using a weight table based on the attribute information and a cache coherency state of the line, the weight table including a plurality of entries each associating a weight with an attribute and cache coherency state combination; and/or
wherein the attribute information is obtained from a user-level instruction associated with the request.
14. A system comprising:
a multicore processor including a plurality of processor cores and a shared cache memory having a plurality of banks each associated with one of the processor cores, wherein each bank is to provide private cache storage and shared cache storage, and includes a plurality of cache lines each having a data field to store data and a tag field including a state portion to store a cache coherency state of the corresponding data, and a weight portion to store a weight based on a relative importance of the corresponding data and a recency of access to the cache line.
15. The system of claim 14, wherein the tag field further comprises a weight field to store the weight and an attribute field to store attribute information associated with the data, wherein the weight is further based on the attribute information and the cache coherency state; and/or
wherein a first cache line in a shared state is to be given a higher weight than a second cache line in the shared state when a single copy of the first cache line is present in the shared cache memory and a plurality of copies of the second cache line is present in the shared cache memory, and a third cache line in a modified state is to be given a higher weight than the second cache line and a lower weight than the first cache line.
Background
A modern computer system typically has one or more processors or central processing units (CPUs) at the heart of the system. These processors execute instructions on data to perform requested operations. Processors operate at extremely high frequencies. To have data readily accessible to the processors, the data can be stored in a cache memory. Different implementations of cache memories exist. Oftentimes, a small cache memory may be located on the same semiconductor die as the processor, providing a close and fast source of data. Some memory architectures can have multiple levels of a memory hierarchy, with each higher level further away from the processor, until reaching a system memory and/or mass storage device.
While these higher levels of a memory hierarchy can store large amounts of data, the access times are vastly slower than the access times of a lower level cache memory. Accordingly, a large latency is incurred when needed data is only available at these higher levels. Thus, recently and/or frequently accessed data may be stored in a lower level of the memory hierarchy.
Cache memories are typically implemented using a given replacement scheme. Many replacement schemes follow a least recently used (LRU) policy in which the least recently used cache line is selected as a victim cache line to be replaced with new data to be inserted into the cache. As larger processors with more cores on a single die and different cache architectures, including shared cache architectures, become available, an LRU replacement scheme may not accurately reflect the true value of the data, and thus it is possible for needed data to be unavailable, causing a long latency to obtain the data.
Brief Description of the Drawings
FIG. 1 is a flow diagram of a method for accessing a cache in accordance with an embodiment of the present invention.
FIG. 2 is a flow diagram of a method for handling a snoop request in accordance with an embodiment of the present invention.
FIG. 3 is a block diagram of a replacement technique in accordance with one embodiment of the present invention.
FIG. 4 is a block diagram of a chip multiprocessor (CMP) in accordance with one embodiment of the present invention.
FIG. 5 is a block diagram of a CMP with an adaptive cache architecture in accordance with one embodiment of the present invention.
FIG. 6 is a block diagram of a processor in accordance with one embodiment of the present invention.
FIG. 7A is a block diagram of a cache memory in accordance with an embodiment of the present invention.
FIG. 7B is a block diagram of a tag entry in accordance with one embodiment of the present invention.
FIG. 7C is a block diagram of a weight table in accordance with one embodiment of the present invention.
Detailed Description
In various embodiments, a cache replacement technique may be used to age data stored in cache lines based on criticality and recency of use. To realize this technique, a tag portion of each cache line may include weight and/or attribute information. The weight value may be stored in a weight field of the tag portion and the attribute value in an attribute field of the tag portion. This information can be stored at the time of allocation and later updated as cache activity occurs. For purposes of discussion, the term weight may be used generally to refer to both weight and attribute.
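As a concrete point of reference for the discussion that follows, such a tag entry might be modeled as in the C sketch below. The field widths and names are illustrative assumptions only, not values taken from this disclosure.

#include <stdint.h>

/* Illustrative model of one cache line's tag with weight/attribute fields. */
enum mesi { LINE_INVALID = 0, LINE_SHARED, LINE_EXCLUSIVE, LINE_MODIFIED };

struct tag_entry {
    uint32_t tag_addr;   /* tag address used to index the data array */
    uint8_t  state;      /* MESI cache coherency state (enum mesi) */
    uint8_t  weight;     /* aging counter; higher means longer residence */
    uint8_t  attribute;  /* optional criticality hint, e.g., from software */
};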
In one embodiment, the assigned weight may be proportional to data criticality, as determined by the coherence state of the cache line, e.g., of a modified, exclusive, shared, and invalid (MESI) or other cache coherency protocol. In other embodiments, different hardware or software mechanisms may provide other information (generally, attribute information) on which to base a criticality decision.
Embodiments can be used in many different types of cache systems. As one example, a cache system that can benefit from an embodiment of the present invention may be an adaptive cache of a chip multiprocessor (CMP) such as a large-scale CMP or a terascale system. Other embodiments may be used in connection with other cache architectures such as a last level cache (LLC) of an inclusive cache hierarchy. Other cache architectures, both inclusive and otherwise, may also benefit from an embodiment of the present invention.
For explanation purposes, weighting of cache lines may be in accordance with the different states of a MESI protocol, although in many embodiments additional attribute information may be considered in determining a weight for a line. For purposes of discussion, understand that an adaptive cache may be a shared cache that includes banks each associated with a processor core and which can act as both private cache and shared cache. Details of an example adaptive cache will be described further below. The identity of a given line as being shared or private, and the number of cores including the cache line, can be determined based on a state of a directory, which may also be part of the shared cache.
In such an adaptive cache system, weighting of cache lines may be based on the cache coherency state of each cache line. More specifically, in one embodiment a highest weight may be assigned to a single data element shared by multiple cores (i.e., in the shared (S) state), since losing this data would have a large impact (e.g., multiple processor stalls). Modified (M) and exclusive (E) lines may be grouped next in the relative order of importance. These lines are single data elements used by one core, but losing this data requires a trip to main memory (which can result in performance loss, increased memory bandwidth demand, power consumption in analog input/output (I/O) circuitry, and so forth). Finally, in this replacement scheme, duplicate lines shared by multiple cores are given the least importance, and hence can be biased for eviction. Note that such duplicate lines may be of the shared state, but located in multiple private caches. A core losing such a line from a private cache can fetch it from a remote private cache instead of going to memory. For instance, if accessing memory is 5 times more expensive (based on latency, power, or any other metric) than accessing a remote level two (L2) cache, it may be prudent to keep five copies of more critical lines, such as single shared lines or M or E lines, rather than caching five copies of the same line. Since duplicate lines are biased toward eviction, eventually one copy remains on-die and it will inherit the highest importance. A similar weighting scheme may be applicable to other cache architectures such as an inclusive cache hierarchy. However, duplicate lines are generally not available in such architectures and thus may not be part of a weighting scheme.
Thus, in general, weight assignment can be done in a systematic way that reflects the relative cost of acquiring a line. For example, assume that the optimization metric is miss latency.
Furthermore, assume that it takes 50 cycles to fetch block A and 150 cycles to fetch block B. In this case, avoiding one miss to block B is worth three times as much, in terms of access latency impact, as avoiding one miss to block A. Accordingly, the weight of block B can be set to be three times as high as the weight of block A to reflect the cost ratio of the two blocks.
In some embodiments, cache access patterns can be monitored and adaptive adjustments may be made for optimal cache allocation. For simplicity, the examples described here use cache coherence states to define relative importance. Techniques in accordance with an embodiment of the present invention can be used to provide a cache quality of service (QoS) abstraction to software, to thus tailor cache allocation on an application-specific basis. As one example, software can provide a hint with a memory access request indicating the criticality of the associated data. For example, a priority can be set at a page level via a page attribute that is provided with a request to the cache, or user-level instructions of an instruction set architecture (ISA), e.g., a load such as a qualified load, may include information regarding the criticality of the data. For example, in QoS systems in a virtual machine architecture in which an application is executed for a user having a higher priority (e.g., due to greater payments for system use), attribute information regarding this priority or criticality can be provided to enable cache lines for such an application to be weighted more heavily. Thus user-level control of criticality (e.g., by a programmer or compiler) can provide attribute information.
When a cache line is installed, the weight may be set according to the relative importance of the cache line. A higher weight implies longer residence, and thus allocation is based on cache line importance. On a cache hit, the weight of the accessed line may be restored and the weights of all other cache lines in the set decremented. This step combines recency (like LRU) with cache line importance, implying that stale high-priority lines will be flushed out naturally. When at least one line within a set has a weight decayed to zero, the decrementing may be temporarily suspended until this condition vanishes. In other words, as the least useful line (future victim) has already been identified, there is no need to continue with the aging process (although such decrementing is not precluded). Invalid cache lines (e.g., due to snoops) may have their corresponding weight set to 0, as these are the least useful lines. On a cache miss, the line with the lowest weight (e.g., the least useful line) may be evicted. Note that a value of 0 being the lowest weight is merely a convention, as is the convention that the highest weight corresponds to the longest cache residence. Other conventions, for instance lowest weight being most important, can be used.
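These insertion, hit, and aging rules can be condensed into a small C sketch. It operates on a per-set array of weight counters and is a simplified illustration under the conventions above (lowest weight evicted, a zero weight suspends aging), not the disclosed hardware logic.

/* weights[w] holds the weight counter of way w within one cache set. */

int select_victim(const unsigned *weights, int n_ways)
{
    int victim = 0;                        /* evict the lowest-weight way */
    for (int w = 1; w < n_ways; w++)
        if (weights[w] < weights[victim])
            victim = w;                    /* ties broken by first match */
    return victim;
}

void on_cache_hit(unsigned *weights, int n_ways, int hit_way, unsigned restored)
{
    weights[hit_way] = restored;           /* restore per state/attribute weight */

    for (int w = 0; w < n_ways; w++)
        if (weights[w] == 0)
            return;                        /* future victim known: suspend aging */

    for (int w = 0; w < n_ways; w++)
        if (w != hit_way)
            weights[w]--;                  /* age the non-accessed lines */
}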
Referring now to FIG. 1, shown is a flow diagram of a method for accessing a cache in accordance with an embodiment of the present invention. As shown in FIG. 1, method 10 may be executed on an access to a cache for requested data. In one embodiment, method 10 may be implemented at least in part using a cache controller or other logic of a cache memory. As will be further described below, embodiments may be implemented in different cache architectures such as a shared cache, an adaptive cache, an inclusive cache hierarchy, or so forth. As seen in FIG. 1, method 10 may begin by receiving a cache access request (block 15). Assume for purposes of discussion that this request is a read request from a processor core. It next may be determined whether a cache hit occurs (diamond 20). If no hit occurs, meaning that the requested data is not stored in a cache line of the cache memory, control passes to block 25, where the way having the lowest weight may be selected as a victim cache line. That is, a set may have N ways (e.g., N cache lines), and the one of these cache lines having the lowest weight, as reflected in the weight portion of its tag field, may be selected as the victim. If multiple ways or cache lines have the lowest weight, any one of these lines can be selected as the victim. Note further that it is possible for multiple cache lines to have the same weight.
Because the access request missed in the cache, the requested line may be fetched from another portion of the memory hierarchy (block 30). This other portion may be another cache memory, or a higher portion of the hierarchy, e.g., system memory or a mass storage device. When the data is retrieved, different implementations are possible. In one implementation it is possible to directly return the data to the requesting core at block 45, to reduce latency, before loading the cache line (locally) and setting its state information. In other implementations it is possible to first insert the incoming data into the evicted line. To do so, a state/attribute of the fetched line may be set (block 35). This state/attribute may include a MESI coherence state and attribute information such as described above. The state/attribute of the received line can be indicated as part of the incoming response, or it can be generated by the receiving cache automatically, depending on a given embodiment and coherence protocol. Further, based on the identified state/attribute, a weight may be set for the cache line (block 40). As will be discussed further below, the weight for the line may be set with reference to information in a weight table, which may be a programmable weight table that associates weight values with each possible state/attribute combination. In the examples discussed above, the cache coherency state may indicate the attribute of the cache line. Then the data may be returned at block 45.
Referring still to FIG. 1, if instead a cache hit occurs at diamond 20, control passes to block 50, where the state/attribute of the cache line may be updated. Such an update may not occur if the state or attribute of the line does not change as a result of the access. In any event, the weight of the cache line may be restored (block 55). That is, because this line has now been accessed, its weight may be reset to the weight corresponding to the state/attribute of the line. Note that this restored weight may or may not be the same as its original weight when inserted into the cache memory, due to possible changes in its state/attribute during residency in the cache.
Still referring to FIG. 1, it may then be determined whether the weight of any line of the set is at a zero value (diamond 60). If so, the requested data may be returned directly at block 45. Otherwise, the weights of the non-accessed ways of the set may be decremented (block 65) before returning the data. While shown with this particular implementation in the embodiment of FIG. 1, the scope of the present invention is not limited in this regard.
Referring now to FIG. 2, shown is a flow diagram of a method for handling a snoop request in accordance with an embodiment of the present invention. As shown in FIG.
2, method 70 may be used for handling incoming snoop requests in a cache memory, and may begin by receiving the snoop request (block 75). Method 70 may be implemented using cache controller logic, as discussed above. The incoming snoop request may include an identifier of a given cache line. Accordingly, it may be determined whether a cache hit occurs (diamond 78). If not, control passes to block 80, where the cache controller may provide a snoop response indicating no data. Otherwise, if the snoop request results in a cache hit, control passes to block 82. At block 82, the state/attribute of the cache line may be updated. This updating may be based on a given cache coherency protocol. For example, in a MESI protocol a cache line may be invalidated if a snoop request is for an exclusive access to the line. Or, if the request is simply to read the line, the state may be updated to shared, if it is not already in that state. Control then passes to diamond 85, where it may be determined whether the new state is the invalid state. If so, the weight value for that line may be set to zero (block 88) and a snoop response may be provided (block 90), with or without the data. Otherwise, if the updated state is not invalid, the weight of the cache line may be updated if a state/attribute change has occurred (block 95). Control then passes to block 90 to provide the snoop response. While shown with this particular implementation in the embodiment of FIG. 2, understand that the scope of the present invention is not limited in this regard.
By taking into account the relative importance of cache lines across several cores, techniques in accordance with an embodiment of the present invention may result in more optimal allocation of on-die cache resources. For instance, without an embodiment of the present invention, if the same group of cache lines is actively used by multiple cores, these cache lines are replicated in all caches, resulting in a reduction of effective cache capacity. Instead, weighting according to an embodiment of the present invention recognizes constructive sharing and biases duplicate cache copies toward eviction. The net result is that single copies, which if lost require a trip to main memory, are retained for a longer period of time. As memory accesses are much more expensive (in performance and power) than accessing a remote on-die cache, a cache allocation policy can be implemented accordingly. However, a static policy that only eliminates cache line duplication would end up storing stale data. To avoid this shortcoming, embodiments may further detect stale copies and mark them as less critical. In other words, embodiments may use a combination of both data criticality and recency to optimize cache resources.
The operations described above with regard to the flow diagram of FIG. 1 can be visualized using a virtual scan chain. Referring now to FIG. 3, shown is a virtual scan chain in accordance with one embodiment of the present invention. MRU indicates most recently used and LRU indicates least recently used. Rather than a conventional LRU-like scheme, which replaces a cache line in the LRU position to make room for a newly accessed line, embodiments may provide a so-called multi-level LRU scheme. In a single-level LRU scheme, after an eviction all remaining lines are shifted right (logically) and a newly fetched line is inserted in the MRU position. The newly inserted line has to make it all the way to the LRU position before it is evicted.
If there is a cache hit somewhere along the chain, the line accessed will be moved back to the MRU position.
In a replacement technique in accordance with one embodiment of the present invention, multiple logical MRU positions (MRU_S, MRU_R, etc., as shown in FIG. 3) may be provided. The weight field acts as a proxy for the position in the scan chain. That is, lines with higher weight occupy positions to the left of the chain. The highest priority lines will have a larger weight, and hence they will be inserted at the head (toward the left) of the scan chain (i.e., MRU_S). Lines with intermediate priority may have a smaller weight and hence may be inserted somewhere in the middle of the chain (e.g., MRU_R). The exact position is determined by the assigned weight. Lines with the least importance may be inserted close to or at the LRU position. Since cache residence is a function of position in the scan chain, lines with a higher weight naturally stay in the cache longer.
For instance, if an intermediate priority line is accessed after it moves to the right of the MRU_R position, it will be inserted back at the MRU_R position instead of the MRU_S position. This guarantees that higher priority lines continue to maintain their relative importance. A highest priority line inserted at MRU_S, if it is a stale line, may be moved to the right, toward the LRU position. In one embodiment, the weights of non-accessed lines may be decremented within a cache set. Hence a line in the MRU_S position will gradually be downgraded, and after some time it moves to the right of the MRU_R position, making it relatively less important compared to intermediate priority lines. In this way, recency and cache line relative importance may be combined to adaptively downgrade stale lines.
Note also that invalid lines may have their weight set to 0, which is akin to moving invalid lines to the LRU position. Using an embodiment of the present invention, off-die memory bandwidth traffic (data) can be reduced. Further, for applications that have a high percentage of shared data that is replicated in multiple caches, an embodiment may enable controlled replication and bias duplicate lines toward eviction, resulting in more efficient cache utilization.
As described above, some embodiments may be used in an adaptive cache structure. A CMP may have a number of processors on a single chip, each with one or more caches. These caches may be private caches, which store data exclusively for the associated core, or shared caches, which store data available to all cores. Referring now to FIG. 4, shown is a block diagram of a CMP in accordance with one embodiment of the present invention. A CMP 100 may include a plurality of processor cores 102 on a single chip. A core 102 may be a processor, a coprocessor, a fixed function controller, or another type of processing core. Each core 102 may be coupled to a core cache 104, which may be a lowest level cache memory.
Core 102 may further be coupled to a shared cache 108. The shared cache 108 may be accessible to all cores 102. Any core 102 may allocate a line in shared cache 108 for a subset of addresses. The shared cache 108 may have a separate adaptive cache bank 110 for each core 102. Each adaptive cache bank 110 may have a directory (DIR) 112 to track the cache data blocks stored in core cache 104 and the adaptive cache bank 110. In addition, shared cache 108 may include cache controller logic to handle replacements in accordance with an embodiment of the present invention. While not shown in FIG.
4, in some implementations a private cache may be coupled between each of the core caches 104 and the shared cache 108.
In various embodiments, shared cache 108 may be an adaptive cache that may act as a private cache, a shared cache, or both at any given time. An adaptive cache may be designed to simultaneously offer the latency benefits of a private cache design and the capacity benefits of a shared cache design. Additionally, the architecture may also allow for run-time configuration to provide either a private or shared cache bias. In this way, a single cache design may act either as a private cache, a shared cache, or a hybrid cache with dynamic allocation between private and shared portions. All cores 102 may access shared cache 108. A local core 102 may allocate a line of the corresponding adaptive cache bank 110 for any address. Other cores 102 may allocate a line of the adaptive cache for a subset of addresses. The adaptive cache may allow a line to be replicated in any adaptive cache bank based on local core requests. In one embodiment, a local core 102 may access an adaptive cache bank before going through a coherency protocol engine. Other cores 102 may access the adaptive cache bank via the coherency protocol engine.
The cache organization may use a tiled architecture, a homogeneous architecture, a heterogeneous architecture, or another CMP architecture. The tiles in a tiled architecture may be connected through a coherent switch, a bus, or another connection. A CMP tile may have one or more processor cores sharing a cache. The processor core may access, via a cache controller, an adaptive cache bank that is dynamically partitioned into private and shared portions. The CMP tile may have a directory to track all private cache blocks on die. The cache controller may send incoming core requests to the local adaptive cache bank, which holds private data for that tile. The cache protocol engine may send a miss in the local adaptive cache bank to a home tile via an on-die interconnect. The adaptive cache bank at the home tile, accessible via the on-die interconnect, may satisfy a data miss. The cache protocol engine may look up the directory bank at the home tile to snoop a remote private adaptive cache bank, if necessary. A miss at a home tile, after resolving any necessary snoops, may result in the home tile initiating an off-socket request. An adaptive cache bank configured to act purely as a private cache may skip the home tile lookup but may follow the directory flow. An adaptive cache bank configured to act purely as a shared cache may skip the local adaptive cache bank lookup and go directly to the home tile. The dynamic partitioning of an adaptive cache bank may be realized by caching protocol actions with regard to block allocation, migration, victimization, replication, replacement, and back-invalidation.
FIG. 5 illustrates, in a block diagram, one embodiment of a CMP with an adaptive cache architecture 300. An initial CMP tile 302 may request access to a data block after checking the home CMP tile 304 for that data block. The initial CMP tile 302 may have an initial processing core 306, an initial core cache 308, an initial adaptive cache bank 310, and an initial directory 312. The home CMP tile 304 may have a home processing core 314, a home core cache 316, a home adaptive cache bank 318, and a home directory 320. The initial CMP tile 302 may store an initial data block copy 322, or cache block, in the initial adaptive cache bank 310.
The home CMP tile 304 may register a home data block registration 324 in the home directory 320 to track the copies of the data block 322 in each adaptive cache bank.
In other embodiments, a cache architecture may be an inclusive cache hierarchy. Referring now to FIG. 6, shown is a block diagram of a processor in accordance with one embodiment of the present invention. As shown in FIG. 6, processor 200 may be a multicore processor including a plurality of processor cores 220_0-220_n (generically, core 220). As shown in FIG. 6, in addition to core logic 222_0-222_n (generically, core logic 222), each core may include multiple levels of a cache hierarchy. Specifically, each core 220 may include a lowest-level cache 225_0-225_n (generically, cache 225). In one embodiment, cache 225 may correspond to an L1 cache, although the scope of the present invention is not so limited. Each core 220 may further include a mid-level cache 228_0-228_n (generically, cache 228). Mid-level cache 228 may correspond to an L2 cache, in some embodiments.
Processor 200 may further include a last-level cache (LLC) 250 formed of a plurality of banks 240_0-240_n (generically, bank or portion 240). LLC 250 may be a higher-level cache coupled to cores 220 via an interconnect 235, and may include copies of the data present in the lower-level caches. As shown in FIG. 6, each core 220 may be coupled to interconnect 235 via a link 230_0-230_n (generically, link 230). LLC 250 may act as a shared memory that is shared among the various cores 220 within processor 200. In contrast, the multi-level cache (MLC) hierarchy including lowest-level cache 225 and mid-level cache 228 may be formed of private caches, in which data is stored only for the associated core 220.
During operation, memory requests from execution units of a given core (which may be part of core logic 222) may first access the lowest level of the cache hierarchy before looking up any other caches within the system. Accordingly, for improved performance, frequently accessed data may be present in the lowest possible cache level, i.e., cache 225. If the requested data is not present in cache 225, cache 228 may next be accessed to determine if the data is present there. In the embodiment shown in FIG. 6, each mid-level cache 228 is the final lookup point for each core 220 before a request is issued to LLC 250. LLC 250 may further include directory portions 245_0-245_n (generically, directory portion 245) that each may be associated with a portion 240 of LLC 250, and may even include cache controller logic to handle replacements in accordance with one embodiment of the present invention. While described with this particular embodiment in FIG. 6, it is to be understood that the scope of the present invention is not so limited and processors may have different configurations in other embodiments.
Regardless of the cache architecture used, a cache memory will generally include a tag array and a data array. Referring now to FIG. 7A, shown is a block diagram of a cache memory in accordance with an embodiment of the present invention. As shown in FIG. 7A, cache memory 110 may include a plurality of entries or cache lines 112a-112n. As seen, each cache line 112 may include a tag field 113a of a tag array and a data field 113b of a data array. The data field 113b may store the data of the cache line (along with optional bits for error detection/correction), while the tag field 113a may store various tag information.
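The tag entry and the weight table introduced in FIGS. 7B and 7C, described next, lend themselves to a compact table-lookup model. The C sketch below is illustrative only: every default weight value is invented, and a real design would also consult directory state (e.g., to weight a single on-die shared copy above duplicated shared copies).

/* Hypothetical programmable weight table: one default weight per
   state/attribute combination. In hardware, these defaults might live in
   machine status registers (MSRs) or configuration status registers (CSRs). */
enum mesi { LINE_INVALID = 0, LINE_SHARED, LINE_EXCLUSIVE, LINE_MODIFIED };

enum { ATTR_LEVELS = 4 };   /* assumed number of software criticality levels */

static unsigned char weight_table[4][ATTR_LEVELS] = {
    /* attribute:        0  1  2  3 */
    [LINE_INVALID]   = { 0, 0, 0, 0 },  /* invalid lines are immediate victims */
    [LINE_SHARED]    = { 6, 7, 8, 9 },  /* single shared copy assumed: most critical */
    [LINE_EXCLUSIVE] = { 4, 5, 6, 7 },
    [LINE_MODIFIED]  = { 4, 5, 6, 7 },
};

unsigned lookup_weight(enum mesi state, unsigned attr)
{
    if (attr >= ATTR_LEVELS)
        attr = ATTR_LEVELS - 1;         /* clamp out-of-range hints */
    return weight_table[state][attr];
}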
FIG. 7B is a block diagram of a tag entry in accordance with one embodiment of the present invention. As shown in FIG. 7B, tag entry 113 includes a tag address field 114, which may be used to index into the corresponding data array 113b; a state field 115, which may store the cache coherency state for the corresponding line; a weight field 116, which may be a weight counter to store a weight value in accordance with an embodiment of the present invention; and an attribute field 117, which may store optional attribute information associated with the line, such as criticality information, e.g., received from a programmer or compiler. In at least one embodiment of the present invention, the coherency state can be used as a proxy for the optional attribute field.
To determine an appropriate weight value for a given line, reference to a weight table may be made by a cache controller or other logic upon insertion or an update of the state (or attribute) of a line. Referring now to FIG. 7C, shown is a block diagram of a weight table in accordance with one embodiment of the present invention. As shown in FIG. 7C, weight table 120 may include a plurality of entries 122a-122n. Each entry may include an attribute/state field 123a and a weight field 123b. For each state/attribute combination, a corresponding default weight value may be provided. These default weight values may be hard-coded or programmable, either statically or dynamically. In some embodiments, weight table 120 may be implemented using registers such as machine status registers (MSRs) or configuration status registers (CSRs). By enabling programmability of such default weights, embodiments may programmably set weight values for a given type of application. For example, for a cache memory for use in one market segment, a first set of default weight values may be determined, while for a different market segment, a different set of programmable weight values may be determined. In one embodiment, such different weight values may be determined based on empirical testing of different test programs for data usage in different market segments.
Embodiments may be implemented in code and may be stored on a storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention.
Systems and methods for level-shifting multiplexing are described herein. In one embodiment, a method for level-shifting multiplexing comprises selecting one of a plurality of inputs (A, B) based on one or more select signals (Sel A, Sel B), and pulling down one of first and second nodes (460, 465) based on a logic state of the selected one of the plurality of inputs. The method also comprises pulling up the first node (460) if the second node (465) is pulled down, and pulling up the second node (465) if the first node (460) is pulled down.
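At the boolean level, this select/pull-down/cross-coupled pull-up behavior can be captured in a small behavioral C model. The sketch below is an illustrative abstraction with invented names; it ignores analog behavior such as contention, the choke circuit, and the clamping described in the claims that follow.

#include <assert.h>
#include <stdbool.h>

/* Behavioral model: the selected input's true phase pulls down the first
   node, its complement pulls down the second node, and the cross-coupled
   pull-up drives the opposite node to the higher supply. */
typedef struct { bool d, d_n; } diff_in;   /* complementary signal pair */

bool lsm_out(diff_in a, diff_in b, bool sel_a, bool sel_b)
{
    assert(sel_a != sel_b);       /* exactly one input is selected */
    diff_in in = sel_a ? a : b;
    assert(in.d != in.d_n);       /* inputs must be validly complementary */

    bool node1_low = in.d;        /* first pull-down: true phase */
    bool node2_low = in.d_n;      /* second pull-down: complement phase */

    /* Cross-coupled pull-up: a node is high when the opposite node is low. */
    bool node2 = node1_low && !node2_low;
    return node2;                 /* equals in.d, now at the shifted level */
}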
1. A level shift multiplexer, comprising:
a first pull-down circuit coupled to a first node and having first and second inputs, wherein the first pull-down circuit is configured to select one of the first and second inputs based on one or more select signals, to pull down the first node when the first input is selected and driven to a first state, and to pull down the first node when the second input is selected and driven to a second state;
a second pull-down circuit coupled to a second node and having third and fourth inputs, wherein the second pull-down circuit is configured to select one of the third and fourth inputs based on the one or more select signals, to pull down the second node when the third input is selected and driven to a third state, and to pull down the second node when the fourth input is selected and driven to a fourth state; and
a pull-up circuit configured to pull up the first node when the second node is pulled down by the second pull-down circuit and to pull up the second node when the first node is pulled down by the first pull-down circuit.

2. The level shift multiplexer of claim 1, wherein each of the first, second, third and fourth states is a logic 1 state.

3. The level shift multiplexer of claim 2, wherein the first and third inputs are driven by a first pair of complementary signals and the second and fourth inputs are driven by a second pair of complementary signals.

4. The level shift multiplexer of claim 1, wherein the pull-up circuit comprises:
a first transistor configured to pull up the first node when the second node is pulled down by the second pull-down circuit, wherein a gate of the first transistor is coupled to the second node; and
a second transistor configured to pull up the second node when the first node is pulled down by the first pull-down circuit, wherein a gate of the second transistor is coupled to the first node.

5. The level shift multiplexer of claim 4, wherein the first and second transistors comprise cross-coupled p-type metal-oxide-semiconductor (PMOS) transistors.

6. The level shift multiplexer of claim 4, further comprising a choke circuit configured to reduce a current from a power supply rail to the first transistor when the first input is selected and driven to the first state.

7. The level shift multiplexer of claim 6, wherein the choke circuit is configured to reduce the current from the power supply rail to the first transistor when the second input is selected and driven to the second state.

8. The level shift multiplexer of claim 6, wherein the choke circuit is configured to reduce the current from the power supply rail to the second transistor when the third input is selected and driven to the third state.

9. The level shift multiplexer of claim 1, further comprising a clamp transistor coupled between the second node and ground, wherein the clamp transistor is configured to conduct when a disable signal is in a logic 1 state.

10. A method for level shift multiplexing, comprising:
selecting one of a plurality of inputs based on one or more selection signals;
pulling down one of first and second nodes based on a state of the selected one of the plurality of inputs;
pulling up the first node if the second node is pulled down; and
pulling up the second node if the first node is pulled down.

11. The method of claim 10, wherein each of the plurality of inputs comprises a differential input.

12. The method of claim 11, wherein pulling up the first node if the second node is pulled down comprises pulling up the first node to a first voltage, wherein the selected one of the plurality of inputs is driven by a differential signal having a voltage swing approximately equal to a second voltage, and the first voltage is greater than the second voltage.

13. The method of claim 10, wherein pulling up the first node if the second node is pulled down comprises pulling up the first node with a first transistor coupled between a supply rail and the first node, and pulling up the second node if the first node is pulled down comprises pulling up the second node with a second transistor coupled between the supply rail and the second node.

14. The method of claim 13, further comprising throttling current from the supply rail to the first transistor if the first node is pulled down.

15. The method of claim 14, further comprising throttling current from the supply rail to the second transistor if the second node is pulled down.

16. An apparatus for level shift multiplexing, comprising:
means for selecting one of a plurality of inputs based on one or more selection signals;
means for pulling down one of first and second nodes based on a state of the selected one of the plurality of inputs;
means for pulling up the first node if the second node is pulled down; and
means for pulling up the second node if the first node is pulled down.

17. The apparatus of claim 16, wherein each of the plurality of inputs comprises a differential input.

18. The apparatus of claim 17, wherein the means for pulling up the first node if the second node is pulled down comprise means for pulling up the first node to a first voltage, wherein the selected one of the plurality of inputs is driven by a differential signal having a voltage swing approximately equal to a second voltage, and the first voltage is greater than the second voltage.

19. The apparatus of claim 16, wherein the means for pulling up the first node if the second node is pulled down comprise means for pulling up the first node using a first transistor coupled between a supply rail and the first node, and the means for pulling up the second node if the first node is pulled down comprise means for pulling up the second node using a second transistor coupled between the supply rail and the second node.

20. The apparatus of claim 19, further comprising means for throttling current from the supply rail to the first transistor if the first node is pulled down.

21. The apparatus of claim 20, further comprising means for throttling current from the supply rail to the second transistor if the second node is pulled down.

22. A multiplexer, comprising:
a first level shift multiplexer configured to select one of a first plurality of inputs based on a first plurality of select signals, to level-shift a signal at the selected one of the first plurality of inputs, and to output the level-shifted signal of the first level shift multiplexer at a first output;
a second level shift multiplexer configured to select one of a second plurality of inputs based on a second plurality of select signals, to level-shift a signal at the selected one of the second plurality of inputs, and to output the level-shifted signal of the second level shift multiplexer at a second output;
a combinational circuit configured to combine the first and second outputs; and
a decoder configured to select one of the first and second pluralities of inputs based on a pointer by setting one of the first plurality of select signals to a first state and disabling the second level shift multiplexer, or by setting one of the second plurality of select signals to a second state and disabling the first level shift multiplexer.

23. The multiplexer of claim 22, wherein each of the first and second states is a logic 1 state.

24. The multiplexer of claim 22, wherein the combinational circuit is configured to OR the first and second outputs.

25. The multiplexer of claim 24, wherein the first level shift multiplexer is configured to output a logic 0 at the first output when the first level shift multiplexer is disabled by the decoder, and the second level shift multiplexer is configured to output a logic 0 at the second output when the second level shift multiplexer is disabled by the decoder.

26. The multiplexer of claim 22, wherein the first level shift multiplexer is configured to level-shift the signal at the selected one of the first plurality of inputs from a first voltage level to a second voltage level.

27. The multiplexer of claim 26, wherein the second voltage level is at least 100 mV higher than the first voltage level.

28. The multiplexer of claim 22, wherein each of the first plurality of inputs comprises a differential input, and each of the second plurality of inputs comprises a differential input.
High-speed level shift multiplexer

BACKGROUND

Field

Aspects of the present disclosure relate generally to level shifters and multiplexers, and more particularly to level shift multiplexers.

Background

A chip may include different power domains, where each power domain may correspond to a different supply voltage. For example, a first power domain may have a lower supply voltage to reduce the power consumption of circuits in the first power domain, and a second power domain may have a higher supply voltage to improve the performance of circuits in the second power domain. One or more level shifters may be used to facilitate communication between circuits in different power domains. For example, a level shifter may allow a signal to cross from one power domain to another by shifting the voltage of the signal.

OVERVIEW

A simplified overview of one or more embodiments is given below to provide a basic understanding of such embodiments. This overview is not an exhaustive survey of all contemplated embodiments, and is neither intended to identify key or critical elements of all embodiments nor to limit the scope of any or all embodiments. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description presented later.

According to a first aspect, a level shift multiplexer is described herein. The level shift multiplexer includes a first pull-down circuit coupled to a first node and having first and second inputs, wherein the first pull-down circuit is configured to select one of the first and second inputs based on one or more select signals, to pull down the first node when the first input is selected and driven to a first state, and to pull down the first node when the second input is selected and driven to a second state. The level shift multiplexer further includes a second pull-down circuit coupled to a second node and having third and fourth inputs, wherein the second pull-down circuit is configured to select one of the third and fourth inputs based on the one or more select signals, to pull down the second node when the third input is selected and driven to a third state, and to pull down the second node when the fourth input is selected and driven to a fourth state. The level shift multiplexer further includes a pull-up circuit configured to pull up the first node when the second node is pulled down by the second pull-down circuit and to pull up the second node when the first node is pulled down by the first pull-down circuit.

A second aspect relates to a method for level shift multiplexing. The method includes selecting one of a plurality of inputs based on one or more selection signals and pulling down one of first and second nodes based on a state of the selected one of the plurality of inputs. The method also includes pulling up the first node if the second node is pulled down and pulling up the second node if the first node is pulled down.

A third aspect relates to an apparatus for level shift multiplexing. The apparatus includes means for selecting one of a plurality of inputs based on one or more selection signals and means for pulling down one of first and second nodes based on a state of the selected one of the plurality of inputs.
The apparatus also includes means for pulling up the first node if the second node is pulled down and means for pulling up the second node if the first node is pulled down.

A fourth aspect relates to a multiplexer. The multiplexer includes a first level shift multiplexer configured to select one of a first plurality of inputs based on a first plurality of select signals, to level-shift the signal at the selected one of the first plurality of inputs, and to output the level-shifted signal of the first level shift multiplexer at a first output. The multiplexer also includes a second level shift multiplexer configured to select one of a second plurality of inputs based on a second plurality of select signals, to level-shift the signal at the selected one of the second plurality of inputs, and to output the level-shifted signal of the second level shift multiplexer at a second output. The multiplexer further includes a combinational circuit configured to combine the first and second outputs, and a decoder configured to select one of the first and second pluralities of inputs based on a pointer by setting one of the first plurality of select signals to a first state and disabling the second level shift multiplexer, or by setting one of the second plurality of select signals to a second state and disabling the first level shift multiplexer.

To the accomplishment of the foregoing and related ends, the one or more embodiments include the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects of the one or more embodiments. These aspects are, however, merely illustrative of a few of the various ways in which the principles of the various embodiments may be employed, and the described embodiments are intended to include all such aspects and their equivalents.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows an example of an interface circuit including a multiplexer, a level shifter for a read pointer, and a level shifter for the output of the multiplexer.

FIG. 2 shows an example of an interface circuit including a plurality of level shifters and a multiplexer.

FIG. 3 shows a level shift multiplexer according to an embodiment of the present disclosure.

FIG. 4A shows an exemplary implementation of a level shift multiplexer according to an embodiment of the present disclosure.

FIG. 4B shows an exemplary implementation of a level shift multiplexer according to another embodiment of the present disclosure.

FIG. 5 illustrates an example of a multiplexer including two level shift multiplexers according to an embodiment of the present disclosure.

FIG. 6 shows an example of a multiplexer including four level shift multiplexers according to an embodiment of the present disclosure.

FIG. 7 illustrates an example of a multiplexer including four level shift multiplexers according to another embodiment of the present disclosure.

FIG. 8 shows a level shift multiplexer including a multiplexing choke circuit according to an embodiment of the present disclosure.

FIG. 9 is a flowchart of a method for level shift multiplexing according to an embodiment of the present disclosure.
DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details in order to provide a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

FIG. 1 shows an interface circuit that can be used to interface storage devices 110(1) through 110(4) (e.g., first-in-first-out (FIFO) storage devices) in a first power domain with a receive circuit (not shown) in a second power domain. The storage devices may also be referred to as buffers, registers, or latches. The interface circuit includes a read multiplexer 120, a first level shifter 130, and a second level shifter 140. The supply voltage in the first power domain is Vddin and the supply voltage in the second power domain is Vddout. In this example, the multiplexer 120 is located in the first power domain.

In operation, the multiplexer 120 receives a read pointer from the receive circuit and selects the output 115(1) through 115(4) of one of the storage devices 110(1) through 110(4) based on the read pointer. The voltage of the read pointer is level shifted by the second level shifter 140 so that the read pointer can cross the power domain boundary from the second power domain to the first power domain. The first level shifter 130 shifts the voltage of the multiplexer output signal so that the output signal can cross the power domain boundary from the first power domain to the second power domain.

A disadvantage of the interface circuit in FIG. 1 is that it may not be suitable for operation at high data rates (e.g., 25 gigahertz or higher). This is because both the read pointer and the multiplexer output signal must cross the power domain boundary to read data from a storage device, which reduces the speed at which data can be read. The power domain boundary crossing for read operations is represented by path 145 in FIG. 1.

FIG. 2 shows an interface circuit according to another embodiment. In this embodiment, the interface circuit includes a plurality of level shifters 220(1) through 220(4) and a read multiplexer 230 in the second power domain. Each level shifter 220(1) through 220(4) is coupled to the output 115(1) through 115(4) of a respective one of the storage devices 110(1) through 110(4) and level shifts the voltage of the data signal from that storage device. This allows the data signal from each storage device to cross the power domain boundary to the multiplexer 230 in the second power domain. The interface circuit in FIG. 2 alleviates the timing problem associated with the interface circuit in FIG. 1 because the read pointer does not have to cross the power domain boundary: the multiplexer 230 is in the second power domain. However, the interface circuit in FIG. 2 includes a separate level shifter for each storage device, which significantly increases the area of the interface circuit, especially as the number of storage devices increases.
FIG. 3 shows an interface circuit including a level shift multiplexer 330 according to an embodiment of the present disclosure. The level shift multiplexer 330 receives the data signals from the storage devices 110(1) through 110(4) in the first power domain and, based on the read pointer, outputs a selected one of the data signals in the second power domain. In this embodiment, the level shift and multiplexing functions are integrated into the level shift multiplexer 330, which reduces area compared to the interface circuit in FIG. 2. In addition, the level shift multiplexer 330 receives the read pointer in the second power domain, thereby alleviating the timing issue associated with the interface circuit in FIG. 1.

FIG. 4A shows a level shift multiplexer 410 according to an embodiment of the present disclosure. The level shift multiplexer 410 has a first differential input and a second differential input for receiving a first differential signal and a second differential signal, respectively, in the first power domain. For example, the level shift multiplexer 410 may receive the first differential signal from a first one of the storage devices 110(1) and the second differential signal from a second one of the storage devices 110(2). Thus, in this embodiment, the outputs of the storage devices are differential.

The first differential input comprises an input A and an input Ā for receiving the first differential signal, the first differential signal comprising a signal A in the first power domain and its complement Ā (logical inverse). The second differential input comprises an input B and an input B̄ for receiving the second differential signal, the second differential signal comprising a signal B in the first power domain and its complement B̄ (logical inverse). Each of the signals A, Ā, B, and B̄ may have a voltage swing of approximately Vddin.

In operation, the level shift multiplexer 410 selects either the first differential input (i.e., inputs A and Ā) or the second differential input (i.e., inputs B and B̄) based on the logic states of the selection signals Sel A and Sel B received at the selection inputs Sel A and Sel B, respectively. For example, the level shift multiplexer 410 may select the first differential input when the selection signal Sel A is logic 1 and the selection signal Sel B is logic 0, and select the second differential input when the selection signal Sel A is logic 0 and the selection signal Sel B is logic 1. The level shift multiplexer 410 level-shifts the differential signal at the selected differential input and outputs the level-shifted differential signal with a voltage swing of approximately Vddout in the second power domain, as discussed further below. Thus, the voltage swing of the differential signal at the selected differential input is level shifted from Vddin to Vddout. The logic states of the selection signals Sel A and Sel B may be set according to the storage device specified by the read pointer shown in FIG. 3. The selection signals Sel A and Sel B may be in the second power domain.

The level shift multiplexer 410 includes a pull-up circuit 412, a first pull-down circuit 420, and a second pull-down circuit 430. The pull-up circuit 412 includes cross-coupled p-type metal-oxide-semiconductor (PMOS) transistors 415 and 417. The sources of the PMOS transistors 415 and 417 are coupled to the supply rail Vddout of the second power domain, and the gate of each of the PMOS transistors 415 and 417 is coupled to the drain of the other.
The drain of PMOS transistor 415 is coupled to node 460, and the drain of PMOS transistor 417 is coupled to node 465.

The first pull-down circuit 420 includes a first branch 421 and a second branch 423. The first branch 421 includes a first select n-type metal-oxide-semiconductor (NMOS) transistor 422 and a first drive NMOS transistor 426 coupled in series. The gate of the first select NMOS transistor 422 is coupled to the select input Sel A, and the gate of the first drive NMOS transistor 426 is coupled to the input A. The drain of the first select NMOS transistor 422 is coupled to node 460, the source of the first select NMOS transistor 422 is coupled to the drain of the first drive NMOS transistor 426, and the source of the first drive NMOS transistor 426 is coupled to ground. The second branch 423 includes a second select NMOS transistor 424 and a second drive NMOS transistor 428 coupled in series. The gate of the second select NMOS transistor 424 is coupled to the select input Sel B, and the gate of the second drive NMOS transistor 428 is coupled to the input B. The drain of the second select NMOS transistor 424 is coupled to node 460, the source of the second select NMOS transistor 424 is coupled to the drain of the second drive NMOS transistor 428, and the source of the second drive NMOS transistor 428 is coupled to ground.

The second pull-down circuit 430 includes a third branch 431 and a fourth branch 433. The third branch 431 includes a third select NMOS transistor 432 and a third drive NMOS transistor 436 coupled in series. The gate of the third select NMOS transistor 432 is coupled to the select input Sel A, and the gate of the third drive NMOS transistor 436 is coupled to the input Ā. The drain of the third select NMOS transistor 432 is coupled to node 465, the source of the third select NMOS transistor 432 is coupled to the drain of the third drive NMOS transistor 436, and the source of the third drive NMOS transistor 436 is coupled to ground. The fourth branch 433 includes a fourth select NMOS transistor 434 and a fourth drive NMOS transistor 438 coupled in series. The gate of the fourth select NMOS transistor 434 is coupled to the select input Sel B, and the gate of the fourth drive NMOS transistor 438 is coupled to the input B̄. The drain of the fourth select NMOS transistor 434 is coupled to node 465, the source of the fourth select NMOS transistor 434 is coupled to the drain of the fourth drive NMOS transistor 438, and the source of the fourth drive NMOS transistor 438 is coupled to ground.

The level shift multiplexer 410 further includes a first inverter 450, a second inverter 455, and a clamp transistor 440. The first inverter 450 has an input coupled to node 460 and an output coupled to a first output (denoted "OUT") of the level shift multiplexer 410. The second inverter 455 has an input coupled to node 465 and an output coupled to a second output (denoted "OUT̄") of the level shift multiplexer 410. Both of the first and second inverters 450 and 455 may be powered by the supply voltage Vddout of the second power domain. The clamp transistor 440 may be used to disable the level shift multiplexer 410 by pulling node 465 to ground, as discussed further below.

In operation, when the selection signal Sel A is logic 1 and the selection signal Sel B is logic 0, the first and third select NMOS transistors 422 and 432 are turned on and the second and fourth select NMOS transistors 424 and 434 are turned off. As a result, the first and third drive NMOS transistors 426 and 436 are coupled to nodes 460 and 465, respectively, while the second and fourth drive NMOS transistors 428 and 438 are decoupled from nodes 460 and 465, respectively. In other words, inputs A and Ā are selected.

If signal A is a logic 1, the first drive NMOS transistor 426 is turned on and node 460 is pulled to ground. Since the gate of PMOS transistor 417 is coupled to node 460, this causes PMOS transistor 417 to turn on and pull node 465 up to Vddout. As a result, the first inverter 450 (coupled to node 460) outputs a logic 1 at the first output OUT of the level shift multiplexer 410, while the second inverter 455 (coupled to node 465) outputs a logic 0 at the second output OUT̄ of the level shift multiplexer 410.

If signal A is a logic 0, the third drive NMOS transistor 436 (driven by the inverted signal Ā) is turned on and node 465 is pulled to ground. Since the gate of PMOS transistor 415 is coupled to node 465, this causes PMOS transistor 415 to turn on and pull node 460 up to Vddout. As a result, the first inverter 450 outputs a logic 0 at the first output OUT, while the second inverter 455 outputs a logic 1 at the second output OUT̄.

When the selection signal Sel A is logic 0 and the selection signal Sel B is logic 1, the second and fourth select NMOS transistors 424 and 434 are turned on and the first and third select NMOS transistors 422 and 432 are turned off. As a result, the second and fourth drive NMOS transistors 428 and 438 are coupled to nodes 460 and 465, respectively, while the first and third drive NMOS transistors 426 and 436 are decoupled from nodes 460 and 465, respectively. In other words, inputs B and B̄ are selected.

If signal B is a logic 1, the second drive NMOS transistor 428 is turned on and node 460 is pulled to ground, causing PMOS transistor 417 to turn on and pull node 465 up to Vddout. As a result, the first inverter 450 outputs a logic 1 at the first output OUT, while the second inverter 455 outputs a logic 0 at the second output OUT̄.

If signal B is a logic 0, the fourth drive NMOS transistor 438 (driven by the inverted signal B̄) is turned on and node 465 is pulled to ground, causing PMOS transistor 415 to turn on and pull node 460 up to Vddout. As a result, the first inverter 450 outputs a logic 0 at the first output OUT, while the second inverter 455 outputs a logic 1 at the second output OUT̄.

Thus, the level shift multiplexer 410 selects the first differential input (i.e., inputs A and Ā) when the selection signal Sel A is logic 1 and the selection signal Sel B is logic 0, and selects the second differential input (i.e., inputs B and B̄) when the selection signal Sel A is logic 0 and the selection signal Sel B is logic 1. The level shift multiplexer 410 level-shifts the differential signal at the selected differential input and outputs the level-shifted differential signal in the second power domain at the first and second outputs OUT and OUT̄.
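The logic just described can be captured in a short behavioral model. The following is an illustrative sketch only, assuming ideal digital behavior; it models the steady-state logic of FIG. 4A, not transistor-level timing, and the function and signal names are ours:

```python
def level_shift_mux_410(a, b, sel_a, sel_b, disable=0):
    """Steady-state behavioral model of the level shift multiplexer of
    FIG. 4A. Signals a and b stand for inputs A and B; their complements
    are implied. Returns (OUT, OUT_bar) in the Vddout domain."""
    if disable:
        node_460, node_465 = 1, 0   # clamp 440 pulls node 465 low
    elif sel_a and not sel_b:
        # Branches 421/431 active: A pulls node 460 low when high;
        # its complement pulls node 465 low when A is low.
        node_460, node_465 = (0, 1) if a else (1, 0)
    elif sel_b and not sel_a:
        # Branches 423/433 active for input B.
        node_460, node_465 = (0, 1) if b else (1, 0)
    else:
        raise ValueError("exactly one select signal must be asserted")
    # Inverters 450 and 455 re-drive the nodes at the outputs.
    return 1 - node_460, 1 - node_465

# Input A selected and driven to logic 1: OUT = 1, OUT_bar = 0.
assert level_shift_mux_410(a=1, b=0, sel_a=1, sel_b=0) == (1, 0)
```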
The clamp transistor 440 (e.g., an NMOS transistor) is used to selectively disable the level shift multiplexer 410. More specifically, the clamp transistor 440 disables the level shift multiplexer 410 when a disable signal (denoted "disable A/B") is a logic 1, and enables the level shift multiplexer 410 when the disable signal is a logic 0. When the disable signal is logic 1, the clamp transistor 440 is turned on and node 465 is pulled to ground. This causes PMOS transistor 415 to turn on and pull node 460 up to Vddout. As a result, the first inverter 450 outputs a logic 0 at the first output OUT and the second inverter 455 outputs a logic 1 at the second output OUT̄.

The clamp transistor 440 may be used to place the level shift multiplexer 410 in a known state when the level shift multiplexer 410 is not in use (e.g., in a sleep mode). This prevents nodes 460 and 465 from floating to an intermediate logic state (e.g., half Vddout) when the level shift multiplexer 410 is not in use. When the disable signal is a logic 0, the clamp transistor 440 is turned off and the level shift multiplexer 410 operates normally as discussed above.

The level shift multiplexer 410 reduces area compared to the circuit shown in FIG. 2, in which a separate level shifter is used for each input signal. This is because the level shift multiplexer 410 uses a common pull-up circuit 412 for the first and second differential inputs.

In the example shown in FIG. 4A, the level shift multiplexer 410 has two differential inputs. It is to be appreciated that the level shift multiplexer is not limited to this example and may be expanded to multiplex more than two differential signals. FIG. 4B illustrates an example in which a level shift multiplexer 470 receives a third differential signal (i.e., a signal C and its complement C̄) in addition to the first and second differential signals discussed above. In this example, a first pull-down circuit 480 includes a fifth branch 481 for the third differential signal, and a second pull-down circuit 490 includes a sixth branch 491 for the third differential signal.

The fifth branch 481 includes a fifth select NMOS transistor 482 and a fifth drive NMOS transistor 486 coupled in series. The gate of the fifth select NMOS transistor 482 is coupled to the select input Sel C, and the gate of the fifth drive NMOS transistor 486 is coupled to the input C. The drain of the fifth select NMOS transistor 482 is coupled to node 460, the source of the fifth select NMOS transistor 482 is coupled to the drain of the fifth drive NMOS transistor 486, and the source of the fifth drive NMOS transistor 486 is coupled to ground.

The sixth branch 491 includes a sixth select NMOS transistor 492 and a sixth drive NMOS transistor 496 coupled in series. The gate of the sixth select NMOS transistor 492 is coupled to the select input Sel C, and the gate of the sixth drive NMOS transistor 496 is coupled to the input C̄. The drain of the sixth select NMOS transistor 492 is coupled to node 465, the source of the sixth select NMOS transistor 492 is coupled to the drain of the sixth drive NMOS transistor 496, and the source of the sixth drive NMOS transistor 496 is coupled to ground.

In operation, one of the three differential inputs is selected by setting the corresponding select signal to a logic 1 and the remaining two select signals to a logic 0. For example, if the third differential input (i.e., inputs C and C̄) is selected, the selection signal Sel C is set to logic 1 and the selection signals Sel A and Sel B are set to logic 0. This turns the fifth and sixth select NMOS transistors 482 and 492 on and the first, second, third and fourth select NMOS transistors 422, 424, 432 and 434 off. As a result, the fifth and sixth drive NMOS transistors 486 and 496 are coupled to nodes 460 and 465, respectively, while the other drive transistors are decoupled from nodes 460 and 465. In other words, inputs C and C̄ are selected.

If signal C is a logic 1, the fifth drive NMOS transistor 486 is turned on and node 460 is pulled to ground. This causes PMOS transistor 417 to turn on and pull node 465 up to Vddout. As a result, the first inverter 450 (coupled to node 460) outputs a logic 1 at the first output OUT and the second inverter 455 (coupled to node 465) outputs a logic 0 at the second output OUT̄.

If signal C is a logic 0, the sixth drive NMOS transistor 496 (driven by the inverted signal C̄) is turned on and node 465 is pulled to ground. This causes PMOS transistor 415 to turn on and pull node 460 up to Vddout. As a result, the first inverter 450 outputs a logic 0 at the first output OUT and the second inverter 455 outputs a logic 1 at the second output OUT̄.

By adding a branch to each of the first pull-down circuit 480 and the second pull-down circuit 490 for each additional signal, the level shift multiplexer 470 may be expanded to multiplex additional signals. However, each additional branch increases the capacitive load at nodes 460 and 465, which slows the multiplexer 470.
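This branch-per-input scaling can be expressed compactly. The sketch below is illustrative only (the function name and signal encoding are ours), generalizing the behavioral model given earlier to N differential inputs with a one-hot select:

```python
def level_shift_mux_n(inputs, selects, disable=0):
    """Behavioral model of the N-input extension of FIG. 4B. Each
    differential input contributes one pull-down branch on node 460
    (driven by the signal) and one on node 465 (driven by its
    complement). `inputs` and `selects` are equal-length 0/1 lists."""
    if disable:
        node_460, node_465 = 1, 0           # clamp pulls node 465 low
    else:
        [i] = [k for k, s in enumerate(selects) if s]  # one-hot select
        node_460, node_465 = (0, 1) if inputs[i] else (1, 0)
    return 1 - node_460, 1 - node_465       # inverters 450 and 455

# Three differential inputs (A, B, C) as in FIG. 4B, with C selected.
assert level_shift_mux_n([0, 1, 1], [0, 0, 1]) == (1, 0)
```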
Referring back to FIG. 4A, the level shift multiplexer 410 may be combined with one or more other level shift multiplexers having the same or a similar structure to form a larger level shift multiplexer. In this regard, FIG. 5 shows an example in which the level shift multiplexer 410 of FIG. 4A is combined with a second level shift multiplexer 510 to form a larger level shift multiplexer 505. In this example, the second multiplexer 510 may have substantially the same structure as the first multiplexer 410 and may be configured to select either a third differential input (i.e., inputs C and C̄) or a fourth differential input (i.e., inputs D and D̄) based on selection signals Sel C and Sel D. More specifically, the second multiplexer 510 may be implemented by copying the structure shown in FIG. 4A and replacing the inputs A, B, Ā and B̄, the select inputs Sel A and Sel B, and the disable signal disable A/B with the inputs C, D, C̄ and D̄, the select inputs Sel C and Sel D, and the disable signal disable C/D, respectively.

In this embodiment, the positive output 452 of the first multiplexer 410 is coupled to a first input of an OR gate 515, and the positive output 552 of the second multiplexer 510 is coupled to a second input of the OR gate 515. In this example, the negative output OUT̄ of each multiplexer 410 and 510 is unused. The OR gate 515 is in the second power domain and may be powered by the supply voltage Vddout. The output of the multiplexer 505 (denoted "OUT") is taken at the output of the OR gate 515.

In this embodiment, one of the four differential inputs of the multiplexer 505 may be selected at a time. The selection may be controlled by a read decoder 530, which receives the read pointer and controls the logic states of the select signals Sel A, Sel B, Sel C and Sel D and of the disable signals disable A/B and disable C/D to select the differential input specified by the read pointer. In this embodiment, the read pointer may have a two-bit value that specifies one of the four differential inputs. For ease of illustration, the individual connections between the multiplexers 410 and 510 and the read decoder 530 are not shown in FIG. 5.

For example, if the first differential input (i.e., inputs A and Ā) is selected by the read pointer, the read decoder 530 may set the selection signal Sel A to logic 1, the selection signal Sel B to logic 0, and the disable signal disable C/D to logic 1. This causes the first multiplexer 410 to select the first differential input and disables the second multiplexer 510, causing the second multiplexer 510 to output a logic 0 at the output 552. As a result, the logic state at the output of the OR gate 515 depends on the logic state at the positive output 452 of the first multiplexer 410 (i.e., the logic state of signal A). In this example, the disable signal disable A/B is set to logic 0.

In another example, if the second differential input (i.e., inputs B and B̄) is selected by the read pointer, the read decoder 530 may set the selection signal Sel B to logic 1, the selection signal Sel A to logic 0, and the disable signal disable C/D to logic 1. This causes the first multiplexer 410 to select the second differential input and disables the second multiplexer 510, causing the second multiplexer 510 to output a logic 0 at the output 552. As a result, the logic state at the output of the OR gate 515 depends on the logic state at the positive output 452 of the first multiplexer 410 (i.e., the logic state of signal B). In this example, the disable signal disable A/B is set to logic 0.

In yet another example, if the third differential input (i.e., inputs C and C̄) is selected by the read pointer, the read decoder 530 may set the selection signal Sel C to logic 1, the selection signal Sel D to logic 0, and the disable signal disable A/B to logic 1. This causes the second multiplexer 510 to select the third differential input and disables the first multiplexer 410, causing the first multiplexer 410 to output a logic 0 at the output 452. As a result, the logic state at the output of the OR gate 515 depends on the logic state at the positive output 552 of the second multiplexer 510 (i.e., the logic state of signal C). In this example, the disable signal disable C/D is set to logic 0.

In yet another example, if the fourth differential input (i.e., inputs D and D̄) is selected by the read pointer, the read decoder 530 may set the selection signal Sel D to logic 1, the selection signal Sel C to logic 0, and the disable signal disable A/B to logic 1. This causes the second multiplexer 510 to select the fourth differential input and disables the first multiplexer 410, causing the first multiplexer 410 to output a logic 0 at the output 452. As a result, the logic state at the output of the OR gate 515 depends on the logic state at the positive output 552 of the second multiplexer 510 (i.e., the logic state of signal D). In this example, the disable signal disable C/D is set to logic 0.
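The four cases above amount to a small decode table. A minimal sketch, assuming (purely for illustration) that read pointer values 0 through 3 map to the differential inputs A through D in order; the mapping itself is not specified in the text, and the sleep case reflects the full-disable behavior described below:

```python
def read_decoder_530(read_pointer, sleep=False):
    """Map a two-bit read pointer to the select/disable signals of the
    four-input multiplexer 505 of FIG. 5. The pointer-to-input mapping
    is assumed; the select/disable pattern follows the four examples."""
    s = {"sel_a": 0, "sel_b": 0, "sel_c": 0, "sel_d": 0,
         "disable_ab": 0, "disable_cd": 0}
    if sleep:                       # disable both multiplexers; OUT = 0
        s["disable_ab"] = s["disable_cd"] = 1
    elif read_pointer == 0:         # input A via multiplexer 410
        s["sel_a"], s["disable_cd"] = 1, 1
    elif read_pointer == 1:         # input B via multiplexer 410
        s["sel_b"], s["disable_cd"] = 1, 1
    elif read_pointer == 2:         # input C via multiplexer 510
        s["sel_c"], s["disable_ab"] = 1, 1
    elif read_pointer == 3:         # input D via multiplexer 510
        s["sel_d"], s["disable_ab"] = 1, 1
    return s
```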
In general, the read decoder 530 selects one of the four differential inputs by setting the corresponding select signal to a logic 1 and setting the other select signal input to the same multiplexer 410 or 510 to a logic 0. The read decoder 530 disables the multiplexer 410 or 510 that does not correspond to the selected differential input by setting the corresponding disable signal to logic 1. In other words, the read decoder 530 disables the multiplexer 410 or 510 none of whose differential inputs is selected.

Disabling the multiplexer 410 or 510 that does not correspond to the selected differential input forces the positive output 452 or 552 of that multiplexer to a logic 0. As a result, the logic state at the output OUT of the OR gate 515 depends on the logic state at the positive output 452 or 552 of the multiplexer 410 or 510 corresponding to the selected differential input. Thus, the clamp transistor 440 in each of the multiplexers 410 and 510 is used to disable the multiplexer 410 or 510 that does not correspond to the selected differential input.

When the entire multiplexer 505 is to be disabled (e.g., in a sleep mode), the read decoder 530 may set both disable signals disable A/B and disable C/D to logic 1. This causes the multiplexer 505 to output a logic 0 at the output OUT of the OR gate 515. In this aspect, the read decoder 530 may receive a disable signal and disable the multiplexer 505 when the disable signal is logic 1. Thus, the clamp transistor 440 in each of the multiplexers 410 and 510 may serve two functions: disabling the respective multiplexer 410 or 510 when that multiplexer does not correspond to the selected differential input, and disabling the respective multiplexer when the entire multiplexer 505 is to be disabled (e.g., in the sleep mode).

The level shift multiplexers 410 and 510 in FIG. 5 may be combined with one or more additional level shift multiplexers to form an even larger multiplexer. In this regard, FIG. 6 shows an example in which the first and second level shift multiplexers 410 and 510 are combined with third and fourth level shift multiplexers 610 and 650 to form an eight-differential-input multiplexer 605. Each of the third and fourth multiplexers 610 and 650 may have substantially the same structure as the first multiplexer 410 illustrated in FIG. 4A.

The third multiplexer 610 may be configured to select either a fifth differential input (i.e., inputs E and Ē) or a sixth differential input (i.e., inputs F and F̄) based on selection signals Sel E and Sel F. More specifically, the third multiplexer 610 may be implemented by copying the structure shown in FIG. 4A and replacing the inputs A, B, Ā and B̄, the select inputs Sel A and Sel B, and the disable signal disable A/B with the inputs E, F, Ē and F̄, the select inputs Sel E and Sel F, and the disable signal disable E/F, respectively.

The fourth multiplexer 650 may be configured to select either a seventh differential input (i.e., inputs G and Ḡ) or an eighth differential input (i.e., inputs H and H̄) based on selection signals Sel G and Sel H. More specifically, the fourth multiplexer 650 may be implemented by copying the structure shown in FIG. 4A and replacing the inputs A, B, Ā and B̄, the select inputs Sel A and Sel B, and the disable signal disable A/B with the inputs G, H, Ḡ and H̄, the select inputs Sel G and Sel H, and the disable signal disable G/H, respectively.
In this embodiment, the positive output 612 of the third multiplexer 610 is coupled to a first input of a second OR gate 665, and the positive output 652 of the fourth multiplexer 650 is coupled to a second input of the second OR gate 665. The output 520 of the first OR gate 515 is coupled to a first input of a third OR gate 680, and the output 670 of the second OR gate 665 is coupled to a second input of the third OR gate 680. The output of the multiplexer 605 (denoted "OUT") is taken at the output of the third OR gate 680.

In this embodiment, one of the eight differential inputs of the level shift multiplexer 605 may be selected at a time. The selection may be controlled by a read decoder 630, which receives the read pointer and controls the logic states of the select signals Sel A through Sel H and of the disable signals disable A/B through disable G/H to select the differential input specified by the read pointer. In this embodiment, the read pointer may have a three-bit value that specifies one of the eight differential inputs. For ease of illustration, the individual connections between the multiplexers 410, 510, 610 and 650 and the read decoder 630 are not shown in FIG. 6.

In operation, the read decoder 630 selects one of the eight differential inputs by setting the corresponding select signal to a logic 1 and setting the other select signal input to the same multiplexer 410, 510, 610 or 650 to a logic 0. The read decoder 630 disables the other three multiplexers, which do not correspond to the selected differential input, by setting the corresponding disable signals to a logic 1. This causes the other three multiplexers to output a logic 0, so that the logic state at the output OUT of the third OR gate 680 (and therefore of the multiplexer 605) depends on the logic state at the positive output of the multiplexer 410, 510, 610 or 650 corresponding to the selected differential input.

For example, if the eighth differential input (i.e., inputs H and H̄) is selected by the read pointer, the read decoder 630 may set the selection signal Sel H to logic 1, the selection signal Sel G to logic 0, and the disable signals disable A/B, disable C/D and disable E/F to logic 1. In another example, if the fifth differential input (i.e., inputs E and Ē) is selected by the read pointer, the read decoder 630 may set the selection signal Sel E to logic 1, the selection signal Sel F to logic 0, and the disable signals disable A/B, disable C/D and disable G/H to logic 1.

When the entire multiplexer 605 is to be disabled (e.g., in the sleep mode), the read decoder 630 may set the disable signals disable A/B through disable G/H all to logic 1. This causes the multiplexer 605 to output a logic 0 at the output OUT of the third OR gate 680. In this aspect, the read decoder 630 may receive a disable signal and disable the multiplexer 605 when the disable signal is logic 1.

Thus, multiple level shift multiplexers can be combined into a larger multiplexer by taking the OR of the outputs of the multiple level shift multiplexers. FIG. 5 shows an example in which the OR gate 515 combines two level shift multiplexers 410 and 510 to form a multiplexer 505 capable of multiplexing four differential signals, and FIG. 6 shows an example in which the OR gates 515, 665 and 680 combine the four level shift multiplexers 410, 510, 610 and 650 to form a multiplexer 605 capable of multiplexing eight differential signals.
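Because a disabled multiplexer forces its positive output to logic 0, the OR tree simply passes through whichever output is active. The NOR/NAND variant described next is logically equivalent by De Morgan's law; a quick exhaustive check (gate functions written out explicitly for illustration):

```python
from itertools import product

def or2(x, y):   return x | y
def nor2(x, y):  return 1 - (x | y)
def nand2(x, y): return 1 - (x & y)

# OR(OR(a, b), OR(c, d)) == NAND(NOR(a, b), NOR(c, d)) for all inputs,
# which is why FIG. 7 can swap the OR tree for NOR gates plus a NAND.
for a, b, c, d in product((0, 1), repeat=4):
    assert or2(or2(a, b), or2(c, d)) == nand2(nor2(a, b), nor2(c, d))
```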
It is to be appreciated that the outputs of the multiple level shift multiplexers can be ORed using different types of logic gates. FIG. 7 shows an example of a multiplexer 705 in which the first OR gate 515 is replaced by a first NOR gate 715, the second OR gate 665 is replaced by a second NOR gate 765, and the third OR gate 680 is replaced by a NAND gate 780. The combination of the NOR gates 715 and 765 and the NAND gate 780 is logically equivalent to the combination of the OR gates 515, 665 and 680 in FIG. 6. In this example, the positive outputs 452 and 552 of the first and second multiplexers 410 and 510 are input to the first NOR gate 715, while the positive outputs 612 and 652 of the third and fourth multiplexers 610 and 650 are input to the second NOR gate 765. The outputs 720 and 770 of the first and second NOR gates 715 and 765 are input to the NAND gate 780, and the output of the multiplexer 705 (denoted "OUT") is taken at the output of the NAND gate 780.

Referring back to FIG. 4A, the level shift multiplexer 410 may be limited in how much the supply voltage Vddin of the first power domain can differ from the supply voltage Vddout of the second power domain. This can be illustrated by the following example, in which the first differential input (i.e., inputs A and Ā) is selected.

When signal A transitions from 0 to 1, the first drive NMOS transistor 426 turns on and attempts to pull node 460 to ground. However, the PMOS transistor 415 of the pull-up circuit 412 may still be turned on and therefore opposes (resists) the attempt of the first drive NMOS transistor 426 to pull node 460 to ground. As the difference between Vddout and Vddin increases, it becomes harder for the first drive NMOS transistor 426 (driven at Vddin) to pull down node 460. Therefore, if the difference between Vddin and Vddout becomes too large, the multiplexer may stop working properly.

In this regard, FIG. 8 illustrates a level shift multiplexer 810 capable of operating over a wider supply voltage range in accordance with an embodiment of the present disclosure. The level shift multiplexer 810 includes the level shift multiplexer 410 of FIG. 4A and a multiplexing choke circuit 815 coupled to the sources of the PMOS transistors 415 and 417 of the pull-up circuit 412. The multiplexing choke circuit 815 allows the difference between Vddin and Vddout to be larger than for the multiplexer 410 alone, as further explained below.

The multiplexing choke circuit 815 includes a first select PMOS transistor 818, a second select PMOS transistor 820, a first choke circuit 822, and a second choke circuit 832. The gate of the first select PMOS transistor 818 is coupled to the select input Sel A, while the gate of the second select PMOS transistor 820 is coupled to the select input Sel B.

The first choke circuit 822 includes a first choke PMOS transistor 824 and a second choke PMOS transistor 826. The first choke PMOS transistor 824 is coupled between the first select PMOS transistor 818 and the PMOS transistor 415 of the pull-up circuit 412. The second choke PMOS transistor 826 is coupled between the second select PMOS transistor 820 and the PMOS transistor 415 of the pull-up circuit 412.
The gate of the first choke PMOS transistor 824 is coupled to the input B, and the gate of the second choke PMOS transistor 826 is coupled to the input A.

The second choke circuit 832 includes a third choke PMOS transistor 834 and a fourth choke PMOS transistor 836. The third choke PMOS transistor 834 is coupled between the first select PMOS transistor 818 and the PMOS transistor 417 of the pull-up circuit 412, and the fourth choke PMOS transistor 836 is coupled between the second select PMOS transistor 820 and the PMOS transistor 417 of the pull-up circuit 412. The gate of the third choke PMOS transistor 834 is coupled to the input B̄, and the gate of the fourth choke PMOS transistor 836 is coupled to the input Ā.

As discussed above, the multiplexing choke circuit 815 allows the difference between Vddout and Vddin to be larger than for the level shift multiplexer 410 alone. This can be explained by the following example.

When the selection signal Sel A is logic 1 and the selection signal Sel B is logic 0, the first select PMOS transistor 818 is turned off and the second select PMOS transistor 820 is turned on. As a result, the second and fourth choke PMOS transistors 826 and 836 are coupled to the supply rail Vddout of the second power domain, while the first and third choke PMOS transistors 824 and 834 are decoupled from the supply rail Vddout. In other words, when the first differential input (i.e., inputs A and Ā) is selected, the choke PMOS transistors 826 and 836 corresponding to the first differential input are selected.

In this example, when signal A transitions from 0 to 1, the first choke circuit 822 helps the first drive NMOS transistor 426 pull down node 460 by throttling the current from Vddout to the PMOS transistor 415 of the pull-up circuit 412. This is because a logic 1 of signal A turns the second choke PMOS transistor 826 off (or partially off), thereby reducing (throttling) the current flowing from Vddout through the second choke PMOS transistor 826 to the PMOS transistor 415. As a result, the ability of the PMOS transistor 415 to oppose (resist) the attempt of the first drive NMOS transistor 426 to pull down node 460 is diminished. This allows the difference between Vddout and Vddin to be larger than for the multiplexer 410 in FIG. 4A. Since the first select PMOS transistor 818 is turned off, no current flows from the supply rail Vddout to the PMOS transistor 415 through the first choke PMOS transistor 824.

Likewise, when signal A transitions from 1 to 0, the fourth choke PMOS transistor 836 helps the third drive NMOS transistor 436 pull down node 465 by throttling the current from Vddout to the PMOS transistor 417 of the pull-up circuit 412. This is because the inverted signal Ā is logic 1, and the logic 1 of signal Ā turns the fourth choke PMOS transistor 836 off (or partially off), thereby reducing (throttling) the current flowing from Vddout through the fourth choke PMOS transistor 836 to the PMOS transistor 417. Since the first select PMOS transistor 818 is turned off, no current flows from the supply rail Vddout to the PMOS transistor 417 through the third choke PMOS transistor 834.

When the selection signal Sel A is logic 0 and the selection signal Sel B is logic 1, the first select PMOS transistor 818 is turned on and the second select PMOS transistor 820 is turned off. As a result, the first and third choke PMOS transistors 824 and 834 are coupled to the supply rail Vddout of the second power domain, while the second and fourth choke PMOS transistors 826 and 836 are decoupled from the supply rail Vddout. In other words, when the second differential input (i.e., inputs B and B̄) is selected, the choke PMOS transistors 824 and 834 corresponding to the second differential input are selected.

In this example, when signal B transitions from 0 to 1, the first choke circuit 822 helps the second drive NMOS transistor 428 pull down node 460 by throttling the current from Vddout to the PMOS transistor 415 of the pull-up circuit 412. This is because a logic 1 of signal B turns the first choke PMOS transistor 824 off (or partially off), thereby reducing (throttling) the current flowing from Vddout through the first choke PMOS transistor 824 to the PMOS transistor 415. Since the second select PMOS transistor 820 is turned off, no current flows from the supply rail Vddout to the PMOS transistor 415 through the second choke PMOS transistor 826.

Likewise, when signal B transitions from 1 to 0, the third choke PMOS transistor 834 helps the fourth drive NMOS transistor 438 pull down node 465 by throttling the current from Vddout to the PMOS transistor 417 of the pull-up circuit 412. This is because the inverted signal B̄ is logic 1, and the logic 1 of signal B̄ turns the third choke PMOS transistor 834 off (or partially off), thereby reducing (throttling) the current flowing from Vddout through the third choke PMOS transistor 834 to the PMOS transistor 417. Since the second select PMOS transistor 820 is turned off, no current flows from the supply rail Vddout to the PMOS transistor 417 through the fourth choke PMOS transistor 836.

Thus, the multiplexing choke circuit 815 allows the difference between Vddout and Vddin to be larger than for the level shift multiplexer 410 alone. The difference between Vddout and Vddin may be 100 mV or more, 200 mV or more, or 300 mV or more.

The level shift multiplexer 810 may further include a second clamp transistor 840 (e.g., a PMOS transistor), as shown in FIG. 8. The second clamp transistor 840 may have its source coupled to the supply rail Vddout of the second power domain, its drain coupled to the source of the PMOS transistor 415 at node 842, and its gate driven by the inverse of the disable signal. The inverted disable signal may be generated, for example, using an inverter powered by Vddout. In this embodiment, when the disable signal disable A/B is logic 1, the first clamp transistor 440 is turned on and pulls node 465 to ground, while the second clamp transistor 840 is turned on and pulls node 842 (and therefore the source of PMOS transistor 415) to Vddout. PMOS transistor 415 is turned on because node 465 is pulled to ground by the first clamp transistor 440. As a result, PMOS transistor 415 pulls node 460 up to approximately the voltage Vddout at node 842. The first inverter 450 then outputs a logic 0 at the first output OUT, and the second inverter 455 outputs a logic 1 at the second output OUT̄. When the disable signal disable A/B is logic 0, the two clamp transistors 440 and 840 are turned off, and the multiplexer 810 operates normally as discussed above.
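The choke gating can be summarized logically: for the selected input, the choke transistor in series with the pull-up PMOS of whichever node is being pulled down is turned off (or partially off), weakening the contention. A rough, purely logical sketch of which pull-up path is throttled (the real circuit throttles analog current rather than switching it off cleanly; the function name is ours):

```python
def throttled_pullup(sel_a, sel_b, a=0, b=0):
    """Return which pull-up path of FIG. 8 carries reduced current for
    the given select/input states, per the choke gating described above."""
    if sel_a and not sel_b:      # select PMOS 818 off, 820 on
        # A = 1: choke 826 off, starving PMOS 415 (node 460 pull-up).
        # A = 0: choke 836 off, starving PMOS 417 (node 465 pull-up).
        return "PMOS 415 / node 460" if a else "PMOS 417 / node 465"
    if sel_b and not sel_a:      # select PMOS 820 off, 818 on
        # B = 1: choke 824 off -> PMOS 415; B = 0: choke 834 off -> PMOS 417.
        return "PMOS 415 / node 460" if b else "PMOS 417 / node 465"
    raise ValueError("exactly one select signal must be asserted")

# Input A selected and high: the node-460 pull-up is starved, helping
# drive NMOS 426 pull node 460 low even when Vddout >> Vddin.
assert throttled_pullup(sel_a=1, sel_b=0, a=1) == "PMOS 415 / node 460"
```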
FIG. 9 is a flowchart illustrating a method 900 for level shift multiplexing according to an embodiment of the present disclosure. The method 900 may be performed by the level shift multiplexer 410, 470, or 810.

At step 910, one of a plurality of inputs is selected based on one or more selection signals. For example, each of the plurality of inputs may be a differential input that includes a pair of complementary inputs (e.g., inputs A and Ā of the first differential input). In one example, the one or more selection signals may include a respective selection signal for each of the plurality of inputs (e.g., the selection signal Sel A for the first differential input). In this example, one of the plurality of inputs may be selected when the corresponding selection signal is a logic 1; the selection signals of the inputs that are not selected may be logic 0.

At step 920, one of first and second nodes is pulled down based on a state of the selected one of the plurality of inputs. For example, each of the plurality of inputs may be a differential input, and the selected one of the plurality of inputs may be driven by a respective differential signal that includes complementary signals (e.g., signals A and Ā). In this example, the first node may be pulled down if the differential signal is in one state (e.g., signal A is logic 1 and signal Ā is logic 0), and the second node may be pulled down if the differential signal is in another state (e.g., signal A is logic 0 and signal Ā is logic 1).

At step 930, if the second node is pulled down, the first node is pulled up. For example, the first node (e.g., node 460) may be pulled up by a PMOS transistor (e.g., PMOS transistor 415) whose gate is coupled to the second node (e.g., node 465).

At step 940, if the first node is pulled down, the second node is pulled up. For example, the second node (e.g., node 465) may be pulled up by a PMOS transistor (e.g., PMOS transistor 417) whose gate is coupled to the first node (e.g., node 460).

Although various embodiments of the present disclosure are discussed using examples of differential input signals, it is to be appreciated that the present disclosure is not limited to differential signals. For example, a single-ended signal may be input to the multiplexer 410 in FIG. 4A. In this example, the complement of the single-ended signal may be generated by an inverter in the first power domain, and the resulting complement signal may be input to the multiplexer 410. Further, it is to be appreciated that each of the multiplexers 410, 510, 610 and 650 in FIGS. 5-7 may further include a multiplexing choke circuit to extend the range of supply voltages over which the multiplexer may operate.

The read decoder 530 or 630 may be implemented with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device designed to perform the functions described herein, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. The processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure.
Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples described herein, but rather should be given its broadest scope consistent with the principles and novel features disclosed herein.
A cryptographic device includes: a data input; a data output; a cipher circuit configured to perform a cipher algorithm on cipher-algorithm input data to produce cipher-algorithm output data; and a network coupled to the data input, the data output, and the cipher circuit, the network comprising a plurality of switches and a plurality of logical signal combiners that are configured to provide the cipher-algorithm input data to the cipher circuit and to provide device output data to the data output using the cipher-algorithm output data and that, in combination with the cipher circuit, are configured to implement a plurality of different cryptographic algorithms that each include the cipher algorithm that the cipher circuit is configured to perform.
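The single-cipher-core reuse described in this abstract can be illustrated behaviorally. The sketch below is ours and purely illustrative: it assumes the cipher circuit is a block cipher and that two of the implemented algorithms are CBC-style and CFB-style encryption, with a mode switch standing in for the network of switches and XOR combiners; none of these specific mode choices are stated in the claims.

```python
def run_mode(encrypt_block, blocks, iv, mode):
    """One cipher core (encrypt_block) shared by two algorithms.
    The `mode` argument stands in for the switch network that decides
    what gets XOR-combined with what around the core."""
    state, out = iv, []
    for p in blocks:
        if mode == "cbc":
            c = encrypt_block(p ^ state)    # combiner before the core
        elif mode == "cfb":
            c = encrypt_block(state) ^ p    # combiner after the core
        else:
            raise ValueError(mode)
        out.append(c)
        state = c                           # feedback to the next block
    return out

# Toy 32-bit "cipher" for demonstration only -- not a real cipher.
toy = lambda x: (x * 2654435761) & 0xFFFFFFFF
print(run_mode(toy, [1, 2, 3], iv=0xDEADBEEF, mode="cbc"))
```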
CLAIMS:
1. A cryptographic device comprising:
a data input;
a data output;
a cipher circuit configured to perform a cipher algorithm on cipher-algorithm input data to produce cipher-algorithm output data; and
a network coupled to the data input, the data output, and the cipher circuit, the network comprising a plurality of switches and a plurality of logical signal combiners that are configured to provide the cipher-algorithm input data to the cipher circuit and to provide device output data to the data output using the cipher-algorithm output data and that, in combination with the cipher circuit, are configured to implement a plurality of different cryptographic algorithms that each include the cipher algorithm that the cipher circuit is configured to perform.
2. The device of claim 1, wherein the cipher circuit is a single instance of the cipher circuit.
3. The device of claim 1, wherein the network includes a controller configured to be programmed to actuate the plurality of switches differently to implement the plurality of different cryptographic algorithms.
4. The device of claim 3, wherein the controller is configured to be programmed to actuate the plurality of switches differently to cause different logical combinations of signals to provide different cipher-algorithm input data from the data input to the cipher circuit and/or to cause different logical combinations of the cipher-algorithm output data to provide the device output data to the data output to implement the plurality of different cryptographic algorithms.
5. The device of claim 3, wherein the controller is configured to be programmed to actuate the plurality of switches differently to effect values of respective variables in equations representing the plurality of different cryptographic algorithms to implement the plurality of different cryptographic algorithms.
6. The device of claim 5, wherein the controller is configured to be programmed to actuate the plurality of switches differently to effect values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation to implement the plurality of different cryptographic algorithms.
7. The device of claim 3, wherein the controller implements a state machine.
8. The device of claim 3, wherein the controller comprises a memory and a processor communicatively coupled to the memory, the memory comprising processor-readable instructions configured to cause the processor to actuate the plurality of switches selectively.
9. The device of claim 1, further comprising an authentication circuit coupled to the network and configured to determine an authentication tag, the network being configured to provide a constant logical zero signal to the authentication circuit during a time when the cryptographic device is active but the authentication circuit is not determining the authentication tag.
10. The device of claim 1, further comprising an authentication circuit coupled to the network and configured to determine an authentication tag in combination with the network, the authentication circuit being separate from the cipher circuit, wherein the network is configured such that at least a same one of the plurality of switches and/or at least a same one of the plurality of logical signal combiners is used to perform at least one of the plurality of different cryptographic algorithms and to determine the authentication tag.
11.
The device of claim 1, wherein the network and the cipher circuit are configured to implement the plurality of different cryptographic algorithms without an unregulated loop.
12. A cryptographic device comprising:
a data input configured to receive cryptographic algorithm input data;
a data output; and
means, coupled to the data input and the data output, for implementing a plurality of different cryptographic algorithms, the means for implementing comprising:
cipher means for performing a cipher algorithm on cipher-algorithm input data to produce cipher-algorithm output data; and
network means, coupled to the cipher means, for producing, based upon the cryptographic algorithm being implemented, cipher-algorithm input data from the cryptographic algorithm input data, for providing the cipher-algorithm input data to the cipher means, for producing, based upon the cryptographic algorithm being implemented, cryptographic algorithm output data from the cipher-algorithm output data, and for providing the cryptographic algorithm output data to the data output.
13. The device of claim 12, wherein the network means are for selectively logically combining data based upon the cryptographic algorithm being implemented.
14. The device of claim 13, wherein the network means are configured to actuate a plurality of switches differently to implement the plurality of different cryptographic algorithms.
15. The device of claim 13, wherein the network means are configured to provide different combinations of data inputs to one or more logical signal combiners to implement the plurality of different cryptographic algorithms.
16. The device of claim 15, wherein the network means are configured to provide the different combinations of data inputs to effect values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation to implement the plurality of different cryptographic algorithms.
17. The device of claim 12, wherein the means for implementing further comprise authentication means, coupled to the network means, for determining an authentication tag associated with the cryptographic algorithm output data, the network means being further for providing a constant logical zero signal to the authentication means during a time when the cryptographic device is active but the authentication means are not determining the authentication tag.
18. The device of claim 12, wherein the means for implementing further comprise authentication means, coupled to the network means, for determining an authentication tag associated with the cryptographic algorithm output data, the network means and the authentication means sharing at least one switch and/or at least one logical signal combiner.
19.
A cryptographic method comprising:
receiving cryptographic algorithm input data at a cryptographic device;
directing the cryptographic algorithm input data in the cryptographic device through a network of switches and logical signal combiners to produce cipher-algorithm input data;
performing a cipher algorithm on the cipher-algorithm input data in a cipher circuit to produce cipher-algorithm output data; and
directing the cipher-algorithm output data in the cryptographic device through the network of switches and logical signal combiners to produce cryptographic algorithm output data;
wherein the cryptographic algorithm input data and the cipher-algorithm output data are directed through the network of switches and logical signal combiners based upon a selected cryptographic algorithm from a plurality of cryptographic algorithms implementable by different paths through the network of switches and logical signal combiners, with each path including the cipher circuit.
20. The method of claim 19, wherein directing the cryptographic algorithm input data, performing the cipher algorithm, and directing the cipher-algorithm output data implement values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation applicable to the plurality of different cryptographic algorithms to implement the selected cryptographic algorithm.
21. The method of claim 19, further comprising determining an authentication tag, associated with the cryptographic algorithm output data, using an authentication circuit to perform a one-way function.
22. The method of claim 21, further comprising providing a constant logical zero signal to the authentication circuit while the authentication circuit is idle.
23. The method of claim 21, wherein the authentication tag is determined using at least one logical signal combiner, in the network of switches and logical signal combiners, through which data pass in implementing the selected cryptographic algorithm.
24. The method of claim 19, wherein the cryptographic algorithm input data are first cryptographic algorithm input data, the cipher-algorithm input data are first cipher-algorithm input data, and the cryptographic algorithm output data are first cryptographic algorithm output data corresponding to a first cryptographic algorithm of the plurality of cryptographic algorithms, the method further comprising:
receiving second cryptographic algorithm input data at the cryptographic device;
directing the second cryptographic algorithm input data in the cryptographic device through the network of switches and logical signal combiners to produce second cipher-algorithm input data;
performing the cipher algorithm on the second cipher-algorithm input data in the cipher circuit to produce second cipher-algorithm output data; and
directing the second cipher-algorithm output data in the cryptographic device through the network of switches and logical signal combiners to produce second cryptographic algorithm output data corresponding to a second cryptographic algorithm of the plurality of cryptographic algorithms, the second cryptographic algorithm being different from the first cryptographic algorithm.
25.
A non-transitory, processor-readable storage medium comprising processor-readable instructions configured to cause a processor to:
receive cryptographic algorithm input data;
receive an indication of a selected cryptographic algorithm from a plurality of different cryptographic algorithms;
produce, based upon the selected cryptographic algorithm, cipher-algorithm input data from the cryptographic algorithm input data;
perform a cipher algorithm on the cipher-algorithm input data to produce cipher-algorithm output data; and
produce, based upon the cryptographic algorithm being implemented, cryptographic algorithm output data from the cipher-algorithm output data.
26. The storage medium of claim 25, wherein the instructions configured to produce the cipher-algorithm input data and/or the instructions configured to cause the processor to produce the cryptographic algorithm output data are configured to cause the processor to selectively logically combine data based upon the selected cryptographic algorithm.
27. The storage medium of claim 26, wherein the instructions configured to cause the processor to selectively logically combine data are configured to cause the processor to provide a particular combination of data, based upon the selected cryptographic algorithm, to be logically combined.
28. The storage medium of claim 27, wherein the instructions configured to cause the processor to provide the particular combination of data are configured to cause the processor to provide the particular combination of data to effect values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation to implement the selected cryptographic algorithm.
29. The storage medium of claim 25, further comprising instructions configured to cause the processor to determine an authentication tag associated with the cryptographic algorithm output data.
CRYPTOGRAPHIC MODE PROGRAMMABILITY

BACKGROUND

[0001] There are many different types of electronic communication today. Standards have been developed for different types of communication, including different types of cryptography (encryption and decryption) for data being communicated. Often a single communication device is capable of several different types of communication. For example, a smart phone may employ one type of cryptography for voice communications and another type of cryptography for Internet data traffic. To accommodate different types of cryptography, physically separate, dedicated circuits for each type of cryptography are provided in a single device, and the appropriate circuit is selected based on the type of communication involved.

SUMMARY

[0002] An example of a cryptographic device includes: a data input; a data output; a cipher circuit configured to perform a cipher algorithm on cipher-algorithm input data to produce cipher-algorithm output data; and a network coupled to the data input, the data output, and the cipher circuit, the network comprising a plurality of switches and a plurality of logical signal combiners that are configured to provide the cipher-algorithm input data to the cipher circuit and to provide device output data to the data output using the cipher-algorithm output data and that, in combination with the cipher circuit, are configured to implement a plurality of different cryptographic algorithms that each include the cipher algorithm that the cipher circuit is configured to perform.

[0003] Implementations of such a device may include one or more of the following features. The cipher circuit is a single instance of the cipher circuit. The network includes a controller configured to be programmed to actuate the plurality of switches differently to implement the plurality of different cryptographic algorithms. The controller is configured to be programmed to actuate the plurality of switches differently to cause different logical combinations of signals to provide different cipher-algorithm input data from the data input to the cipher circuit and/or to cause different logical combinations of the cipher-algorithm output data to provide the device output data to the data output to implement the plurality of different cryptographic algorithms. The controller is configured to be programmed to actuate the plurality of switches differently to effect values of respective variables in equations representing the plurality of different cryptographic algorithms to implement the plurality of different cryptographic algorithms. The controller is configured to be programmed to actuate the plurality of switches differently to effect values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation to implement the plurality of different cryptographic algorithms. The controller implements a state machine. The controller comprises a memory and a processor communicatively coupled to the memory, the memory comprising processor-readable instructions configured to cause the processor to actuate the plurality of switches selectively.

[0004] Also or alternatively, implementations of such a device may include one or more of the following features.
The device further includes an authentication circuit coupled to the network and configured to determine an authentication tag, the network being configured to provide a constant logical zero signal to the authentication circuit during a time when the cryptographic device is active but the authentication circuit is not determining the authentication tag. The device further includes an authentication circuit coupled to the network and configured to determine an authentication tag in combination with the network, the authentication circuit being separate from the cipher circuit, where the network is configured such that at least a same one of the plurality of switches and/or at least a same one of the plurality of logical signal combiners is used to perform at least one of the plurality of different cryptographic algorithms and to determine the authentication tag. The network and the cipher circuit are configured to implement the plurality of different cryptographic algorithms without an unregulated loop.

[0005] Another example of a cryptographic device includes: a data input configured to receive cryptographic algorithm input data; a data output; and means, coupled to the data input and the data output, for implementing a plurality of different cryptographic algorithms, the means for implementing comprising: cipher means for performing a cipher algorithm on cipher-algorithm input data to produce cipher-algorithm output data; and network means, coupled to the cipher means, for producing, based upon the cryptographic algorithm being implemented, cipher-algorithm input data from the cryptographic algorithm input data, for providing the cipher-algorithm input data to the cipher means, for producing, based upon the cryptographic algorithm being implemented, cryptographic algorithm output data from the cipher-algorithm output data, and for providing the cryptographic algorithm output data to the data output.

[0006] Implementations of such a device may include one or more of the following features. The network means are for selectively logically combining data based upon the cryptographic algorithm being implemented. The network means are configured to actuate a plurality of switches differently to implement the plurality of different cryptographic algorithms. The network means are configured to provide different combinations of data inputs to one or more logical signal combiners to implement the plurality of different cryptographic algorithms. The network means are configured to provide the different combinations of data inputs to effect values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation to implement the plurality of different cryptographic algorithms.

[0007] Also or alternatively, implementations of such a device may include one or more of the following features. The means for implementing further comprise authentication means, coupled to the network means, for determining an authentication tag associated with the cryptographic algorithm output data, the network means being further for providing a constant logical zero signal to the authentication means during a time when the cryptographic device is active but the authentication means are not determining the authentication tag.
The means for implementing further comprise authentication means, coupled to the network means, for determining an authentication tag associated with the cryptographic algorithm output data, the network means and the authentication means sharing at least one switch and/or at least one logical signal combiner.

[0008] An example of a cryptographic method includes: receiving cryptographic algorithm input data at a cryptographic device; directing the cryptographic algorithm input data in the cryptographic device through a network of switches and logical signal combiners to produce cipher-algorithm input data; performing a cipher algorithm on the cipher-algorithm input data in a cipher circuit to produce cipher-algorithm output data; and directing the cipher-algorithm output data in the cryptographic device through the network of switches and logical signal combiners to produce cryptographic algorithm output data; where the cryptographic algorithm input data and the cipher-algorithm output data are directed through the network of switches and logical signal combiners based upon a selected cryptographic algorithm from a plurality of cryptographic algorithms implementable by different paths through the network of switches and logical signal combiners, with each path including the cipher circuit.

[0009] Implementations of such a method may include one or more of the following features. Directing the cryptographic algorithm input data, performing the cipher algorithm, and directing the cipher-algorithm output data implement values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation applicable to the plurality of different cryptographic algorithms to implement the selected cryptographic algorithm. The method further includes determining an authentication tag, associated with the cryptographic algorithm output data, using an authentication circuit to perform a one-way function. The method further includes providing a constant logical zero signal to the authentication circuit while the authentication circuit is idle. The authentication tag is determined using at least one logical signal combiner, in the network of switches and logical signal combiners, through which data pass in implementing the selected cryptographic algorithm.

[0010] Also or alternatively, implementations of such a method may include one or more of the following features.
The cryptographic algorithm input data are first cryptographic algorithm input data, the cipher-algorithm input data are first cipher-algorithm input data, and the cryptographic algorithm output data are first cryptographic algorithm output data corresponding to a first cryptographic algorithm of the plurality of cryptographic algorithms, the method further comprising: receiving second cryptographic algorithm input data at the cryptographic device; directing the second cryptographic algorithm input data in the cryptographic device through the network of switches and logical signal combiners to produce second cipher-algorithm input data; performing the cipher algorithm on the second cipher-algorithm input data in the cipher circuit to produce second cipher-algorithm output data; and directing the second cipher-algorithm output data in the cryptographic device through the network of switches and logical signal combiners to produce second cryptographic algorithm output data corresponding to a second cryptographic algorithm of the plurality of cryptographic algorithms, the second cryptographic algorithm being different from the first cryptographic algorithm.

[0011] An example of a non-transitory, processor-readable storage medium includes processor-readable instructions configured to cause a processor to: receive cryptographic algorithm input data; receive an indication of a selected cryptographic algorithm from a plurality of different cryptographic algorithms; produce, based upon the selected cryptographic algorithm, cipher-algorithm input data from the cryptographic algorithm input data; perform a cipher algorithm on the cipher-algorithm input data to produce cipher-algorithm output data; and produce, based upon the cryptographic algorithm being implemented, cryptographic algorithm output data from the cipher-algorithm output data.

[0012] Implementations of such a storage medium may include one or more of the following features. The instructions configured to produce the cipher-algorithm input data and/or the instructions configured to cause the processor to produce the cryptographic algorithm output data are configured to cause the processor to selectively logically combine data based upon the selected cryptographic algorithm. The instructions configured to cause the processor to selectively logically combine data are configured to cause the processor to provide a particular combination of data, based upon the selected cryptographic algorithm, to be logically combined. The instructions configured to cause the processor to provide the particular combination of data are configured to cause the processor to provide the particular combination of data to effect values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation to implement the selected cryptographic algorithm. The storage medium further includes instructions configured to cause the processor to determine an authentication tag associated with the cryptographic algorithm output data.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] FIG. 1 is a simplified diagram of a wireless communication system.
[0014] FIG. 2 is a block diagram of components of a device shown in FIG. 1.
[0015] FIG. 3 is a state diagram for a state machine to implement multiple cryptographic modes.
[0016] FIG. 4 is a simplified circuit diagram of a cryptographic engine shown in FIG. 2.
[0017] FIG. 5 is a circuit diagram of the cryptographic engine shown in FIG.
2 showing signal flow for initial-stage CBC mode encryption.
[0018] FIG. 6 is a circuit diagram of the cryptographic engine shown in FIG. 2 showing signal flow for subsequent-stage CBC mode encryption.
[0019] FIG. 7 is a circuit diagram of the cryptographic engine shown in FIG. 2 showing signal flow for initial-stage CBC mode decryption.
[0020] FIG. 8 is a circuit diagram of the cryptographic engine shown in FIG. 2 showing signal flow for subsequent-stage CBC mode decryption.
[0021] FIG. 9 is a circuit diagram of the cryptographic engine shown in FIG. 2 showing signal flow for initial-stage CMAC authentication tag generation.
[0022] FIG. 10 is a circuit diagram of the cryptographic engine shown in FIG. 2 showing signal flow for initial-data-block CMAC authentication tag generation.
[0023] FIG. 11 is a circuit diagram of the cryptographic engine shown in FIG. 2 showing signal flow for intermediate-data-block CMAC authentication tag generation.
[0024] FIG. 12 is a circuit diagram of the cryptographic engine shown in FIG. 2 showing signal flow for final-data CMAC authentication tag generation.
[0025] FIG. 13 is a block flow diagram of a cryptographic method.

DETAILED DESCRIPTION

[0026] Techniques are discussed herein for implementing multiple cryptographic modes using shared circuitry. For example, a single instance of a cipher circuit and/or a shared signal-modifying network can be used to implement multiple cryptographic modes. Input data may be selectively manipulated before being provided, as cipher-algorithm input data, to a cipher circuit. While the cipher circuit performs the same cipher algorithm, different cipher-algorithm input data are produced by the selective manipulation, such that different output data are produced for the same input data depending upon the cryptographic mode that is programmed to be performed. These examples, however, are not exhaustive.

[0027] Items and/or techniques described herein may provide one or more of the following capabilities, as well as other capabilities not mentioned. Multiple cryptographic modes may be implemented in a single interconnection network. Space, size, and/or cost may be reduced for providing multiple encryption mode capability. Future cryptographic modes may be accommodated without requiring a hardware change to a cryptographic engine. Other capabilities may be provided, and not every implementation according to the disclosure must provide any, let alone all, of the capabilities discussed. Further, it may be possible for an effect noted above to be achieved by means other than that noted, and a noted item/technique may not necessarily yield the noted effect.

[0028] Referring to FIG. 1, a wireless communication system 10 includes various devices 12, here a smart phone, a tablet computer, and a laptop computer, all in communication with a communications network 14. The devices 12 may each be configured to communicate with the network 14 directly and/or indirectly, wirelessly and/or through wired connections, e.g., through an access point 16 or a base station 18 (e.g., a cellular base station). The devices 12 may communicate through different mechanisms, e.g., Wi-Fi, cellular, etc., and may communicate different types of communications, e.g., voice, data, Internet data, etc. The devices 12, in order to provide different types of communication, may implement different cryptography types for the different communication types. The devices 12 shown in FIG.
1 are examples only and numerous other types of devices may be used including, but not limited to, Internet of Things (IoT) devices such as proximity sensors, camera sensors, remote locks, garage door openers, irrigation systems, weather sensors, etc.

[0029] Referring also to FIG. 2, an example of the devices 12 shown in FIG. 1 includes a processor 30, a transceiver 32, a memory 34 including software (SW) 36, and a System-on-a-Chip (SoC) 40. The processor 30 may include multiple physical entities, and these entities may be physically distributed throughout the device 12. The transceiver 32 is communicatively coupled to the processor 30, the memory 34, and the SoC 40 and is configured to bi-directionally communicate with the network 14. The transceiver 32 may be configured to communicate with the network 14 through one or more wired connections and/or wirelessly, either directly (e.g., with the transceiver 32 including a modem) or indirectly (e.g., through the access point 16, through the base station 18, etc.). The processor 30 is preferably an intelligent hardware device, for example a central processing unit (CPU) such as those made or designed by QUALCOMM®, a microcontroller, an application specific integrated circuit (ASIC), etc. The memory 34 is communicatively coupled to the processor 30, and both the memory 34 and the processor 30 are communicatively coupled to the SoC 40. The SoC 40 includes a hardware cryptographic processor 42 that is communicatively coupled to the memory 34 and the processor 30. The cryptographic processor 42 includes a cryptographic engine 44 that includes a decryption engine 46, an encryption engine 48, and a controller 50. The software 36 may include processor-readable instructions configured to cause the processor 30 to perform functions discussed herein, e.g., programming the controller 50 to implement different cryptographic algorithms. For example, the software 36 may include processor-readable instructions configured to cause the processor 30 to process signals according to the discussion herein, e.g., regarding FIGS. 5-13, as well as to implement other cryptographic algorithms in accordance with the teachings herein.

[0030] The cryptographic engine 44, in particular the decryption engine 46 and the encryption engine 48, under control of the controller 50, is configured to implement multiple cryptographic algorithms (called modes or cryptographic modes) using shared hardware, here a shared cryptography circuit 52. A mode as used herein is an algorithm for the cryptographic transformation of data that features a symmetric cipher algorithm. The controller 50 is configured to cause various data to be provided to the shared cryptography circuit, and to cause selective portions of the shared cryptography circuit to be used, in order to implement a selected cryptographic algorithm out of a set of different cryptographic algorithms that the cryptographic engine 44 is configured to implement. The shared cryptography circuit 52 is shown separately from the decryption engine 46 and the encryption engine 48, but is part of both the decryption engine 46 and the encryption engine 48 and thus shared by the decryption engine 46 and the encryption engine 48. The shared cryptography circuit 52 includes a cipher circuit 54 and a digest circuit 56.

[0031] The cipher circuit 54 is preferably, but not necessarily, a single instance of a circuit configured to perform a symmetric cipher algorithm.
The cipher circuit 54 may have portions that are physically separate from each other, but the cipher circuit 54 is one collection of circuitry configured to perform a cipher algorithm. The device 12 could also have other circuitry to perform other functions, and may even have other cipher circuitry, but the multiple cryptographic algorithms can be implemented by the cipher circuit 54 in combination with other non-cipher circuitry without having other instances of the cipher circuitry. For example, the multiple cryptographic algorithms can be implemented without multiple separate circuits for implementing different modes, with the different circuits each having a cipher circuit of the same configuration (i.e., configured to implement the same cipher algorithm). The cipher circuit 54 is preferably configured to perform a cipher algorithm on input data to produce cipher-algorithm output data. While examples are discussed herein for operating on blocks of data, symmetric ciphers may be applied to blocks of data or streams of data, and the discussion herein, including the various components discussed and the claims, includes both of these possibilities unless a possibility is explicitly excluded. The controller 50 is configured to control portions of the decryption engine 46 and the encryption engine 48 to use desired input data to produce cipher-algorithm input data, possibly by logically combining the input data, and to provide the cipher-algorithm input data to the cipher circuit 54. The controller 50 is further configured to control portions of the decryption engine 46 and the encryption engine 48 to use cipher-algorithm output data from the cipher circuit 54 to produce device output data, possibly by logically combining the cipher-algorithm output data with other data. The controller 50 is configured to selectively logically combine data based upon the cryptographic algorithm being implemented.

[0032] The digest circuit 56 is configured to produce an authentication tag associated with encrypted data produced by the encryption engine 48. The digest circuit 56 is configured to perform a digest algorithm, which preferably implements a one-way cryptographic function, on data input to the digest circuit 56. The one-way cryptographic function is irreversible, at least from a practical standpoint. The controller 50 is configured to control portions of the encryption engine 48 to use desired input data to produce digest input data, possibly by logically combining the input data based on the cryptographic algorithm being implemented, and to provide the digest input data to the digest circuit 56. The controller 50 is further configured to control portions of the encryption engine 48 to use digest output data from the digest circuit 56 as an authentication tag for corresponding cipher text.

[0033] The following table illustrates expressions for implementing several standard cryptographic algorithms.

Table 1
Mode: Encryption | Decryption
ECB: Ci = Ek(Pi) | Pi = Dk(Ci)
CBC: C0 = Ek(P0 ⊕ IV); Ci = Ek(Pi ⊕ Ci-1) | P0 = Dk(C0) ⊕ IV; Pi = Dk(Ci) ⊕ Ci-1
PCBC: C0 = Ek(P0 ⊕ IV); Ci = Ek(Pi ⊕ Pi-1 ⊕ Ci-1) | P0 = Dk(C0) ⊕ IV; Pi = Dk(Ci) ⊕ Pi-1 ⊕ Ci-1
CFB: C0 = Ek(IV) ⊕ P0; Ci = Ek(Ci-1) ⊕ Pi | P0 = Ek(IV) ⊕ C0; Pi = Ek(Ci-1) ⊕ Ci
OFB: O0 = Ek(nonce || IV); Oi = Ek(Oi-1); Ci = Pi ⊕ Oi | Pi = Ci ⊕ Oi
CTR: Ci = Ek(nonce || IVi) ⊕ Pi | Pi = Ek(nonce || IVi) ⊕ Ci

Table 1 shows expressions for processing an initial (i = 0) block and subsequent (i > 0) blocks of data of a message according to the cryptographic algorithms ECB (Electronic Codebook), CBC (Cipher Block Chaining), PCBC (Propagating Cipher Block Chaining), CFB (Cipher Feedback), OFB (Output Feedback), and CTR (Counter). Still other modes could be used, such as XCBC, EAX, CCM, XTS, GCM, F8, F9, etc.
In Table 1, IV is the initialization vector, which may be a random number, and the symbol ⊕ indicates a logical XOR (exclusive-OR) operation. The expressions shown are for symmetric cryptography modes where a plaintext message P is decomposed into blocks of a uniform block size such that

P = P0, P1, P2, ..., Pn-1    (1)

For 0 ≤ i < n-1, the length of the plaintext block Pi is the block size. If the length of the last plaintext block, Pn-1, is less than the block size, then appropriate padding is added to reach the block size. Further, in Table 1, Ek( ) and Dk( ) represent encryption and decryption functions, respectively, of a symmetric cipher with a shared secret key k. Lastly, the cipher text indicated in Table 1 and resulting from encryption of the plaintext P may be expressed as

C = C0, C1, ..., Cn-1    (2)

The block size is the amount of data that the decryption engine 46 is configured to process to decrypt (or that the encryption engine 48 is configured to encrypt) at any one time. This amount of data may be of various sizes (e.g., 128 bits, 512 bits, etc.).

[0034] It has been discovered that the expressions in Table 1 may be condensed to fewer expressions that include variables (that may be set to various values to achieve a particular one of the expressions shown in Table 1). In particular, it has been found that the expressions in Table 1 may be reduced to the expressions shown below in Table 2. Table 2 gives one condensed encryption expression and one condensed decryption expression for the ECB, CBC, and PCBC modes, and one of each for the CFB, OFB, and CTR modes, written in terms of variables X, Y, Z, S, and T. Each of the variables X, Y, Z, S, and T can be given an appropriate nonzero value, or a value of zero, in order to make the corresponding expression into one of the expressions in Table 1. A subscript of 0 indicates an initialization value of the variable, i.e., for an initial block of a message processed for the respective cryptographic algorithm, and a subscript of i indicates a steady-state value for the variable, i.e., for any block, after the initial block, of a message for the respective cryptographic algorithm. Table 3 shows the values of the variables in the Table 2 expressions needed to implement the expressions in Table 1. In Table 3, a dash (-) indicates that a variable is not used, and values of X0 and S0 of nonce || IV indicate that the argument for the Ek and Dk functions, respectively, is nonce || IV.

[0035] The controller 50 is configured to assign the values to the variables according to Table 3 to implement the desired cryptographic algorithm. The controller 50 may implement a finite state machine, or a processor and software with instructions configured to be executed by the processor to perform the appropriate functions. Referring to FIG. 3, functional states of the controller 50 as a state machine include an idle state 70, an ECB encryption state 72, a CBC encryption state 74, and a PCBC encryption state 76. The states 72, 74, 76 are steady states, i.e., after initialization of the corresponding state. In FIG. 3, only encryption states are shown, and only states for the ECB, CBC, and PCBC modes are shown, for simplicity. The controller 50 is configured to set the values of the variables as shown in FIG. 3 and Table 3 to implement the cryptographic algorithms for encryption using the ECB, CBC, and PCBC modes. The controller 50 is further configured to set values of the variables to implement cryptographic algorithms for decryption using the ECB, CBC, and PCBC modes, and to implement the cryptographic algorithms for encryption and decryption using the CFB, OFB, and CTR modes.
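To make the variable-assignment idea concrete, the following minimal Python sketch, written for this discussion rather than taken from the disclosure, realizes three of the Table 1 encryption modes from one shared expression, Ci = Ek(Pi ⊕ Xi), by rebinding Xi per mode much as the controller 50 does by actuating switches. The toy block function ek/dk and the bindings are illustrative reconstructions from Table 1, not the exact contents of Tables 2 and 3.

```python
# Illustrative sketch: three Table 1 encryption modes realized by one
# shared expression, Ci = Ek(Pi XOR Xi), by binding Xi per mode.
# The "block cipher" below is a toy stand-in, not a real cipher.

def ek(key, block):                      # toy invertible block function
    return bytes((b + key) % 256 for b in block)

def dk(key, block):                      # inverse of ek
    return bytes((b - key) % 256 for b in block)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(mode, key, iv, plaintext_blocks):
    zero = bytes(len(iv))
    prev_c, prev_p = iv, zero
    out = []
    for i, p in enumerate(plaintext_blocks):
        if mode == "ECB":
            x = zero                              # X = 0
        elif mode == "CBC":
            x = prev_c if i else iv               # X0 = IV, Xi = C(i-1)
        elif mode == "PCBC":
            x = xor(prev_c, prev_p) if i else iv  # Xi = P(i-1) XOR C(i-1)
        c = ek(key, xor(p, x))                    # shared expression
        out.append(c)
        prev_c, prev_p = c, p
    return out

# Two identical plaintext blocks: ECB repeats its cipher text block,
# while the chained CBC and PCBC modes do not.
blocks = [b"SAMEBLOK", b"SAMEBLOK"]
iv = bytes(8)
for m in ("ECB", "CBC", "PCBC"):
    print(m, [c.hex() for c in encrypt(m, 7, iv, blocks)])
```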
Alternatively, the controller 50 could be configured to implement fewer than all six of the modes shown in Table 3, and/or may be configured to implement one or more other modes not discussed.

[0036] It has further been discovered that the expressions in Table 2 may be condensed to fewer expressions that include variables that may take on plaintext, cipher text, or initialization vector values. In particular, it has been found that the expressions in Table 2 may be reduced to the expressions shown below in Table 4. Table 4 gives, for initial and subsequent blocks of the ECB, CBC, PCBC, CFB, OFB, and CTR modes, condensed encryption expressions of the form C = Ek(A ⊕ X0) ⊕ Y0 and corresponding condensed decryption expressions. In this case, the values of A and B may be plaintext, cipher text, IV, etc., and values of X, Y, Z, S, and T are assigned as appropriate to achieve the desired expression shown in Table 1. The controller 50 may be configured to provide the appropriate values of the variables to implement a desired mode.

[0037] Referring to FIG. 4, with further reference to FIG. 2, a cryptographic engine 110 that is an example of the cryptographic engine 44 includes a data input 112, a data output 114, a network 116, a cipher circuit 118, and a digest circuit 120. Not all of the components of, or connections between components in, the cryptographic engine 110 are shown in FIG. 4 (or FIGS. 5-12 below, some of which show features not shown in other figures). The data input 112 includes a counter sub-input 130, a data sub-input 132, an initialization vector sub-input 134, an alternative initialization vector sub-input 136, and a mask sub-input 138. The data output 114 includes a data sub-output 140 (here a FIFO (first in, first out) register) and an authentication sub-output 142. The network 116 is coupled to the data input 112 and the data output 114 and includes multiple switches S, here multiplexers (MUXes), and multiple logical signal combiners 117, here exclusive-OR (XOR) gates. The network 116 is configured to route data from the data input 112, possibly combining data along the way, to the cipher circuit 118 and the digest circuit 120, to route data from the cipher circuit 118, possibly combining data along the way, back to the cipher circuit 118 and/or to the data output 114, and to route data from the digest circuit 120 to the data output 114 and/or back to the digest circuit 120, possibly combining data along the way. The network 116 is configured to manipulate data that is provided to the cipher circuit 118 and/or data output by the cipher circuit 118 differently to implement different cryptographic algorithms. The network 116 is preferably a single instance of the components shown that is shared between implementations of different cryptographic algorithms. Multiple instances of the network components could be used, but the discussion herein focuses on a single instance of the network components being used. The network 116 may be considered a single network, common to the multiple cryptographic algorithm implementations using the cipher circuit 118. The cipher circuit 118 is an example of the cipher circuit 54 shown in FIG. 2 and is configured to perform a symmetrical block cipher algorithm. The digest circuit 120 is an example of the digest circuit 56 shown in FIG. 2 and is configured to perform a one-way function such as a hash function. The digest circuit 120 here is configured to process a block of data at a time.

[0038] The network 116 is configured to provide a constant logical zero signal to various components.
For example, the network 116 may provide a constant logical zero signal to the cipher circuit 118 or the digest circuit 120 when the device 12, and in particular the cryptographic engine 44, is active but the cipher circuit 118 or the digest circuit 120 is idle and thus not producing cipher text, plaintext, or an authentication tag, respectively. By providing a constant logical zero signal to the cipher circuit 118 or the digest circuit 120, the network 116 prevents the cipher circuit 118 or the digest circuit 120 from seeing variable data on its respective input, and thus prevents power consumption corresponding to the cipher circuit 118 or the digest circuit 120 processing the variable data. The constant logical zero signal may have a voltage that varies over time but that stays within a range corresponding to a logical zero, i.e., does not change in logical value. For example, a signal may be considered a logical zero if its voltage is at or below 0.5 V. In this example, the constant logical zero signal may vary in value from 0 V to 0.5 V and still be considered a constant logical zero signal. The network 116 may provide a logical zero signal to a multiplexer when the output of the multiplexer is not being used.

[0039] The data input 112 is configured to receive several types of information and to provide the information to the network 116. The counter sub-input 130 may be a passive input that receives a counter value or may be a counter that generates and provides a counter value. The data sub-input 132 is coupled and configured to receive plaintext messages to be encrypted and cipher text messages to be decrypted. The initialization vector sub-input 134 may be a passive input that receives an initialization vector or may be a device configured to generate and provide an initialization vector. For example, the initialization vector sub-input 134 may be a random-number generator or a pseudo-random-number generator, and the initialization vector may be a random number or a pseudo-random number (or other value). The alternative initialization vector sub-input 136 may be a passive input that receives an alternative initialization vector or may be a device configured to generate and provide an alternative initialization vector. The mask sub-input 138 may be a passive input that receives a mask value or may be a device configured to generate and provide a mask value.

[0040] The network 116 is configured to convey and manipulate data from the data input 112 to the cipher circuit 118 and the digest circuit 120, from the cipher circuit 118 to the data output 114 and/or back to the cipher circuit 118, and from the digest circuit 120 to the data output 114 and/or back to the digest circuit 120. The network 116 is configured to convey data from any of the sub-inputs 130, 132, 134, 136, 138 to the cipher circuit 118 and/or the digest circuit 120 as appropriate. For example, the network 116 may route plaintext from the data sub-input 132 and/or an initialization vector from the initialization vector sub-input 134 to the cipher circuit 118. The network 116 may logically combine the plaintext and/or the initialization vector with each other and/or with other data to form cipher-algorithm input data and provide the cipher-algorithm input data to the cipher circuit 118.
Alternatively, the network 116 may provide data from the data sub-input 132 (e.g., plaintext or cipher text) or from the initialization vector sub-input 134 to the cipher circuit 118 without altering any of these data, e.g., without logically combining the data (e.g., plaintext, cipher text, initialization vector) with any other data. The network 116 may route and/or logically combine data from others of these sub-inputs 130, 132, 134, 136, 138 to produce the cipher-algorithm input data and/or to produce digest input data and provide the digest input data to the digest circuit 120. Further, the network 116 is configured to convey an output of the digest circuit 120 to the authentication sub-output 142 and/or back to the digest circuit 120. For example, the network 116 may store results of the processing by the cipher circuit 118 in a register 144 and store results of the processing of the digest circuit 120 in a register 146. The network 116 is also configured to convey data output from the cipher circuit 118, e.g., as stored in the registers 144, 146, to the data sub-output 140 and/or back to the cipher circuit 118. While routing the data output from the cipher circuit 118, the network 116 may logically combine the data output from the cipher circuit 118 with other data, such as mask data from the mask sub-input 138, before providing the data to the data sub-output 140.

[0041] To convey the data from the data input 112 to the cipher circuit 118 and/or the digest circuit 120, and from the cipher circuit 118 and/or the digest circuit 120 to the data output 114 and/or back to the cipher circuit 118 or the digest circuit 120, respectively, the network 116 routes the data through one or more of the logical signal combiners 117 and one or more of the switches S (here multiplexers) as appropriate. The network 116 is configured such that these logical signal combiners 117 and these switches S can provide cipher-algorithm input data to the cipher circuit 118, which is a single instance of a cipher circuit, and provide device output data to the data output 114 using cipher-algorithm output data from the cipher circuit 118. The network 116, in combination with the single instance of the cipher circuit 118, is configured to implement the different cryptographic algorithms implementable by the cryptographic engine 44, with each of the cryptographic algorithms including the cipher algorithm that the single instance of the cipher circuit 118 is configured to perform.

[0042] The network 116 includes the controller 50, which is configured to be programmed to actuate the switches S in the network 116 to route data and to cause the logical combinations of data. The controller 50 is configured to be programmed to actuate the switches S differently to implement the different cryptographic algorithms. In particular, the controller 50 is configured to be programmed to actuate the switches S differently to cause different logical combinations of signals in the logical signal combiners 117 to provide different cipher-algorithm input data from the data input 112 to the cipher circuit 118. Also or alternatively, the controller 50 may cause different logical combinations of cipher-algorithm output data from the cipher circuit 118 to provide device output data to the data output 114, and in particular the data sub-output 140, and (as appropriate) back to the cipher circuit 118, to implement the different cryptographic algorithms.
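As a loose behavioral picture of this switch actuation, the sketch below shows a controller-style settings table selecting multiplexer inputs whose XOR forms the cipher-algorithm input for a single shared cipher instance. All names and settings here (mux, SETTINGS, cipher_input) are invented for illustration; this is not the circuit of FIG. 4.

```python
# Behavioral caricature of the network 116: a controller programs
# multiplexer selections, and the selected operands are XORed to form
# the cipher-algorithm input for one shared cipher instance.
# Names and settings are illustrative, not from the disclosure.

def mux(select, inputs):
    """A switch S: route one of several inputs to the output."""
    return inputs[select]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical per-mode multiplexer settings the controller might hold
# (0 selects the first input of each mux, 1 the second, and so on).
SETTINGS = {
    "ECB": {"s1": 0, "s2": 1},   # combine plaintext with constant zeroes
    "CBC": {"s1": 0, "s2": 0},   # combine plaintext with IV / prior block
}

def cipher_input(mode, plaintext, chain_value, zero):
    sel = SETTINGS[mode]
    operand_a = mux(sel["s1"], [plaintext])          # a switch like S1
    operand_b = mux(sel["s2"], [chain_value, zero])  # a switch like S2
    return xor(operand_a, operand_b)                 # a combiner like 1171

print(cipher_input("CBC", b"\x01\x02", b"\xAA\xBB", b"\x00\x00").hex())
# -> 'abb9': the chained value is folded in, as in CBC
```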
The controller 50 may be configured to be programmed to actuate the switches S differently to effect values of respective variables in equations representing the different cryptographic algorithms, e.g., as shown in Table 2 and Table 4, to implement the different cryptographic algorithms. In particular, the controller 50 may be configured to be programmed to actuate the switches S to effect values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation to implement the different cryptographic algorithms. Further, the network 116 is configured such that the network 116 and the cipher circuit 118 may implement the different cryptographic algorithms without forming an unregulated loop.

[0043] The network 116 is also configured to provide, in accordance with control signals from the controller 50, data to the digest circuit 120 to provide an authentication mechanism for producing an authentication tag, e.g., corresponding to cipher text produced by the cipher circuit 118. The digest circuit 120 is configured to perform a one-way function on received data. The network 116 is preferably configured to feed back output of the digest circuit 120 until all the data to be authenticated have been processed, yielding an authentication tag that is smaller than the data being authenticated, and preferably an authentication tag of the same size regardless of the size of the authenticated data message.

[0044] Referring to FIG. 5, with further reference to FIGS. 2 and 4, the controller 50 can selectively actuate the switches S to implement encryption of an initial block of plaintext according to the CBC cryptographic mode. The controller 50 is configured to cause each of the switches S noted below to connect the appropriate input to the output of the respective switch S to make the appropriate connections and provide the data routing as discussed below. For simplicity, however, it is not stated each time that the controller 50 is configured to cause, or causes, the respective switch S to select the appropriate input and connect the selected input to the output of the respective switch S. It may simply be stated that the network 116 routes the data, or that data flows as shown in the figure, or that a respective switch S routes the data, etc. A plaintext message is received at the data sub-input 132 and the first block of plaintext is provided to the switch S1. While the plaintext is also provided to the switch S4, the controller 50 causes the switch S4 not to select the switch input from the data sub-input 132. The switch S1 selects the switch input connected to the data sub-input 132 and provides the plaintext data to the output of the switch S1, with this output being connected to the logical signal combiner 1171. The network 116 routes an initialization vector (IV) from the initialization vector sub-input 134 through the switch S4 and the switch S10 to the logical signal combiner 1174. Logical zeroes are supplied to the switch S8 and by the switch S8 to the logical signal combiner 1174. Supplying logical zeroes to the logical signal combiner 1174, here an exclusive-OR gate, causes the logical signal combiner 1174 to act as a pass-through, not changing the data received from the switch S10, such that the data provided to the logical signal combiner 1174 is the same as the data output by the logical signal combiner 1174.
Consequently, the network 116 routes the initialization vector to the switch S2 and on to the logical signal combiner 1171. Logical zeroes are provided from the mask sub-input 138 to an AND gate 148, and thus logical zeroes are provided to a third input of the logical signal combiner 1171 such that only the data from the switches S1 and S2 affect the output of the logical signal combiner 1171. The logical signal combiner 1171 combines the initialization vector with the plaintext received from the switch S1 and provides the logically-combined output as cipher-algorithm input data to the cipher circuit 118. The cipher circuit 118 performs the cipher algorithm on the cipher-algorithm input data and provides the resulting output data, in this case encrypted data that is a block of cipher text, to the register 144. The network 116 routes the block of cipher text from the register 144 through the switch S9 to the switch S11 and through the switch S11 to the data sub-output 140.

[0045] Referring to FIG. 6, with further reference to FIGS. 2, 4 and 5, the controller 50 can selectively actuate the switches S to implement encryption of further blocks (i.e., beyond the initial block) of plaintext according to the CBC mode. Similar to FIG. 5, the network 116 routes each block of cipher text from the register 144 through the switch S9 to the switch S11 and through the switch S11 to the data sub-output 140. Also similar to FIG. 5, logical zeroes are provided from the mask sub-input 138 to the AND gate 148, and blocks of plaintext data are provided by the network 116 from the data sub-input 132 through the switch S1 to the logical signal combiner 1171. Contrary to FIG. 5, however, the secondary input to the logical signal combiner 1171 originates from the register 144. The network 116 routes the previous block of cipher text stored in the register 144 through the switch S8 to the logical signal combiner 1174. Logical zeroes are provided through the switch S10 to the logical signal combiner 1174 such that the logical signal combiner 1174 acts as a pass-through for the cipher text to be provided to the logical signal combiner 1171 through the switch S2. Thus, the most-recent cipher text is used to produce the present cipher text, as reflected in the expression for CBC encryption for i > 0 shown in Table 1.

[0046] Referring to FIG. 7, with further reference to FIGS. 2 and 4, the controller 50 can selectively actuate the switches S to implement decryption of an initial block of cipher text according to the CBC mode. A cipher text message is received at the data sub-input 132 and the first block of the cipher text is provided through the switch S1 to the logical signal combiner 1171. Logical zeroes are supplied to the switch S2 and by the AND gate 148 to the logical signal combiner 1171, and the logical signal combiner 1171 consequently passes the first block of cipher text as cipher-algorithm input data to the cipher circuit 118. The cipher circuit 118 performs the cipher algorithm on the cipher-algorithm input data and provides the resulting cipher-algorithm output data to the register 144. The network 116 routes the block of cipher-algorithm output data from the register 144 through the switch S8 to the logical signal combiner 1174. The network 116 routes an initialization vector from the initialization vector sub-input 134 through the switch S4 and the switch S10 to the logical signal combiner 1174.
The logical signal combiner 1174 combines the cipher-algorithm output data from the register 144 with the initialization vector and routes the resulting plaintext block through the switch S9 and the switch S11 to the data sub-output 140.

[0047] Referring to FIG. 8, with further reference to FIGS. 2, 4 and 7, the controller 50 can selectively actuate the switches S to implement decryption of further blocks (i.e., beyond the initial block) of cipher text according to the CBC mode. The controller 50 causes the network 116 to process further cipher text blocks from the data sub-input 132 similarly to the processing shown in FIG. 7, except that instead of an initialization vector being provided at the initialization vector sub-input 134, the immediately-prior block of cipher text is provided to the initialization vector sub-input 134. Consequently, the immediately-prior cipher text block (i.e., the last cipher text block processed before the present cipher text block being processed) is logically combined (here exclusive-ORed) with the present cipher-algorithm output data to produce the device output data provided to the data sub-output 140.

[0048] Referring to FIGS. 9-13, the cryptographic engine 110 may authenticate data by determining an authentication tag. The authentication process may be repeated to produce a verification authentication tag when the data are to be used, and the data are only used if the original authentication tag and the verification authentication tag match. That is, the original authentication tag and the verification (recreated) authentication tag may be compared, e.g., by the processor 30, and the data from which the verification authentication tag was produced will only be used if the original authentication tag and the verification authentication tag are identical. FIGS. 9-12 illustrate use of the cryptographic engine 110 to produce an authentication tag in accordance with a CMAC (Cipher-Based Message Authentication Code) protocol. The authentication tag may be produced using any amount of data, for example the cipher text stored for later retrieval and use. In this way, the authentication tag may be used to verify that the stored cipher text has not been modified. A portion of the cryptographic engine 110 for performing encryption and/or decryption may share one or more components (e.g., one or more switches and/or one or more logical signal combiners) with a portion of the cryptographic engine 110 for performing authentication (e.g., determining an authentication tag).

[0049] Referring to FIG. 9, the controller 50 can selectively actuate the network 116 to implement encryption of 0's in accordance with the CMAC protocol. The controller 50 causes the network 116 to provide logical 0's to the digest circuit 120 through the switch S3. The digest circuit 120 processes the 0's in accordance with the digest algorithm and outputs corresponding digest output data. As the digest output data were determined by processing 0's, the digest output data are labeled, here, 0's digest output data. The controller 50 further causes the network 116 to route the 0's digest output data through the switch S10 to the logical signal combiner 1174. The controller 50 causes the logical signal combiner 1174 to be supplied with the 0's digest output data and with 0's through the switches S8, S12 such that the 0's digest output data is passed, unchanged, through the logical signal combiner 1174.
The controller 50 causes the 0's digest output data to be provided to a temporary-data storage device 150. [0050] Referring to FIG. 10, the controller 50 can selectively actuate the network 116 to process a first block of data to be authenticated in accordance with the CMAC protocol. The controller 50 causes the network 116 to provide a block of data to the digest circuit 120 from the data sub-input 132 through the logical signal combiner 117₂ (by supplying the other input(s) of the logical signal combiner 117₂ with logical 0's, the circuitry for which is omitted from FIG. 10 for simplicity and clarity) and through the switch S₃. The digest circuit 120 processes the block of data in accordance with the digest algorithm and outputs corresponding digest data to the register 146. [0051] Referring to FIG. 11, the controller 50 can selectively actuate the network 116 to process subsequent blocks of data (after the first block of data and before a last block of data) to be authenticated in accordance with the CMAC protocol. The controller 50 causes the network 116 to provide a block of data to the logical signal combiner 117₂ and to supply a previous (the most-recently determined) digest output block of data to the logical signal combiner 117₂ through the switch S₆ and the logical signal combiner 117₃. The logical signal combiner 117₂ combines these two blocks of data and provides the combined data block through the switch S₃ to the digest circuit 120. The digest circuit 120 processes the combined block of data in accordance with the digest algorithm and outputs corresponding digest data to the register 146. [0052] Referring to FIG. 12, the controller 50 can selectively actuate the network 116 to process a final amount of data to be authenticated in accordance with the CMAC protocol. The final amount of data may be a full block of data (i.e., of the size of data processable by the digest circuit 120) or less than a full block of data. If the final amount of data is less than a full block, then 0's may be added to the final amount of data to reach a full block size. The controller 50 causes the network 116 to provide the final data, of the set of data to be authenticated, to the logical signal combiner 117₂. The controller 50 causes the network 116 to supply a previous (the most-recently determined, here the penultimate) digest output block of data to the logical signal combiner 117₃ through the switch S₆. The controller 50 also causes the network 116 to provide the 0's digest output data from the temporary-data storage device 150 to the logical signal combiner 117₃. The 0's digest output data may be processed by logic (not shown) inside the temporary-data storage device 150. The logic used to process the 0's digest output data may differ depending upon whether the final amount of data is a full block size or less than a full block size. The controller 50 causes the output of the temporary-data storage device 150 to be supplied to the logical signal combiner 117₃ through the switch S₅. The logical signal combiner 117₃ combines the last digest output data with the data from the temporary-data storage device 150 and provides these combined data to the logical signal combiner 117₂. The logical signal combiner 117₂ combines these combined data with the final amount of data and provides these combined data to the digest circuit 120 through the switch S₃. The digest circuit 120 processes the combined block of data in accordance with the digest algorithm and outputs an authentication tag that is provided to the authentication sub-output 142. The authentication tag is stored in association with authenticated data for later retrieval and comparison with a verification authentication tag produced using the authenticated data (or what is believed to be the authenticated data) to determine whether the authenticated data has been altered since being stored.
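For orientation only, the CMAC-style flow of FIGS. 9-12 can be sketched as follows. The subkey doubling and the 0x80-then-0's padding below follow the standard CMAC construction and are assumptions, not text taken from this disclosure; E stands in for the keyed digest circuit 120, and all function names are hypothetical:

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _dbl(block: bytes) -> bytes:
    # GF(2^128) doubling used to derive the final-block subkeys
    n = int.from_bytes(block, "big") << 1
    if block[0] & 0x80:
        n ^= 0x87
    return (n & ((1 << 128) - 1)).to_bytes(16, "big")

def cmac_tag(message: bytes, E) -> bytes:
    zeros_digest = E(bytes(16))          # FIG. 9: the 0's digest output data
    k1, k2 = _dbl(zeros_digest), _dbl(_dbl(zeros_digest))
    blocks = [message[i:i + 16] for i in range(0, len(message), 16)] or [b""]
    last = blocks.pop()
    if len(last) == 16:
        last = _xor(last, k1)            # final amount of data is a full block
    else:                                # FIG. 12: pad the short final block
        last = _xor(last + b"\x80" + bytes(15 - len(last)), k2)
    state = bytes(16)                    # the register starts at zero
    for b in blocks:
        state = E(_xor(state, b))        # FIGS. 10-11: chain each block
    return E(_xor(state, last))          # the authentication tag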
[0053] Referring to FIG. 13, with further reference to FIGS. 1-12, a cryptographic method 210 includes the stages shown. The method 210 is, however, an example only and not limiting. The method 210 may be altered, e.g., by having stages added, removed, rearranged, combined, performed concurrently, and/or having single stages split into multiple stages. [0054] At stage 212, the method 210 includes receiving cryptographic algorithm input data at a cryptographic device. For example, counter data, plaintext, cipher text, an initialization vector, an alternative initialization vector, and/or mask data may be received by the data input 112 of the device 12. Receiving the cryptographic algorithm input data may include producing the cryptographic algorithm input data, e.g., producing a counter value, or producing a random number or pseudorandom number as an initialization vector or alternative initialization vector. [0055] At stage 214, the method 210 includes directing the cryptographic algorithm input data in the cryptographic device through a network of switches and logical signal combiners to produce cipher-algorithm input data. For example, the network 116 selectively routes data from the data input 112 through one or more of the switches S and one or more of the logical signal combiners 117 to produce cipher-algorithm input data. Which data are routed through which switch(es) S and through which logical signal combiner(s) 117, and whether the data are altered or not by the logical signal combiner(s) 117, is controlled by the controller 50 selectively actuating (i.e., actuating or not actuating) the switch(es) S, and selectively actuating (i.e., actuating or not actuating) one or more data sub-inputs such as the counter sub-input 130. The different routing and logical combinations produce the cipher-algorithm input data in accordance with the selected cryptographic algorithm, which may be programmed, e.g., either by programming a state machine or by programming software that is executed by a processor. [0056] At stage 216, the method 210 includes performing a cipher algorithm on the cipher-algorithm input data in a single instance of a cipher circuit to produce cipher-algorithm output data. For example, the cipher circuit 118 processes the cipher-algorithm input data according to a cipher algorithm that the cipher circuit 118 is configured to perform. The cipher algorithm is preferably a symmetric cipher algorithm, in which case the cipher circuit 118 ciphers a block of the cipher-algorithm input data, forming cipher text from plaintext, or forming plaintext from cipher text, or transforming cipher text into text that may be further manipulated into plaintext, e.g., by logically combining the text with further data. The cipher algorithm is performed using the cipher circuit 118 regardless of which of multiple cryptographic algorithms (modes) is being implemented.
Thus, the cipher algorithm for multiple modes is performed without using separate physical cipher circuits, each of which can perform the same cipher algorithm. [0057] At stage 218, the method 210 includes directing the cipher-algorithm output data in the cryptographic device through the network of switches and logical signal combiners to produce cryptographic algorithm output data. For example, the network 116 routes a block of data output from the cipher circuit 118 from the register 146 to the data sub-output 140 of the data output 114. In other examples, the network 116 may route the cipher-algorithm output data through one or more switches and/or one or more logical signal combiners as appropriate for an implemented cryptographic algorithm. [0058] The cryptographic algorithm input data and the cipher-algorithm output data are directed through the network of switches and logical signal combiners based upon a selected cryptographic algorithm from multiple cryptographic algorithms implementable by different paths through the network, with each path including the single instance of the cipher circuit. Thus, multiple different cryptographic algorithms may be implemented by routing data through the network differently, combining data logically as appropriate for the particular cryptographic algorithm being implemented. For example, directing the cryptographic algorithm input data, performing the cipher algorithm, and directing the cipher-algorithm output data implement values of respective variables in an initial-state encryption equation, a steady-state encryption equation, an initial-state decryption equation, and a steady-state decryption equation applicable to the plurality of different cryptographic algorithms to implement the selected cryptographic algorithm. Examples of such equations are provided in Tables 2 and 4 above. A cryptographic algorithm may be selected by, e.g., programming the controller 50 or providing a selection indication to the controller 50. In a software implementation, an indication of a selected cryptographic algorithm may be received, e.g., by receiving an indication of a cryptographic algorithm (e.g., "CBC") or by receiving indications of values of variables (e.g., for the expressions shown in Table 4) that correspond to a particular cryptographic algorithm. [0059] The method 210 may further include other features and/or stages. For example, the method 210 may further include determining an authentication tag, associated with the output data, using an authentication circuit to perform a one-way function, e.g., as discussed with respect to FIGS. 9-13. The method 210 may further include providing a constant logical zero signal to the authentication circuit while the authentication circuit is idle. The authentication tag may be determined using at least one logical signal combiner, in the network of switches and logical signal combiners, through which data pass in implementing the selected cryptographic algorithm. The cryptographic algorithm implemented is a first cryptographic algorithm, and the method 210 may further include implementing another, second cryptographic algorithm that is different from the first cryptographic algorithm. The second cryptographic algorithm may be implemented by receiving other input data and directing the other input data through the cipher circuit and through the network of switches and logical signal combiners differently than when implementing the first cryptographic algorithm.
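To make the point of stage 218 concrete: several modes can share one cipher instance, with only the routing differing. A schematic Python sketch follows; the particular mode set (ECB/CBC/CTR), the counter handling, and all names here are illustrative assumptions, not the disclosure's switch-level routing:

from typing import Callable, List

def run_mode(mode: str, blocks: List[bytes], iv: bytes,
             E: Callable[[bytes], bytes]) -> List[bytes]:
    # Every branch calls the same E: one cipher instance, many paths.
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    out, prev = [], iv
    for i, blk in enumerate(blocks):
        if mode == "ECB":        # no chaining path engaged
            c = E(blk)
        elif mode == "CBC":      # the FIG. 5/6 style chaining path
            c = E(xor(blk, prev))
            prev = c
        elif mode == "CTR":      # a counter input instead of chaining
            ctr = (int.from_bytes(iv, "big") + i).to_bytes(len(iv), "big")
            c = xor(blk, E(ctr))
        else:
            raise ValueError("unsupported mode: " + mode)
        out.append(c)
    return out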
[0060] Other Considerations [0061] Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software and computers, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or a combination of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. [0062] Also, as used herein, "or" as used in a list of items prefaced by "at least one of" or prefaced by "one or more of" indicates a disjunctive list such that, for example, a list of "at least one of A, B, or C," or a list of "one or more of A, B, or C," means A or B or C or AB or AC or BC or ABC (i.e., A and B and C), or combinations with more than one feature (e.g., AA, AAB, ABBC, etc.). [0063] As used herein, unless otherwise stated, a statement that a function or operation is "based on" an item or condition means that the function or operation is based on the stated item or condition and may be based on one or more items and/or conditions in addition to the stated item or condition. [0064] Further, an indication that information is sent or transmitted, or a statement of sending or transmitting information, "to" an entity does not require completion of the communication. Such indications or statements include situations where the information is conveyed from a sending entity but does not reach an intended recipient of the information. The intended recipient, even if not actually receiving the information, may still be referred to as a receiving entity, e.g., a receiving execution environment. Further, an entity that is configured to send or transmit information "to" an intended recipient is not required to be configured to complete the delivery of the information to the intended recipient. For example, the entity may provide the information, with an indication of the intended recipient, to another entity that is capable of forwarding the information along with an indication of the intended recipient. [0065] A wireless communication system is one in which communications are conveyed wirelessly, i.e., by electromagnetic and/or acoustic waves propagating through atmospheric space rather than through a wire or other physical connection. A wireless communication network may not have all communications transmitted wirelessly, but is configured to have at least some communications transmitted wirelessly. Further, the term "wireless communication device," or similar term, does not require that the functionality of the device is exclusively, or even primarily, for communication, or that the device be a mobile device, but indicates that the device includes wireless communication capability (one-way or two-way), e.g., includes at least one radio (each radio being part of a transmitter, receiver, or transceiver) for wireless communication. [0066] Substantial variations may be made in accordance with specific requirements.
For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed. [0067] The terms "machine-readable medium" and "computer-readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. Using a computer system, various computer-readable media might be involved in providing instructions/code to processor(s) for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media include, for example, optical and/or magnetic disks. Volatile media include, without limitation, dynamic memory. [0068] Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code. [0069] Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to one or more processors for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by a computer system. [0070] The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims. [0071] Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations provides a description for implementing described techniques.
Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.[0072] Also, configurations may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, some operations may be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional stages or functions not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform one or more of the described tasks.[0073] Components, functional or otherwise, shown in the figures and/or discussed herein as being connected or communicating with each other are communicatively coupled. That is, they may be directly or indirectly connected to enable communication between them.[0074] Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of operations may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not bound the scope of the claims.[0075] Further, more than one invention may be disclosed.
The disclosure is directed to storing trace information. An aspect includes determining whether or not a pen is within a threshold distance of the touchscreen, storing trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen, and clearing the touch buffer and storing trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen.
CLAIMS WHAT IS CLAIMED IS: 1. A method for storing trace information, comprising: determining whether or not a pen is within a threshold distance of a touchscreen; if the pen is not within the threshold distance of the touchscreen, storing trace information generated by a user's touch in a touch buffer; and if the pen is within the threshold distance of the touchscreen, clearing the touch buffer and storing trace information generated by the pen in the touch buffer. 2. The method of claim 1, wherein the threshold distance is zero. 3. The method of claim 1, further comprising: displaying, on the touchscreen, the trace information generated by the pen; and displaying, on the touchscreen, the trace information generated by the user's touch. 4. The method of claim 1, further comprising: if the pen is touching the touchscreen, disabling trace information generated by the user's touch. 5. The method of claim 1, further comprising: if the pen is not touching the touchscreen, determining whether or not the user's touch is a finger touch or a palm touch. 6. The method of claim 5, wherein the determining whether or not the user's touch is a finger touch or a palm touch comprises: determining whether or not a size of the user's touch is greater than a threshold; if the size is not greater than the threshold, determining that the user's touch is a finger touch; if the size is greater than the threshold, determining whether or not the user's touch is moving; if the user's touch is moving, determining that the user's touch is a finger touch; and if the user's touch is not moving, determining that the user's touch is a palm touch. 7. The method of claim 1, further comprising: if the pen is not touching the touchscreen, permitting multi-touch events. 8. The method of claim 1, wherein the determining, storing, and clearing are performed on a native operating system layer that is transparent to a pen application. 9. The method of claim 1, further comprising: determining that the touchscreen is in a multi-layer scratch paper mode; displaying a scratch paper layer over a background layer; and increasing a transparency of the background layer. 10. The method of claim 9, further comprising: storing the scratch paper layer and the background layer in the touch buffer. 11. The method of claim 9, wherein the increasing permits both the background layer and the scratch paper layer to be visible to a user. 12. The method of claim 9, further comprising: determining that the touchscreen is not in the multi-layer scratch paper mode; and moving the scratch paper layer behind the background layer. 13. The method of claim 12, further comprising: determining whether or not the scratch paper layer should be deleted; if the scratch paper layer should be deleted, removing the scratch paper layer from the touch buffer and decrementing a layer counter; and if the scratch paper layer should not be deleted, storing the scratch paper layer in the touch buffer and incrementing the layer counter. 14. The method of claim 1, further comprising: determining that the pen is not touching the touchscreen; calculating a vertical distance between the pen and the touchscreen; and changing a trace of the pen based on the vertical distance. 15. The method of claim 14, wherein the changing the trace comprises changing a color of the trace or a width of the trace. 16. The method of claim 14, further comprising: calculating a tilt of the pen; and changing the trace of the pen based on the tilt. 17. 
The method of claim 16, wherein changing the trace of the pen based on the tilt comprises changing a width of the trace. 18. The method of claim 14, further comprising: determining that the touchscreen is in a pressure sense mode; and overriding a tip-up timer of the pen. 19. The method of claim 1, wherein the pen is an ultrasound pen. 20. An apparatus for storing trace information, comprising: logic configured to determine whether or not a pen is within a threshold distance of a touchscreen; logic configured to store trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen; and logic configured to clear the touch buffer and store trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen. 21. The apparatus of claim 20, wherein the threshold distance is zero. 22. The apparatus of claim 20, further comprising: logic configured to display, on the touchscreen, the trace information generated by the pen; and logic configured to display, on the touchscreen, the trace information generated by the user's touch. 23. The apparatus of claim 20, further comprising: logic configured to disable trace information generated by the user's touch if the pen is touching the touchscreen. 24. The apparatus of claim 20, further comprising: logic configured to determine whether or not the user's touch is a finger touch or a palm touch if the pen is not touching the touchscreen. 25. The apparatus of claim 24, wherein the logic configured to determine whether or not the user's touch is a finger touch or a palm touch comprises: logic configured to determine whether or not a size of the user's touch is greater than a threshold; logic configured to determine that the user's touch is a finger touch if the size is not greater than the threshold; logic configured to determine whether or not the user's touch is moving if the size is greater than the threshold; logic configured to determine that the user's touch is a finger touch if the user's touch is moving; and logic configured to determine that the user's touch is a palm touch if the user's touch is not moving. 26. The apparatus of claim 20, further comprising: logic configured to permit multi-touch events if the pen is not touching the touchscreen. 27. The apparatus of claim 20, wherein a native operating system layer that is transparent to a pen application includes the logic configured to determine, the logic configured to store, and the logic configured to clear. 28. The apparatus of claim 20, further comprising: logic configured to determine that the touchscreen is in a multi-layer scratch paper mode; logic configured to display a scratch paper layer over a background layer; and logic configured to increase a transparency of the background layer. 29. The apparatus of claim 28, further comprising: logic configured to store the scratch paper layer and the background layer in the touch buffer. 30. The apparatus of claim 28, wherein the logic configured to increase permits both the background layer and the scratch paper layer to be visible to a user. 31. The apparatus of claim 28, further comprising: logic configured to determine that the touchscreen is not in the multi-layer scratch paper mode; and logic configured to move the scratch paper layer behind the background layer. 32. 
The apparatus of claim 31, further comprising: logic configured to determine whether or not the scratch paper layer should be deleted; logic configured to remove the scratch paper layer from the touch buffer and decrement a layer counter if the scratch paper layer should be deleted; and logic configured to store the scratch paper layer in the touch buffer and increment the layer counter if the scratch paper layer should not be deleted. 33. The apparatus of claim 20, further comprising: logic configured to determine that the pen is not touching the touchscreen; logic configured to calculate a vertical distance between the pen and the touchscreen; and logic configured to change a trace of the pen based on the vertical distance. 34. The apparatus of claim 33, wherein the logic configured to change the trace comprises logic configured to change a color of the trace or a width of the trace. 35. The apparatus of claim 33, further comprising: logic configured to calculate a tilt of the pen; and logic configured to change the trace of the pen based on the tilt. 36. The apparatus of claim 35, wherein the logic configured to change the trace of the pen based on the tilt comprises logic configured to change a width of the trace. 37. The apparatus of claim 33, further comprising: logic configured to determine that the touchscreen is in a pressure sense mode; and logic configured to override a tip-up timer of the pen. 38. The apparatus of claim 20, wherein the pen is an ultrasound pen. 39. An apparatus for storing trace information, comprising: means for determining whether or not a pen is within a threshold distance of a touchscreen; means for storing trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen; and means for clearing the touch buffer and storing trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen. 40. A non-transitory computer-readable medium for storing trace information, comprising: at least one instruction to determine whether or not a pen is within a threshold distance of a touchscreen; at least one instruction to store trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen; and at least one instruction to clear the touch buffer and store trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen.
STORING TRACE INFORMATION Claim of Priority under 35 U.S.C. §119 [0001] The present Application for Patent claims priority to Provisional Application No. 61/657,618, entitled "PALM REJECTION," filed June 8, 2012, and assigned to the assignee hereof and hereby expressly incorporated by reference in its entirety herein. Field of the Disclosure [0002] The disclosure relates to touchscreen input methods, and more particularly to rejecting unintentional inputs from a user's palm. Background [0003] There are two categories of stylus pens: passive pens, such as capacitive pens, and active pens, such as ultrasound pens and electromagnetic resonance (EMR) pens. There are various situations that require palm rejection (i.e., distinguishing between a palm touch and a pen or finger touch) while a user is using a stylus pen. For example, ultrasound pens can be used to write both on screen and off screen. For on screen usage, a user's palm often rests on the touchscreen while writing. Such a palm touch should be rejected/ignored, but the high level operating system (HLOS) may not be able to distinguish between the palm touch and a finger touch. As another example, while writing, the user may lift the pen up momentarily and use a finger for gesture control, such as a pan or zoom, and then start writing again. In these scenarios, it would be beneficial for the touchscreen to show the pen input without any palm induced traces. It would also be beneficial for the user to be able to use a finger to perform touch controls on screen when not writing. [0004] Current solutions are not sufficient to meet these requirements. One solution disables finger touch detection when the pen is within two to three inches of the touchscreen. In normal usage, however, the user's palm can be resting on the device even with the pen two or three inches above the device. Further, the accuracy of determining the stylus pen to be within a certain zone above the touchscreen is dependent on the technology. For example, accurate ultrasound pen proximity detection may be more challenging than that of an EMR-type stylus pen that has a more uniform inductive grid under the touchscreen. [0005] Further, the touchscreen should remain active for gestures. Various current solutions use complex algorithms to distinguish traces generated by a finger while ignoring palm touch traces. The results, however, can be inconsistent depending on the size, orientation, or relative movement of the user's palm. [0006] Accordingly, current solutions fail to perform palm rejection effectively in at least the following scenarios: (1) the user wishes to start writing on the touchscreen, but before the pen is hovering over or touches the touchscreen, the user's palm/wrist is already resting on the touchscreen, causing palm induced traces on the touchscreen; (2) the user pauses writing for a moment and uses a finger touch gesture to zoom the content, but the pen is not lifted high enough to get out of the sensing zone, so the gesture input is ignored; (3) like (2), except that the pen is still touching the touchscreen, but not moving. SUMMARY [0007] Embodiments of the disclosure are directed to storing trace information.
A method for storing trace information includes determining whether or not a pen is within a threshold distance of a touchscreen, storing trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen, and clearing the touch buffer and storing trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen. [0008] An apparatus for storing trace information includes logic configured to determine whether or not a pen is within a threshold distance of a touchscreen, logic configured to store trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen, and logic configured to clear the touch buffer and store trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen. [0009] An apparatus for storing trace information includes means for determining whether or not a pen is within a threshold distance of a touchscreen, means for storing trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen, and means for clearing the touch buffer and storing trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen. [0010] A non-transitory computer-readable medium for storing trace information includes at least one instruction to determine whether or not a pen is within a threshold distance of a touchscreen, at least one instruction to store trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen, and at least one instruction to clear the touch buffer and store trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen. BRIEF DESCRIPTION OF THE DRAWINGS [0011] The accompanying drawings are presented to aid in the description of embodiments of the invention and are provided solely for illustration of the embodiments and not limitation thereof. [0012] FIG. 1 illustrates an exemplary architecture according to at least one aspect of the disclosure. [0013] FIG. 2 illustrates an exemplary flow for erasing all touch traces except for the pen trace. [0014] FIG. 3 illustrates an exemplary flow for distinguishing finger and palm traces. [0015] FIG. 4 illustrates an exemplary flow that integrates the flow of FIG. 2 and the flow of FIG. 3. [0016] FIG. 5 illustrates an exemplary flow of a scratch mode operation. [0017] FIG. 6 illustrates an exemplary structure of a pressure sensor equipped ultrasound pen. [0018] FIG. 7 illustrates an exemplary flow for providing a virtual pressure sense mode. [0019] FIG. 8 illustrates an exemplary ultrasound stylus pen and touchscreen device that can perform the flow of FIG. 7. [0020] FIG. 9 illustrates a block diagram of an exemplary apparatus in accordance with various aspects disclosed herein. [0021] FIG. 10 illustrates an example of a user equipment (UE) in accordance with an aspect of the disclosure. [0022] FIG. 11 illustrates a communication device that includes logic configured to perform functionality in accordance with an aspect of the disclosure. DETAILED DESCRIPTION [0023] Various aspects are disclosed in the following description and related drawings. Alternate aspects may be devised without departing from the scope of the disclosure.
Additionally, well-known elements of the disclosure will not be described in detail or will be omitted so as not to obscure the relevant details of the disclosure. [0024] The words "exemplary" and/or "example" are used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" and/or "example" is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term "aspects of the disclosure" does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation. [0025] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. [0026] Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the disclosure may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, "logic configured to" perform the described action. [0027] A client device, referred to herein as a user equipment (UE), may be mobile or stationary, and may connect to the internet over a local wireless network, such as a WiFi network (e.g., based on IEEE 802.11, etc.). As used herein, the term "UE" may be referred to interchangeably as an "access terminal" or "AT," a "wireless device," a "subscriber device," a "subscriber terminal," a "subscriber station," a "user terminal" or "UT," a "mobile terminal," a "mobile station," and variations thereof. UEs can be embodied by any of a number of types of devices including but not limited to PC cards, compact flash devices, external or internal modems, wireless or wireline phones, and so on. [0028] Various aspects of the disclosure are directed to various features for devices configured to accept touch input from, for example, a stylus pen, a user's finger, a user's palm, and/or the like. A palm rejection aspect rejects palm touches while allowing finger touch gesture control for digital stylus pen applications by using a selective buffering scheme. A multi-layer buffering scheme provides a multi-layer scratch paper mode to enhance the pen application.
A virtual pressure sense-enabled pen mode uses a three dimensional pen hovering function that allows the pen application to change the trace color and/or line width depending on the virtual pen pressure information to mimic the behavior of a real ink pen. [0029] FIG. 1 illustrates an exemplary architecture 100 according to at least one aspect of the disclosure. The architecture 100 of FIG. 1 can be embodied in any UE that has a touchscreen. A user application 110 includes a scratch mode configuration 112, a palm rejection configuration 114, and a virtual pressure sense configuration 116. These configurations may be one or more software modules, one or more hardware modules, or a combination of software and hardware. The scratch mode configuration 112, palm rejection configuration 114, and virtual pressure sense configuration 116 are each coupled to and send data to a multi-layer buffer manager 118. The multi-layer buffer manager 118 is coupled to a touch buffer, such as buffer 906 in FIG. 9, and the multi-layer buffer manager 118 and the touch buffer are referred to interchangeably herein. The multi-layer buffer manager 118 is coupled to and receives data from an ultrasound pen service 122 and a touch sensor service 124. The data from the ultrasound pen service 122 may include touch data and button data. The user application 110 is also coupled to and receives data from the ultrasound pen service 122 and the touch sensor service 124. The ultrasound pen service 122 is coupled to and sends and receives data to/from an inter-process communication (IPC) socket 128 and an active stylus driver 132 (for an ultrasound pen). The touch sensor service 124 is coupled to and sends and receives data to/from the IPC socket 128 and a touch sensor driver 134. [0030] While the foregoing illustration describes the various elements in terms of applications, it will be appreciated that the various elements may be embodied in hardware. For example, the buffer may be implemented as specific hardware such as memory coupled to one or more processors and/or with embedded logic to perform the functionality disclosed herein. Additionally, as previously noted, the various elements/functions described herein can be implemented by specific circuits, by program instructions being executed by one or more processors, or by a combination of hardware and software to perform the functionality described herein. [0031] The disclosed palm rejection for touchscreen devices, such as tablets, personal digital assistants, smartphones, etc., works for any active type pen drivers that can distinguish pen and finger touch input, such as an ultrasound pen, by using different device identifiers for pen input and touch gestures. This palm rejection may be used with any active type pens or pen drivers other than those that are explicitly recited herein. [0032] A pen application can save and buffer the trace history generated by user touches (finger or palm) and erase them whenever a pen-down event (e.g., when the pen touches the touchscreen or is within a threshold distance of the touchscreen) is sensed. The pen application may still respond to any multi-touch events, such as for a zoom, while the pen is up. Any palm-induced traces on the touchscreen may be erased when the user starts writing again. This functionality may be achieved through selectively buffering the pen and touch trace history. Finger touches can be preserved along with the pen trace by introducing a finger/palm detection algorithm.
By implementing the detection algorithm in the kernel touch driver in some aspects, operating system user interface (UI) widgets can also be immune from palm touches. [0033] FIG. 1 illustrates an exemplary architecture 100 for implementing this aspect of the disclosure. For example, the palm rejection configuration 114 may be configured to implement the functionality particular to this aspect. [0034] FIG. 2 illustrates an exemplary flow 200 for erasing all touch traces except for the pen trace. The flow 200 may be performed by the architecture 100 illustrated in FIG. 1. At 210, the flow starts. At 220, the architecture 100, for example the palm rejection configuration 114 in combination with the ultrasound pen service 122, determines whether or not the pen is down, that is, whether or not the user is writing. The architecture 100 may determine that the pen is "down" if it is touching the touchscreen or if it is within a threshold distance of the touchscreen. The architecture 100, for example the ultrasound pen service 122 and the active stylus driver 132, can determine whether or not the pen is touching the touchscreen or is within the threshold distance from the touchscreen based on ultrasound, projective capacitance, capacitive touch, a pressure sensor in the pen tip, etc. The threshold distance can vary according to the various aspects described herein. For example, the threshold distance may be zero, meaning that the pen must be touching the touchscreen to be "down." [0035] If the pen is not down, that is, the user is not writing, then at 230, a pen application, such as the user application 110 illustrated in FIG. 1, is in the touch mode and, at 280, any touches or gestures received are processed accordingly. At 240, the architecture 100, for example the palm rejection configuration 114 in combination with the multi-layer buffer manager 118 and the touch sensor service 124, buffers the touch traces made by the user, then returns to 220. [0036] If, however, the pen is down at 220, then at 250, the architecture 100, for example the palm rejection configuration 114 in combination with the multi-layer buffer manager 118 and the ultrasound pen service 122, clears the touch buffer and, at 280, processes any pen information received. At 260, the architecture 100, for example the palm rejection configuration 114 in combination with the touch sensor service 124, disables the touch sensing. At 270, the architecture 100, for example the palm rejection configuration 114 in combination with the multi-layer buffer manager 118 and the user application 110, buffers any received pen data and, at 280, outputs the buffered pen data. The flow then returns to 220. A sketch of this loop in code appears below. [0037] FIG. 3 illustrates an exemplary flow 300 for distinguishing finger and palm traces. The flow 300 may be performed by the architecture 100 illustrated in FIG. 1. For example, the palm rejection configuration 114 may be configured to implement the functionality particular to this aspect. [0038] At 310, the architecture 100, for example the palm rejection configuration 114, calls the touch application programming interface (API) with the function Get_size(), which returns the size of the touch. The size of the touch may be determined by identifying a cluster of multiple points in a given area of the touchscreen. Those of skill in the art will appreciate that other methods of determining the size of the touch may be used.
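A minimal sketch of the FIG. 2 loop may help fix ideas; each call below processes one frame of input, and all parameter and helper names are hypothetical stand-ins for the services described above:

def palm_rejection_step(touch_buffer, pen_buffer, pen_down,
                        new_touch_traces, new_pen_data):
    # One pass of flow 200; returns the traces to render at 280.
    if pen_down:                              # 220: the user is writing
        touch_buffer.clear()                  # 250: erase palm/finger traces
        pen_buffer.extend(new_pen_data)       # 270 (touch sensing disabled at 260)
        return list(pen_buffer)               # 280: output the pen trace only
    touch_buffer.extend(new_touch_traces)     # 230/240: touch mode, buffer touches
    return list(touch_buffer) + list(pen_buffer)   # 280: gestures still processed

For example, calling palm_rejection_step(tb, pb, True, [], [(5, 7)]) clears tb and returns the pen trace alone, which is the selective-buffering behavior described above.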
[0039] At 320, the architecture 100, for example the palm rejection configuration 114, determines if the returned size is greater than a threshold. If it is not, then at 330, the touch is determined to be a finger touch and is processed accordingly. If, however, at 320, the size is greater than the threshold, then at 340, the architecture 100, for example the palm rejection configuration 114, determines whether or not the touch is moving, and therefore generating a trace history. If it is, then at 330, the touch is determined to be a finger touch and is processed accordingly. If the touch is not moving, however, then at 350, the trace history is buffered, and at 360, the touch is determined to be a palm touch. The flow then returns to 310. [0040] A palm touch may also be distinguished from a finger touch by identifying a contour of the cluster of multiple points and determining that the user's touch is a palm touch based on the shape of the contour. Additionally, a palm touch may be distinguished from a finger touch based on a cluster of points that move together, the profile of the contour, a determination of where the user's palm is expected to be based on whether the user is right or left handed, etc. [0041] There are situations that require keeping the finger touch traces as well as the pen traces. FIG. 4 illustrates an exemplary flow 400 that integrates flow 200 of FIG. 2 and flow 300 of FIG. 3. In FIG. 4, when the presence of the user's palm is detected, both finger and palm traces are buffered and will be cleared when the pen tip is removed from the touchscreen. The flow 400 may be performed by the architecture 100 illustrated in FIG. 1, for example, the palm rejection configuration 114. [0042] Flow 400 starts at 410. At 420, the architecture 100, for example the palm rejection configuration 114 in combination with the ultrasound pen service 122, determines whether or not the pen is down, as in 220 of FIG. 2. If it is not down, that is, the user is not writing, then at 430, the architecture 100 is in touch mode. At 440, the architecture 100, for example the palm rejection configuration 114 in combination with the touch sensor service 124, determines whether or not the user's palm is on the touchscreen, as described with respect to FIG. 3. If it is, then at 450, the architecture 100, for example the palm rejection configuration 114 in combination with the multi-layer buffer manager 118, buffers the touch traces made by the user, then returns to 420. If it is not, then the flow simply returns to 420. Either way, at 490, any touches or gestures received are processed accordingly. [0043] If, at 420, it is determined that the pen is down, then at 460, the architecture 100, for example the palm rejection configuration 114 in combination with the multi-layer buffer manager 118 and the ultrasound pen service 122, clears the touch buffer and, at 490, processes any pen information received. At 470, the architecture 100, for example the palm rejection configuration 114 in combination with the touch sensor service 124, disables the touch sensing. At 480, the architecture 100, for example the palm rejection configuration 114 in combination with the multi-layer buffer manager 118 and the ultrasound pen service 122, buffers any received pen data, and at 490, outputs the buffered pen data. The flow then returns to 420.
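The size-and-motion test of FIG. 3, used at 440 above, can likewise be sketched in a few lines. The threshold value and the input shapes are illustrative assumptions, not values taken from this disclosure:

SIZE_THRESHOLD = 400.0   # hypothetical touch-size units; tuned per touchscreen

def classify_touch(size: float, positions: list) -> str:
    # 310/320: a small contact is treated as a finger touch
    if size <= SIZE_THRESHOLD:
        return "finger"
    # 340: a large contact that is moving (generating a trace history) is a
    # finger; a large, stationary contact is a palm (350/360)
    moving = len(positions) >= 2 and positions[0] != positions[-1]
    return "finger" if moving else "palm"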
[0044] Flow 400 can be modified to determine whether or not a confidence level associated with the touch being a palm or finger touch is high enough to distinguish the finger and palm induced traces. In that case, only the palm trace is buffered. The buffered palm traces will be cleared the next time the pen tip is down, while preserving the finger traces while the palm is present. [0045] An aspect of the disclosure provides a multi-layer scratch paper mode for a touchscreen device. Students often use scratch paper and an ink pen to do their homework. In an aspect, digital scratch pages and the "real," or original, page can be seen on the same touchscreen via an overlay in a scratch pen mode. The scratch pen mode can be entered either by using a button on the stylus pen or from a menu selection within the application. After entering this mode, the original content on the original page, such as text and/or graphics, can change color and/or fade into the background, but is still visible. The touchscreen can then be used for scratch operations, such as for equation calculations. Any subsequent pen or finger traces are saved in multi-layer buffers. For example, given two equations on the same original page, the user could save traces for the first calculation in one of the multi-layer buffers and the traces for the other calculation in another of the multi-layer buffers. [0046] After getting the proper result on the scratch paper, i.e., the scratch mode overlay layer, the user can switch back to the non-scratch mode and transfer the result from the scratch page(s)/layer(s) to the original page. The user can then select either to keep the current scratch pages/layers or erase them. Through this multi-layer buffering scheme, the user can save up to a predefined number of scratch pages for later reference. [0047] FIG. 1 illustrates an exemplary architecture 100 for implementing this aspect of the disclosure. For example, the scratch mode configuration 112 may be configured to implement the functionality particular to this aspect. [0048] FIG. 5 illustrates an exemplary flow 500 of a scratch mode operation. The flow 500 may be performed by the architecture 100 illustrated in FIG. 1, for example the scratch mode configuration 112 in combination with other elements of the architecture 100. Flow 500 starts at 505. At 510, the architecture 100, for example the scratch mode configuration 112, determines whether or not it has entered the scratch mode. As discussed above, this may be based on the user pressing a button on the stylus pen or a menu selection within the user application 110. [0049] If the user application 110 is in the scratch mode, then at 570, the architecture 100, for example the scratch mode configuration 112 in combination with the multi-layer buffer manager 118, loads a scratch page/layer to the foreground and overlays it on the original page and any previously loaded scratch pages/layers. The previously added scratch pages/layers may be grayed-out similar to the original page, as discussed above. At 580, the architecture 100, for example the scratch mode configuration 112 in combination with the multi-layer buffer manager 118 and the ultrasound pen service 122, buffers the scratch page/layer and any traces made thereon, outputs any pen information received at 590, and returns to 510. [0050] If, at 510, the architecture 100, for example the scratch mode configuration 112, determines that the application is not in the scratch mode, or leaves the scratch mode, then at 520, the architecture 100, for example the scratch mode configuration 112 in combination with the multi-layer buffer manager 118, moves the scratch page/layer to the background and, at 530, buffers the original page and brings it to the foreground. At 590, the architecture 100, for example the scratch mode configuration 112 in combination with the ultrasound pen service 122, outputs any received pen information, such as any pen traces. [0051] At 540, the architecture 100, for example the scratch mode configuration 112, determines whether or not it should delete the scratch page/layer. This may be based on user input, lack of storage space, expiration of a timer, or any other appropriate criteria. If the scratch page/layer should be deleted, then at 550, the architecture 100, for example the scratch mode configuration 112 in combination with the multi-layer buffer manager 118, clears the current scratch page/layer and decrements a counter representing the number of scratch pages/layers. If, however, the scratch page/layer should not be deleted, then at 560, the architecture 100, for example the scratch mode configuration 112 in combination with the multi-layer buffer manager 118, saves the scratch page/layer and increments the counter representing the number of scratch pages/layers. The flow 500 then returns to 510.
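The scratch-page bookkeeping of FIG. 5 amounts to a small stack of layer buffers plus a counter. The class below is a hypothetical sketch of that bookkeeping only, not an implementation of the multi-layer buffer manager 118; the page limit is an assumed placeholder for the predefined number of scratch pages:

class ScratchPages:
    def __init__(self, max_pages: int = 8):
        self.layers, self.max_pages = [], max_pages   # one buffer per layer

    def enter_scratch_mode(self) -> list:
        # 570/580: overlay and buffer a new scratch layer, up to the limit
        if len(self.layers) < self.max_pages:
            self.layers.append([])
        return self.layers[-1]          # traces made in this mode land here

    def leave_scratch_mode(self, keep: bool) -> int:
        # 520/530: scratch layer moves behind the foregrounded original page
        if not keep and self.layers:
            self.layers.pop()           # 550: delete the layer, decrement count
        # 560: otherwise the layer stays buffered for later reference
        return len(self.layers)         # the layer counter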
[0052] An aspect of the disclosure provides a virtual pressure sensor for a stylus pen. To mimic the behavior of a real ink pen, pressure information can be used to change the trace color and/or line width. Current touch APIs have pressure and size properties that can be used by the pen application, and they do not require the pen to send any pressure information to the touchscreen device. These properties are not accurate enough, however, to realize useful pressure sensing functionality, since capacitive touchscreens, the most common type, are more sensitive to size than to pressure, and the pressure result varies as a function of the touch orientation, size, etc. Further, touchscreens from different vendors will produce different output. [0053] To increase the accuracy of the pressure sensing functionality, a stylus pen can send pressure information directly to the touchscreen device. This requires a pressure sensor to be installed in the pen. The structure of a pressure sensor equipped ultrasound pen 600 is illustrated in FIG. 6. Ultrasound pen 600 includes a pen tip 610 that sends data to a pressure sensor 620. The pressure sensor 620 sends data to an analog/digital (A/D) converter 630, which sends data to a message encoder 640 and an on/off switch 660. The on/off switch state can be derived from the pressure sensor data rather than using a hardware switch. The message encoder 640 receives data from the on/off switch 660 and sends data to a transmitter 650. [0054] Challenges for a hardware pressure sensor approach include the need to design small form factor pressure sensors that can achieve high pressure resolution. Hardware pressure sensors also increase the cost, design complexity, and calibration difficulty of the pen. [0055] The virtual pressure sensor mode of the disclosure does not require a hardware pressure sensor. For an ultrasound pen, three dimensional (3D) coordinates (x, y, z) are available and the resolution of the Z-axis is high enough to substitute for real pressure information.
To enter this mode, a user could, for example, hold a button on the pen while tracing. The button can disable the normal pen power-save mode that normally starts after a pen tip up timeout. In this mode, pen traces above the touchscreen can change the line width as a function of the height (Z distance) of the tip relative to the touchscreen. The virtual pressure mode can also change the pen to a brush mode with different brush widths based on the pen tilt information. An ultrasound pen may track this tilt information. The larger the tilt angle, the wider the brush, similar to the behavior of a real ink pen/brush. Another alternative is to give the pen tip more room to move up and down depending on the pen tip pressure. This allows the virtual pressure mode to also work for screen writing with the pen tip down. [0056] FIG. 1 illustrates an exemplary architecture 100 for implementing this aspect of the disclosure. For example, the virtual pressure sense configuration 116 may be configured to implement the functionality particular to this aspect. [0057] FIG. 7 illustrates an exemplary flow 700 for providing a virtual pressure sense mode. Flow 700 starts at 710. At 720, the architecture 100, for example the virtual pressure sense configuration 116, determines whether or not it is in the pressure sense mode. If it is not, then at 780, it operates as in the normal on-screen tracing mode, for example, according to the flow of FIG. 4, and outputs any received pen information at 790. If, however, the architecture 100, for example the virtual pressure sense configuration 116, determines that it is in the pressure sense mode, then at 730, it determines whether or not the pen is tip-down on the touchscreen, as in 220 of FIG. 2. In this aspect, if the pen is touching the touchscreen, that is, if the threshold distance is zero, then at 780, the architecture 100 operates as in the normal on-screen tracing mode, for example, according to 460-480 of the flow of FIG. 4, and outputs any received pen information at 790. [0058] If, however, the architecture 100, for example the virtual pressure sense configuration 116 in combination with the ultrasound pen service 122, determines at 730 that the pen is not contacting the touchscreen, then at 740, the architecture 100, for example the virtual pressure sense configuration 116 in combination with the ultrasound pen service 122, changes the current trace color to a darker color and/or increases the line width. Flow 700 returns to 720 to determine whether or not the application leaves the pressure sense mode and also proceeds to 750. At 750, the architecture 100, for example the virtual pressure sense configuration 116 in combination with the ultrasound pen service 122, calculates the Z distance from the pen to the touchscreen. The Z distance may be constrained by a maximum threshold (which may be the same as or different from the threshold distance discussed above, for example with respect to FIG. 2), or limited only by the accuracy/range of the ultrasound pen. At 760, the architecture 100, for example the virtual pressure sense configuration 116 in combination with the ultrasound pen service 122, changes the trace color and/or line width according to the Z distance, returns to 720 to determine whether or not the application leaves the pressure sense mode, and also proceeds to 770. At 770, if enabled, the architecture 100, for example the virtual pressure sense configuration 116 in combination with the ultrasound pen service 122, changes the brush width according to the pen tilt angle. At 790, the architecture 100, for example the virtual pressure sense configuration 116 in combination with the ultrasound pen service 122, outputs the pen information, including the trace color, line width, and/or brush width.
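Steps 740-770 reduce to a mapping from hover height (and, optionally, tilt) to trace styling. The sketch below shows one such mapping; the ranges, the linear interpolation, and the return format are assumptions for illustration, not values from this disclosure:

MAX_Z_MM = 30.0   # assumed usable sensing height above the screen

def trace_style(z_mm: float, tilt_deg: float = 0.0) -> dict:
    # 750: clamp the measured Z distance; 760: a closer tip yields a darker,
    # wider trace, mimicking heavier pressure from a real ink pen
    z = min(max(z_mm, 0.0), MAX_Z_MM)
    pressure = 1.0 - z / MAX_Z_MM             # virtual pressure from Z distance
    width = 1.0 + 9.0 * pressure              # line width from 1 to 10 units
    gray = int(200 * (1.0 - pressure))        # 0 (black) when the tip is closest
    brush_width = width * (1.0 + tilt_deg / 90.0)  # 770: larger tilt, wider brush
    return {"width": width, "color": (gray, gray, gray), "brush_width": brush_width}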
At 770, if enabled, the architecture 100, for example the virtual pressure sense configuration 116 in combination with the ultrasound pen service 122, changes the brush width according to the pen tilt angle. At 790, the architecture 100, for example the virtual pressure sense configuration 116 in combination with the ultrasound pen service 122, outputs the pen information, including the trace color, line width, and/or brush width. [0059] In the virtual pressure sense mode illustrated in FIG. 7, the touch buffer may be cleared and the pen trace buffered, as in 460 and 480 of FIG. 4, if the pen is not touching the screen (i.e., the no branch of 730). This is because, although the pen is not physically touching the screen, the user is still using it to write on the screen. In embodiments where the threshold distance discussed above, for example with respect to FIG. 2, is greater than zero, the touch buffer may be cleared and the pen trace buffered only when the pen is within the threshold distance. In some embodiments, a first threshold distance may be used when the pressure sense mode is engaged, and a second (different) threshold distance used when the pressure sense mode is not engaged. For example, the user may wish to clear the touch buffer at a further distance in the pressure sense mode because the user may be likely to provide input using the pen at a greater distance in the pressure sense mode. In some embodiments, there may be no threshold distance or the threshold distance may be functionally infinite while in the pressure sense mode or set thereto when the pressure sense mode is engaged, for example, such that the touch buffer is cleared and the pen trace buffered regardless of how far the pen is from the touchscreen. [0060] FIG. 8 illustrates an exemplary ultrasound stylus pen 802 and touchscreen device 806 that can perform the flow 700 of FIG. 7. A front view 850 shows a top view of one or more ultrasound emitters 804 and the touchscreen device 806. The ultrasound emitter(s) 804 emit ultrasonic waves 830, 832, 834, 836, 838, and 840 that are received by various microphones to determine the ultrasound emitter(s) 804's position over the touchscreen device 806 in the x-y-z plane. A side view 860 shows the ultrasound pen 802 above the touchscreen device 806. The ultrasound emitter(s) 804 emit ultrasonic waves 812, 814, and 816 that are received by microphones 822, 824, and 826 to determine the vertical distance, or Z distance, 820, in addition to the x, y position. The user can increase or decrease the vertical distance 820 to increase or decrease the line width and/or change the trace color. To calculate the tilt of the pen 802, at least two ultrasound emitters 804 can be used.
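As a rough illustration of 750-770, the mapping from Z distance to line width and from tilt angle to brush width might be implemented as linear interpolation over a clamped range, as in the following C sketch. The constants and function names (Z_MAX_MM, width_from_z, etc.) are hypothetical; the disclosure does not prescribe any particular mapping.

    #define Z_MAX_MM        30.0f   /* assumed maximum Z threshold (see FIG. 2) */
    #define WIDTH_MIN_PX     1.0f
    #define WIDTH_MAX_PX    12.0f
    #define TILT_MAX_DEG    60.0f
    #define BRUSH_MAX_PX    24.0f

    /* 750/760: thinner (and lighter) traces as the tip rises from the screen. */
    static float width_from_z(float z_mm)
    {
        if (z_mm < 0.0f) z_mm = 0.0f;
        if (z_mm > Z_MAX_MM) z_mm = Z_MAX_MM;       /* constrain by maximum threshold */
        float t = 1.0f - (z_mm / Z_MAX_MM);         /* 1.0 at the screen, 0.0 at Z_MAX */
        return WIDTH_MIN_PX + t * (WIDTH_MAX_PX - WIDTH_MIN_PX);
    }

    /* 770: the larger the tilt angle, the wider the brush. */
    static float brush_from_tilt(float tilt_deg)
    {
        if (tilt_deg < 0.0f) tilt_deg = 0.0f;
        if (tilt_deg > TILT_MAX_DEG) tilt_deg = TILT_MAX_DEG;
        return WIDTH_MIN_PX + (tilt_deg / TILT_MAX_DEG) * (BRUSH_MAX_PX - WIDTH_MIN_PX);
    }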
[0061] FIG. 9 illustrates a block diagram of an exemplary system 900 in accordance with the various aspects disclosed herein. For example, system 900 can include elements such as a touchscreen 910, a buffer 906, a pen 904, a processor 902, and a memory 908. These elements may be in communication with each other as is known in the art and discussed herein (e.g., passive/active pens, etc.). Those of skill in the art will appreciate that the links shown in FIG. 9 may be wired or wireless links. Further, those of skill in the art will appreciate that the links shown may be direct, indirect, or logical links. For example, the pen 904 is not required to be in direct communication with the buffer 906 and/or the processor 902. In some embodiments, information and/or data from the pen 904 may be communicated to the buffer 906 via the processor 902 and/or one or more elements such as an input/output interface, microphone, wireless receiver, etc. Accordingly, the arrangement of FIG. 9 is merely provided for illustration and not limitation of the various aspects of the disclosure. Additionally, it will be appreciated that the various elements illustrated in FIG. 9 can be used to perform the various functionalities disclosed herein. For example, in one aspect, system 900 can be configured to reject palm touches on a touchscreen 910. The system 900 can include logic configured to determine (e.g., implemented in processor 902 in combination with memory 908) whether or not a pen 904 is touching the touchscreen 910, and a touch buffer (e.g., buffer 906) can be configured to buffer trace information generated by a user's touch if the pen is not touching the touchscreen 910. The system 900 can also include logic configured to clear the touch buffer 906 and buffer (e.g., also in buffer 906) trace information generated by the pen, if the pen 904 is touching the touchscreen 910. It will be appreciated that the logic configured to clear the touch buffer can also be implemented by processor 902 in combination with memory 908. However, aspects may also be implemented in independent application specific circuits as discussed herein and may be integrated into the various elements (e.g., pen 904, touchscreen 910, etc.). Further, while buffer 906 has been described as buffering both the touch and pen information, it will be appreciated that aspects may include two or more buffers that may be separate (e.g., may be integrated in the touchscreen 910 and pen 904), may be implemented by the processor 902 (which may be one or more processors) in combination with memory 908 (which may be integrated into processor 902 or one or more separate elements), or any combination of the foregoing. Also, it will be appreciated that system 900 may be any device that uses, integrates, and/or incorporates the various elements disclosed herein, such as a wireless device, smart phone, personal computer, laptop, tablet, etc. [0062] Further, the system 900 may include various elements of the architecture 100 illustrated in FIG. 1. For example, the processor 902 in combination with the memory 908 and buffer 906 may include the scratch mode configuration 112, the palm rejection configuration 114, the multi-layer buffer manager 118, the ultrasound pen service 122, and/or the touch sensor service 124. Alternatively, the elements of architecture 100 may be incorporated in other system components not shown in FIG. 9.
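The palm rejection behavior described for system 900 can be sketched as a small state update applied to each input event: touch traces are held back while the pen is away, and are discarded once the pen comes down. This is a minimal sketch, assuming hypothetical trace_point_t and trace_buffer_t helper types that the disclosure does not define.

    #include <stdbool.h>

    typedef struct { float x, y; } trace_point_t;

    typedef struct {
        trace_point_t points[256];
        int           len;
    } trace_buffer_t;

    static void buffer_point(trace_buffer_t *b, trace_point_t p)
    {
        if (b->len < 256)
            b->points[b->len++] = p;
    }

    /* One update of the palm-rejection logic of system 900: while the pen is
     * not touching the screen, a touch is tentatively buffered (it may be a
     * palm or a deliberate finger); once the pen touches, the touch buffer is
     * cleared and only pen trace information is buffered. */
    static void on_input(trace_buffer_t *touch_buf, trace_buffer_t *pen_buf,
                         bool pen_touching, bool event_is_pen, trace_point_t p)
    {
        if (pen_touching) {
            touch_buf->len = 0;              /* clear buffered palm/finger traces */
            if (event_is_pen)
                buffer_point(pen_buf, p);    /* keep only pen traces */
        } else if (!event_is_pen) {
            buffer_point(touch_buf, p);      /* hold touch traces for later decision */
        }
    }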
[0063] FIG. 10 illustrates an exemplary UE in accordance with an aspect of the disclosure. Referring to FIG. 10, UE 1000 is illustrated as a touchscreen device (e.g., a smart phone, a tablet computer, etc.). As shown in FIG. 10, an external casing of UE 1000 is configured with a touchscreen display 1005, peripheral buttons 1010, 1015, 1020, and 1025 (e.g., a power control button, a volume or vibrate control button, an airplane mode toggle button, etc.), at least one front-panel button 1030 (e.g., a Home button, etc.), among other components, as is known in the art. While not shown explicitly as part of UE 1000, the UE 1000 can include one or more external antennas and/or one or more integrated antennas that are built into the external casing of UE 1000, including but not limited to WiFi antennas, cellular antennas, satellite position system (SPS) antennas (e.g., global positioning system (GPS) antennas), and so on. [0064] While internal components of UEs, such as the UE 1000, can be embodied with different hardware configurations, a basic high-level UE configuration for internal hardware components is shown as platform 1002 in FIG. 10. The platform 1002 can receive and execute software applications, data and/or commands transmitted from a wireless access point or a radio access network (RAN) that may ultimately come from a core network, the Internet, and/or other remote servers and networks (e.g., an application server, web URLs, etc.). The platform 1002 can also independently execute locally stored applications without RAN interaction. The platform 1002 can include a transceiver 1006 operably coupled to an application specific integrated circuit (ASIC) 1008, or other processor, microprocessor, logic circuit, or other data processing device. The ASIC 1008 or other processor executes the application programming interface (API) 1004 layer that interfaces with any resident programs in the memory 1012 of the wireless device. The memory 1012 can be comprised of read-only memory (ROM) or random-access memory (RAM), electrically erasable programmable ROM (EEPROM), flash cards, or any memory common to computer platforms. The platform 1002 also can include a local database 1014 that can store applications not actively used in memory 1012, as well as other data. The local database 1014 is typically a flash memory cell, but can be any secondary storage device as known in the art, such as magnetic media, EEPROM, optical media, tape, soft or hard disk, or the like. [0065] Further, the platform 1002 may include various elements of the architecture 100 illustrated in FIG. 1. For example, the ASIC 1008 in combination with the API 1004, the memory 1012, and the local database 1014 may include the scratch mode configuration 112, the palm rejection configuration 114, the multi-layer buffer manager 118, the ultrasound pen service 122, and/or the touch sensor service 124. Alternatively, the elements of architecture 100 may be incorporated in other UE components not shown in FIG. 10. Further, the platform 1002 may omit one or more of the elements illustrated in FIG. 10. For example, the transceiver 1006 may be omitted in some embodiments. Further, one or more microphones configured to receive ultrasonic signals may be included in the platform 1002, as may one or more other elements which are not illustrated in FIG. 10. [0066] Accordingly, an aspect of the disclosure can include a UE (e.g., UE 1000, etc.) including the ability to perform the functions described herein. As will be appreciated by those skilled in the art, the various logic elements can be embodied in discrete elements, software modules executed on a processor, or any combination of software and hardware to achieve the functionality disclosed herein. For example, ASIC 1008, memory 1012, API 1004 and local database 1014 may all be used cooperatively to load, store and execute the various functions disclosed herein, and thus the logic to perform these functions may be distributed over various elements. Alternatively, the functionality could be incorporated into one discrete component.
Therefore, the features of the UE 1000 in FIG. 10 are to be considered merely illustrative and the disclosure is not limited to the illustrated features or arrangement. [0067] FIG. 11 illustrates a communication device 1100 that includes logic configured to perform functionality. The communication device 1100 can correspond to any of the above-noted communication devices, including but not limited to UEs 806 and 1000. [0068] Referring to FIG. 11, the communication device 1100 includes logic configured to receive and/or transmit information 1105. In an example, if the communication device 1100 corresponds to a wireless communications device (e.g., UE 1000), the logic configured to receive and/or transmit information 1105 can include a wireless communications interface (e.g., Bluetooth, WiFi, 2G, CDMA, W-CDMA, 3G, 4G, LTE, etc.) such as a wireless transceiver and associated hardware (e.g., a radio frequency (RF) antenna, a MODEM, a modulator and/or demodulator, etc.). In another example, the logic configured to receive and/or transmit information 1105 can correspond to a wired communications interface (e.g., a serial connection, a USB or Firewire connection, an Ethernet connection through which the Internet can be accessed, etc.). In a further example, the logic configured to receive and/or transmit information 1105 can include sensor or measurement hardware by which the communication device 1100 can monitor its local environment (e.g., an accelerometer, a temperature sensor, a light sensor, an antenna for monitoring local RF signals, etc.). In another example, the logic configured to receive and/or transmit information 1105 can include one or more microphones and/or sensors to receive information from a pen, such as ultrasound pen 802 in FIG. 8. The logic configured to receive and/or transmit information 1105 may include the IPC/socket 128, the active stylus driver 132, and/or the touch sensor driver 134 of FIG. 1. The logic configured to receive and/or transmit information 1105 can also include software that, when executed, permits the associated hardware of the logic configured to receive and/or transmit information 1105 to perform its reception and/or transmission function(s). However, the logic configured to receive and/or transmit information 1105 does not correspond to software alone, and the logic configured to receive and/or transmit information 1105 relies at least in part upon hardware to achieve its functionality. [0069] Referring to FIG. 11, the communication device 1100 further includes logic configured to process information 1110. In an example, the logic configured to process information 1110 can include at least a processor. Example implementations of the type of processing that can be performed by the logic configured to process information 1110 include but are not limited to performing determinations, establishing connections, making selections between different information options, performing evaluations related to data, interacting with sensors coupled to the communication device 1100 to perform measurement operations, converting information from one format to another (e.g., between different protocols such as .wmv to .avi, etc.), and so on. The logic configured to process information 1110 may include the scratch mode configuration 112, the palm rejection configuration 114, the multi-layer buffer manager 118, the ultrasound pen service 122, and/or the touch sensor service 124 of FIG. 1, and/or the processor 902 of FIG. 9.
The logic configured to process information 1110 can include logic configured to determine whether or not a pen is within a threshold distance of the touchscreen, logic configured to store trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen, and logic configured to clear the touch buffer and store trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen. The processor included in the logic configured to process information 1110 can correspond to a general purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The logic configured to process information 1110 can also include software that, when executed, permits the associated hardware of the logic configured to process information 1110 to perform its processing function(s). However, the logic configured to process information 1110 does not correspond to software alone, and the logic configured to process information 1110 relies at least in part upon hardware to achieve its functionality. [0070] Referring to FIG. 11, the communication device 1100 further includes logic configured to store information 1115. The logic configured to store information 1115 can include logic configured to store trace information generated by a user's touch in a touch buffer if the pen is not within the threshold distance of the touchscreen and logic configured to clear the touch buffer and store trace information generated by the pen in the touch buffer if the pen is within the threshold distance of the touchscreen. In an example, the logic configured to store information 1115 can include at least a non-transitory memory and associated hardware (e.g., a memory controller, etc.). For example, the non-transitory memory included in the logic configured to store information 1115 can correspond to RAM, flash memory, ROM, erasable programmable ROM (EPROM), EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. The logic configured to store information 1115 can include the multi-layer buffer manager 118 of FIG. 1 and/or the buffer 906 and/or the memory 908 of FIG. 9. The logic configured to store information 1115 can also include software that, when executed, permits the associated hardware of the logic configured to store information 1115 to perform its storage function(s). However, the logic configured to store information 1115 does not correspond to software alone, and the logic configured to store information 1115 relies at least in part upon hardware to achieve its functionality. [0071] Referring to FIG. 11, the communication device 1100 further optionally includes logic configured to present information 1120.
In an example, the logic configured to present information 1120 can include at least an output device and associated hardware. For example, the output device can include a video output device (e.g., a display screen, a port that can carry video information such as USB, HDMI, etc.), an audio output device (e.g., speakers, a port that can carry audio information such as a microphone jack, USB, HDMI, etc.), a vibration device and/or any other device by which information can be formatted for output or actually outputted by a user or operator of the communication device 1100. For example, if the communication device 1100 corresponds to UE 1000 as shown in FIG. 10, the logic configured to present information 1120 can include the touchscreen display 1005 of UE 1000. In a further example, the logic configured to present information 1120 can be omitted for certain communication devices, such as network communication devices that do not have a local user (e.g., network switches or routers, remote servers, etc.). The logic configured to present information 1120 can also include software that, when executed, permits the associated hardware of the logic configured to present information 1120 to perform its presentation function(s). However, the logic configured to present information 1120 does not correspond to software alone, and the logic configured to present information 1120 relies at least in part upon hardware to achieve its functionality. [0072] Referring to FIG. 11, the communication device 1100 further optionally includes logic configured to receive local user input 1125. In an example, the logic configured to receive local user input 1125 can include at least a user input device and associated hardware. For example, the user input device can include buttons, a touchscreen display, a keyboard, a camera, an audio input device (e.g., a microphone or a port that can carry audio information such as a microphone jack, etc.), and/or any other device by which information can be received from a user or operator of the communication device 1100. For example, if the communication device 1100 corresponds to UE 1000 as shown in FIG. 10, the logic configured to receive local user input 1125 can include any of the buttons 1010 through 1025, the touchscreen display 1005, etc. As another example, if the communication device 1100 corresponds to UE 806 as shown in FIG. 8, the logic configured to receive local user input 1125 can include any of the microphones 822, 824, and 826. In a further example, the logic configured to receive local user input 1125 can be omitted for certain communication devices, such as network communication devices that do not have a local user (e.g., network switches or routers, remote servers, etc.). The logic configured to receive local user input 1125 can also include software that, when executed, permits the associated hardware of the logic configured to receive local user input 1125 to perform its input reception function(s). However, the logic configured to receive local user input 1125 does not correspond to software alone, and the logic configured to receive local user input 1125 relies at least in part upon hardware to achieve its functionality. [0073] Referring to FIG. 11, while the configured logics of 1105 through 1125 are shown as separate or distinct blocks in FIG. 11, it will be appreciated that the hardware and/or software by which the respective configured logic performs its functionality can overlap in part.
For example, any software used to facilitate the functionality of the configured logics of 1105 through 1125 can be stored in the non-transitory memory associated with the logic configured to store information 1115, such that the configured logics of 1105 through 1125 each perform their functionality (i.e., in this case, software execution) based in part upon the operation of software stored by the logic configured to store information 1115. Likewise, hardware that is directly associated with one of the configured logics can be borrowed or used by other configured logics from time to time. For example, the processor of the logic configured to process information 1110 can format data into an appropriate format before the data is transmitted by the logic configured to receive and/or transmit information 1105, such that the logic configured to receive and/or transmit information 1105 performs its functionality (i.e., in this case, transmission of data) based in part upon the operation of hardware (i.e., the processor) associated with the logic configured to process information 1110. [0074] Generally, unless stated otherwise explicitly, the phrase "logic configured to" as used throughout this disclosure is intended to invoke an aspect that is at least partially implemented with hardware, and is not intended to map to software-only implementations that are independent of hardware. Also, it will be appreciated that the configured logic or "logic configured to" in the various blocks is not limited to specific logic gates or elements, but generally refers to the ability to perform the functionality described herein (either via hardware or a combination of hardware and software). Thus, the configured logics or "logic configured to" as illustrated in the various blocks are not necessarily implemented as logic gates or logic elements despite sharing the word "logic." Other interactions or cooperation between the logic in the various blocks will become clear to one of ordinary skill in the art from a review of the aspects described herein in more detail. [0075] Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [0076] Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
[0077] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. [0078] The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal (e.g., UE). In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. [0079] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0080] While the foregoing disclosure shows illustrative aspects of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the disclosure described herein need not be performed in any particular order. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
A processor of an aspect includes a decode unit to decode an aperture access instruction, and an execution unit coupled with the decode unit. The execution unit, in response to the aperture access instruction, is to read a host physical memory address, which is to be associated with an aperture that is to be in system memory, from an access protected structure, and access data within the aperture at a host physical memory address that is not to be obtained through address translation. Other processors are also disclosed, as are methods, systems, and machine-readable media storing aperture access instructions.
1. A processor comprising:
a decode unit to decode an aperture access instruction; and
an execution unit coupled with the decode unit, the execution unit, in response to the aperture access instruction, to:
read, from an access protected structure, a host physical memory address that is to be associated with an aperture that is to be in system memory; and
access data within the aperture at a host physical memory address that is not to be obtained through address translation.
2. The processor of claim 1, wherein the aperture is to represent a portion of the system memory that is not to be accessible through address translation.
3. The processor of claim 1, wherein the decode unit is to decode the aperture access instruction, which is to be an aperture write instruction, wherein the aperture write instruction is to indicate a source operand, and wherein the execution unit, in response to the aperture write instruction, is to receive data from the source operand and store the data from the source operand to the host physical memory address within the aperture.
4. The processor of claim 3, wherein the source operand is to be in the system memory, and wherein the execution unit, in response to the aperture write instruction, is to perform address translation to obtain a host physical memory address that is to be used to receive the data from the source operand.
5. The processor of claim 1, wherein the decode unit is to decode the aperture access instruction, which is to be an aperture read instruction, wherein the aperture read instruction is to indicate a destination operand, and wherein the execution unit, in response to the aperture read instruction, is to read data from the host physical memory address within the aperture and store the data read from the aperture to the destination operand.
6. The processor of any one of claims 1 to 5, wherein the execution unit, in response to the aperture access instruction, is to read the host physical memory address from the access protected structure, the access protected structure comprising a virtual machine control structure.
7. The processor of any one of claims 1 to 5, wherein the decode unit is to decode at least one load from memory instruction that, if executed, is not to be allowed to read the host physical memory address associated with the aperture from the access protected structure.
8. The processor of any one of claims 1 to 5, wherein the execution unit, in response to the aperture access instruction, is to read the host physical memory address from the access protected structure, which is to be stored in the system memory, and wherein the decode unit is to decode the aperture access instruction, which is not to indicate any architecturally visible memory address information for the access protected structure.
9. The processor of any one of claims 1 to 5, wherein the decode unit is to decode the aperture access instruction, which is to indicate an offset, and wherein the execution unit, in response to the aperture access instruction, is to access the data within the aperture at a host physical memory address that is to differ from a host physical memory address corresponding to a base of the aperture by the offset.
10. The processor of any one of claims 1 to 5, wherein the execution unit, in response to the aperture access instruction, is to read the host physical memory address from the access protected structure, the host physical memory address representing a base address of an aperture block of a plurality of contiguous apertures.
11. The processor of claim 10, wherein the decode unit is to decode the aperture access instruction, which is to indicate an aperture selector to select one of the plurality of apertures.
12. The processor of any one of claims 1 to 5, wherein the execution unit, in response to the aperture access instruction, is to read the host physical memory address from the access protected structure, the host physical memory address representing a base address of an aperture list, and wherein the aperture list is to store a plurality of host physical memory addresses, each for a base address of a different one of a plurality of potentially non-contiguous apertures.
13. The processor of claim 12, wherein the decode unit is to decode the aperture access instruction, which is to indicate an aperture selector to select one of the plurality of apertures.
14. A method performed by a processor, comprising:
receiving an aperture write instruction at the processor, the aperture write instruction indicating a source operand;
reading, from an access protected structure, a host physical memory address associated with an aperture in system memory, in response to the aperture write instruction; and
storing data received from the source operand within the aperture, at a host physical memory address that is not obtained through address translation, in response to the aperture write instruction.
15. The method of claim 14, further comprising:
receiving an aperture read instruction at the processor, the aperture read instruction indicating a destination operand;
reading, from the access protected structure, the host physical memory address associated with the aperture in the system memory, in response to the aperture read instruction;
reading data from the aperture at a host physical memory address that is not obtained through address translation, in response to the aperture read instruction; and
storing the data read from the aperture to the destination operand.
16. The method of claim 15, further comprising issuing the aperture write instruction from a first virtual machine, and issuing the aperture read instruction from a second virtual machine, and wherein the aperture write instruction and the aperture read instruction are used to share data between the first virtual machine and the second virtual machine.
17. The method of claim 14, further comprising preventing the host physical memory address storing the data received from the source operand from being reachable through a second level hierarchical paging structure.
18. The method of claim 14, performed by a virtual machine, and wherein the virtual machine is prevented from knowing the host physical memory address that stores the data received from the source operand.
19. An article of manufacture comprising a non-transitory machine readable storage medium storing instructions that, if executed by a machine, cause the machine to perform operations comprising:
allocating a region of system memory for an aperture;
causing a host physical memory address associated with the aperture to be stored in an access protected structure; and
making a host physical memory address of the aperture inaccessible through a second level hierarchical paging structure.
20. The article of claim 19, wherein the instructions to store the host physical memory address further comprise instructions that, if executed by the machine, cause the machine to perform operations comprising storing the host physical memory address in an access protected structure comprising a virtual machine control structure.
21. The article of any one of claims 19 to 20, wherein the instructions to store the host physical memory address further comprise instructions that, if executed by the machine, cause the machine to perform operations comprising:
storing the host physical memory address in an access protected structure that is to correspond to a first virtual machine; and
storing the host physical memory address in a second access protected structure that is to correspond to a second virtual machine.
22. A computer system comprising: an interconnect; the processor of any one of claims 1 to 5 coupled with the interconnect; and a dynamic random access memory (DRAM) coupled with the interconnect.
23. A device operative to perform the method of any one of claims 14 to 18.
24. A device comprising means for performing the method of any one of claims 14 to 18.
25. A non-transitory machine readable storage medium storing instructions comprising a first instruction that, if executed by a machine, is to cause the machine to perform the method of any one of claims 14 to 18.
Aperture access processors, methods, systems, and instructions

Technical Field

Embodiments described herein relate generally to processors. In particular, the embodiments described herein generally relate to processors having architectural extensions that support virtualization.

Background

A virtual machine monitor (VMM) can be used to create a virtual machine system in which virtual machines (VMs) can be operated. The VMM can present the abstraction of the VMs to the guest software running within each VM. The VMM can facilitate access to system hardware while generally maintaining control over various aspects of system hardware and operation.

In some implementations, VMs may generally be unaware that they are running on the VMM, and generally may not be aware of the existence of other VMs in the system. In other implementations, the VMs can be aware that they are running on the VMM and can be aware that other VMs are present in the system. Such VMs are sometimes described as being "semi-virtualized" or "informed."

Drawings

The invention may best be understood by reference to the following description and the accompanying drawings. In the drawings:

FIG. 1 is a block diagram of an embodiment of a virtual machine system in which embodiments of the invention may be implemented.
FIG. 2 is a block flow diagram of an embodiment of a method that can be performed by a VMM to provide an aperture.
FIG. 3 is a block diagram of an example embodiment of a VMM module.
FIG. 4 is a block flow diagram of an embodiment of a method of performing an embodiment of an aperture write instruction.
FIG. 5 is a block diagram of an embodiment of a processor that is operative to perform an embodiment of an aperture write instruction.
FIG. 6 is a block flow diagram of an embodiment of a method of performing an embodiment of an aperture read instruction.
FIG. 7 is a block diagram of an embodiment of a processor that is operative to perform an embodiment of an aperture read instruction.
FIG. 8 is a block diagram of a first method for accessing data from an aperture.
FIG. 9 is a block diagram of a second method for accessing data from an aperture.
FIG. 10 is a block diagram of a third method for accessing data from an aperture.
FIG. 11A is a block diagram illustrating an embodiment of an in-order pipeline and an embodiment of a register renaming out-of-order issue/execution pipeline.
FIG. 11B is a block diagram of an embodiment of a processor core including a front end unit coupled to an execution engine unit, with both the front end unit and the execution engine unit coupled to a memory unit.
FIG. 12A is a block diagram of an embodiment of a single processor core, along with its connection to an on-die interconnect network and a local subset of its level 2 (L2) cache.
FIG. 12B is a block diagram of an embodiment of an expanded view of a portion of the processor core of FIG. 12A.
FIG. 13 is a block diagram of an embodiment of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics.
FIG. 14 is a block diagram of a first embodiment of a computer architecture.
FIG. 15 is a block diagram of a second embodiment of a computer architecture.
FIG. 16 is a block diagram of a third embodiment of a computer architecture.
FIG. 17 is a block diagram of an embodiment of a system-on-a-chip architecture.
FIG. 18 is a block diagram of the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with embodiments of the invention.

Detailed Description

Disclosed herein are embodiments of aperture access instructions, embodiments of processors to execute the aperture access instructions, embodiments of methods performed by the processors when executing the aperture access instructions, embodiments of systems incorporating one or more processors to execute the aperture access instructions, and embodiments of programs or machine readable media providing the aperture access instructions. In some embodiments, the processor may have a decode unit to decode the aperture access instruction, or other logic to receive the aperture access instruction, and an execution unit or other logic to execute the aperture access instruction. Modules, programs, and machine readable media for managing apertures (e.g., allocating apertures, protecting apertures, configuring which entities can access one or more apertures, etc.) are also disclosed.

In the following description, numerous specific details are set forth (e.g., specific instruction operations, data formats, processor configurations, microarchitectural details, sequences of operations, virtual machine systems, etc.). However, the embodiments may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring the understanding of the description.

FIG. 1 is a block diagram of an embodiment of a virtual machine system 100 in which embodiments of the invention may be implemented. The virtual machine system includes a plurality of virtual machines (VMs) 102, a virtual machine monitor (VMM) 108, and system hardware 110. In the illustrated example, the VMs include a first virtual machine (VM1) 102-1, a second virtual machine (VM2) 102-2, and optionally other virtual machines.

In various embodiments, system hardware 110 may represent one or more desktop computers, laptop computers, notebook computers, tablet computers, servers, mainframes, network devices (e.g., routers, switches, etc.), smart phones, or one or more other types of computer systems or electronic devices. In some embodiments, the virtual machine system may alternatively be implemented on two or more such electronic devices and/or may be a distributed virtual machine system, although the scope of the invention is not so limited. Generally, the system hardware can include at least one processor 112 and memory 126. In some embodiments, the processor can be a general purpose processor. Alternatively, the processor can be a special purpose processor. Examples of suitable special purpose processors include, but are not limited to, network processors, communications processors, cryptographic processors, graphics processors, coprocessors, embedded processors, and digital signal processors (DSPs), to name just a few examples.
Two or more processors of the same or different types may also optionally be used. Memory 126 can include one or more memory devices of one or more types. Examples of suitable memory devices include, but are not limited to, various different types of random access memory (RAM), various different types of read only memory (ROM), one or more hard disks, flash memory, and the like, as well as various combinations thereof. The memory can be used to store software and data used to implement the virtual machine system.

Each VM can have its own guest software. As shown, the first VM can have a first guest operating system 104-1 and a first set of one or more guest applications 106-1. Likewise, the second VM can have a second guest operating system 104-2 and a second set of one or more guest applications 106-2. In various embodiments, these operating systems may represent standard or real-time operating systems, highly stripped-down operating environments with limited operating system functionality, or software that may not necessarily include all or at least some conventional operating system functionality and/or facilities. Guest software on each virtual machine may often desire to access at least some of the system hardware (e.g., processor 112 and memory 126).

The virtual machine system also includes a virtual machine monitor (VMM) 108. In the illustrated embodiment, a single VMM is shown, but in other embodiments two or more VMMs may alternatively be used. The VMM is sometimes referred to in the art as a hypervisor. The VMM can be implemented in software, firmware, hardware, or a combination thereof. The VMM may be more privileged than the VMs. The VMM can emulate the bare machine interface and export it to higher level software. The VMM can present the abstraction of the plurality of VMs 102 to other software (e.g., guest software running on and/or within the VMs). The VMM can facilitate guest software access to system hardware 110 while maintaining adequate control over the system hardware and/or certain other operational aspects (e.g., interrupts, etc.) to help provide proper operation of the guest software and to help provide protection from and between the guest software.

The VMM can generally allocate system hardware resources to the VMs (e.g., hardware threads, system memory space, disk drive storage space, etc.). The VMM can also take over control when needed or when certain types of events occur. The transition from VM operation to VMM operation and/or the transfer of control from the guest software to the VMM may be referred to as a VM exit. Potential causes of VM exits include, but are not limited to, certain types of privileged exceptions, system events, virtualization events, access violations to protected areas of memory, and the like. The transition to VM operation and/or the transfer of control from the VMM to a VM (e.g., guest software) may be referred to as a VM entry. Such VM entries and exits generally tend to have associated overhead or performance costs, largely due to context or state switching. The virtual machines do not need to be aware that they are running on the VMM, and do not need to be aware of the existence of other VMs in the system, but may be "semi-virtualized" or "informed." Often, however, aside from some fairly limited types of interactions, such as data sharing between VMs as described elsewhere herein, the VMM may in some way isolate the guest software stacks of different VMs from each other.
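To illustrate the VM exit/entry control transfers just described, the following is a minimal sketch, in C, of the dispatch loop a VMM might run: each exit reason is examined, serviced, and control is returned to the guest with a VM entry. The exit-reason values and handler names are illustrative assumptions, not part of the disclosure or of any particular instruction set.

    /* Hypothetical VM exit reasons (illustrative only). */
    enum vm_exit_reason {
        EXIT_PRIVILEGED_EXCEPTION,
        EXIT_SYSTEM_EVENT,
        EXIT_VIRTUALIZATION_EVENT,
        EXIT_PROTECTED_MEMORY_VIOLATION,
    };

    struct vm;                                           /* opaque per-VM state */

    extern enum vm_exit_reason run_guest_until_exit(struct vm *vm); /* VM entry + exit */
    extern void handle_exception(struct vm *vm);
    extern void handle_system_event(struct vm *vm);
    extern void handle_virt_event(struct vm *vm);
    extern void handle_memory_violation(struct vm *vm);

    /* Simplified VMM control loop: run the guest until an event forces a VM
     * exit, service it, then re-enter the guest.  The context/state switching
     * on each transition is the main overhead noted above. */
    void vmm_run(struct vm *vm)
    {
        for (;;) {
            switch (run_guest_until_exit(vm)) {
            case EXIT_PRIVILEGED_EXCEPTION:       handle_exception(vm); break;
            case EXIT_SYSTEM_EVENT:               handle_system_event(vm); break;
            case EXIT_VIRTUALIZATION_EVENT:       handle_virt_event(vm); break;
            case EXIT_PROTECTED_MEMORY_VIOLATION: handle_memory_violation(vm); break;
            }
        }
    }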
Virtual machine system 100 can use virtual memory. Virtual memory generally represents a memory management technique in which potentially non-contiguous physical memory (e.g., a physical address space) is presented to processes and/or applications (e.g., guest software) as contiguous memory (e.g., a virtual address space). A virtual address space can sometimes also be referred to as a linear address space. The underlying allocated physical memory may not be contiguous or sequentially organized and may even be included on multiple different types of storage devices. However, due to the virtual memory, the processes can view the memory as a linear sequence through the contiguous virtual address space. Virtual memory allows an application to execute without its entire address space needing to be resident in physical memory, which may allow the application to be executed using less physical memory than would be required to accommodate its entire address space. Applications may not need to be aware of how the physical memory is allocated. Virtual memory can also be used to isolate applications from each other, since each application's virtual address space can be independently mapped to one or more pages of physical memory that are exclusively allocated to that application. Physical memory can be logically divided into pages. The pages of the physical memory can be mapped to virtual addresses. Processes and applications (for example, guest software) can view only their virtual address space, regardless of where the corresponding data actually resides in physical memory.

A virtual address space can be mapped to a physical address space using a process called address translation. Address translation may involve looking up a physical address based on a given virtual address and optionally based on other information (e.g., in processor registers). In general, the first and second guest operating systems 104-1, 104-2 and the VMM 108 can work together to translate virtual addresses to the host physical memory addresses of actual physical memory locations in memory 126. Memory access instructions (e.g., read from memory instructions, write to memory instructions, etc.) may have associated virtual or linear addresses. A set of hierarchical paging structures 136 can be stored in memory. In the case of a virtual machine system, these hierarchical paging structures may include both first level paging structures and second level paging structures. As an example, the first level paging structures may include page tables or other paging structures, such as those that are commonly used for address translation in non-virtualized systems. In general, the guest operating systems 104-1, 104-2 can manage, or at least help to manage, these first level paging structures. The first level paging structures can be used to translate the virtual address of an instruction into a so-called guest physical address. The guest physical address may not yet represent the actual physical memory address usable to access an actual physical memory location in memory, but rather may represent an intermediate memory address that requires further address translation. In some embodiments, in addition to the first level paging structures (e.g., first level page tables), there may be additional second level paging structures, such as, for example, extended page tables, other second level page tables, or other second level paging structures. The VMM can manage the second level paging structures. The guest physical address obtained from the first level paging structures can be used as an input into the second level paging structures.
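The two-level translation just described (guest virtual address to guest physical address via the first level structures, then guest physical address to host physical address via the second level structures) can be viewed as the composition of two lookups, as in the following minimal C sketch. The single-call lookup helpers are simplifying assumptions; real paging structures are themselves multi-level walks and also carry permission bits.

    #include <stdint.h>

    typedef uint64_t gva_t;   /* guest virtual address  */
    typedef uint64_t gpa_t;   /* guest physical address */
    typedef uint64_t hpa_t;   /* host physical address  */

    /* Assumed lookup helpers standing in for the hardware walks. */
    extern gpa_t first_level_translate(gva_t va);    /* guest-managed page tables */
    extern hpa_t second_level_translate(gpa_t gpa);  /* VMM-managed (e.g., EPT)   */

    /* Full translation for a guest memory access: GVA -> GPA -> HPA.
     * Only addresses reachable through BOTH levels are accessible to normal
     * load/store instructions, which is what allows the VMM to hide an
     * aperture by simply providing no second level mapping to it. */
    hpa_t translate(gva_t va)
    {
        gpa_t gpa = first_level_translate(va);
        return second_level_translate(gpa);
    }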
The second level paging structures can be used to translate the guest physical address into the host physical memory address. Host physical memory addresses are sometimes referred to in the art as platform physical memory addresses. The host physical memory address can represent an actual physical memory address that can be used to access an actual physical memory location. The second level paging structures can also have access rights or permissions for the associated pages. As an example, such access rights or permissions may indicate whether a page is readable, writable, executable, or a combination thereof. In such a virtual machine system, access to actual physical memory locations may generally be provided only through valid address translations through the second level paging structures, as permitted by the VMM, at least for normal load, store, and other common memory access instructions that do not have special or additional privileges.

Referring again to FIG. 1, the hierarchical paging structures 136 are typically stored in memory. The processor can have address translation logic 124 to help translate virtual memory addresses to host physical memory addresses. As an example, the address translation logic can include a memory management unit (MMU) and one or more levels of translation lookaside buffers (TLBs). Initially, the MMU can operate to check the one or more TLBs to see whether an address translation has been cached or otherwise stored in the one or more TLBs. Each of the one or more TLBs may cache previously determined virtual memory address to host physical memory address translations. A TLB "hit" occurs when the appropriate address translation is stored in the one or more TLBs. In the case of a TLB hit, the address translation can be retrieved from the TLB entry. Conversely, a TLB "miss" occurs when the appropriate address translation is not stored in the one or more TLBs. In the event of a TLB miss, the MMU can perform a page table walk to determine the address translation. For example, the MMU can include a page miss handler unit or logic, a page table walk unit or logic, and the like. For example, the MMU can access and walk or advance through the hierarchical paging structures 136 to attempt to reach a page table entry that stores the host physical memory address. Once determined, the host physical memory address can be used to access the physical memory location. Moreover, the address translation determined by the page table walk can be cached in the TLB for possible future use. If the same address translation is needed again soon enough, it can be retrieved from the TLB relatively quickly, without the MMU needing to perform a relatively slow page table walk.
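The TLB hit/miss behavior described above amounts to a cache lookup placed in front of the page table walk, roughly as in the following C sketch. The direct-mapped TLB layout and the page_table_walk helper are simplifying assumptions; real TLBs are set-associative hardware structures.

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT  12
    #define TLB_ENTRIES 64

    typedef struct {
        uint64_t vpn;     /* virtual page number */
        uint64_t pfn;     /* host physical frame number */
        bool     valid;
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Assumed helper: slow walk through hierarchical paging structures 136. */
    extern uint64_t page_table_walk(uint64_t vpn);

    uint64_t translate_address(uint64_t vaddr)
    {
        uint64_t vpn = vaddr >> PAGE_SHIFT;
        uint64_t off = vaddr & ((1u << PAGE_SHIFT) - 1);
        tlb_entry_t *e = &tlb[vpn % TLB_ENTRIES];    /* direct-mapped for simplicity */

        if (e->valid && e->vpn == vpn)               /* TLB hit: fast path */
            return (e->pfn << PAGE_SHIFT) | off;

        uint64_t pfn = page_table_walk(vpn);         /* TLB miss: slow page walk */
        e->vpn = vpn;                                /* cache for possible future use */
        e->pfn = pfn;
        e->valid = true;
        return (pfn << PAGE_SHIFT) | off;
    }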
Referring again to FIG. 1, the memory can store at least one virtual machine control structure (VMCS) 128. The at least one VMCS broadly represents at least one access protected structure to store controls associated with virtual machines and/or virtualization. A VMCS can be access protected in that it may only be accessible by a limited set of special instructions that are specifically designed and intended to interact with it and that have special privileges (e.g., special access privileges to configure the VMCS, read from the VMCS, write to the VMCS, etc.), and may not be accessible by regular general-purpose read from memory instructions, store to memory instructions, and the like. The VMCS can control various different aspects associated with the operation of the virtual machine system in different embodiments, such as, for example, aspects related to VM operation, VM entries, VM exits, and the like. In some embodiments, multiple VMs can optionally share the same VMCS. In other embodiments, a different set of one or more VMCSs can be used for each VM. In still other embodiments, a different set of one or more VMCSs can be used for each logical processor of each VM, of which there can be at least two VMs. One specific example of a suitable VMCS 128 is the virtual machine control structure used in the virtual machine extensions (VMX) of certain Intel® 64 and IA-32 architecture processors, although the scope of the invention is not so limited. Another specific example of a suitable VMCS 128 is the VMCS used in IBM PowerVM® virtualization technology on Power Systems servers, although the scope of the invention is not so limited. For the sake of clarity, the use of the terms virtual machine control structure and VMCS herein is not intended to refer to the particular VMCS of VMX unless specifically stated.

Referring again to FIG. 1, the processor 112 has virtualization extensions 114. The virtualization extensions may include architectural extensions to support virtualization and/or hardware-assisted virtualization support logic. These virtualization extensions may include a set of instructions to support virtualization, as well as hardware, firmware, software, or a combination of such logic to execute the instructions. In some embodiments, the virtualization extensions can include at least one embodiment of an aperture access instruction 116 as disclosed herein. In some embodiments, the at least one aperture access instruction can optionally include at least one embodiment of an aperture write instruction 118 as disclosed herein, which can be used to write data 134 to an aperture 132. In some embodiments, the at least one aperture access instruction can optionally include at least one embodiment of an aperture read instruction 120 as disclosed herein, which can be used to read data 134 from the aperture 132. In some embodiments, only one of these instructions may optionally be supported. In other embodiments, both of these instructions may optionally be supported. The virtualization extensions may also include aperture access logic 122 that is operative to execute each of the at least one aperture access instructions 116. In some embodiments, the aperture access instruction(s) may be permitted to use the aperture access logic and access the aperture, but conventional general-purpose memory access instructions may not be permitted to use the aperture access logic or access the aperture.

In some embodiments, at least one aperture 132 can be located in the memory 126. The aperture can broadly represent a protected range, region, or other portion of the memory that can be used to securely store data. In some embodiments, the aperture may represent such a portion of the memory to which the VMM allows or restricts access.
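As a way to visualize how the execution of an aperture write or aperture read instruction differs from an ordinary store or load, the following C pseudocode models the behavior described here and in the claims: the host physical address comes from the access protected structure, not from address translation. This is a behavioral sketch only; the structure layout and the helper names are assumptions, not an actual hardware interface.

    #include <stddef.h>
    #include <stdint.h>

    /* Model of the access protected structure (e.g., a VMCS field); real
     * hardware does not expose it to general-purpose reads or writes. */
    struct access_protected_structure {
        uint64_t aperture_base_hpa;   /* host physical base address of aperture 132 */
        uint64_t aperture_size;
    };

    extern void write_physical(uint64_t hpa, const void *src, size_t n);
    extern void read_physical(uint64_t hpa, void *dst, size_t n);

    /* Aperture write instruction 118: data from the source operand is stored
     * into the aperture at a host physical address that is NOT obtained
     * through address translation (the source operand itself may still be
     * translated normally). */
    void aperture_write(const struct access_protected_structure *aps,
                        uint64_t offset, const void *src, size_t n)
    {
        if (offset + n <= aps->aperture_size)
            write_physical(aps->aperture_base_hpa + offset, src, n);
    }

    /* Aperture read instruction 120: the mirror image for loads. */
    void aperture_read(const struct access_protected_structure *aps,
                       uint64_t offset, void *dst, size_t n)
    {
        if (offset + n <= aps->aperture_size)
            read_physical(aps->aperture_base_hpa + offset, dst, n);
    }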
Examples of such management aspects include, but are not limited to, allocating a portion of memory for an aperture, configuring the aperture, configuring or allowing one or more VMs to access the aperture, protecting the aperture from being accessed by conventional general purpose memory access instructions, and the like.

In some embodiments, the VMM 108 and/or the aperture management module 109 may control the aperture 132, or protect it from unintended access, by making it impossible for the aperture to be accessed or reached through address translation. For example, in some embodiments, the VMM and/or aperture management module can configure the second level hierarchical paging structure 136 such that there is no translation of host physical memory addresses into the aperture. In some embodiments, the VMM and/or aperture management module may store the aperture address information 130 in the virtual machine control structure(s) 128 or in another access protected structure(s). The VMM and/or aperture management module may selectively allow only one or more intended or authorized entities (e.g., one or more VMs) to access the aperture address information, while keeping the aperture address information secret or confidential and unavailable to unintended or unauthorized entities (e.g., one or more other VMs). Therefore, only an entity that the VMM intends and authorizes to utilize the aperture is able to utilize it. This can help protect or secure the data stored in the aperture.

In various embodiments, the at least one aperture 132 and the at least one aperture access instruction 116 can be used for a variety of different purposes. As an example, the VMM 108 and/or the aperture management module 109 may make the aperture address information 130 available to both the first VM 102-1 and the second VM 102-2 to allow the first and second VMs to share data. In some embodiments, the first VM may issue an instance of the aperture write instruction 138 to access the aperture address information 130 (e.g., from the virtual machine control structure 128 or another type of access protected structure) and use it to write or otherwise store data 134 from the source operand of the aperture write instruction to the aperture 132. Subsequently, in some embodiments, the second VM 102-2 may issue an instance of the aperture read instruction 140 to access the aperture address information 130 (e.g., from the virtual machine control structure 128 or another type of access protected structure) and use it to read the data 134 from the aperture 132 into the destination operand of the aperture read instruction. In this manner, the aperture can be used (e.g., by making the aperture address information 130 accessible) for protected sharing of data between two or more VMs that the VMM allows to access the aperture, as sketched below.
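For illustration only, the following C sketch models that sharing flow as software. The functions `aperture_write()` and `aperture_read()` are hypothetical stand-ins for instances of the aperture write and read instructions (the real operations are single instructions acting on host physical memory, not function calls), the `aperture_model` array stands in for the aperture, and the 64-byte chunk size follows the example embodiments described later.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CHUNK 64                       /* example transfer size           */
static uint8_t aperture_model[4096];   /* stands in for aperture 132      */

/* Hypothetical stand-in for an instance of the aperture write instruction
 * issued by the first VM: store CHUNK bytes of the source operand.        */
static void aperture_write(uint64_t off, const void *src)
{
    memcpy(&aperture_model[off], src, CHUNK);
}

/* Hypothetical stand-in for an instance of the aperture read instruction
 * issued by the second VM: load CHUNK bytes into the destination operand. */
static void aperture_read(uint64_t off, void *dst)
{
    memcpy(dst, &aperture_model[off], CHUNK);
}

int main(void)
{
    uint8_t msg[CHUNK] = "from VM 1";  /* first VM's source operand        */
    uint8_t out[CHUNK];                /* second VM's destination operand  */

    aperture_write(0, msg);            /* VM 1 stores data into the aperture */
    aperture_read(0, out);             /* VM 2 reads the same data back      */
    printf("%s\n", (char *)out);
    return 0;
}
```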
FIG. 2 is a block flow diagram of an embodiment of a method 246 that may be performed by a VMM to provide an aperture. In some embodiments, the method 246 can be performed by and/or within the VMM 108 and/or the aperture management module 109 of FIG. 1. The components, features, and specific optional details described herein for the VMM 108 and/or the aperture management module 109 are also optionally applicable to the method 246. Alternatively, the method 246 can be performed by and/or within a similar or different VMM or aperture management module. Moreover, the VMM 108 and/or the aperture management module 109 can perform methods the same as, similar to, or different than the method 246.

The method includes allocating a range, region, or other portion of the system memory for the aperture at block 247. As an example, in some embodiments, the portion can include one or more pages of system memory.

At block 248, a host physical memory address associated with the aperture may be stored in an access protected structure. In some embodiments, the host physical memory address can be the host physical memory address of the aperture itself. In other embodiments, it may not be the host physical memory address of the aperture, but may nevertheless be required in order to access the aperture. For example, the host physical memory address may lead to a protected storage location in which the host physical memory address of the aperture is stored.

An access protected structure can broadly represent a structure for which access is controlled, restricted, or otherwise protected. In some embodiments, the VMM can protect the access protected structure from unauthorized access. Examples of suitable structures include, but are not limited to, data structures in memory, storage locations on a processor die, another on-die structure, and the like. One specific suitable example of an access protected structure is a virtual machine control structure, although the scope of the invention is not so limited. Alternatively, instead of a virtual machine control structure, a dedicated structure can be used to store the host physical memory address, without necessarily storing some or all of the other types of data often stored in a virtual machine control structure. In some embodiments, such a dedicated structure may optionally be protected using techniques that are the same as or similar to those used to protect the virtual machine control structure.

In some embodiments, the host physical memory address stored in the access protected structure may be protected from instructions that are not specifically designed to access it and/or that have no special access privileges to do so, but may be accessible to at least one instruction (e.g., the at least one aperture access instruction 116) that is specifically designed to access it and/or has special access privileges to do so. As an example, on-die processor logic can be configured to allow or deny access based on the instruction (e.g., its opcode). For example, in some embodiments, the host physical memory address can be stored in an access protected structure in system memory, and the on-die processor logic can be operative to selectively allow the at least one aperture access instruction 116 to access the host physical memory address stored therein, while preventing conventional general purpose memory access instructions (e.g., load/read from memory instructions, store/write to memory instructions, etc.) from accessing the host physical memory address stored therein.

Referring again to FIG. 2, at block 249, the host physical memory address of the aperture (e.g., all host physical memory addresses for the entire aperture) may be made unreachable through extended page tables, other second level page tables, or other second level hierarchical paging structures.
For example, the second level hierarchical paging structure can be configured to have no translation of host physical memory addresses into the aperture (e.g., no translation to any host physical memory address of the entire aperture). This can help make the aperture generally not accessible or reachable through address translation. This, in turn, can help protect the aperture from access by conventional general purpose memory access instructions (e.g., read from memory instructions, store to memory instructions, store copy instructions, gather instructions, etc.) that do not have special access privileges and that generally reach actual physical memory only through paging and address translation. For example, such a memory access instruction may specify a virtual or linear address that needs to be converted to a host physical memory address by address translation in order to access actual physical memory. In a virtual machine system, such address translation ultimately relies on the second level hierarchical paging structure (or on TLB entries cached from that second level hierarchical paging structure). However, when the second level hierarchical paging structure is configured to have no translation to any host physical memory address anywhere within the entire aperture, such instructions, upon undergoing address translation, will not find any host physical memory address or path that allows them to access the aperture.

The VMM can use the method 246 in different manners in different embodiments. As an example, the VMM can use the method to selectively allow two or more VMs to share data using the aperture. This may be done in response to a request from a VM or associated software, such as, for example, via a hypercall requesting that the VM be allowed to share data with one or more other VMs through an aperture. The VMM may decide to allow such data sharing, may provide an aperture (if one is not already available), and may selectively configure the intended VMs to use the aperture without configuring other, unintended VMs to use it. This can be achieved by selectively allowing only the intended VMs, rather than the other unintended VMs, to access the host physical memory address. The host physical memory address (potentially along with one or more other host physical memory addresses) may be required in order to access the aperture. As an example, each VM may have its own corresponding access protected structure (e.g., VMCS), and the VMM may selectively store the host physical memory address only in the access protected structures corresponding to those VMs that are allowed to use the aperture, and not in the access protected structures corresponding to other VMs. As another example, multiple VMs may share an access protected structure (e.g., a shared VMCS), and there may exist (e.g., in the shared access protected structure) controls over which of the VMs are allowed to access the host physical memory address. Other approaches are also possible. One way the pieces fit together is sketched below.
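The following C sketch shows, under loudly illustrative assumptions (a flat toy second level table, a minimal per-VM "VMCS" that holds only one address, page-granular addresses), how a VMM-side routine might carry out blocks 247 through 249: take an allocated aperture page, expose its host physical address only in the access protected structures of the authorized VMs, and strip any second level translation that could reach it. None of the structure layouts or names come from the patent; they exist only to make the three blocks concrete.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define NUM_PAGES  16
#define PAGE_SHIFT 12

/* Toy second level structure: gpn -> hpfn, UINT64_MAX = no translation. */
static uint64_t second_level[NUM_PAGES];

/* Minimal stand-in for a per-VM access protected structure (e.g., VMCS). */
struct toy_vmcs {
    uint64_t aperture_hpa;   /* host physical address of the aperture */
    bool     aperture_allowed;
};

/* Sketch of method 246. aperture_hpfn is assumed to have been allocated
 * already (block 247); allowed[i] says whether VM i may use the aperture. */
void vmm_provide_aperture(struct toy_vmcs *vms, size_t nvms,
                          const bool *allowed, uint64_t aperture_hpfn)
{
    /* Block 248: store the host physical address only in the access
     * protected structures of the VMs permitted to use the aperture.    */
    for (size_t i = 0; i < nvms; i++) {
        vms[i].aperture_allowed = allowed[i];
        vms[i].aperture_hpa =
            allowed[i] ? (aperture_hpfn << PAGE_SHIFT) : 0;
    }

    /* Block 249: remove every second level translation that reaches the
     * aperture, so ordinary loads/stores can never arrive at it through
     * paging and address translation.                                    */
    for (size_t gpn = 0; gpn < NUM_PAGES; gpn++)
        if (second_level[gpn] == aperture_hpfn)
            second_level[gpn] = UINT64_MAX;
}
```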
FIG. 3 is a block diagram of an example embodiment of a VMM module 308. In some embodiments, the VMM module 308 is operative to perform the method 246 of FIG. 2. The particular optional features and details described for the method 246 may also optionally apply to the VMM module 308. Alternatively, the method 246 can be performed by and/or utilizing a similar or different VMM module. Moreover, the VMM module 308 is operative to perform methods similar to or different than the method 246.

The VMM module includes a memory allocation module 350, a virtual machine control structure (VMCS) management module 352, and a second level hierarchical paging structure management module 354. Each of these modules can be implemented in hardware, firmware, software, or a combination thereof. The memory allocation module is operative to allocate pages of physical memory. The VMCS management module is operative to manage one or more VMCSs. The second level hierarchical paging structure management module is operative to manage one or more second level hierarchical paging structures.

The VMM module may also include an embodiment of the aperture management module 309, which may be implemented in hardware, firmware, software, or a combination thereof. As shown, in some embodiments, the aperture management module can optionally include functionality or modules implemented within the memory allocation module, the VMCS management module, and the second level hierarchical paging structure management module. In particular, an aperture allocation module 356 can optionally be implemented within the memory allocation module, a host physical memory address storage module 358 can optionally be implemented within the VMCS management module, and an aperture access protection module 360 can optionally be implemented within the second level hierarchical paging structure management module. Alternatively, each of these modules can optionally be implemented as a separate component that interacts with the memory allocation module, the VMCS management module, and the second level hierarchical paging structure management module as needed.

In some embodiments, the aperture allocation module 356 and/or the memory allocation module 350 and/or the aperture management module 309 are operative to allocate a region of system memory for the aperture. In some embodiments, this can be done similarly as described for block 247 of FIG. 2. In some embodiments, the host physical memory address storage module 358 and/or the VMCS management module 352 and/or the aperture management module 309 are operative to store the host physical memory address associated with the aperture in the one or more virtual machine control structures. In some embodiments, this can be done similarly as described for block 248 of FIG. 2. In some embodiments, the aperture access protection module 360 and/or the second level hierarchical paging structure management module 354 and/or the aperture management module 309 are operative to make the host physical memory address of the aperture unreachable through the one or more second level hierarchical paging structures. In some embodiments, this can be done similarly as described for block 249 of FIG. 2. One possible decomposition of these modules appears below.
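As a minimal sketch only, the composition of FIG. 3 could be expressed in C as a struct of operations. The patent allows hardware, firmware, or software realizations, so the function-pointer layout and every name below are assumptions made for illustration, not a prescribed implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Memory allocation module 350, also hosting aperture allocation (356). */
struct memory_allocation_module {
    void *(*allocate_pages)(size_t n_pages);
};

/* VMCS management module 352, also hosting HPA storage (358). */
struct vmcs_management_module {
    void (*store_aperture_hpa)(int vm_id, uint64_t hpa);
};

/* Second level paging management module 354, also hosting
 * aperture access protection (360).                         */
struct second_level_paging_module {
    void (*remove_translations_to)(uint64_t hpa);
};

/* VMM module 308: the three managed submodules together implement the
 * aperture management functionality (309) across blocks 247 to 249.   */
struct vmm_module {
    struct memory_allocation_module   mem;
    struct vmcs_management_module     vmcs;
    struct second_level_paging_module slpt;
};
```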
FIG. 4 is a block flow diagram of an embodiment of a method 462 of performing an embodiment of an aperture write instruction. In various embodiments, the method can be performed by a processor, an instruction processing device, a digital logic device, or an integrated circuit.

The method includes receiving an aperture write instruction at block 463. In various aspects, the instruction can be received at a processor or a portion thereof (e.g., an instruction fetch unit, a decode unit, a bus interface unit, etc.). In various aspects, the instruction can be received from a source outside the processor and/or off-die (e.g., from memory, an interconnect, etc.) or from a source on the processor and/or on-die (e.g., from an instruction cache, an instruction queue, etc.).

The aperture write instruction may specify (e.g., explicitly specify through a field or other set of bits) or otherwise indicate (e.g., implicitly indicate) a source operand. In some embodiments, the source operand can optionally be in memory. In other embodiments, the source operand may optionally be in a register of the processor. In still other embodiments, other source storage locations may be used.

At block 464, data may be accessed or otherwise received from the source operand in response to the aperture write instruction. In a specific example embodiment, the amount of data may optionally be sixty-four bytes. In other embodiments, the amount of data may optionally be more or less than sixty-four bytes.

At block 465, in response to the aperture write instruction, a host physical memory address associated with the aperture in system memory can be read from the access protected structure. The access protected structure can be similar or identical to those discussed above. In some embodiments, the host physical memory address may not be architecturally exposed to the software corresponding to the aperture write instruction and/or may remain invisible to such software.

Then, at block 466, the data received from the source operand may be stored within the aperture at the host physical memory address in response to the aperture write instruction. In some embodiments, the host physical memory address to which the data is stored may not be obtained by address translation (e.g., may not have been obtained from the TLB or by performing a page table walk, etc.).

In some embodiments, the host physical memory address read from the protected structure at block 465 can optionally be within the same aperture in which the data is stored at block 466. For example, the host physical memory address read at block 465 can optionally be the base address of the aperture, the instruction can also indicate an offset from that base address, and the host physical memory address at which the data is stored at block 466 can optionally be the address obtained by applying the offset to the base address. In other embodiments, the host physical memory address read from the protected structure at block 465 may optionally not be within the same aperture in which the data is stored at block 466. Rather, in some embodiments, the host physical memory address read at block 465 can address a location in memory that stores one or more other host physical memory addresses. In some embodiments, there may optionally be a list or other plurality of such other host physical memory addresses, for example, each corresponding to a different aperture. In such cases, the instruction may optionally have an index or other aperture selector to select one of the host physical memory addresses and/or one of the apertures. In some embodiments, the selected host physical memory address may be the host physical memory address used at block 466, or in other embodiments an offset indicated by the instruction may be applied to it, as a base address, to determine the host physical memory address used at block 466. In these cases there is only a single level of indirection, but two or more levels of indirection may alternatively be used. In general, the host physical memory address read from the access protected structure at block 465 can represent at least one host physical memory address required in order to access the aperture.
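The following C sketch models the semantics of blocks 464 through 466 as software, assuming the offset form of the instruction described above. The flat `physical_memory` array, the single `protected_hpa` variable, and the bounds-check behavior are illustrative assumptions; on real hardware these steps are performed by the execution unit and no C code runs.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define CHUNK 64                           /* example embodiment: 64 bytes */

/* Toy machine state standing in for the structures named above. */
static uint8_t  physical_memory[1 << 16];  /* stands in for system memory  */
static uint64_t protected_hpa;             /* aperture base address, as
                                              stored by the VMM in the
                                              access protected structure   */

/* Software model of the aperture write instruction, offset form. */
bool aperture_write_insn(const uint8_t src[CHUNK], uint64_t offset)
{
    /* Block 465: read the host physical address from the access protected
     * structure and apply the offset; no TLB lookup or page walk occurs.  */
    uint64_t hpa = protected_hpa + offset;

    if (hpa + CHUNK > sizeof physical_memory)
        return false;                      /* out of bounds: special handling */

    /* Blocks 464 and 466: take the source operand data and store it
     * directly at the host physical address inside the aperture.          */
    memcpy(&physical_memory[hpa], src, CHUNK);
    return true;
}
```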
The illustrated method involves architectural operations (e.g., those visible from a software perspective). In other embodiments, the method can optionally include one or more microarchitectural operations. As an example, the instruction may be fetched, decoded, and dispatched out of order, source operands may be accessed, and an execution unit may perform microarchitectural operations to implement the instruction, and the like.

FIG. 5 is a block diagram of an embodiment of a processor 512 that is operative to execute an embodiment of an aperture write instruction 518. In some embodiments, the processor 512 is operative to perform the method 462 of FIG. 4. The components, features, and specific optional details described herein for the processor 512 and/or the instruction 518 are also optionally applicable to the method 462. Alternatively, the method 462 can be performed by and/or within a similar or different processor or device, and/or using a similar or different instruction. Moreover, the processor 512 is operative to perform methods similar to or different than the method 462.

In some embodiments, the processor may be a general purpose processor (e.g., a general purpose microprocessor or central processing unit (CPU) of the type used in desktop, laptop, or other computers). Alternatively, the processor can be a special purpose processor. Examples of suitable special purpose processors include, but are not limited to, network processors, communications processors, cryptographic processors, graphics processors, coprocessors, embedded processors, digital signal processors (DSPs), and controllers (e.g., microcontrollers). The processor can have any of various complex instruction set computing (CISC) architectures, reduced instruction set computing (RISC) architectures, very long instruction word (VLIW) architectures, hybrid architectures, other types of architectures, or a combination of different architectures (for example, different cores may have different architectures). In some embodiments, the processor can be disposed on at least one integrated circuit or semiconductor die. In some embodiments, the processor may include at least some hardware (e.g., transistors, integrated circuitry, non-volatile memory storing microcode, etc.).

During operation, the processor 512 can receive the aperture write instruction 518. For example, the instruction can be received from memory over a bus or other interconnect. The instruction may represent a macroinstruction, machine code instruction, or other instruction or control signal of an instruction set of the processor. In some embodiments, the aperture write instruction may explicitly specify (e.g., through one or more fields or sets of bits) or otherwise indicate (e.g., implicitly indicate) the source operand 572 having the data 574. As an example, the instruction may have a source operand specification field to specify a register, memory location, or other storage location for the source operand. Alternatively, the source operand may optionally be stored in a storage location that is implicit to the instruction (e.g., a register implicitly or inherently indicated by the opcode of the instruction, although not expressed through an explicit field).
As an example, as shown in the illustrated example embodiment, the source operand 572 can be stored in the memory 526, and the aperture write instruction can specify or otherwise indicate one or more registers of a set of registers 578 having memory address information 580 used to address the source operand 572.

Referring again to FIG. 5, the processor includes a decode unit or decoder 568. The decode unit can receive and decode the aperture write instruction. The decode unit may output one or more relatively lower-level instructions or control signals (e.g., one or more microinstructions, micro-ops, microcode entry points, decoded instructions or control signals, etc.), which reflect, represent, and/or are derived from the relatively higher-level aperture write instruction. In some embodiments, the decode unit can include one or more input structures (e.g., port(s), interconnect(s), an interface) to receive the aperture write instruction, instruction recognition and decode logic coupled therewith to recognize and decode the aperture write instruction, and one or more output structures (e.g., port(s), interconnect(s), an interface) coupled therewith to output the lower-level instruction(s) or control signal(s). The decode unit may be implemented using various different mechanisms including, but not limited to, microcode read only memories (ROMs), lookup tables, hardware implementations, programmable logic arrays (PLAs), and other mechanisms suitable for implementing decode units. In some embodiments, the decode unit can be included on a die (e.g., on a die with the execution unit 570). In some embodiments, the decode unit may include at least some hardware (e.g., transistors, integrated circuitry, on-die firmware, etc.).

In some embodiments, instead of the aperture write instruction being provided directly to the decode unit 568, an instruction emulator, translator, morpher, interpreter, or other instruction conversion module may optionally be used. Various types of instruction conversion modules can be implemented in software, hardware, firmware, or a combination thereof. In some embodiments, the instruction conversion module can be located outside the processor, such as, for example, on a separate die and/or in memory (e.g., as a static, dynamic, or runtime emulation module). As an example, the instruction conversion module can receive the aperture write instruction, which may be of a first instruction set, and can emulate, translate, morph, interpret, or otherwise convert the aperture write instruction into one or more corresponding intermediate instructions or control signals, which may be of a second, different instruction set. The one or more intermediate instructions or control signals of the second instruction set may be provided to a decode unit (e.g., the decode unit 568), which may decode them into one or more lower-level instructions or control signals executable by native hardware of the processor (e.g., one or more execution units).

Referring again to FIG. 5, the execution unit 570 is coupled with the decode unit 568. In some embodiments, the execution unit can be on a die or integrated circuit (e.g., on the die or integrated circuit with the decode unit). The execution unit may receive the one or more decoded or otherwise converted instructions or control signals that represent and/or are derived from the aperture write instruction. During operation, the execution unit is operative to be coupled with the memory 526 when deployed in a system.
The execution unit and/or the processor may include specific or particular logic (e.g., transistors, integrated circuitry, or other hardware potentially combined with firmware (e.g., instructions stored in non-volatile memory) and/or software) that is operative to perform the aperture write instruction. The illustrated execution unit is shown as a single unit, but it is to be appreciated that the execution unit may potentially/optionally include logic distributed across various components of the memory subsystem or memory access resources of the processor.

The execution unit may be operative, in response to the aperture write instruction and/or as a result of the aperture write instruction (e.g., in response to the one or more instructions or control signals decoded from the instruction, and/or in response to the instruction being decoded, and/or in response to the instruction being provided to the decoder), to access or otherwise receive the data 574 of the source operand 572. Where the source operand is in memory, the execution unit and/or the processor may perform an address translation to access the source operand. In a specific example embodiment, the amount of data may optionally be sixty-four bytes. In other embodiments, the amount of data may optionally be more or less than sixty-four bytes.

The execution unit may also be operative, in response to the aperture write instruction and/or as a result of the aperture write instruction, to access or otherwise receive the host physical memory address 530 associated with the aperture 532. As shown, in some embodiments, the host physical memory address can optionally be stored in the access protected structure 528. The access protected structure can be similar or identical to those previously discussed. As discussed above, an access protected structure can broadly represent a structure for which access is controlled, restricted, or otherwise protected. In some embodiments, the VMM can protect the access protected structure from unauthorized access. One specific suitable example of an access protected structure is a VMCS, although the scope of the invention is not so limited. Alternatively, instead of a virtual machine control structure, a dedicated structure can be used to store the host physical memory address, without necessarily storing some or all of the other types of data often stored in a VMCS. In some embodiments, such a dedicated structure may optionally be protected using techniques similar or identical to those used to protect the VMCS. As shown, in some embodiments, the access protected structure may optionally be stored in the memory 526. Alternatively, an access protected structure on the die and/or on the processor may alternatively be used.

In some embodiments, the processor may have on-die logic to protect the host physical memory address 530 stored in the access protected structure 528 from access by instructions that are not specifically designed to access it and/or that have no special access privileges to do so. For example, conventional general purpose read from memory instructions, store to memory instructions, gather instructions, scatter instructions, and the like may not be allowed to access the host physical memory address stored in the access protected structure. However, the on-die logic may allow the aperture write instruction (e.g., based on its opcode) to access the host physical memory address stored in the access protected structure.
The aperture write instruction can be specifically designed for, and/or specifically given the privilege of, such access. In some embodiments, the aperture write instruction may be able to access the access protected structure even when it is stored in memory, and even though the aperture write instruction may not indicate any architecturally visible information revealing where the access protected structure is stored in memory.

The execution unit may also be operative, in response to the aperture write instruction and/or as a result of the aperture write instruction, to store the data 574 received from the source operand 572 within the aperture 532 at the host physical memory address 576. The aperture can be similar or identical to the apertures previously described. For example, the aperture can broadly represent a protected range, region, or other portion of the memory that can be used to securely store data. In some embodiments, the aperture may represent a portion of the memory to which the VMM allows or restricts access. In some embodiments, the host physical memory addresses of the aperture (e.g., all host physical memory addresses for the entire aperture) may not be accessible or reachable through address translation and/or through the second level hierarchical paging structure. For example, in some embodiments, the second level hierarchical paging structure may have no translation to any host physical memory address falling within the aperture.

In some embodiments, the host physical memory address 576 at which the data 534 is stored within the aperture 532 may not be obtained by address translation (e.g., may not be obtained from the TLB or by performing a page table walk, etc.). That is, in some embodiments, the aperture can be accessed without performing an address translation from a virtual address to an actual physical memory address (e.g., a host physical memory address), without accessing the first level or second level hierarchical paging structures (e.g., the extended page tables), and without accessing or attempting to use the TLB that caches translations to host physical addresses. Rather, the host physical memory address 576 can be stored somewhere, retrieved when the aperture write instruction is performed, and used to store the data directly into the aperture without further address translation. The aperture write instruction can have the special privilege, within the on-die logic of the processor, to bypass such address translation.

However, conventional general purpose memory access instructions (e.g., read from memory instructions, store to memory instructions, gather instructions, scatter instructions, etc.) may not have such attributes that allow them to bypass address translation. Rather, such conventional general purpose memory access instructions need to undergo address translation through the second level hierarchical paging structure and/or TLB translations that have been cached from that second level hierarchical paging structure. However, in some embodiments, the second level hierarchical paging structure may be configured to have no translation to any host physical memory address falling within the aperture, as described above. In this way, the aperture may be inaccessible to such conventional general purpose memory access instructions. Moreover, in some embodiments, it may not be necessary to swap the EPT or other second level page tables.
Such a swap of the EPT or other second level page tables (which is not required here) might otherwise incur, for example, the costs of serializing the processor, flushing the TLB (e.g., because the information it stores is stale with respect to the new second level page tables), and rebuilding the TLB with updated information.

In some embodiments, the host physical memory address 530 read from the access protected structure 528 can optionally be the base address of the aperture 532. In some embodiments, the aperture write instruction may also have an offset that can be applied to the base address of the aperture to indicate the host physical memory address 576 at which the data 534 is to be stored. In other embodiments, the host physical memory address 530 read from the access protected structure 528 can optionally address a physical memory location that stores one or more other host physical memory addresses, and one of these other host physical memory addresses may be the host physical memory address 576, or may be the base address of the aperture 532 to which an offset provided by the instruction may be applied to obtain the host physical memory address 576. For example, the host physical memory address 530 can lead to a list or other set of multiple host physical memory addresses, each corresponding to the base address of a different aperture. In such cases, the aperture write instruction may further specify or indicate a number, index, or other aperture selector to select one of these apertures. The host physical memory address for the base address of the selected aperture can then be read, and the offset provided by the instruction can be applied to that base address to obtain the host physical memory address 576. These are just a few illustrative examples. Other approaches will be apparent to those skilled in the art having the benefit of this disclosure. In general, the host physical memory address 530 read from the access protected structure 528 may represent at least one host physical memory address required in order to access the aperture 532, and in some cases one or more additional host physical memory addresses may optionally be required.

In some embodiments, the host physical memory address 530 and the host physical memory address 576 may remain secret or confidential within the microarchitecture of the processor, and may never be architecturally exposed to the software corresponding to the aperture write instruction. For example, a VM executing the aperture write instruction may not be able to learn or know these host physical memory addresses. The VM may nevertheless be able to utilize the aperture by performing the aperture write instruction, even though the VM may not know where the aperture is located in system memory. The VMM may know the host physical memory address, since it may have stored that address in the access protected structure, but in certain implementations the microarchitecture implementing the aperture write instruction may not even disclose the host physical address to the VMM. Rather, in some embodiments, the microarchitecture can keep the host physical memory address secret and/or invisible to all software.

To avoid obscuring the description, a relatively simple processor 512 has been shown and described. However, the processor may optionally include other processor components. For example, various different embodiments may include various different combinations and configurations of the components illustrated and described with respect to any of FIGS. 11-13.
All components of the processor can be coupled together to allow them to operate as intended. As an example, considering FIG. 11B, the instruction cache 1134 can cache the instruction, the instruction fetch unit 1138 can fetch the instruction, the decode unit 1140 can decode the instruction, the scheduler unit 1156 can schedule the associated operations, the execution unit 1162 can execute the instruction, the retirement unit 1154 can retire the instruction, and the like.

FIG. 6 is a block flow diagram of an embodiment of a method 682 of performing an embodiment of an aperture read instruction. In various embodiments, the method can be performed by a processor, an instruction processing device, a digital logic device, or an integrated circuit.

The method includes receiving an aperture read instruction at block 683. In various aspects, the instruction may be received at a processor or a portion thereof (e.g., an instruction fetch unit, a decode unit, a bus interface unit, etc.). In various aspects, the instruction can be received from a source outside the processor and/or off-die (e.g., from memory, an interconnect, etc.) or from a source on the processor and/or on-die (e.g., from an instruction cache, an instruction queue, etc.).

The aperture read instruction may specify (e.g., explicitly specify through a field or other set of bits) or otherwise indicate (e.g., implicitly indicate) a destination operand. In some embodiments, the destination operand may optionally be in memory. In other embodiments, the destination operand may optionally be in a register of the processor. In still other embodiments, other destination storage locations may be used.

At block 684, in response to the aperture read instruction, a host physical memory address associated with the aperture in system memory can be read from the access protected structure. The access protected structure can be similar or identical to those discussed above. In some embodiments, the host physical memory address may not be architecturally exposed to the software corresponding to the aperture read instruction and/or may remain invisible to such software.

Then, at block 685, data can be read from the host physical memory address within the aperture in response to the aperture read instruction. In some embodiments, the host physical memory address from which the data is read may not be obtained by address translation (e.g., may not be obtained from the TLB or by performing a page table walk, etc.). As previously described, the host physical memory address read at block 684 may be the same as or different than the host physical memory address from which the data is read at block 685.

At block 686, the data read from the aperture may be stored to the destination operand in response to the aperture read instruction. In a specific example embodiment, the amount of data may optionally be sixty-four bytes. In other embodiments, the amount of data may optionally be more or less than sixty-four bytes.
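Mirroring the write model given earlier, the following C sketch models blocks 684 through 686 of the aperture read instruction under the same illustrative assumptions: a flat physical memory array, a single VMM-written base address, and the offset form of the instruction. As before, the names and structures are inventions of the sketch, not the architectural implementation.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define CHUNK 64                           /* example embodiment: 64 bytes */

static uint8_t  physical_memory[1 << 16];  /* stands in for system memory  */
static uint64_t protected_hpa;             /* aperture base, written only
                                              by the VMM into the access
                                              protected structure           */

/* Software model of the aperture read instruction, offset form. */
bool aperture_read_insn(uint8_t dst[CHUNK], uint64_t offset)
{
    /* Block 684: fetch the host physical address from the access protected
     * structure and apply the offset; address translation is bypassed.     */
    uint64_t hpa = protected_hpa + offset;

    if (hpa + CHUNK > sizeof physical_memory)
        return false;                      /* out of bounds: special handling */

    /* Blocks 685 and 686: read from the aperture and place the data in
     * the destination operand.                                              */
    memcpy(dst, &physical_memory[hpa], CHUNK);
    return true;
}
```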
FIG. 7 is a block diagram of an embodiment of a processor 712 that is operative to perform an embodiment of an aperture read instruction 720. In some embodiments, the processor 712 is operative to perform the method 682 of FIG. 6. The components, features, and specific optional details described herein for the processor 712 and/or the instruction 720 are also optionally applicable to the method 682. Alternatively, the method 682 can be performed by and/or within a similar or different processor or device, and/or using a similar or different instruction. Moreover, the processor 712 is operative to perform methods similar to or different than the method 682.

The processor 712 can be the same as, similar to, or different than the processor 512 of FIG. 5. The processor includes a decode unit 768 operative to decode the aperture read instruction 720, an execution unit 770 operative to execute the aperture read instruction, and a set of registers 778 operative to store memory address information 780 for a destination operand 790 of the aperture read instruction 720. Unless otherwise specified, and except for aspects pertaining to the aperture read instruction rather than the aperture write instruction 518, these components may optionally be similar or identical to (e.g., have the same or similar characteristics as) the correspondingly named components of FIG. 5. In addition, the aperture read instruction 720 can cause the processor 712 to interact with an access protected structure 728 and an aperture 732. Except for aspects pertaining to the aperture read instruction, these components may also optionally be similar or identical to (e.g., have the same or similar characteristics as) the correspondingly named components of FIG. 5. To avoid obscuring the description, the different and/or additional features of the processor 712 and its components will primarily be described, without repeating all of the features that may optionally be the same.

During operation, the processor 712 can receive the aperture read instruction 720. In some embodiments, the aperture read instruction may explicitly specify or otherwise indicate the destination operand 790. As an example, as shown in the illustrated example embodiment, the destination operand 790 can be stored in the memory 726, and the aperture read instruction can specify or otherwise indicate one or more registers of the set of registers 778 having the memory address information 780 used to address the destination operand, although the scope of the invention is not so limited.

The decode unit 768 can receive and decode the aperture read instruction. The execution unit 770 is coupled with the decode unit 768 and with the registers 778. During operation, the execution unit is operative to be coupled with the memory 726 when deployed in a system. The execution unit may be operative, in response to the aperture read instruction and/or as a result of the aperture read instruction (e.g., in response to the one or more instructions or control signals decoded from the instruction, and/or in response to the instruction being decoded, and/or in response to the instruction being provided to the decoder), to access or otherwise receive the host physical memory address 730 associated with the aperture 732. As shown, in some embodiments, the host physical memory address 730 can optionally be stored in the access protected structure 728. The access protected structure may be similar or identical to (e.g., have the same or similar characteristics as) those previously described (e.g., the access protected structure 528).

The execution unit may also be operative, in response to the aperture read instruction and/or as a result of the aperture read instruction, to read the data 734 from the aperture 732 at the host physical memory address 776. The aperture may be similar or identical to (e.g., have the same or similar characteristics as) those previously discussed (e.g., the aperture 532). In some embodiments, the host physical memory address 776 from which the data 734 is read within the aperture 732 may not be obtained by address translation.
The aperture read instruction can have the special privilege, within the on-die logic of the processor, to access the aperture while bypassing address translation. As previously described, the host physical memory address 776 can be associated with the host physical memory address 730 in a variety of different manners.

The execution unit may also be operative, in response to the aperture read instruction and/or as a result of the aperture read instruction, to store the data 734 received from the aperture 732 as data 792 in the destination operand 790. In a specific example embodiment, the amount of data may optionally be sixty-four bytes. In other embodiments, the amount of data may optionally be more or less than sixty-four bytes.

FIG. 8 is a block diagram of a first approach for accessing data 834 from an aperture 832. In this first approach, an aperture access instruction 816 can indicate an offset. The aperture access instruction can be used to access, from an access protected structure 828, a host physical address 830 that is the base address 894 of the aperture 832. The offset 895 indicated by the instruction may then be applied to the base address 894 to obtain the physical memory address or location of the data 834 within the aperture 832 that is to be accessed. The offset 895 may allow the aperture access instruction 816 to address or otherwise indicate and use different storage locations or portions of the aperture 832. The software may never know or learn the actual host physical address of the base address 894 of the aperture, but may be able to indicate different offsets relative to that base address in order to address different locations within the aperture.

FIG. 9 is a block diagram of a second approach for accessing data 934 from an aperture 932-4. In this second approach, an aperture access instruction 916 can indicate an aperture selector and an offset. The aperture access instruction can be used to access, from an access protected structure 928, a host physical address 930 that is the base address 994 of an aperture block. The aperture block can include a plurality of different apertures in physically adjacent memory locations. In the illustrated example, first through fourth apertures 932-1 through 932-4 are shown, but there may optionally be fewer or more apertures. An aperture selector 996 indicated by the instruction can be used to select one of the apertures from the aperture block. In this example, the fourth aperture 932-4 is selected. The offset 995 indicated by the instruction can then be applied to the base address 999 of the selected fourth aperture to obtain the physical memory address or location of the data 934 within the selected fourth aperture 932-4 that is to be accessed. The aperture selector 996 can allow different apertures to be selected and used. Such different apertures can be used for different purposes. As an example, different apertures may be used for communication between different VMs, for different applications associated with the same or different VMs, or otherwise.

FIG. 10 is a block diagram of a third approach for accessing data 1034 from an aperture 1032-4. In this third approach, an aperture access instruction 1016 can indicate an aperture selector and an offset. The aperture access instruction can be used to access, from an access protected structure 1028, a host physical address 1030 that is the base address 1097 of an aperture list.
The aperture list may include a list of host physical addresses, each of which is required in order to access a different one of a set of different apertures, which may optionally be in different, non-contiguous physical memory locations.

In the illustrated example, a first host physical memory address 1098-1 for the base address of a first aperture through a fourth host physical memory address 1098-4 for the base address of a fourth aperture are shown, but there may optionally be fewer or more. An aperture selector 1096 indicated by the instruction can be used to select, from the aperture list, one of the host physical memory addresses for the corresponding aperture. In this example, the fourth host physical memory address 1098-4 is selected. The fourth host physical memory address 1098-4 addresses the base address 1094 of the fourth aperture 1032-4. The offset 1095 indicated by the instruction can then be applied to the base address 1094 of the fourth aperture to obtain the physical memory address or location of the data 1034 within the selected fourth aperture 1032-4 that is to be accessed.

In some embodiments, with any of the approaches illustrated in FIGS. 8-10, a bounds check may optionally be performed to ensure that the offset, when applied to the base address, falls within the limits of the aperture (e.g., as defined by lower and upper bounds). If the attempted access is not entirely within those bounds, the access may not be performed and special handling may occur (e.g., a VM exit may be performed, a fault may be signaled, etc.).
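The three addressing modes of FIGS. 8-10, together with the optional bounds check, can be summarized in one illustrative C routine. The fixed `APERTURE_SIZE`, the selector validation, and all names below are assumptions made for the sketch; real embodiments may size, arrange, and bound apertures differently.

```c
#include <stdint.h>
#include <stdbool.h>

#define APERTURE_SIZE 4096u   /* illustrative fixed aperture size */

enum addr_mode { BASE_OFFSET, APERTURE_BLOCK, APERTURE_LIST };

/* Resolve the host physical address of an aperture access under the three
 * modes of FIGS. 8-10. protected_hpa is the address read from the access
 * protected structure; list and naps describe the aperture list of FIG. 10
 * (naps also bounds the selector for the block mode of FIG. 9). Returning
 * false corresponds to the special handling described above (e.g., a VM
 * exit or a signaled fault).                                               */
bool resolve_aperture_address(enum addr_mode mode, uint64_t protected_hpa,
                              const uint64_t *list, uint32_t naps,
                              uint32_t selector, uint64_t offset,
                              uint64_t *hpa)
{
    if (offset >= APERTURE_SIZE)          /* bounds check on the offset */
        return false;

    switch (mode) {
    case BASE_OFFSET:                     /* FIG. 8: base plus offset        */
        *hpa = protected_hpa + offset;
        return true;
    case APERTURE_BLOCK:                  /* FIG. 9: contiguous aperture block */
        if (selector >= naps)
            return false;
        *hpa = protected_hpa + (uint64_t)selector * APERTURE_SIZE + offset;
        return true;
    case APERTURE_LIST:                   /* FIG. 10: list of per-aperture bases */
        if (selector >= naps)
            return false;
        *hpa = list[selector] + offset;
        return true;
    }
    return false;
}
```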
The aperture access instructions, apertures, and other approaches described herein can be used for a variety of different applications and purposes. Examples include, but are not limited to, shared data processing, shared server processing, shared cloud computing, shared network processing, and the like. For further illustration, consider a possible application in network function virtualization (NFV). NFV generally involves virtualizing and running network functions (e.g., virtual network functions (VNFs)) such as routing, switching, and intelligent packet processing on typical commercial off-the-shelf (COTS) systems, as opposed to fixed-function network appliances. A key driver use is network service function chaining (NSFC) to form a network processing pipeline. For example, a packet may pass through a series of VNFs, such as deep packet inspection, network address translation, and other network functions, before the packet is forwarded to its next destination. Different virtual machines can potentially be used to perform different NSFC tasks, VNFs, and the like. The approaches described herein can potentially be used to share data processed by one virtual machine that has performed an NSFC task, VNF, or the like with another virtual machine that is to perform another NSFC task, VNF, or the like. Another possible application is in a software defined network (SDN), where two or more VMs may work together and may want to share data. Other possible applications are in interprocess communication (IPC) libraries, data plane processing applications, and software switching applications. These are just a few illustrative examples. The scope of the invention is not limited to any particular application.

Exemplary core architecture, processor, and computer architecture

Processor cores can be implemented in different ways, for different purposes, and in different processors. For example, implementations of such cores may include: 1) a general purpose in-order core intended for general purpose computing; 2) a high performance general purpose out-of-order core intended for general purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general purpose computing and/or one or more general purpose out-of-order cores intended for general purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary core architecture

In-order and out-of-order core block diagram

FIG. 11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline, in accordance with embodiments of the invention. FIG. 11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor, in accordance with embodiments of the invention. The solid lined boxes in FIGS. 11A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 11A, a processor pipeline 1100 includes a fetch stage 1102, a length decode stage 1104, a decode stage 1106, an allocation stage 1108, a renaming stage 1110, a scheduling (also known as dispatch or issue) stage 1112, a register read/memory read stage 1114, an execute stage 1116, a write back/memory write stage 1118, an exception handling stage 1122, and a commit stage 1124.

FIG. 11B shows a processor core 1190 including a front end unit 1130 coupled to an execution engine unit 1150, both of which are coupled to a memory unit 1170. The core 1190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type.
As yet another option, the core 1190 may be a special purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.

The front end unit 1130 includes a branch prediction unit 1132 coupled to an instruction cache unit 1134, which is coupled to an instruction translation lookaside buffer (TLB) 1136, which is coupled to an instruction fetch unit 1138, which is coupled to a decode unit 1140. The decode unit 1140 (or decoder) may decode instructions and generate as an output one or more micro-ops, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, lookup tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), and the like. In one embodiment, the core 1190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 1140 or otherwise within the front end unit 1130). The decode unit 1140 is coupled to a rename/allocator unit 1152 in the execution engine unit 1150.

The execution engine unit 1150 includes the rename/allocator unit 1152 coupled to a retirement unit 1154 and a set of one or more scheduler unit(s) 1156. The scheduler unit(s) 1156 represent any number of different schedulers, including reservation stations, a central instruction window, and the like. The scheduler unit(s) 1156 are coupled to physical register file unit(s) 1158. Each of the physical register file unit(s) 1158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, and status (e.g., an instruction pointer that is the address of the next instruction to be executed). In one embodiment, the physical register file unit(s) 1158 include a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file unit(s) 1158 are overlapped by the retirement unit 1154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1154 and the physical register file unit(s) 1158 are coupled to the execution cluster(s) 1160. The execution cluster(s) 1160 include a set of one or more execution units 1162 and a set of one or more memory access units 1164. The execution units 1162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).
While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit, or multiple execution units that all perform all of the functions. The scheduler unit(s) 1156, physical register file unit(s) 1158, and execution cluster(s) 1160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit(s), and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1164 is coupled to the memory unit 1170, which includes a data TLB unit 1172 coupled to a data cache unit 1174 coupled to a level 2 (L2) cache unit 1176. In one exemplary embodiment, the memory access units 1164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1172 in the memory unit 1170. The instruction cache unit 1134 is further coupled to the level 2 (L2) cache unit 1176 in the memory unit 1170. The L2 cache unit 1176 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1100 as follows: 1) the instruction fetch 1138 performs the fetch and length decode stages 1102 and 1104; 2) the decode unit 1140 performs the decode stage 1106; 3) the rename/allocator unit 1152 performs the allocation stage 1108 and renaming stage 1110; 4) the scheduler unit(s) 1156 perform the schedule stage 1112; 5) the physical register file unit(s) 1158 and the memory unit 1170 perform the register read/memory read stage 1114, and the execution cluster 1160 performs the execute stage 1116; 6) the memory unit 1170 and the physical register file unit(s) 1158 perform the write back/memory write stage 1118; 7) various units may be involved in the exception handling stage 1122; and 8) the retirement unit 1154 and the physical register file unit(s) 1158 perform the commit stage 1124.

The core 1190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 1190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding followed by simultaneous multithreading, such as in Intel® Hyper-Threading Technology).

Although register renaming is described in the context of out-of-order execution, it should be understood that register renaming may also be used in an in-order architecture. While the illustrated embodiment of the processor includes separate instruction and data cache units 1134/1174 and a shared L2 cache unit 1176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the caches may be external to the core and/or the processor.

Specific exemplary in-order core architecture

FIGS. 12A-B illustrate a block diagram of a more specific exemplary in-order core architecture, in which the core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

FIG. 12A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1202 and its local subset 1204 of the level 2 (L2) cache, in accordance with an embodiment of the present invention. In one embodiment, an instruction decoder 1200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1206 allows low-latency accesses to cache memory by the scalar and vector units. Although in one embodiment (to simplify the design) a scalar unit 1208 and a vector unit 1210 use separate register sets (respectively, scalar registers 1212 and vector registers 1214) and data transferred between them is written to memory and then read back in from the level 1 (L1) cache 1206, alternative embodiments of the present invention may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset 1204 of the L2 cache is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 1204 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 1204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data.
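The partitioning of the global L2 cache into per-core local subsets can be pictured with a toy address-mapping routine. The subset count, line size, and hash below are invented for illustration; real designs use more elaborate hash functions, but the point is the same: every cache line has exactly one home subset.

```c
#include <stdint.h>
#include <stdio.h>

#define N_CORES    8     /* hypothetical core (subset) count */
#define LINE_BYTES 64    /* typical cache line size          */

/* Map a physical address to the L2 subset (core) that owns its line. */
static unsigned l2_home_subset(uint64_t paddr) {
    return (unsigned)((paddr / LINE_BYTES) % N_CORES);
}

int main(void) {
    uint64_t addrs[] = { 0x1000, 0x1040, 0x20c0, 0xffffc0 };
    for (size_t i = 0; i < sizeof addrs / sizeof addrs[0]; i++)
        printf("address 0x%llx -> L2 subset %u\n",
               (unsigned long long)addrs[i], l2_home_subset(addrs[i]));
    return 0;
}
```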
The ring network is bidirectional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. In some embodiments, each ring data path is 1024 bits wide per direction.

FIG. 12B is an expanded view of part of the processor core of FIG. 12A, in accordance with an embodiment of the present invention. FIG. 12B includes an L1 data cache 1206A (part of the L1 cache 1206), as well as more detail regarding the vector unit 1210 and the vector registers 1214. Specifically, the vector unit 1210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1228), which executes one or more of integer, single-precision floating point, and double-precision floating point instructions. The VPU supports mixing the register inputs with a mix unit 1220, numeric conversion with numeric convert units 1222A-B, and replication of the memory input with a replication unit 1224. A write mask register 1226 allows predicating the resulting vector writes.

Processor with integrated memory controller and graphics components

FIG. 13 is a block diagram of a processor 1300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics elements, in accordance with an embodiment of the present invention. The solid lined boxes in FIG. 13 illustrate a processor 1300 with a single core 1302A, a system agent 1310, and a set of one or more bus controller units 1316, while the optional addition of the dashed lined boxes illustrates an alternative processor 1300 with multiple cores 1302A-N, a set of one or more integrated memory controller units 1314 in the system agent unit 1310, and dedicated logic 1308.

Thus, different implementations of the processor 1300 may include: 1) a CPU with the dedicated logic 1308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1302A-N being a large number of dedicated cores intended primarily for graphics and/or scientific (throughput) computation; and 3) a coprocessor with the cores 1302A-N being a large number of general purpose in-order cores. Thus, the processor 1300 may be a general purpose processor, a coprocessor, or a special purpose processor, such as, for example, a network or communications processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1300 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1306, and external memory (not shown) coupled to the set of integrated memory controller units 1314. The set of shared cache units 1306 may include one or more mid-level caches (such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache), a last level cache (LLC), and/or combinations thereof.
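The write-mask predication attributed above to write mask register 1226 can be sketched in plain C. The 16-lane width matches the text; the function and type names are invented, and real VPU hardware operates on all lanes in parallel rather than in a loop:

```c
#include <stdint.h>
#include <stdio.h>

#define LANES 16   /* 16-wide VPU, as in the text */

/* Masked vector add: lane i of dst is updated only if mask bit i is set,
 * which is the predication effect of a write mask register. */
static void masked_vadd(int32_t dst[LANES], const int32_t a[LANES],
                        const int32_t b[LANES], uint16_t mask) {
    for (int i = 0; i < LANES; i++)
        if (mask & (1u << i))
            dst[i] = a[i] + b[i];
}

int main(void) {
    int32_t a[LANES], b[LANES], dst[LANES] = {0};
    for (int i = 0; i < LANES; i++) { a[i] = i; b[i] = 100; }
    masked_vadd(dst, a, b, 0x00FF);      /* update only the low 8 lanes */
    printf("lane 0 = %d, lane 15 = %d\n", (int)dst[0], (int)dst[15]);
    return 0;                             /* prints 100 and 0 */
}
```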
While in one embodiment a ring-based interconnect unit 1312 interconnects the integrated graphics logic 1308, the set of shared cache units 1306, and the system agent unit 1310/integrated memory controller unit(s) 1314, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1306 and the cores 1302A-N.

In some embodiments, one or more of the cores 1302A-N are capable of multithreading. The system agent 1310 includes those components coordinating and operating the cores 1302A-N. The system agent unit 1310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed to regulate the power states of the cores 1302A-N and the integrated graphics logic 1308. The display unit is for driving one or more externally connected displays.

The cores 1302A-N may be homogeneous or heterogeneous in terms of architectural instruction set; that is, two or more of the cores 1302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary computer architectures

FIGS. 14-17 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular telephones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a wide variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 14, shown is a block diagram of a system 1400 in accordance with one embodiment of the present invention. The system 1400 may include one or more processors 1410, 1415, which are coupled to a controller hub 1420. In one embodiment, the controller hub 1420 includes a graphics memory controller hub (GMCH) 1490 and an input/output hub (IOH) 1450 (which may be on separate chips); the GMCH 1490 includes memory and graphics controllers to which are coupled a memory 1440 and a coprocessor 1445; the IOH 1450 couples input/output (I/O) devices 1460 to the GMCH 1490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), and the memory 1440 and the coprocessor 1445 are coupled directly to the processor 1410, with the controller hub 1420 being a single chip comprising the IOH 1450.

The optional nature of additional processors 1415 is denoted in FIG. 14 with dashed lines. Each processor 1410, 1415 may include one or more of the processing cores described herein and may be some version of the processor 1300.

The memory 1440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two.
For at least one embodiment, the controller hub 1420 communicates with the processor(s) 1410, 1415 via a multi-drop bus such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1495.

In one embodiment, the coprocessor 1445 is a special purpose processor, such as, for example, a high throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1420 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1410, 1415 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1410 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1445. Accordingly, the processor 1410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 1445. The coprocessor(s) 1445 accept and execute the received coprocessor instructions.

Referring now to FIG. 15, shown is a block diagram of a first more specific exemplary system 1500 in accordance with an embodiment of the present invention. As shown in FIG. 15, the multiprocessor system 1500 is a point-to-point interconnect system and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. Each of the processors 1570 and 1580 may be some version of the processor 1300. In one embodiment of the invention, the processors 1570 and 1580 are respectively the processors 1410 and 1415, while the coprocessor 1538 is the coprocessor 1445. In another embodiment, the processors 1570 and 1580 are respectively the processor 1410 and the coprocessor 1445.

The processors 1570 and 1580 are shown including integrated memory controller (IMC) units 1572 and 1582, respectively. The processor 1570 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1576 and 1578; similarly, the second processor 1580 includes P-P interfaces 1586 and 1588. The processors 1570, 1580 may exchange information via a point-to-point (P-P) interface 1550 using the P-P interface circuits 1578, 1588. As shown in FIG. 15, the IMCs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of main memory locally attached to the respective processors.

The processors 1570, 1580 may each exchange information with a chipset 1590 via individual P-P interfaces 1552, 1554 using the point-to-point interface circuits 1576, 1578, 1586, 1588. The chipset 1590 may optionally exchange information with the coprocessor 1538 via a high-performance interface 1539.
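The recognize-and-forward behavior described for coprocessor instructions can be sketched as a simple dispatch loop. The encoding (top nibble marks coprocessor work) and the two issue functions are hypothetical stand-ins for the coprocessor bus and the host execution path:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical: the top nibble of an encoding marks coprocessor work. */
static bool is_coprocessor_insn(uint32_t insn) {
    return (insn >> 28) == 0xC;
}

/* Stand-ins for issuing on the coprocessor interconnect vs. locally. */
static void send_to_coprocessor(uint32_t insn) {
    printf("0x%08x -> coprocessor bus\n", insn);
}
static void execute_locally(uint32_t insn) {
    printf("0x%08x -> host execution units\n", insn);
}

int main(void) {
    uint32_t stream[] = { 0x10000001, 0xC0000002, 0x20000003, 0xC0000004 };
    for (size_t i = 0; i < sizeof stream / sizeof stream[0]; i++) {
        if (is_coprocessor_insn(stream[i]))
            send_to_coprocessor(stream[i]);   /* attached coprocessor */
        else
            execute_locally(stream[i]);       /* host processor       */
    }
    return 0;
}
```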
In one embodiment, the coprocessor 1538 is a special purpose processor, such as, for example, a high throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 1590 may be coupled to a first bus 1516 via an interface 1596. In one embodiment, the first bus 1516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 15, various I/O devices 1514 may be coupled to the first bus 1516, along with a bus bridge 1518 that couples the first bus 1516 to a second bus 1520. In one embodiment, one or more additional processors 1515, such as coprocessors, high throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to the first bus 1516. In one embodiment, the second bus 1520 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1520, including, for example, a keyboard and/or mouse 1522, communication devices 1527, and a storage unit 1528 such as a disk drive or other mass storage device, which in one embodiment may include instructions/code and data 1530. Further, an audio I/O 1524 may be coupled to the second bus 1520. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 15, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 16, shown is a block diagram of a second more specific exemplary system 1600 in accordance with an embodiment of the present invention. Like elements in FIGS. 15 and 16 bear like reference numerals, and certain aspects of FIG. 15 have been omitted from FIG. 16 in order to avoid obscuring other aspects of FIG. 16.

FIG. 16 illustrates that the processors 1570, 1580 may include integrated memory and I/O control logic ("CL") 1672 and 1682, respectively. Thus, the CL 1672, 1682 include integrated memory controller units and include I/O control logic. FIG. 16 illustrates that not only are the memories 1532, 1534 coupled to the CL 1672, 1682, but also that I/O devices 1614 are coupled to the control logic 1672, 1682. Legacy I/O devices 1615 are coupled to the chipset 1590.

Referring now to FIG. 17, shown is a block diagram of a SoC 1700 in accordance with an embodiment of the present invention. Like elements in FIG. 13 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG. 17, an interconnect unit(s) 1702 is coupled to: an application processor 1710 that includes a set of one or more cores 1302A-N and the shared cache unit(s) 1306; the system agent unit 1310; the bus controller unit(s) 1316; the integrated memory controller unit(s) 1314; a set of one or more coprocessors 1720, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1730; a direct memory access (DMA) unit 1732; and a display unit 1740 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 1720 include a special purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1530 illustrated in FIG. 15, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions, stored on a machine-readable medium, which represent various logic within the processor and which, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which define the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set.
For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with embodiments of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 18 shows that a program in a high level language 1802 may be compiled using an x86 compiler 1804 to generate x86 binary code 1806 that may be natively executed by a processor 1816 with at least one x86 instruction set core. The processor 1816 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1804 represents a compiler that is operable to generate x86 binary code 1806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 1816 with at least one x86 instruction set core. Similarly, FIG. 18 shows that the program in the high level language 1802 may be compiled using an alternative instruction set compiler 1808 to generate alternative instruction set binary code 1810 that may be natively executed by a processor 1814 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, California and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, California). The instruction converter 1812 is used to convert the x86 binary code 1806 into code that may be natively executed by the processor 1814 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 1810, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1806.
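At its simplest, such a converter maps each source-ISA operation to one or more target-ISA operations. The toy sketch below is purely illustrative (the two fabricated one-byte encodings and their mapping are invented); a real binary translator must also handle register allocation, memory models, and side effects:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical one-byte source opcodes and their target equivalents. */
typedef struct { uint8_t src_op; uint8_t tgt_ops[2]; int n; } xlat_rule_t;

static const xlat_rule_t rules[] = {
    { 0x01, { 0x90 },       1 },  /* one source op -> one target op  */
    { 0x02, { 0x91, 0x92 }, 2 },  /* one source op -> two target ops */
};

/* Translate a source opcode stream into a target opcode stream.
 * Returns the number of target bytes emitted, or -1 on an unknown
 * opcode (where a real converter might fall back to interpretation). */
static int translate(const uint8_t *src, int n_src, uint8_t *tgt) {
    int n_tgt = 0;
    for (int i = 0; i < n_src; i++) {
        int matched = 0;
        for (size_t r = 0; r < sizeof rules / sizeof rules[0]; r++) {
            if (rules[r].src_op == src[i]) {
                for (int k = 0; k < rules[r].n; k++)
                    tgt[n_tgt++] = rules[r].tgt_ops[k];
                matched = 1;
                break;
            }
        }
        if (!matched) return -1;
    }
    return n_tgt;
}

int main(void) {
    uint8_t src[] = { 0x01, 0x02, 0x01 };
    uint8_t tgt[16];
    printf("emitted %d target bytes\n", translate(src, 3, tgt)); /* 4 */
    return 0;
}
```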
The components, features, and details described for any of the processors disclosed herein may optionally apply to any of the methods disclosed herein, which in embodiments may optionally be performed by and/or with such processors. Any of the processors described herein in embodiments may optionally be included in any of the systems disclosed herein. Any of the instructions disclosed herein in embodiments may optionally be performed by and/or with any of the processors disclosed herein, optionally in some embodiments having any of the microarchitectures shown herein, and optionally in some embodiments included in any of the systems shown herein. Accordingly, features and details described for any of the instructions disclosed herein may in some embodiments therefore optionally apply to any of the processors and/or systems disclosed herein which may be used to perform those instructions.

Processor components disclosed herein may be said to be operative, configured, capable, or able to perform an operation. For example, a decoder may be to decode an instruction, an execution unit may be to store a result, and so on. For clarity, it is to be understood that these expressions do not imply that the processor components are in operation or use, but rather refer to what the processor components are capable of doing or able to do when they are in operation; in the apparatus claims, these processor components are not in operation.

In the description and claims, the terms "coupled" and/or "connected," along with their derivatives, may be used. These terms are not intended as synonyms for each other. Rather, in embodiments, "connected" may be used to indicate that two or more elements are in direct physical and/or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical and/or electrical contact with each other. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. For example, an execution unit may be coupled with a register and/or a decode unit through one or more intervening components. In the figures, arrows are used to show connections and couplings.

The components disclosed herein and the methods depicted in the preceding figures may be implemented with logic, modules, or units that include hardware (e.g., transistors, gates, circuitry, etc.), firmware (e.g., a non-volatile memory storing microcode or control signals), software (e.g., stored on a non-transitory computer-readable storage medium), or a combination thereof. In some embodiments, the logic, modules, or units may include at least some or predominantly a mixture of hardware and/or firmware potentially combined with some optional software.

The term "and/or" may have been used. As used herein, the term "and/or" means one or the other or both (e.g., A and/or B means A or B or both A and B).

In the above description, specific details have been set forth in order to provide a thorough understanding of the embodiments. However, other embodiments may be practiced without some of these specific details.
The scope of the invention is not to be determined by the specific examples provided above, but only by the claims below. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form and/or without detail in order to avoid obscuring the understanding of the description. Where considered appropriate, reference numerals, or terminal portions of reference numerals, have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar or the same characteristics, unless specified or otherwise clearly apparent.

Certain operations may be performed by hardware components, or may be embodied in machine-executable or circuit-executable instructions, that may be used to cause and/or result in a machine, circuit, or hardware component (e.g., a processor, portion of a processor, circuit, etc.) programmed with the instructions performing the operations. The operations may also optionally be performed by a combination of hardware and software. A processor, machine, circuit, or hardware may include specific or particular circuitry or other logic (e.g., hardware potentially combined with firmware and/or software) that is operative to execute and/or process the instruction and store a result in response to the instruction.

Some embodiments include an article of manufacture (e.g., a computer program product) that includes a machine-readable medium. The medium may include a mechanism that provides, for example stores, information in a form readable by the machine. The machine-readable medium may provide, or have stored thereon, an instruction or sequence of instructions that, if and/or when executed by a machine, is operative to cause the machine to perform and/or result in the machine performing one or more of the operations, methods, or techniques disclosed herein.

In some embodiments, the machine-readable medium may include a tangible and/or non-transitory machine-readable storage medium. For example, the non-transitory machine-readable storage medium may include a floppy diskette, an optical storage medium, an optical disk, an optical data storage device, a CD-ROM, a magnetic disk, a magneto-optical disk, a read only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a flash memory, a phase-change memory, a phase-change data storage material, a non-volatile memory, a non-volatile data storage device, a non-transitory memory, a non-transitory data storage device, or the like. A non-transitory machine-readable storage medium does not include transitory propagated signals. In some embodiments, the storage medium may include a tangible medium that includes solid matter or material, such as, for example, a semiconductor material, a phase-change material, a magnetic solid material, a solid data storage material, etc. Alternatively, a non-tangible transitory computer-readable transmission medium, such as, for example, an electrical, optical, acoustical, or other form of propagated signal, such as carrier waves, infrared signals, and digital signals, may optionally be used.

Examples of suitable machines include, but are not limited to, general purpose processors, special purpose processors, digital logic circuits, integrated circuits, and the like.
Other examples of suitable machines include computer systems or other electronic devices that include a processor, a digital logic circuit, or an integrated circuit. Examples of such computer systems or electronic devices include, but are not limited to, desktop computers, laptop computers, notebook computers, tablet computers, netbooks, smartphones, cellular phones, servers, network devices (e.g., routers and switches), Mobile Internet Devices (MIDs), media players, smart televisions, nettops, set-top boxes, and video game controllers.

Reference throughout this specification to "one embodiment," "an embodiment," "one or more embodiments," or "some embodiments," for example, indicates that a particular feature may be included in the practice of the invention but is not necessarily required to be. Similarly, in the description, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.

Example embodiments

The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments.

Example 1 is a processor that includes a decode unit to decode an aperture access instruction, and an execution unit coupled with the decode unit. The execution unit, in response to the aperture access instruction, is to read a host physical memory address, which is to be associated with an aperture that is to be in system memory, from an access protected structure, and access data within the aperture at a host physical memory address that is not to be obtained through address translation.
Example 2 includes the processor of Example 1, optionally in which the aperture is to represent a portion of the system memory that is not to be accessible through address translation.

Example 3 includes the processor of Example 1, optionally in which the decode unit is to decode the aperture access instruction, which is to be an aperture write instruction, optionally in which the aperture write instruction is to indicate a source operand, and optionally in which the execution unit, in response to the aperture write instruction, is to receive data from the source operand and store the data received from the source operand to the host physical memory address within the aperture.

Example 4 includes the processor of Example 3, optionally in which the source operand is to be in the system memory, and optionally in which the execution unit, in response to the aperture write instruction, is to perform address translation to obtain a host physical memory address for the data to be received from the source operand.

Example 5 includes the processor of Example 1, optionally in which the decode unit is to decode the aperture access instruction, which is to be an aperture read instruction, optionally in which the aperture read instruction is to indicate a destination operand, and optionally in which the execution unit, in response to the aperture read instruction, is to read data from the host physical memory address within the aperture and store the data read from the aperture to the destination operand.

Example 6 includes the processor of any one of Examples 1 to 5, optionally in which the execution unit, in response to the aperture access instruction, is to read the host physical memory address from the access protected structure, which comprises a virtual machine control structure.

Example 7 includes the processor of any one of Examples 1 to 5, optionally in which the decoder is to decode at least one load from memory instruction which, if performed, is not to be allowed to read the host physical memory address, which is to be associated with the aperture, from the access protected structure.

Example 8 includes the processor of any one of Examples 1 to 5, optionally in which the execution unit, in response to the aperture access instruction, is to read the host physical memory address from the access protected structure, which is to be stored in the system memory, and optionally in which the decode unit is to decode the aperture access instruction, which is not to indicate any architecturally visible memory address information for the access protected structure.

Example 9 includes the processor of any one of Examples 1 to 5, optionally in which the decode unit is to decode the aperture access instruction, which is to indicate an offset, and optionally in which the execution unit, in response to the aperture access instruction, is to access the data within the aperture at a host physical memory address that is offset from a host physical memory address corresponding to a base of the aperture by the offset.

Example 10 includes the processor of any one of Examples 1 to 5, optionally in which the execution unit, in response to the aperture access instruction, is to read the host physical memory address from the access protected structure, the host physical memory address representing a host physical memory address for a base of an aperture block that is to include a plurality of contiguous apertures.

Example 11 includes the processor of Example 10, optionally in which the decode unit is to
decode the aperture access instruction, which is to indicate an aperture selector to select one of the plurality of apertures.

Example 12 includes the processor of any one of Examples 1 to 5, optionally in which the execution unit, in response to the aperture access instruction, is to read the host physical memory address from the access protected structure, the host physical memory address representing a host physical memory address for a base of an aperture list, and optionally in which the aperture list is to store a plurality of host physical memory addresses, each for a base of a different one of a plurality of potentially non-contiguous apertures.

Example 13 includes the processor of Example 12, optionally in which the decode unit is to decode the aperture access instruction, which is to indicate an aperture selector to select one of the plurality of apertures.

Example 14 is a method performed by a processor, including: receiving an aperture write instruction at the processor, the aperture write instruction indicating a source operand; reading, in response to the aperture write instruction, a host physical memory address, which is associated with an aperture in system memory, from an access protected structure; and storing, in response to the aperture write instruction, data received from the source operand to a host physical memory address within the aperture that is not obtained through address translation.

Example 15 includes the method of Example 14, further including: receiving an aperture read instruction at the processor, the aperture read instruction indicating a destination operand; reading, in response to the aperture read instruction, the host physical memory address, which is associated with the aperture in the system memory, from the access protected structure; reading, in response to the aperture read instruction, data from the aperture at a host physical memory address that is not obtained through address translation; and storing the data read from the aperture to the destination operand.

Example 16 includes the method of Example 15, further including issuing the aperture write instruction from a first virtual machine, and issuing the aperture read instruction from a second virtual machine, and optionally in which the aperture write instruction and the aperture read instruction are used to share data between the first virtual machine and the second virtual machine.

Example 17 includes the method of Example 14, further including preventing the host physical memory address, where the data received from the source operand is stored, from being reachable through a second level of hierarchical paging structures.

Example 18 includes the method of Example 14, performed by a virtual machine, and optionally in which the virtual machine is prevented from knowing the host physical memory address where the data received from the source operand is stored.

Example 19 is an article of manufacture including a non-transitory machine-readable storage medium storing instructions that, if executed by a machine, are to cause the machine to perform operations including the steps of: allocating a region of system memory for an aperture; storing a host physical memory address, which is to be associated with the aperture, in an access protected structure; and making host physical memory addresses of the aperture not accessible through a second level of hierarchical paging structures.

Example 20 includes the article of Example 19, optionally in which the instructions to store the host physical memory address further comprise instructions that, if executed by the machine, are to cause the machine to
perform operations including storing the host physical memory address in a virtual machine control structure, which is the access protected structure.

Example 21 includes the article of any one of Examples 19 to 20, optionally in which the instructions to store the host physical memory address further comprise instructions that, if executed by the machine, are to cause the machine to perform operations including: storing the host physical memory address in an access protected structure that is to correspond to a first virtual machine; and storing the host physical memory address in a second access protected structure that is to correspond to a second virtual machine.

Example 22 is a system to process instructions that includes an interconnect, a processor coupled with the interconnect, and a dynamic random access memory (DRAM) coupled with the interconnect. The processor is to receive an aperture access instruction and, in response to the aperture access instruction, is to read a host physical memory address, which is to be associated with an aperture that is to be in system memory, from an access protected structure, and access data within the aperture at a host physical memory address that is not to be obtained through address translation.

Example 23 includes the system of Example 22, optionally in which the aperture is to represent a portion of the system memory that is not to be accessible through address translation.

Example 24 includes the system of any one of Examples 22 to 23, optionally in which the aperture access instruction is to be an aperture write instruction, optionally in which the aperture write instruction is to indicate a source operand, and optionally in which the processor, in response to the aperture write instruction, is to receive data from the source operand and store the data received from the source operand to the host physical memory address within the aperture.

Example 25 includes the processor of any one of Examples 1 to 13, further including an optional branch prediction unit to predict branches, and an optional instruction prefetch unit, coupled with the branch prediction unit, to prefetch instructions. The processor may also optionally include an optional level 1 (L1) instruction cache, coupled with the instruction prefetch unit, to store instructions; an optional L1 data cache to store data; and an optional level 2 (L2) cache to store data and instructions. The processor may also optionally include an instruction fetch unit, coupled with the decode unit, the L1 instruction cache, and the L2 cache, to fetch instructions, in some cases from one of the L1 instruction cache and the L2 cache, and to provide the instructions to the decode unit. The processor may also optionally include a register rename unit to rename registers, an optional scheduler to schedule one or more operations that have been decoded from the instructions for execution, and an optional
commit unit to commit execution results of the instructions.

Example 26 includes a system-on-chip that includes: at least one interconnect; the processor of any one of Examples 1 to 13 coupled with the at least one interconnect; an optional graphics processing unit (GPU) coupled with the at least one interconnect; an optional digital signal processor (DSP) coupled with the at least one interconnect; an optional display controller coupled with the at least one interconnect; an optional memory controller coupled with the at least one interconnect; an optional wireless modem coupled with the at least one interconnect; an optional image signal processor coupled with the at least one interconnect; an optional Universal Serial Bus (USB) 3.0 compatible controller coupled with the at least one interconnect; an optional Bluetooth 4.1 compatible controller coupled with the at least one interconnect; and an optional wireless transceiver controller coupled with the at least one interconnect.

Example 27 is a processor or other apparatus operative to perform the method of any one of Examples 14 to 18.

Example 28 is a processor or other apparatus that includes means for performing the method of any one of Examples 14 to 18.

Example 29 is a processor or other apparatus that includes any combination of modules and/or units and/or logic and/or circuitry and/or means operative to perform the method of any one of Examples 14 to 18.

Example 30 is an optionally non-transitory and/or tangible machine-readable medium, which optionally stores or otherwise provides instructions including a first instruction, the first instruction, if and/or when executed by a processor, computer system, electronic device, or other machine, being operative to cause the machine to perform the method of any one of Examples 14 to 18.

Example 31 is a processor or other apparatus substantially as described herein.

Example 32 is a processor or other apparatus that is operative to perform any method substantially as described herein.

Example 33 is a processor or other apparatus that is operative to perform any instruction substantially as described herein.
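The aperture read/write semantics enumerated in the examples above can be modeled in ordinary C. The sketch below is purely illustrative: the `access_protected_t` structure and the two functions stand in for microarchitectural state (an access protected structure holding the aperture's host physical base address) and for instruction behavior; none of the names correspond to real instruction mnemonics or a real API.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define APERTURE_BYTES 4096

/* Stand-in for system memory reached only through the aperture. */
static uint8_t host_memory[APERTURE_BYTES];

/* Stand-in for the access protected structure (e.g., a virtual machine
 * control structure) that stores the aperture's host physical address.
 * Ordinary loads cannot read this; only the aperture access
 * instructions consult it. */
typedef struct { uint8_t *aperture_base; } access_protected_t;

/* Model of an aperture write: data flows from a source operand to
 * aperture_base + offset, with no address translation of the target. */
static void aperture_write(const access_protected_t *aps, uint64_t offset,
                           const void *src, size_t len) {
    memcpy(aps->aperture_base + offset, src, len);
}

/* Model of an aperture read: the destination operand receives data
 * read from aperture_base + offset. */
static void aperture_read(const access_protected_t *aps, uint64_t offset,
                          void *dst, size_t len) {
    memcpy(dst, aps->aperture_base + offset, len);
}

int main(void) {
    access_protected_t vmcs = { .aperture_base = host_memory };
    uint64_t value = 0xDEADBEEF, copy = 0;

    /* One virtual machine writes, another reads: the data-sharing
     * use case of Example 16. */
    aperture_write(&vmcs, 128, &value, sizeof value);
    aperture_read(&vmcs, 128, &copy, sizeof copy);
    printf("read back 0x%llx\n", (unsigned long long)copy);
    return 0;
}
```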
A method of manufacturing an integrated circuit includes providing an amorphous semiconductor material including germanium, annealing the amorphous semiconductor material, and doping to form a source location and a drain location. The germanium-containing semiconductor material can increase the charge carrier mobility associated with the transistor.
What is claimed is: 1. A method of manufacturing an integrated circuit, comprising:providing an amorphous semiconductor material including germanium above a bulk substrate of semiconductor material; laser annealing the amorphous semiconductor material to form a single crystalline semiconductor layer containing germanium; doping the single crystalline semiconductor layer and the substrate at a source location and a drain location to form a source region and a drain region, whereby a channel region between the source region and the drain region includes a thin semiconductor germanium region; and siliciding the source region and the drain region to form a silicide layer, the silicide layer extending into the substrate. 2. The method of claim 1 further comprising:before the doping step, providing a cap layer above the amorphous semiconductor layer. 3. The method of claim 2 further comprising:after the providing a cap layer step, providing a gate structure between the source location and the drain location. 4. The method of claim 3, wherein the cap layer is an amorphous semiconductor layer.5. The method of claim 4, further comprising:before the doping step, annealing the cap layer. 6. The method of claim 4, wherein the amorphous semiconductor layer includes silicon.7. The method of claim 1, wherein the bulk substrate includes single crystalline silicon.8. The method of claim 1, wherein the amorphous semiconductor material includes silicon germanium.9. The method of claim 7, wherein the amorphous semiconductor material includes silicon germanium.10. The method of claim 9, wherein the annealing step takes place at a temperature sufficient to melt the amorphous semiconductor layer and is below the melting temperature of the substrate.11. The method of claim 1, further comprising:providing a second amorphous semiconductor material above the amorphous semiconductor material including germanium after the laser annealing step; performing another laser annealing step to form a second single crystalline semiconductive layer from the second amorphous semiconductor material; and wherein the siliciding step forms the silicide layer so that the depth of the silicided layer is deeper than the second single crystalline semiconductor layer. 12. A method of manufacturing an ultra-large scale integrated circuit including a transistor, the method comprising steps of:depositing an amorphous silicon germanium material above a top surface of a semiconductor substrate; first annealing the amorphous silicon germanium material; depositing an amorphous silicon material above the silicon germanium material; second annealing the amorphous silicon material; and providing a source region and a drain region for the transistor, the source region and the drain region being deeper than a combined thickness of the silicon germanium material and the silicon material. 13. The method of claim 12, further comprising:providing a gate structure before providing a source region and a drain region step. 14. The method of claim 12, further comprising:providing an oxide layer over the silicon material after the second annealing step. 15. The method of claim 12, wherein the silicon germanium material is a single crystalline layer after the first annealing step.16. The method of claim 12, wherein the silicon material is a single crystalline layer after the second annealing step.17. The method of claim 12, wherein the silicon material is 100-150 Å thick.18. 
The method of claim 12, wherein the annealing temperature for the first and second annealing steps is at or above 1100° C. and below 1400° C.19. A process of forming a transistor with a silicon germanium channel region, the process comprising:depositing a thin amorphous silicon germanium material above a top surface of a semiconductor substrate; annealing the silicon germanium material to form single crystalline silicon germanium material; depositing a thin amorphous silicon material above the single crystalline silicon germanium material; annealing the silicon material to form single crystalline silicon material; providing a source region and a drain region for the transistor, the source region and the drain region extending into the substrate; and forming a conductive region in the source region or the drain region. 20. The process of claim 19, wherein the silicon germanium material is 200-500 Å thick.21. The process of claim 20, wherein the silicon material is 100-150 Å thick.22. The process of claim 19, wherein the annealing steps are excimer laser annealing steps.23. The process of claim 22, wherein the excimer laser annealing steps use a wavelength of 308 nanometers.24. The process of claim 23, the source and drain regions each including an extension.25. A method of manufacturing a transistor comprising a source and drain region and a channel region, the source and drain regions being at least partially disposed in a bulk semiconductor substrate, the channel region being disposed between the source and drain regions, the channel region including a silicon germanium layer and a silicon cap layer, the method comprising:providing an amorphous semiconductor material including germanium above a bulk substrate of semiconductor material; providing an amorphous silicon layer above the amorphous semiconductor material; annealing the amorphous semiconductor material and the amorphous silicon layer to form the silicon germanium layer and the silicon cap layer, wherein the silicon germanium layer and the silicon cap layer are single crystalline; doping the single crystalline semiconductor layer and the substrate at a source location and a drain location to form a source region and a drain region, whereby the channel region between the source region and the drain region includes at least a portion of the silicon germanium layer covered by the silicon cap layer; and forming a conductive region in the source region or the drain region. 26. The method of claim 25, wherein the source and drain regions are silicided in the forming step to relieve any effect of germanium in the source and drain regions.27. The method of claim 12, further comprising:siliciding the source region and the drain region to form a silicide layer, wherein the silicide layer extends deeper than the combined thickness.
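As context for the temperature window recited in claims 10 and 18 (an illustrative note using standard reference values, not part of the claimed subject matter): bulk silicon melts at roughly 1414° C. and germanium at roughly 938° C., with silicon germanium alloys melting in between, so a window of

```latex
1100\,^{\circ}\mathrm{C} \le T_{\mathrm{anneal}} < 1400\,^{\circ}\mathrm{C} < T_m(\mathrm{Si}) \approx 1414\,^{\circ}\mathrm{C}
```

can melt the amorphous silicon germanium layer while the single crystalline silicon substrate stays solid.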
CROSS REFERENCE TO RELATED APPLICATIONS

The present application, entitled "A Process For Manufacturing Transistors Having Silicon/Germanium Channel Regions," is related to U.S. application Ser. No. 09/599,270, filed on an even date herewith by Yu.

BACKGROUND OF THE INVENTION

The present invention relates generally to integrated circuits (ICs) and methods of manufacturing integrated circuits. More particularly, the present invention relates to a method of manufacturing integrated circuits having transistors with specialized channel regions.

Integrated circuits (ICs), such as ultra-large scale integrated (ULSI) circuits, can include as many as one million transistors or more. A ULSI circuit can include complementary metal oxide semiconductor (CMOS) field effect transistors (FETs). The transistors can include semiconductor gates disposed above a channel region and between drain and source regions. The drain and source regions are typically heavily doped with a P-type dopant (boron) or an N-type dopant (phosphorus).

The drain and source regions generally include a thin extension that is disposed partially underneath the gate to enhance transistor performance. Shallow source and drain extensions help to achieve immunity to short-channel effects, which degrade transistor performance for both N-channel and P-channel transistors. Short-channel effects can cause threshold voltage roll-off and drain-induced barrier lowering. Shallow source and drain extensions, and hence control of short-channel effects, are particularly important as transistors become smaller.

Conventional techniques utilize a double implant process to form shallow source and drain extensions. According to the conventional process, the source and drain extensions are formed by providing a transistor gate structure without sidewall spacers on a top surface of a silicon substrate. The silicon substrate is doped on both sides of the gate structure via a conventional doping process, such as a diffusion process or an ion implantation process. Without the sidewall spacers, the doping process introduces dopants into a thin region just below the top surface of the substrate to form the drain and source extensions as well as to partially form the drain and source regions.

After the drain and source extensions are formed, silicon dioxide spacers, which abut lateral sides of the gate structure, are provided over the source and drain extensions. With the silicon dioxide spacers in place, the substrate is doped a second time to form deep source and drain regions. During formation of the deep source and drain regions, further doping of the source and drain extensions is inhibited due to the blocking characteristic of the silicon dioxide spacers. The deep source and drain regions are necessary to provide sufficient material to connect contacts to the source and drain regions.

As transistors become smaller, it is desirable to increase the charge carrier mobility in the channel region. Increasing charge carrier mobility increases the switching speed of the transistor. Channel regions formed from materials other than silicon have been proposed to increase charge carrier mobility. For example, conventional thin film transistors, which typically utilize polysilicon channel regions, have been formed on a silicon germanium (Si-Ge) epitaxial layer above a glass (SiO2) substrate.
The Si-Ge epitaxial layer can be formed by a technique in which a semiconductor thin film, such as an amorphous silicon hydride (a-Si:H), an amorphous germanium hydride (a-Ge:H), or the like, is melted and crystallized by the irradiation of pulse laser beams.

In a bulk-type device, such as a metal oxide semiconductor field effect transistor (MOSFET), the use of Si-Ge materials could increase charge carrier mobility, especially for hole-type carriers. A channel region containing germanium can have a carrier mobility 2-5 times greater than a conventional Si channel region due to reduced carrier scattering and due to the reduced mass of holes in the germanium-containing material. According to conventional Si-Ge formation techniques for bulk-type devices, a dopant implanted molecular beam epitaxy (MBE) technique forms a Si-Ge epitaxial layer. However, the MBE technique requires very complicated, very expensive equipment and is not feasible for mass production of ICs.

Thus, there is a need for an integrated circuit or electronic device that includes channel regions with higher channel mobility. Further still, there is a need for transistors with a thin Si-Ge channel region and deep source and drain regions. Even further still, there is a need for a method of manufacturing a transistor having a thin Si-Ge channel region on a bulk-type semiconductor substrate.

SUMMARY OF THE INVENTION

An exemplary embodiment relates to a method of manufacturing an integrated circuit. The method includes providing an amorphous semiconductor material, annealing the amorphous semiconductor material, and doping to form source and drain regions. The amorphous semiconductor material contains germanium and is provided above a bulk substrate of semiconductor material. Excimer laser annealing the amorphous semiconductor material forms a single crystalline semiconductor layer containing germanium. The source and drain regions can be formed by doping the single crystalline semiconductor layer and the substrate at a source location and a drain location. A channel region between the source region and the drain region includes a thin semiconductor germanium region.

Another exemplary embodiment relates to a method of manufacturing an ultra-large scale integrated circuit including a transistor. The method includes steps of depositing a silicon germanium material above a top surface of a semiconductor substrate, annealing the silicon germanium material, depositing a silicon material above the silicon germanium material, annealing the silicon material, and providing a source region and a drain region for the transistor. The source region and the drain region are deeper than a combined thickness of the silicon germanium material and the silicon material.

Still another embodiment relates to a process of forming a transistor with a silicon germanium channel region. The process includes depositing a thin amorphous silicon germanium material, annealing the silicon germanium material, depositing a thin amorphous silicon material, annealing the silicon material, and providing a source region and a drain region for the transistor. The thin amorphous silicon germanium material is provided above a top surface of a semiconductor substrate. Annealing the silicon germanium material forms single crystalline silicon germanium material. The thin amorphous silicon material is provided above the single crystalline silicon germanium material. Annealing the silicon material forms single crystalline silicon material. The source and drain regions extend into the substrate.
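As a rough illustrative aside (standard physical constants, not figures taken from the application): the 308 nm excimer laser wavelength recited in claim 23 corresponds to a photon energy of

```latex
E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\cdot nm}}{308\ \mathrm{nm}} \approx 4.0\ \mathrm{eV},
```

an ultraviolet pulse that is absorbed within a shallow surface layer of the amorphous film, consistent with melting and recrystallizing the thin film without melting the bulk substrate beneath it.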
The source and drain regions extend into the substrate.
Yet another embodiment relates to a transistor. The transistor includes source and drain regions disposed in a bulk semiconductor substrate. The transistor also includes a silicon-germanium channel region between the source and drain regions.

BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments will hereafter be described with reference to the accompanying drawings, wherein like numerals denote like elements, and:
FIG. 1 is a cross-sectional view of a portion of an integrated circuit in accordance with an exemplary embodiment, the integrated circuit including a transistor provided on a semiconductor substrate, the transistor having a channel region which includes a semiconductor and germanium material;
FIG. 2 is a cross-sectional view of the portion of the semiconductor substrate illustrated in FIG. 1;
FIG. 3 is a cross-sectional view of the portion of the semiconductor substrate illustrated in FIG. 2, showing a semiconductor-germanium deposition step;
FIG. 4 is a cross-sectional view of the portion of the semiconductor substrate illustrated in FIG. 3, showing a laser annealing step;
FIG. 5 is a cross-sectional view of the portion of the semiconductor substrate illustrated in FIG. 4, showing a semiconductor deposition step; and
FIG. 6 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 1, showing a laser annealing step.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
With reference to FIG. 1, a portion 10 of an integrated circuit (IC) includes a transistor 12 which is disposed on a semiconductor substrate 14, such as a wafer. Semiconductor substrate 14 is preferably a bulk P-type single crystalline (001) silicon substrate. Alternatively, substrate 14 can be an N-type well in a P-type substrate, a semiconductor-on-insulator (SOI) substrate (preferably silicon-on-glass), or another material suitable for transistor 12.
Transistor 12 can be a P-channel or N-channel metal oxide semiconductor field effect transistor (MOSFET). Transistor 12 includes a gate structure 18, a source region 22, and a drain region 24. Regions 22 and 24 extend from a top surface 27 of portion 10 to a bottom 55 in substrate 14. Regions 22 and 24 are preferably 50 nanometers (nm)-120 nm thick from surface 27 to bottom 55 (junction depth) and include a source extension 23 and a drain extension 25. For an N-channel transistor, regions 22 and 24 are heavily doped with N-type dopants (e.g., 5×10^19-1×10^20 dopants per cubic centimeter). For a P-channel transistor, regions 22 and 24 are heavily doped with P-type dopants (e.g., 5×10^19-1×10^20 dopants per cubic centimeter).
Extensions 23 and 25 are preferably shallow extensions (e.g., junction depth of less than 50 nm (15-40 nm)), which are thinner than regions 22 and 24. Extensions 23 and 25 are connected to regions 22 and 24, respectively, and are disposed partially underneath gate structure 18. Extensions 23 and 25 can be ultra-shallow to help transistor 12 achieve substantial immunity to short-channel effects. Short-channel effects can degrade the performance of transistor 12 as well as the manufacturability of the IC associated with transistor 12.
Regions 22 and 24 and extensions 23 and 25 have a concentration of 10^19 to 10^20 dopants per cubic centimeter.
An appropriate dopant for a P-channel transistor is boron, boron difluoride, or indium, and an appropriate dopant for an N-channel transistor is arsenic, phosphorous, or antimony.
Gate stack or structure 18 includes a gate dielectric layer 34 and a gate conductor 36. Dielectric layer 34 is preferably comprised of a thermally grown, 15-25 Å thick silicon dioxide material. Alternatively, deposited silicon dioxide, nitride (Si3N4) material, or high-k gate dielectric materials can be utilized.
Gate structure 18 can also include a pair of spacers 38. Spacers 38 can be manufactured in a conventional deposition and etch-back process. Preferably, spacers 38 are manufactured from silicon dioxide and are 800-1200 Å in height (thickness) and 500-1000 Å wide. Alternatively, other insulative materials such as nitride can be utilized to form spacers 38.
Conductor 36 is preferably deposited as polysilicon by chemical vapor deposition (CVD) and etched to form the particular structure for transistor 12. Conductor 36 is preferably doped polysilicon. Alternatively, conductor 36 can be metal, such as a refractory metal, or include germanium to adjust the work function of transistor 12. Gate structure 18 has a height or thickness of 800-1200 Å.
Gate structure 18 is disposed over a channel region 41. Channel region 41 is specialized to have increased charge carrier mobility. Channel region 41 has a width slightly less than the gate length (e.g., 35 nm-100 nm) and advantageously includes a semiconductor containing germanium. Channel region 41 can include a thin silicon cap layer 43 and a thin silicon germanium layer 45. Alternatively, semiconductor material other than silicon can be utilized in layers 43 and 45. Thus, channel region 41 is comprised of a compound structure including layers 43 and 45. Layer 43 advantageously protects the integrity of layer 34 from the effects of germanium in layer 45. Thus, layer 43 can serve as a cap layer or protection layer above layer 45.
In a preferred embodiment, layer 45 is 200-500 Å thick, and layer 43 is 100-150 Å thick. Therefore, layer 45 is located 100-150 Å below top surface 27 of portion 10. Region 41 is preferably less than 60 percent of the depth of regions 22 and 24.
Channel region 41, including layers 43 and 45, is preferably almost as deep as extensions 23 and 25. Channel region 41 is significantly shallower than the deep regions (contact locations) associated with source region 22 and drain region 24. Accordingly, sufficient depth is available for making contact to source region 22 and drain region 24, and yet a thin channel region 41 including silicon germanium layer 45 is attained. The use of layer 45 including germanium allows the mobility of carriers to be approximately 2-5 times larger than in a channel region 41 comprised solely of silicon material.
The interface between layer 45 and substrate 14 is preferably extremely sharp in the vertical direction. An ideal design has a very clearly defined border between layer 45 and substrate 14. The mechanical stress associated with layer 45 increases mobility for channel region 41 (e.g., stress-enhanced mobility).
A silicide layer, such as regions 82, can be formed in regions 22 and 24. Regions 82 can be deposited or sputtered on top of source region 22 and drain region 24 for connection to contacts. Metal contacts can be coupled to regions 22 and 24 via regions 82. Conventional metal silicidation techniques can be utilized.
For example, titanium silicide, cobalt silicide, tungsten silicide, and other silicides can be utilized.
Siliciding regions 22 and 24 to form regions 82 can consume the portion of regions 22 and 24 that includes germanium (associated with layer 45). Thus, the performance of regions 22 and 24 is not adversely impacted by the presence of germanium.
With reference to FIGS. 1-6, the fabrication of transistor 12, including channel region 41, is described below. The advantageous process allows channel region 41 to include germanium and yet does not require MBE equipment. The process also allows deep source and drain regions 22 and 24 to be formed and yet allows a thin silicon germanium channel region 41 to be formed.
In FIG. 2, a single crystalline bulk semiconductor substrate 14 is provided. Substrate 14 can be provided as part of a semiconductor wafer. Substrate 14 is preferably several hundred microns thick (for an eight inch wafer).
In FIG. 3, low pressure chemical vapor deposition (LPCVD) is utilized to deposit or provide a very thin amorphous semiconductor germanium layer, such as an amorphous silicon germanium layer 64, on a top surface 66 of substrate 14. Preferably, layer 64 is deposited as a 200-500 Å thick amorphous silicon germanium layer at a temperature of 400-450°C.
In FIG. 4, after layer 64 is deposited, layer 64 is subjected to an annealing process. The annealing process changes the structure of layer 64 from an amorphous state to a single crystalline state (e.g., melts layer 64, which subsequently recrystallizes). Preferably, the annealing process is an excimer laser process (e.g., 308 nanometer wavelength) with a pulse duration of several nanoseconds.
The process can raise the temperature of layer 64 to the melting temperature of layer 64 (1100°C for silicon germanium). The melting temperature of layer 64 in the amorphous state is significantly lower than that of substrate 14, which is in a crystalline state. For example, the melting temperature of amorphous silicon germanium is 1100°C and the melting temperature of a single crystalline silicon substrate (C-Si) is 1400°C. Preferably, the laser fluence is controlled so that layer 64 is fully melted and substrate 14 is not melted. After the laser beam is removed, layer 64 recrystallizes as a single crystalline material. Layer 64 corresponds to silicon germanium layer 45 (channel region 41 in FIG. 1).
In FIG. 5, after layer 64 is recrystallized, LPCVD is utilized to provide a very thin amorphous layer 74. Layer 74 is preferably deposited at a temperature of 400-450°C and preferably is a 100-150 Å thick amorphous silicon layer. Layer 74 is provided on a top surface 65 of layer 64.
In FIG. 6, after layer 74 is deposited, layer 74 is subjected to an annealing process. The annealing process changes the structure of layer 74 from an amorphous state to a single crystalline state (e.g., melts layer 74, which subsequently recrystallizes). Preferably, the annealing process is an excimer laser annealing process (e.g., 308 nanometer wavelength with a pulse duration of several nanoseconds). The annealing process can raise the temperature of layer 74 to the melting temperature of layer 74 (1100°C). The melting temperature of layer 74 in the amorphous state is significantly lower than that of layer 64 in the single crystalline state. The melting temperature of amorphous silicon is 1100°C and the melting temperature of single crystalline silicon-germanium is 1400°C.
Preferably, the laser fluence is controlled so that layer 74 is fully melted and layer 64 is not melted. After the laser beam is removed, layer 74 recrystallizes as single crystalline material. Layer 74 advantageously serves as a cap layer above layer 64. Layer 74 corresponds to cap layer 43 (channel region 41 in FIG. 1).
In FIG. 1, transistor 12 can be substantially completed by conventional semiconductor processing techniques to include gate structure 18 and source and drain regions 22 and 24.
Gate structure 18 is comprised of layer 34 and gate conductor 36. Gate conductor 36 is preferably 800-1200 Å thick, undoped polysilicon material. Conductor 36 is preferably deposited by a chemical vapor deposition (CVD) process on top of layer 34, which is thermally grown above surface 27 (surface 75 of layer 74 in FIG. 6). Layer 34 can be thermally grown on substrate 14.
After structure 18, including layers 36 and 34, is formed, substrate 14 can be doped according to a two-step doping process to form regions 22 and 24, including extensions 23 and 25. After the first doping step, spacers 38 are formed, followed by a second doping step to form the deeper portions of regions 22 and 24. Preferably, the deeper portions of regions 22 and 24 are 500-1200 Å deep (e.g., 800-1000 Å below surface 27 of substrate 14). In an alternative embodiment, an amorphizing and doping technique can be utilized to form regions 22 and 24, including extensions 23 and 25.
After regions 22 and 24 are formed, a silicidation process forms silicide regions 82 within regions 22 and 24. Regions 82 can be formed by depositing a metal layer and siliciding the metal layer. Generally, silicidation consumes substrate 14 to a depth of approximately sixty percent of the thickness of the deposited metal layer. Preferably, regions 82 extend 25 nm into substrate 14.
After regions 82 are formed, transistor 12 and integrated circuit 10 can be subjected to conventional CMOS processes to form contacts and interconnects. In addition, insulating layers can be provided over transistor 12 to otherwise complete the fabrication of portion 10.
It is understood that while the detailed drawings, specific examples, material types, thicknesses, dimensions, and particular values given provide a preferred exemplary embodiment of the present invention, the preferred exemplary embodiment is for the purpose of illustration only. The method and apparatus of the invention are not limited to the precise details and conditions disclosed. For example, although specific types of capping layers and semiconductor germanium layers are shown, other structures can be utilized. Various changes may be made to the details disclosed without departing from the spirit of the invention, which is defined by the following claims.
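As an illustrative summary of the process flow of FIGS. 2-6 described above, the sketch below restates the preferred deposition and anneal conditions as an ordered recipe. The step names and the struct are invented for clarity and are not part of the disclosure; all numeric values are taken from the description.

#include <stdio.h>

/* Illustrative summary of the FIGS. 2-6 process flow; all numbers are the
 * preferred values stated in the description above. */
struct step {
    const char *name;
    const char *conditions;
};

static const struct step flow[] = {
    {"LPCVD a-SiGe (layer 64)", "200-500 angstroms at 400-450 C"},
    {"Excimer laser anneal",    "308 nm pulse, several ns; melts a-SiGe (~1100 C) while the crystalline substrate (~1400 C) stays solid"},
    {"LPCVD a-Si (layer 74)",   "100-150 angstroms at 400-450 C"},
    {"Excimer laser anneal",    "308 nm pulse; melts a-Si (~1100 C) while crystalline SiGe (~1400 C) stays solid"},
    {"Gate stack formation",    "thermal oxide 15-25 angstroms; 800-1200 angstrom polysilicon conductor"},
    {"Two-step doping",         "extensions first, then spacers 38, then deep 500-1200 angstrom regions"},
    {"Silicidation",            "regions 82 extend ~25 nm into substrate 14"},
};

int main(void) {
    for (unsigned i = 0; i < sizeof flow / sizeof flow[0]; i++)
        printf("%u. %s: %s\n", i + 1, flow[i].name, flow[i].conditions);
    return 0;
}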
The present invention relates to methods for integrating replacement metal gate structures. Methods and associated structures of forming a microelectronic device are described. Those methods comprise providing a substrate comprising a first transistor structure comprising an n-type gate material and a second transistor structure comprising a p-type gate material, selectively removing the n-type gate material to form a recess in the first gate structure, and then filling the recess with an n-type metal gate material.
1. A method for integrating a replacement metal gate structure, comprising: providing a substrate including a first transistor structure comprising an n-type gate material and a second transistor structure comprising a p-type gate material; selectively removing the n-type gate material to form a recess in the first gate structure, wherein both the n-type gate material and the p-type gate material are exposed to a selective removal process; and filling the recess with an n-type metal gate material; wherein selectively removing the n-type gate material comprises wet etching the n-type gate material with a mixture of about 2 percent to about 30 percent ammonium hydroxide in deionized water while applying sonication of from about 0.5 MHz to about 1.2 MHz.
2. The method of claim 1, wherein providing a substrate including a first transistor structure comprising an n-type gate material and a second transistor structure comprising a p-type gate material comprises providing a substrate including an NMOS transistor structure comprising an n-doped polysilicon gate material and a PMOS transistor structure comprising a p-doped polysilicon gate material.
3. The method of claim 2, wherein providing a substrate including an NMOS transistor structure comprising an n-doped polysilicon gate material and a PMOS transistor structure comprising a p-doped polysilicon gate material comprises providing a substrate wherein the PMOS transistor structure includes a source region and a drain region comprising a silicon germanium alloy.
4. The method of claim 1, wherein wet etching the n-type gate material with a mixture of about 10 percent to about 20 percent ammonium hydroxide in deionized water comprises wet etching the n-type gate material with the mixture at a temperature of from about 10 degrees Celsius to about 40 degrees Celsius.
5. The method of claim 1, wherein selectively removing the n-type gate material comprises wet etching the n-type gate material with a mixture of about 15 percent to about 30 percent tetramethylammonium hydroxide in deionized water while applying sonication of from about 0.8 MHz to about 1.2 MHz.
6. The method of claim 5, wherein wet etching the n-type gate material with a mixture of about 15 percent to about 30 percent tetramethylammonium hydroxide in deionized water comprises wet etching the n-type gate material with the mixture at a temperature of from about 60 degrees Celsius to about 90 degrees Celsius.
7. The method of claim 1, wherein selectively removing the n-type gate material comprises selectively removing the n-type gate material without substantially removing the p-type gate material.
8. The method of claim 1, wherein selectively removing the n-type gate material to form a recess in the first gate structure further comprises selectively removing a first gate dielectric layer disposed under the n-type gate material.
9. The method of claim 8, wherein selectively removing the first gate dielectric layer disposed under the n-type gate material further comprises forming a second gate dielectric layer within the recess.
10. The method of claim 9, wherein forming the second gate dielectric layer within the recess comprises forming a high-k gate dielectric layer within the recess.
11. The method of claim 9, wherein selectively removing the first gate dielectric layer disposed under the n-type gate material further comprises forming within the recess a high-k gate dielectric layer selected from the group consisting of hafnium oxide, zirconium oxide, titanium oxide, and aluminum oxide, and/or combinations thereof.
12. The method of claim 1, wherein filling the recess with an n-type metal gate material comprises filling the recess with a metal gate material selected from the group consisting of hafnium, zirconium, titanium, tantalum, and aluminum, and/or combinations thereof.
13. A method of forming a microelectronic structure, comprising: providing a substrate including an n-type transistor structure comprising an n-type polysilicon gate material and a p-type transistor structure comprising a p-type polysilicon gate material, wherein a first dielectric layer is disposed above the n-type and p-type gate structures; removing a portion of the first dielectric layer to expose the n-type polysilicon gate material; selectively removing the n-type polysilicon gate material to form a recess, wherein both the n-type gate material and the p-type gate material are exposed to a selective removal process; and filling the recess with an n-type metal gate material; wherein selectively removing the n-type gate material comprises wet etching the n-type gate material with a mixture of about 2 percent to about 30 percent ammonium hydroxide in deionized water while applying sonication of from about 0.5 MHz to about 1.2 MHz.
14. The method of claim 13, wherein filling the recess with an n-type metal gate material further comprises forming a second dielectric layer on the n-type metal gate material.
15. The method of claim 13, wherein selectively removing the n-type polysilicon gate material comprises selectively removing the n-type polysilicon gate material without substantially removing the p-type polysilicon gate material.
16. A microelectronic structure, comprising: a substrate including an n-type transistor structure comprising an n-type metal gate material, wherein the n-type metal gate material is selected from the group consisting of hafnium, zirconium, and combinations thereof, and wherein the n-type transistor structure includes a single layer of the n-type metal gate material; and a p-type transistor structure comprising a p-type polysilicon gate material, wherein the p-type transistor structure further includes a source region and a drain region comprising a silicon germanium alloy, and wherein the source region and the drain region are disposed in the substrate; wherein the n-type gate material on the n-type transistor structure is selectively removed to form a recess by wet etching with a mixture of about 2 percent to about 30 percent ammonium hydroxide in deionized water while applying sonication of from about 0.5 MHz to about 1.2 MHz, and the recess is then filled with the n-type metal gate material to form an n-type metal gate.
17. The structure of claim 16, wherein the n-type transistor structure further includes a high-k gate dielectric layer selected from the group consisting of hafnium oxide, zirconium oxide, titanium oxide, and aluminum oxide, and/or combinations thereof.
Method for integrated replacement of metal gate structure
This application is a divisional of a patent application with a filing date of December 21, 2004, application number 200480039439.X, entitled "Method for Integrated Replacement of Metal Gate Structure".

Technical field
The present invention relates to the field of microelectronic devices, and more specifically to a method of manufacturing a metal gate transistor.

Background
Microelectronic devices are usually manufactured in and on silicon wafers and substrates of other types. Such integrated circuits may include millions of transistors, such as the metal oxide semiconductor (MOS) field effect transistors known in the art. A MOS transistor generally includes a source region, a drain region, and a gate region, where the gate material may generally include polysilicon. However, a polysilicon gate may be susceptible to depletion effects, in which the electric field applied to the polysilicon gate sweeps away carriers (holes in p-type doped polysilicon, or electrons in n-type doped polysilicon) and establishes a carrier-depleted region in the polysilicon gate near the underlying gate dielectric of the transistor. The depletion effect increases the effective gate dielectric thickness in the MOS device. Recently, polysilicon gates have been combined with silicon germanium source and drain regions in transistors. The strained lattice of the silicon germanium regions enhances the electron and hole mobility in the channel of such transistors, which greatly improves their performance, as is well known in the art.
On the other hand, metal gates are not as susceptible to depletion effects as gates including polysilicon. However, typical prior art microelectronic processes do not incorporate both metal gates and polysilicon gates in the same device or integrated circuit. This is partly due to the complexity and cost of developing microelectronic processes that can reliably form metal gate structures and polysilicon gate structures within the same microelectronic device or integrated circuit. Therefore, it would be advantageous to combine a metal gate structure and a polysilicon gate structure having silicon germanium source and drain regions.
The method and structure of the present invention provide such a process.

Summary of the invention
According to a first aspect of embodiments of the present invention, there is provided a method including: providing a substrate including a first transistor structure comprising an n-type gate material and a second transistor structure comprising a p-type gate material; selectively removing the n-type gate material to form a recess in the first gate structure, wherein both the n-type gate material and the p-type gate material are exposed to a selective removal process; and filling the recess with an n-type metal gate material.
According to a second aspect of embodiments of the present invention, there is provided a method of forming a microelectronic structure, comprising: providing a substrate including an n-type transistor structure comprising an n-type polysilicon gate material and a p-type transistor structure comprising a p-type polysilicon gate material, wherein a first dielectric layer is disposed above the n-type and p-type gate structures; removing a portion of the first dielectric layer to expose the n-type polysilicon gate material; selectively removing the n-type polysilicon gate material to form a recess, wherein both the n-type gate material and the p-type gate material are exposed to a selective removal process; and filling the recess with an n-type metal gate material.
According to a third aspect of embodiments of the present invention, there is provided a structure including: a substrate including an n-type transistor structure comprising an n-type metal gate material, wherein the n-type metal gate material is selected from the group consisting of hafnium, zirconium, and combinations thereof, and wherein the n-type transistor structure includes a single layer of n-type metal gate material; and a p-type transistor structure comprising a p-type polysilicon gate material, wherein the p-type transistor structure further includes a source region and a drain region comprising a silicon germanium alloy, and wherein the source region and the drain region are disposed in the substrate.

BRIEF DESCRIPTION
Although the specification concludes with claims that particularly point out and distinctly claim the present invention, the advantages of the invention can be more readily ascertained from the following description when read in conjunction with the accompanying drawings, in which:
Figures 1a-1e show structures according to an embodiment of the invention.
Figures 2a-2e show structures according to an embodiment of the invention.

Detailed description
In the following detailed description, reference is made to the accompanying drawings, which show by way of example specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present invention. It should be understood that the various embodiments of the invention, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein in connection with one embodiment may be implemented in other embodiments without departing from the spirit and scope of the present invention. In addition, it should be understood that the position or arrangement of individual elements may be modified within each disclosed embodiment without departing from the spirit and scope of the present invention.
Therefore, the following detailed description is not intended to be limiting, and the scope of the present invention is defined only by the properly interpreted appended claims, along with the full scope of equivalents to which the claims are entitled. In the figures, like numerals refer to the same or similar functionality throughout.
Methods of forming microelectronic structures and related structures are described. Those methods include providing a substrate including a first gate transistor comprising an n-type gate material and a second gate transistor comprising a p-type gate material, selectively removing the n-type gate material to form a recess in the transistor structure, and then filling the recess with an n-type metal gate material. The methods of the present invention can thereby combine NMOS metal gate transistors and PMOS polysilicon transistors utilizing silicon germanium source and drain regions in the same microelectronic device.
Figures 1a-1e show an embodiment of a method, and related structures, that incorporates an NMOS metal gate transistor and a PMOS polysilicon transistor. FIG. 1a shows a cross-section of a portion of a substrate 100, which may preferably be a silicon substrate. The substrate 100 may include materials such as, but not limited to, silicon, silicon on insulator, germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, gallium antimonide, or combinations thereof.
The substrate 100 may include a first transistor structure 102 that is preferably an n-type transistor structure 102 (i.e., an NMOS transistor) known in the art. The substrate 100 may also include a second transistor structure 104 that is preferably a p-type transistor structure 104 (i.e., a PMOS transistor) well known in the art. The n-type transistor structure 102 may include a gate dielectric layer 106, a source region 112, a drain region 114, and a spacer 110, which are well known in the art. The n-type transistor structure 102 may further include an n-type gate material 108, which is preferably a polysilicon gate material 108 and is disposed on the gate dielectric layer 106. For example, the n-type gate material 108 may preferably be doped with an n-type dopant, such as phosphorus.
The p-type transistor structure 104 may include a p-transistor gate dielectric layer 116, a source region 122, a drain region 124, and a spacer 120, which are well known in the art. The source region 122 and the drain region 124 may preferably include a silicon germanium alloy material. The p-type transistor structure 104 may further include a p-type gate material 118, preferably a p-type polysilicon gate material 118, disposed on the p-transistor gate dielectric layer 116. For example, the p-type gate material 118 may preferably be doped with p-type dopants, such as boron.
A dielectric layer 126 may be disposed over the n-type and p-type gate structures, and may include an interlayer dielectric (ILD) known in the art. By preferably using a chemical mechanical polishing (CMP) process, a portion 128 of the dielectric layer 126 may be removed, for example, to expose the p-type gate material 118 and the n-type gate material 108 (see FIG. 1b).
After the p-type gate material 118 and the n-type gate material 108 are exposed, the n-type gate material can be selectively removed from the n-type transistor structure 102 to form the recess 130 (FIG. 1c).
The n-type gate material 108 can be selectively removed by using a wet etch that preferably includes an ammonium hydroxide etch. In one embodiment, the ammonium hydroxide etch may include about 2 percent to about 30 percent ammonium hydroxide in deionized water, and sonication, which may be varied from about 0.5 MHz to about 1.2 MHz as known in the art, may be applied to the mixture. The temperature of the wet etch may preferably vary from about 10 degrees Celsius to about 40 degrees Celsius.
In another embodiment, the wet etch may include a mixture of about 15 percent to about 30 percent tetramethylammonium hydroxide (TMAH) in deionized water, with sonication applied from about 0.5 MHz to about 1.2 MHz and a temperature of from about 60 degrees Celsius to about 90 degrees Celsius. The specific parameters of the removal process may depend on the particular application, but any such removal process with high selectivity to the n-type gate material 108 may be utilized, i.e., one that substantially removes the n-type gate material 108 while leaving the p-type gate material 118 substantially intact. Alternatively, although less desirable because it involves additional lithography steps, the p-type device can be masked to expose only the n-type device, thereby eliminating the need for etch selectivity between the two types of devices.
For example, the recess 130 may be filled with an n-type metal gate material 132 such as hafnium, zirconium, titanium, tantalum, or aluminum, or a combination thereof (see FIG. 1d). The recess 130 may be filled by PVD ("physical vapor deposition"), CVD ("chemical vapor deposition"), or ALD ("atomic layer deposition") as known in the art. In this way, the n-type polysilicon gate material 108 can be replaced with an n-type metal gate material 132, which greatly enhances the performance of the n-type transistor prepared according to the method of the present invention. The method of the present invention can also realize the integration of an n-type (NMOS) metal gate transistor and a p-type polysilicon transistor (PMOS), which may preferably include silicon germanium source and drain regions, in the same device.
Referring to FIG. 1e, after filling the recess 130 with the n-type metal material 132, a second dielectric layer 134 may be formed on the n-type metal gate material 132 and the p-type gate material 118 (i.e., a cover ILD layer).
In another embodiment (see FIG. 2a), the substrate 200 (similar to the substrate 100 of FIG. 1) may include a first transistor structure 202 that is preferably an n-type transistor structure 202 and a second transistor structure 204 that is preferably a p-type transistor structure 204. The n-type transistor structure 202 may include a first gate dielectric layer 206, a source region 212, a drain region 214, and a spacer 210. The n-type transistor structure 202 may further include a recess 230 similar to the recess 130 of FIG. 1c.
The p-type transistor structure 204 may include a p-transistor gate dielectric layer 216, a source region 222, a drain region 224, and a spacer 220. The source region 222 and the drain region 224 may preferably include a silicon germanium alloy material. The p-type transistor structure 204 may further include a p-type gate material 218, which is preferably a p-type polysilicon gate material 218 and is disposed on the p-transistor gate dielectric layer 216.
The first gate dielectric layer 206 of the n-type transistor structure 202 can be removed by using techniques well known in the art, such as wet chemical etching (see FIG. 2b).
Then, a second gate dielectric layer 207 may be formed in the recess 230 of the n-type transistor structure 202 (FIG. 2c). The second gate dielectric layer 207 may preferably include a high-k gate dielectric layer, and may include materials such as hafnium oxide, zirconium oxide, titanium oxide, and aluminum oxide, and/or combinations thereof. The use of the high-k second gate dielectric layer 207 can enhance the performance of the n-type transistor structure 202 by reducing the gate leakage current of the device thus prepared, as is well known in the art.
Referring to FIG. 2d, the recess 230 can then be filled with an n-type metal material 232 (similar to the n-type metal gate material 132 of FIG. 1d), and a second dielectric layer 234 (similar to the second dielectric layer 134 of FIG. 1e) can be formed on the n-type metal gate material 232 and the p-type gate material 218 (see FIG. 2e).
Therefore, the present embodiment of the invention can combine a p-type polysilicon gate material with an n-type metal gate material that includes a high-k gate dielectric layer.
As described above, the present invention provides methods and related structures in which a substrate including a first transistor structure comprising an n-type gate material and a second transistor structure comprising a p-type gate material is provided, the n-type gate material is selectively removed to form a recess in the first transistor structure, and the recess is then filled with an n-type metal gate material. The method of the present invention can replace the n-type polysilicon gate material with an n-type metal gate material, which greatly enhances the performance of the n-type transistor prepared according to the method of the present invention. The method of the present invention can also integrate an n-type metal gate transistor and a p-type polysilicon transistor, which may preferably include silicon germanium source and drain regions, in the same device.
Although the foregoing description has specified certain steps and materials that may be used in the method of the present invention, those skilled in the art will recognize that many modifications and substitutions may be made. Accordingly, it is intended that all such modifications, changes, substitutions, and additions be considered to fall within the spirit and scope of the present invention as defined by the appended claims. In addition, it should be recognized that microelectronic devices such as transistors are well known in the art. Therefore, it should be appreciated that the figures provided herein show only portions of exemplary microelectronic devices suitable for practicing the present invention. Thus, the present invention is not limited to the structures described herein.
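As an illustrative aside, the two selective wet-etch recipes described above differ only in chemistry, concentration, temperature, and sonication band. The following sketch simply restates those ranges as data; the struct, names, and program structure are invented for illustration and are not part of the disclosure.

#include <stdio.h>

/* Illustrative encoding of the two selective etch recipes described above.
 * Both remove the n-type polysilicon gate material 108 while leaving the
 * p-type gate material 118 substantially intact. */
struct etch_recipe {
    const char *chemistry;
    double conc_min_pct, conc_max_pct;   /* percent in deionized water */
    double temp_min_c, temp_max_c;       /* degrees Celsius */
    double sonic_min_mhz, sonic_max_mhz; /* sonication frequency band */
};

static const struct etch_recipe recipes[] = {
    {"ammonium hydroxide (NH4OH)",           2.0, 30.0, 10.0, 40.0, 0.5, 1.2},
    {"tetramethylammonium hydroxide (TMAH)", 15.0, 30.0, 60.0, 90.0, 0.5, 1.2},
};

int main(void) {
    for (unsigned i = 0; i < sizeof recipes / sizeof recipes[0]; i++) {
        const struct etch_recipe *r = &recipes[i];
        printf("%s: %.0f-%.0f%% in DI water, %.0f-%.0f C, %.1f-%.1f MHz sonication\n",
               r->chemistry, r->conc_min_pct, r->conc_max_pct,
               r->temp_min_c, r->temp_max_c, r->sonic_min_mhz, r->sonic_max_mhz);
    }
    return 0;
}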
A thermal interface material is described for thermal coupling of an electronic component to a thermally conductive member. The thermal interface material includes a viscoelastic polymer matrix material, fusible solder particles in the matrix material, and filler particles in the matrix material. The solder particles have a melting temperature below a selected temperature (e.g. 157°C for indium) and the filler particles have a melting temperature substantially above the selected temperature (e.g. 961°C for silver). The filler particles keep the thermal interface material intact under adverse thermal and stress conditions.
CLAIMS
What is claimed:
1. A thermal interface material for thermal coupling of an electronic component to a thermally conductive member, comprising: a viscoelastic polymer matrix material; and fusible solder particles in the matrix material, having a melting temperature below a selected temperature.
2. The thermal interface material of claim 1 wherein the matrix material comprises between 1 and 20% by weight.
3. The thermal interface material of claim 2 wherein the matrix material comprises approximately 8% by weight.
4. The thermal interface material of claim 1 wherein the matrix material is selected from the group consisting of a silicone, an amino epoxy, an acrylate, an olefin resin, a low-viscosity vinyl and a phase-change material.
5. The thermal interface material of claim 4 wherein the matrix material is silicone.
6. The thermal interface material of claim 5 wherein the solder particles comprise between 1 and 99% by weight.
7. The thermal interface material of claim 6 wherein the solder particles comprise at least 5% by weight.
8. The thermal interface material of claim 7 wherein the solder particles comprise between 25 and 90% by weight.
9. The thermal interface material of claim 1 wherein the solder particles are selected from the group consisting of In, InSn, InAg, SnAg, SnAgCu, SnBi, InSnBi, InTi, InZr, InTiCeSe, and InAgTiSeCe.
10. The thermal interface material of claim 1 wherein the matrix material is silicone and the solder particles do not substantially attack the silicone when the solder particles melt.
11. The thermal interface material of claim 1 wherein the solder particles have a melting temperature between 60 and 300°C.
12. The thermal interface material of claim 11 wherein the solder particles have a melting temperature of approximately 157°C.
13. The thermal interface material of claim 1 wherein the solder particles have widths of between 0.2 and 100 microns.
14. The thermal interface material of claim 1, further comprising: filler particles in the matrix material having a melting temperature above the selected temperature.
15. The thermal interface material of claim 14 wherein the filler particles comprise between 1 and 95% of the thermal interface material by weight.
16. The thermal interface material of claim 15 wherein the filler particles comprise at least 10% by weight.
17. The thermal interface material of claim 16 wherein the filler particles comprise approximately 15% by weight.
18. The thermal interface material of claim 16 wherein the solder particles and the filler particles comprise between 50 and 95% by weight.
19. The thermal interface material of claim 18 wherein the solder particles and the filler particles comprise approximately 92% by weight.
20. The thermal interface material of claim 16 wherein the filler particles are selected from the group consisting of Ni, Cu, Ag, Ag/Cu, Sn, graphite and Al.
21. The thermal interface material of claim 20 wherein the filler particles are Al.
22. The thermal interface material of claim 16 wherein the filler particles have a melting temperature above 350°C.
23. The thermal interface material of claim 16 wherein the filler particles have a melting temperature which is at least 100°C above a melting temperature of the solder particles.
24. The thermal interface material of claim 16 wherein the filler particles have a melting temperature which is at least 200°C above a melting temperature of the solder particles.
25.
A thermal interface material for thermal coupling of an electronic component to a thermally conductive member, comprising: a viscoelastic polymer matrix material; fusible solder particles in the matrix material, having a melting temperature below 200°C, that do not substantially attack the matrix material when the solder particles are melted; and filler particles in the matrix material, having a melting temperature above 400°C.
26. The thermal interface material of claim 25 wherein the matrix material is silicone.
27. The thermal interface material of claim 26 wherein the filler particles are aluminum.
28. An electronic assembly comprising: an electronic component which generates heat when operated; a thermally conductive member spaced from the electronic component; and a thermal interface material between the electronic component and the thermally conductive member, the thermal interface material including a viscoelastic polymer matrix material, solder particles that are fused together so as to provide an unbroken thermal path for heat to conduct from the electronic component to the thermally conductive member and having a melting temperature below a selected temperature, and filler particles in the matrix material having a melting temperature above the selected temperature.
29. The electronic assembly of claim 28 wherein the filler particles have a melting temperature which is at least 100°C above a melting temperature of the solder particles.
30. The electronic assembly of claim 29 wherein at least one of the filler particles is in contact with and entirely surrounded by one of the solder particles.
THERMAL INTERFACE MATERIAL AND ELECTRONIC ASSEMBLY HAVING SUCH A THERMAL INTERFACE MATERIAL

BACKGROUND OF THE INVENTION
1). Field of the Invention
[0001] This invention relates to a thermal interface material for thermal coupling of an electronic component to a thermally conductive member, and to an electronic assembly which has such a thermal interface material.
2). Discussion of Related Art
[0002] Integrated circuits are manufactured on semiconductor wafers, which are subsequently sawed or "diced" into individual dice. Such a die may have solder bump contacts on the integrated circuit. The solder bump contacts are located downwardly onto contact pads of a package substrate. Electronic signals can be provided through the solder bump contacts to and from the integrated circuit. Operation of the integrated circuit causes heating thereof. Heat is conducted to an upper surface of the die and has to be conducted or convected away to maintain the temperature of the integrated circuit below a predetermined level for the purpose of maintaining functional integrity of the integrated circuit.
[0004] A heat spreader is usually located above the die and thermally coupled to the die by a fluid thermal interface material such as thermally conductive grease. However, the thermally conductive grease has only a moderate thermal conductivity and thus provides a substantial thermal barrier for heat transferring from the die to the heat spreader.

BRIEF DESCRIPTION OF THE DRAWINGS
[0005] The invention is described by way of example with reference to the accompanying drawings wherein:
[0006] Figure 1 is a cross-sectional side view with a thermal interface material located between an electronic component and a thermally conductive member;
[0007] Figure 2 is a view similar to Figure 1 after heating and subsequent cooling of the thermal interface material to cause fusing and agglomeration of solder particles thereof;
[0008] Figure 3 is a cross-sectional side view of an electronic assembly including the thermal interface material; and
[0009] Figure 4 is a graph of temperature against time illustrating an example of a cure and reflow temperature profile of the thermal interface material.

DETAILED DESCRIPTION OF THE INVENTION
General Description
[0010] Figure 1 of the accompanying drawings illustrates a thermal interface material 10 which is inserted between and used for thermal coupling of an electronic component 12 to a thermally conductive member 14. The thermal interface material 10 includes a viscoelastic polymer matrix material 16, fusible solder particles 18 in the matrix material 16, and filler particles 20 in the matrix material 16. The solder particles 18 have a melting temperature below a selected temperature and the filler particles 20 have a melting temperature above the selected temperature. The solder particles 18 will thus melt when the temperature increases to above the selected temperature but the filler particles 20 will not melt.
[0011] The matrix material 16 may comprise between 1% and 20% of the thermal interface material 10 by weight, and preferably comprises approximately 8% by weight.
[0012] The matrix material 16 may be a silicone, an amino epoxy, an acrylate, an olefin resin, a low-viscosity vinyl or a phase-change material, and is preferably silicone.
The solder particles 18 may comprise between 1% and 99% of the thermal interface material 10 by weight, preferably at least 5% by weight, and more preferably between 25% and 90% by weight. The solder particles 18 preferably have a melting temperature of between 60 and 300°C. The solder particles 18 may be made of pure solder compositions such as indium (In) with a melting temperature of 157°C, or a solder alloy, such as indium tin (InSn) with a eutectic melting temperature of 118°C, indium silver (InAg) with a eutectic melting temperature of 139°C, tin silver (SnAg) or tin silver copper (SnAgCu) with eutectic melting temperatures of 217°C, tin bismuth (SnBi) with a eutectic melting temperature of 203°C, indium tin bismuth (InSnBi) with a melting temperature of between 60°C and 140°C, or indium titanium (InTi), indium zirconium (InZr), indium titanium cerium selenium (InTiCeSe), or indium silver titanium cerium selenium (InAgTiSeCe), with melting temperatures between 145°C and 165°C, etc.
[0015] The solder particles 18 may have diameters of between 0.2 and 100 microns. The solder particles 18 may be a mixture of fine and coarse particles.
[0016] The filler particles 20 may comprise between 1% and 95% of the thermal interface material 10 by weight, more preferably at least 10% by weight.
[0017] The solder particles 18 and the filler particles 20 together preferably comprise between 50% and 99% of the thermal interface material 10 by weight, and preferably comprise approximately 92% by weight.
[0018] The filler particles 20 (either fusible, non-fusible or ceramic particles) preferably have a melting temperature above 350°C and more preferably between 800°C and 1200°C. The filler particles 20 preferably have a melting temperature which is at least 100°C, more preferably at least 200°C, above a melting temperature of the solder particles 18. The filler particles 20 may be nickel (Ni), copper (Cu) with a melting temperature of 1084°C, silver (Ag) with a melting temperature of 961°C, silver copper (Ag/Cu), tin (Sn), or graphite, and preferably are aluminum (Al) with a melting temperature of 660°C. Examples of non-fusible fillers would be boron nitride, aluminum nitride, silicon carbide, aluminum oxide, graphite, carbon fiber, carbon nanotubes, or diamond. The whole assembly, including the electronic component 12, the thermally conductive member 14, and the thermal interface material 10, is inserted into a furnace which heats the assembly from room temperature to a temperature above which the solder particles 18 melt. The solder particles 18 fuse and agglomerate together as shown in Figure 2. Agglomeration is initiated by the fine ones of the solder particles 18. The temperature to which the assembly is heated is, however, maintained below a temperature at which the filler particles 20 melt. The assembly is then cooled to a temperature below the melting temperature of the solder particles 18 so that they solidify.
[0020] The temperature is further lowered to a selected temperature above room temperature at which the matrix material 16 cures. Cross-linking occurs between polymer chains of the matrix material 16 while it is being cured to enhance the viscoelastic properties of the matrix material 16. The matrix material 16 may alternatively be a non-curable resin such as a phase change material, which crystallizes and thereby solidifies at room temperatures.
[0021] The temperature is then further lowered to room temperature.
In the resulting structure, the solder particles 18 are agglomerated together and have large surfaces contacting both the electronic component 12 and the thermally conductive member 14 so as to provide an unbroken path through which heat can conduct from the electronic component 12 through the now-consolidated solder particles 18 to the thermally conductive member 14. The matrix material 16 has the ability to absorb stresses on the material. However, without the filler particles 20, the thermal interface material 10 may tend to flow out from between the electronic component 12 and the thermally conductive member 14 during thermal cycling and/or when exposed to high humidity. The filler particles 20 provide the necessary strength to prevent the thermal interface material 10 from flowing out from between the electronic component 12 and the thermally conductive member 14 under such conditions. The filler particles 20 thus keep the thermal interface material 10 intact during adverse stress and thermal conditions.
[0022] Figure 3 illustrates an assembly 30 including the electronic component 12, the thermally conductive member 14, and the thermal interface material 10. The electronic component 12 is a semiconductor die (hereafter referred to as a "die 12") having an integrated circuit formed in and on a lower surface thereof. Solder bump contacts 32 are formed on the integrated circuit. The assembly 30 further includes a package substrate 34 having contact pads (not shown) on an upper surface thereof. Each contact 32 is located on a respective contact pad. The combination of the package substrate 34 and the die 12 is then inserted into a furnace so that the contacts 32 melt, and is then cooled so that the contacts 32 secure the die 12 to the package substrate 34. The thermally conductive member 14 is made of metal or ceramic and forms part of a metal cap having sides 36 extending downward from edges of the thermally conductive member 14 past the die 12 to the substrate 34. The thermal interface material 10 is in the form shown in Figure 1 when the cap is located over the die 12. Only then is the assembly 30 located in a furnace so as to transform the thermal interface material 10 into the form shown in Figure 2.

Example
[0024] An example of the thermal interface material 10 is now given.
[0025] The matrix material 16 is silicone comprising 8% of the thermal interface material 10 by weight. The solder material 18 is indium comprising 77% of the thermal interface material 10 by weight. Indium has a melting temperature of 157°C and does not attack silicone when melted at a temperature above 157°C. The filler particles 20 are made of aluminum comprising 15% of the thermal interface material 10 by weight. The solder particles 18 and the filler particles 20 thus comprise approximately 92% of the thermal interface material 10 by weight. Aluminum has a melting temperature of approximately 660°C. The filler particles 20 thus melt at a temperature which is approximately 500°C higher than the melting temperature of the solder particles 18.
[0026] Heat is generated by the die 12 and transferred through the solder particles 18 to the thermally conductive member 14. Differences in thermal expansion of the die 12 and the thermally conductive member 14 cause stresses on the material that are primarily absorbed by the viscoelastic matrix material 16.
[0027] Figure 4 illustrates a thermal cycle of the silicone/indium/aluminum composition.
The composition is heated from room temperature of about 30°C to approximately 170°C, which is above the melting temperature of indium, so that the indium solder particles 18 melt. The composition is maintained at 170°C for approximately two minutes, i.e., until sufficient agglomeration has occurred. The composition is then cooled to a temperature of approximately 125°C, which is below the solder material's melting point, and the solder particles solidify. The silicone polymer cures at 125°C for about one hour. Curing time and temperature may be varied and are related to one another.
[0028] While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the current invention, and that this invention is not restricted to the specific constructions and arrangements shown and described, since modifications may occur to those ordinarily skilled in the art.
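As an illustrative aside, the silicone/indium/aluminum example and the Figure 4 cure-and-reflow cycle described above can be restated as data. The sketch below checks the example composition and prints the profile segments; all numbers come from the description, while the names and the program structure are invented for illustration and are not part of the disclosure.

#include <stdio.h>

/* Example composition from the description: 8% silicone matrix,
 * 77% indium solder (melts at 157 C), 15% aluminum filler (melts at ~660 C). */
static const double matrix_pct = 8.0, solder_pct = 77.0, filler_pct = 15.0;
static const double solder_melt_c = 157.0, filler_melt_c = 660.0;

/* Segments of the Figure 4 cure/reflow profile described above. */
struct segment { const char *phase; double target_c; double hold_min; };
static const struct segment profile[] = {
    {"ramp from ~30 C room temperature", 170.0, 0.0},  /* above indium's melt */
    {"hold for agglomeration",           170.0, 2.0},
    {"cool below solder melting point",  125.0, 0.0},  /* indium solidifies */
    {"cure silicone matrix",             125.0, 60.0},
    {"cool to room temperature",          30.0, 0.0},
};

int main(void) {
    /* Solder plus filler should total approximately 92% by weight, and the
     * filler should melt well above the solder (at least 200 C higher). */
    printf("matrix + solder + filler = %.0f%% by weight\n",
           matrix_pct + solder_pct + filler_pct);
    printf("solder + filler = %.0f%% by weight\n", solder_pct + filler_pct);
    printf("filler melts %.0f C above solder\n", filler_melt_c - solder_melt_c);

    for (unsigned i = 0; i < sizeof profile / sizeof profile[0]; i++)
        printf("%s: %.0f C, hold %.0f min\n",
               profile[i].phase, profile[i].target_c, profile[i].hold_min);
    return 0;
}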
PROBLEM TO BE SOLVED: To provide a system and method of booting a wireless device.
SOLUTION: The system and method includes reading a factory test mode memory item when the wireless device is powered on, determining whether the factory test mode memory item is set to yes, and preventing an operating system of the wireless device from booting when the factory test mode memory item is set to yes. The system and method further includes remaining in a boot loader mode, enumerating a port as a diagnostic serial port, and receiving one or more diagnostic packets. Moreover, the system and method includes allowing the operating system of the wireless device to boot when the factory test mode memory item is set to no.
1. A method of booting a wireless device, comprising: reading a factory test mode memory item when the wireless device is powered on; determining whether the factory test mode memory item is set to a first state; and inhibiting booting of an operating system of the wireless device when the factory test mode memory item is set to the first state.
2. The method of claim 1, further comprising remaining in a boot loader mode.
3. The method of claim 2, further comprising enumerating a port as a diagnostic serial port and receiving one or more diagnostic packets.
4. The method of claim 1, further comprising allowing the operating system of the wireless device to boot when the factory test mode memory item is set to a second state.
5. The method of claim 1, further comprising: determining whether a boot count is greater than or equal to a threshold when the factory test mode memory item is set to the second state; and allowing the operating system of the wireless device to boot when the boot count is greater than or equal to the threshold.
6. The method of claim 5, further comprising: changing the boot count when the boot count is not greater than or equal to the threshold; and setting the factory test mode memory item to the first state.
7. A wireless device comprising: a memory; and a processor connected to the memory, the processor being operable to perform: reading a factory test mode memory item when the wireless device is powered on; determining whether the factory test mode memory item is set to a first state; and inhibiting booting of an operating system of the wireless device when the factory test mode memory item is set to the first state.
8. The wireless device of claim 7, wherein the processor is operable to perform remaining in a boot loader mode.
9. The wireless device of claim 8, wherein the processor is operable to perform enumerating a port as a diagnostic serial port and receiving one or more diagnostic packets.
10. The wireless device of claim 7, wherein the processor is operable to perform allowing the operating system of the wireless device to boot when the factory test mode memory item is set to a second state.
11. The wireless device of claim 7, wherein the processor is operable to perform: determining whether a boot count is greater than or equal to a threshold when the factory test mode memory item is set to the second state; and allowing the operating system of the wireless device to boot when the boot count is greater than or equal to the threshold.
12. The wireless device of claim 11, wherein the processor is operable to perform: changing the boot count when the boot count is not greater than or equal to the threshold; and setting the factory test mode memory item to the first state.
13. A wireless device comprising: means for reading a factory test mode memory item when the wireless device is powered on; means for determining whether the factory test mode memory item is set to a first state; and means for inhibiting booting of an operating system of the wireless device when the factory test mode memory item is set to the first state.
14. The wireless device of claim 13, further comprising means for remaining in a boot loader mode.
15.
15. The wireless device of claim 14, further comprising means for enumerating a port as a diagnostic serial port and means for receiving one or more diagnostic packets.
16. The wireless device of claim 13, further comprising means for allowing the operating system of the wireless device to boot when the factory test mode memory item is set to a second state.
17. The wireless device of claim 13, further comprising: means for determining whether a boot count is greater than or equal to a threshold when the factory test mode memory item is set to a second state; and means for allowing the operating system of the wireless device to boot when the boot count is greater than or equal to the threshold.
18. The wireless device of claim 17, further comprising: means for changing the boot count when the boot count is not greater than or equal to the threshold; and means for setting the factory test mode memory item to the first state.
19. A computer program product comprising a computer readable medium, the computer readable medium comprising: at least one instruction for reading a factory test mode memory item when a wireless device is powered on; at least one instruction for determining whether the factory test mode memory item is set to a first state; and at least one instruction for inhibiting booting of an operating system of the wireless device when the factory test mode memory item is set to the first state.
20. The computer program product of claim 19, wherein the computer readable medium further comprises at least one instruction for remaining in a boot loader mode.
21. The computer program product of claim 20, wherein the computer readable medium further comprises at least one instruction for enumerating a port as a diagnostic serial port and at least one instruction for receiving one or more diagnostic packets.
22. The computer program product of claim 19, wherein the computer readable medium further comprises at least one instruction for allowing the operating system of the wireless device to boot when the factory test mode memory item is set to a second state.
23. The computer program product of claim 19, wherein the computer readable medium further comprises at least one instruction for determining whether a boot count is greater than or equal to a threshold when the factory test mode memory item is set to a second state, and at least one instruction for allowing the operating system of the wireless device to boot when the boot count is greater than or equal to the threshold.
24. The computer program product of claim 23, wherein the computer readable medium further comprises at least one instruction for changing the boot count when the boot count is not greater than or equal to the threshold, and at least one instruction for setting the factory test mode memory item to the first state.
System and method for reducing factory program time for wireless devices
The present disclosure relates generally to wireless devices, and more particularly, to systems and methods for reducing factory program time for wireless devices.
In manufacturing a wireless device such as a cellular telephone, a manufacturer may perform a number of tests on the wireless device. These tests may include calibrating the wireless device, configuring the wireless device, loading software, or a combination thereof. Generally, the wireless device needs to be rebooted after each test. In addition, the manufacturer programs a large number of wireless devices and stores those programmed wireless devices until delivery. Over time, it becomes necessary to reprogram the stored wireless devices with newer software or carrier-specific software. When reprogramming a stored wireless device, it is necessary to reboot the stored wireless device several times. If there are a large number of wireless devices, the cumulative reboot time becomes very long.
Thus, there is a need for systems and methods for reducing program time associated with a wireless device.
In the figures, like numerals indicate like parts throughout the various figures unless otherwise indicated.
FIG. 1 is a diagram of a system for testing a wireless device.
FIG. 2 is a diagram of a wireless device.
FIG. 3 is a diagram of a processor system associated with a wireless device.
FIG. 4 is a flow chart illustrating a method of testing a wireless device.
FIG. 5 is a flow chart illustrating a method of booting a wireless device.
FIG. 6 is a flow chart illustrating a method of monitoring the boot count associated with a wireless device.
The term "exemplary" is used herein to mean "serving as an example, instance, or illustration." Aspects described herein as "exemplary" are not necessarily to be construed as preferred or advantageous over other aspects.
In this description, the term "application" includes files with executable content, such as object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that need to be opened and other data files that need to be accessed.
The term "content" may also include files with executable content, such as object code, scripts, byte code, markup language files, and patches. In addition, "content" referred to herein may also include files that are not executable in nature, such as documents that need to be opened and other data files that need to be accessed.
In this description, the terms "communication device," "wireless device," "wireless telephone," and "wireless communication device" are used interchangeably. With the advent of third generation ("3G") wireless technologies, the availability of more bandwidth has enabled electronic devices with more wireless capabilities. Thus, a wireless device may be a cellular telephone, a pager, a PDA, a smart phone, a navigation device, or a computer with a wireless connection.
Referring first to FIG. 1, a wireless device test system is shown, generally designated 100. As shown, system 100 may include a first test station 102, a second test station 104, and an Nth test station 106. The first test station 102 may include a first computer 108 that includes a processor 110 and a memory 112. The second test station 104 also includes a second computer 114, which may include a processor 116 and a memory 118.
Further, the Nth test station 106 may include the Nth computer 120. As shown, the Nth computer 120 may include a processor 122 and a memory 124.
As illustrated in FIG. 1, system 100 may include a test server 126 connected to the computers 108, 114, 120 by a network interface 128. The test server 126 may include a processor 130 and a memory 132 coupled thereto.
FIG. 1 further shows that the system 100 can include at least one wireless device 134 in the first test station 102. The wireless device 134 is installed in the first test station 102 and connected or otherwise coupled to the first computer 108 in the first test station 102. During testing, the first computer 108 may transmit a diagnostic signal to the wireless device 134, for example via a universal serial bus ("USB") connection or any other wired or wireless communication connection. The diagnostic signal may be used to test the wireless device 134, calibrate the wireless device 134, configure the wireless device 134, or a combination thereof.
During operation of system 100, the processors 110, 116, 122 may execute program instructions stored in the memories 112, 118, 124 to perform one or more of the various method steps described herein. For example, in one aspect, system 100 may perform one or more of the functions described herein when the processors 110, 116, 122 execute program instructions stored in the memories 112, 118, 124. In another aspect, the program instructions may be stored on a computer readable medium, such as, for example, a floppy disk, a compact disk (CD), a memory card, a flash memory device, a ROM, or any other type of memory device. The program instructions may be loaded from the test server 126 into the memories 112, 118, 124 via the network interface 128.
Referring to FIG. 2, an exemplary, non-limiting aspect of a wireless telephone is shown, generally designated 220. The wireless telephone 220 may include an on-chip system 222 that further includes a digital signal processor 224 and an analog signal processor 226 coupled together. As shown in FIG. 2, a display controller 228 and a touch screen controller 230 may be coupled to the digital signal processor 224. Similarly, a touch screen display 232 outside of the on-chip system 222 may be coupled to the display controller 228 and the touch screen controller 230.
FIG. 2 further shows a video encoder 234, e.g., a phase alternating line ("PAL") encoder, a sequential couleur a memoire ("SECAM") encoder, or a National Television System Committee ("NTSC") encoder, coupled to the digital signal processor 224. Additionally, a video amplifier 236 may be coupled to the video encoder 234 and the touch screen display 232. Additionally, a video port 238 may be coupled to the video amplifier 236. As illustrated in FIG. 2, a universal serial bus ("USB") controller 240 may be coupled to the digital signal processor 224. In addition, a USB port 242 may be coupled to the USB controller 240. A memory 244 and a subscriber identity module ("SIM") card 246 may be coupled to the digital signal processor 224. In particular aspects, the memory 244 includes a factory test mode ("FTM") memory item, such as, for example, NV_FTM_MODE. In addition, the memory 244 includes a boot count, such as, for example, NV_FTM_MODE_BOOT_COUNT. Further, as shown in FIG. 2, a digital camera 248 may be coupled to the digital signal processor 224. In an exemplary aspect, the digital camera 248 is a charge-coupled device ("CCD") camera or a complementary metal-oxide semiconductor ("CMOS") camera.
As shown in FIG. 2, a stereo audio CODEC 250 may be coupled to the analog signal processor 226.
Further, an audio amplifier 252 may be coupled to the stereo audio CODEC 250. In a non-limiting aspect, a first stereo speaker 254 and a second stereo speaker 256 may be coupled to the audio amplifier 252. FIG. 2 further shows that a microphone amplifier 258 can also be coupled to the stereo audio CODEC 250. Additionally, a microphone 260 can be coupled to the microphone amplifier 258. In particular aspects, a frequency modulation ("FM") radio tuner 262 may be coupled to the stereo audio CODEC 250. Additionally, an FM antenna 264 can be coupled to the FM radio tuner 262. Further, stereo headphones 266 can be coupled to the stereo audio CODEC 250.
FIG. 2 also illustrates that a radio frequency ("RF") transceiver 268 can be coupled to the analog signal processor 226. An RF switch 270 may be coupled to the RF transceiver 268 and an RF antenna 272. As shown in FIG. 2, a keypad 274 may be coupled to the analog signal processor 226. Additionally, a monophonic headset with a microphone 276 may be coupled to the analog signal processor 226. Additionally, a vibrator device 278 may be coupled to the analog signal processor 226. FIG. 2 further illustrates that a power supply 280 can be coupled to the on-chip system 222. In particular aspects, the power supply 280 is a direct current ("DC") power supply that provides power to the various components of the wireless telephone 220 that require power. Further, in particular aspects, the power supply is a rechargeable DC battery or a DC power supply derived from an AC-to-DC converter connected to an alternating current ("AC") power source.
As illustrated in FIG. 2, the touch screen display 232, the video port 238, the USB port 242, the camera 248, the first stereo speaker 254, the second stereo speaker 256, the microphone 260, the FM antenna 264, the stereo headphones 266, the RF switch 270, the RF antenna 272, the keypad 274, the monaural headset 276, the vibrator 278, and the power supply 280 are all external to the on-chip system 222.
In one or more aspects, the processors 224, 226 can include logic to execute machine-readable instructions. In other words, the processors 224, 226 may operate as a means for executing one or more computer programs that may include the method steps disclosed herein. One or more computer programs may be stored in the memory 244, which is accessible to the processors 224, 226. In particular aspects, the memory 244 may include random access memory ("RAM"), read-only memory ("ROM"), flash memory, electrically erasable programmable read-only memory ("EEPROM"), any other suitable type of memory, or a combination thereof.
Referring to FIG. 3, a processor system is shown, generally designated 300. As shown, the processor system 300 may include a first processor 302 and a second processor 304. The first processor 302 may include system software 306, such as, for example, the all mode system software ("AMSS") produced by Qualcomm Incorporated of San Diego, California. The system software 306 may include a slave diagnostics ("Diag") task 308. Additionally, the first processor 302 may include an original equipment manufacturer secondary boot loader ("OEMSBL") 310.
As shown in FIG. 3, the second processor 304 may include an operating system ("OS") 312. The OS 312 may control the operation of the wireless device in which the processor system 300 is installed. The second processor 304 may include a boot loader 314, such as, for example, an e-boot.
Additionally, the second processor 304 may include or be connected to a universal serial bus ("USB") port 316.
In particular aspects, when the wireless device in which the processor system 300 is installed is booted, the second processor 304, e.g., the boot loader 314 therein, reads a non-volatile ("NV") item stored in a memory accessible to the boot loader 314 in order to determine whether the wireless device is in a factory test mode ("FTM"). When the NV item is set to FTM, or if the NV item is absent, the second processor 304 may remain in boot loader mode and enumerate USB port 316 as a diag serial port. Diagnostic packets may then be forwarded from the second processor 304 to the first processor 302 in which the slave diag task 308 resides. If the NV item is not set to FTM, the boot loader 314 may continue to boot the OS 312.
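In outline, the boot gate just described amounts to a few branches in the boot loader. The following C sketch is an illustration only, not the disclosed implementation: the platform hooks (nv_read, usb_enumerate_as_diag_serial_port, diag_port_receive, forward_to_slave_diag_task, boot_operating_system), the NV item identifier, and the yes/no encoding are all assumptions, since the disclosure does not define this API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical NV item values; the disclosure names the item NV_FTM_MODE
 * but does not define an encoding, so these constants are assumptions. */
#define NV_FTM_MODE_NO   0u
#define NV_FTM_MODE_YES  1u
#define NV_FTM_MODE_ITEM 0x1234u /* assumed item identifier */

/* Platform hooks a real boot loader would supply; declared as stubs here. */
extern bool   nv_read(uint32_t item_id, uint32_t *value); /* false if absent */
extern void   usb_enumerate_as_diag_serial_port(void);
extern size_t diag_port_receive(uint8_t *buf, size_t len);
extern void   forward_to_slave_diag_task(const uint8_t *buf, size_t len);
extern void   boot_operating_system(void);

void boot_loader_main(void)
{
    uint32_t ftm = NV_FTM_MODE_YES;
    bool present = nv_read(NV_FTM_MODE_ITEM, &ftm);

    /* Treat a missing item the same as FTM=yes, as after a flash erase. */
    if (!present || ftm == NV_FTM_MODE_YES) {
        usb_enumerate_as_diag_serial_port();
        for (;;) {                      /* remain in boot loader mode */
            uint8_t pkt[512];
            size_t n = diag_port_receive(pkt, sizeof pkt);
            if (n > 0)
                forward_to_slave_diag_task(pkt, n); /* hand off to slave diag task */
        }
    }
    boot_operating_system();            /* NV item set to no: boot the OS */
}
```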
Referring to FIG. 4, a method of testing a wireless device is shown, generally designated 400. Beginning at block 402, a wireless device may be installed at a test station. In particular aspects, the wireless device is the wireless device 134 shown in FIG. 1, the wireless device 220 shown in FIG. 2, any other wireless device, or a combination thereof. At block 404, the wireless device may be connected to a personal computer ("PC") in the test station. Moving to block 406, a do loop is entered in which, when the wireless device is powered on, one or more of the following steps may be performed. In particular aspects, one or more of the method steps may be performed by the wireless device, by an external computer connected to the wireless device, or by a combination thereof.
At block 408, the wireless device may read the FTM memory item. Thereafter, at decision step 410, the wireless device may determine whether it is in FTM. The wireless device can make this determination by reading the FTM memory item and determining whether the FTM memory item is set to yes or no.
If the wireless device is in FTM, method 400 moves to block 412 where the wireless device inhibits booting of the OS in the wireless device. Thereafter, at block 414, the wireless device receives one or more test signals, calibration signals, configuration signals, software, or a combination thereof from the PC. The test signals may be used, for example, to adjust components of the wireless device such as a radio frequency ("RF") antenna associated with the wireless device. These test signals can ensure that the RF antenna provides the correct sensitivity level and is tuned to the appropriate frequency band. A wireless device may include a number of items that need to be tested, calibrated, or otherwise configured.
From block 414, method 400 moves to decision step 416 where the wireless device may determine whether it has been rebooted. If so, method 400 may return to block 408 and continue as described herein. At decision step 416, if the wireless device has not been rebooted, method 400 moves to block 418 where the wireless device is powered off. Moving to decision step 420, it may be determined whether the wireless device should move to the next test station for further testing, calibration, and the like. If it is to move to the next test station, method 400 may return to block 402 and continue as described herein. If not, method 400 may end at state 422.
Returning to decision step 410, if the wireless device is not in FTM, method 400 may proceed to block 424 where the wireless device may determine the boot count associated with the wireless device. Proceeding to decision step 426, the wireless device may determine whether the boot count is equal to a threshold. If the boot count is equal to the threshold, method 400 proceeds to block 430 where the wireless device allows booting of the OS in the wireless device. Accordingly, method 400 may proceed to decision step 420 and continue as described herein.
Returning to decision step 426, if the boot count is not equal to the threshold, method 400 may move to block 432. At block 432, the wireless device may change or otherwise adjust the boot count by one unit. For example, the wireless device may increment the boot count by one unit or decrement the boot count by one unit. Thereafter, at block 434, the wireless device may turn on the FTM, for example, by setting the FTM memory item to yes. From block 434, method 400 proceeds to decision step 420 and continues as described herein.
In certain aspects, the method steps of block 434 are optional. For example, the wireless device manufacturer may set the count to 20 but only test the phone in a manner that requires 15 boots. The manufacturer may store the wireless device, for example, on a shelf, in anticipation of a later upgrade. If no upgrade turns out to be required, the manufacturer can ship the wireless device directly from storage without resetting its boot count. The optional setting that places the phone in FTM can be managed by a computer, such as a PC, connected to the wireless device.
FIG. 5 illustrates a method, generally designated 500, for booting a wireless device. As illustrated, method 500 begins at block 502 with a do loop in which, when the wireless device is powered on, one or more of the following steps may be performed. In particular aspects, one or more of the method steps may be performed by the wireless device, by an external computer connected to the wireless device, or by a combination thereof. At block 504, the wireless device, e.g., a processor therein, may read the NV memory item for the FTM indicator. At decision step 506, the wireless device may determine whether the FTM indicator is set to yes or is not present, e.g., following a flash erase. If the FTM indicator is not set to yes, e.g., is set to no, then method 500 may proceed to block 508. At block 508, the wireless device may boot an operating system within the wireless device. Accordingly, method 500 may end at state 510.
Returning to decision step 506, if the FTM indicator is set to yes or is absent, method 500 continues to block 512. At block 512, the wireless device may remain in boot loader mode, e.g., e-boot. Thereafter, at block 514, the wireless device may enumerate the USB port as a diagnostic serial port. It can be appreciated that the wireless device can enumerate any wired or wireless communication port as a diagnostic serial port. At block 516, the wireless device may receive one or more diag/FTM packets. Moving to block 518, the wireless device may forward the diag/FTM packets to a slave diag task in the wireless device, e.g., in another processor of the wireless device. Thereafter, method 500 may end at state 510.
With reference to FIG. 6, a method of monitoring a boot count associated with a wireless device is illustrated and generally designated 600. In particular aspects, the method steps may be performed by the wireless device, by an external computer connected to the wireless device, or by a combination thereof. As illustrated, method 600 begins with a do loop in which one or more of the following steps may be performed for the first N boots. At block 604, the wireless device enters FTM. Thereafter, at decision step 606, the wireless device, e.g., a processor therein, may determine whether the FTM NV item is set to on. If it is set to on, method 600 moves to block 608 and the wireless device remains in FTM. Thereafter, method 600 may move to block 610 where the boot count may be decremented when the wireless device is reset. Method 600 may then end at state 612.
Returning to decision step 606, if the FTM NV item is not set to on, method 600 proceeds to block 614 and the wireless device waits for a period of time, e.g., 1 second, 2 seconds, 3 seconds, 4 seconds, or 5 seconds, for the FTM NV item to be set by an NV support tool, e.g., the Qualcomm Product Support Tool ("QPST") NV tool. In another aspect, the FTM NV item may be configured by some other external support tool. Next, at decision step 616, the wireless device may determine whether the FTM NV item has been configured by the NV support tool. If so, method 600 may proceed to block 608 and continue as described herein. If the FTM NV item has not been configured by the NV support tool, method 600 may move to block 618 and the wireless device may boot the operating system in the wireless device. Accordingly, method 600 may move to block 610 and continue as described herein.
It can be appreciated that a computer program may comprise the method steps described above. Additionally, the computer program may be executed within the wireless device to control booting of an operating system within the wireless device. For example, the operating system is prevented from booting during testing when the wireless device is in FTM. If the wireless device is being tested repeatedly and is being reset (i.e., rebooted), preventing the operating system from being booted on each reset (i.e., reboot) can provide considerable time savings during the manufacturing and testing process. Furthermore, the boot count may be set to a specific value, such as 10, for example. The boot count is decremented each time the wireless device is powered on, and the wireless device automatically inhibits OS booting until the boot count reaches zero. When the boot count reaches zero, the wireless device may allow the OS to boot. In addition, the boot count may be reset to a new value, in which case the wireless device continues to inhibit OS booting until the boot count again reaches zero.
During servicing, after the wireless device has been sold to a user, the wireless device may be placed in FTM at a service center so that it can be tested. As described herein, the wireless device may be placed in FTM using the QPST NV tool.
It should be understood that the method steps described herein need not necessarily be performed in the order described. Furthermore, terms such as "following," "after," and "next" are not intended to limit the order of the steps. These terms are only used to guide the reader through the description of these method steps.
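Read together, FIG. 4 through FIG. 6 describe a count-gated boot. A minimal C sketch of the per-power-on monitoring of method 600 follows; the NV item identifiers, the helper functions, and the two-second tool wait are assumptions for illustration, not the disclosed implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed NV items and helpers; the disclosure names NV_FTM_MODE and
 * NV_FTM_MODE_BOOT_COUNT but does not define this API. */
extern uint32_t nv_read_u32(uint32_t item_id);
extern void     nv_write_u32(uint32_t item_id, uint32_t value);
extern bool     wait_for_ftm_set_by_support_tool(uint32_t timeout_ms);
extern void     remain_in_ftm(void);
extern void     boot_operating_system(void);

#define NV_FTM_MODE            0x1234u /* assumed identifiers */
#define NV_FTM_MODE_BOOT_COUNT 0x1235u

/* Called once per power-on for the first N boots (method 600). */
void monitor_boot_count(void)
{
    /* Block 610 of the figure decrements on reset; persisting the new
     * count before transferring control keeps this sketch simple. */
    uint32_t count = nv_read_u32(NV_FTM_MODE_BOOT_COUNT);
    if (count > 0u)
        nv_write_u32(NV_FTM_MODE_BOOT_COUNT, count - 1u);

    if (nv_read_u32(NV_FTM_MODE) != 0u) {
        remain_in_ftm();                 /* FTM NV item set to on */
    } else if (wait_for_ftm_set_by_support_tool(2000u)) {
        remain_in_ftm();                 /* set by, e.g., the QPST NV tool */
    } else {
        boot_operating_system();         /* count-gated normal boot */
    }
}
```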
In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over a computer readable medium as one or more instructions or code. Computer readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer readable media.
While selected aspects have been illustrated and described in detail, it will be understood that various substitutions and modifications may be made therein without departing from the spirit and scope of the invention, as defined by the following claims.
Briefly, in accordance with one embodiment of the invention, an integrated circuit comprises a first stage that provides differential outputs in one mode and substantially equal outputs in another mode.
What is claimed is:
1. An apparatus having an integrated circuit, the integrated circuit comprising: a first stage adapted to provide a first output voltage potential and a second output voltage potential that are substantially equal when the integrated circuit is in a first operational mode and differential when the integrated circuit is in a second operational mode, wherein the first stage further comprises an n-channel transistor and at least two p-channel transistors having a gate terminal adapted to receive an enable signal; and a second stage coupled to the first stage and comprising at least two transistors, wherein the second stage is adapted to provide a third output voltage potential and a fourth output voltage potential that are substantially equal when the integrated circuit is in the first operational mode.
2. The apparatus of claim 1, wherein the first stage is further adapted so that the first output voltage potential and the second output voltage potential are approximately equal to zero volts when the integrated circuit is in the first operational mode.
3. The apparatus of claim 1, wherein the first stage includes a first stack having a first transistor coupled to a second transistor, wherein the gate of the first transistor is adapted to receive a first input signal and the gate of the second transistor is adapted to receive a second input signal.
4. The apparatus of claim 3, wherein the first stage includes a second stack having a third transistor coupled to a fourth transistor, wherein the gate of the third transistor is adapted to receive the logical complement of the first input signal and the gate of the fourth transistor is adapted to receive the logical complement of the second input signal.
5. The apparatus of claim 1, wherein the second stage includes a first stack adapted to receive the first output voltage potential and a second stack adapted to receive the second output voltage potential.
6. The apparatus of claim 5, wherein the first stack comprises a first transistor coupled to a second transistor, and wherein the gate terminal of the first transistor is adapted to receive a first input signal and the gate terminal of the second transistor is adapted to receive a second input signal.
7. The apparatus of claim 6, wherein the second stack comprises a third transistor coupled to a fourth transistor, and wherein the gate terminal of the third transistor is adapted to receive the logical complement of the first input signal and the gate terminal of the fourth transistor is adapted to receive the logical complement of the second input signal.
8. The apparatus of claim 6, wherein the second stage further comprises a p-channel transistor coupled to the first stack.
9. An integrated circuit comprising: a first stage adapted to receive at least two input signals, wherein the first stage is adapted to provide a first output signal and a second output signal, the first output signal being differential with respect to the second output signal, and wherein the first stage further comprises at least two p-channel metal oxide semiconductor (PMOS) transistors having a gate terminal adapted to receive an enable signal; and a second stage adapted to receive at least two input signals, the first output signal, and the second output signal, wherein the second stage is further adapted to provide a third output signal and a fourth output signal, and wherein the first output signal, the second output signal, the third output signal, and the fourth output signal are substantially equal when the integrated circuit is in a first mode for all combinations of values of the at least two input signals of the first stage and the second stage.
10. The integrated circuit of claim 9, wherein the first stage and the second stage are adapted so that the third output signal represents the exclusive-or of the input signals of the first stage and the input signals of the second stage when the integrated circuit is in a second mode.
11. The integrated circuit of claim 9, further comprising a third stage adapted to receive at least two input signals, the third output signal, and the fourth output signal, wherein the third stage is adapted to provide a fifth output signal and a sixth output signal, and wherein the first output signal, the second output signal, the fifth output signal, and the sixth output signal are substantially equal when the integrated circuit is in the first mode.
12. A method of reducing the flow of current through a first stage and a second stage of an integrated circuit, comprising: selectively disabling the flow of current through at least a portion of the first stage of the integrated circuit, the first stage including two p-channel transistors; and driving a pair of differential outputs of the first stage and the second stage so that the pair of differential outputs of the first stage and the second stage are substantially equal.
13. The method of claim 12, wherein selectively disabling the flow of current includes disabling an n-channel transistor in a stack in the first stage of the integrated circuit.
14. The method of claim 13, wherein disabling an n-channel transistor includes applying a non-periodic enable signal to a gate terminal of the n-channel transistor.
15. The method of claim 12, wherein driving a pair of differential outputs includes applying a non-periodic enable signal to a portion of the first stage of the integrated circuit.
16. The method of claim 15, further comprising enabling the first stage so that the pair of differential outputs of the first stage are the logical complement of each other.
This is a continuation of application Ser. No. 09/602,667 filed on Jun. 26, 2000.
BACKGROUND
To improve the performance capability of a microprocessor, it is often desirable to increase the speed or clock rate at which the microprocessor operates. Higher clock rates may be possible by reducing the amount of time it takes for a particular circuit within the microprocessor to process its input signals and provide its output signal. To this end, various dynamic logic circuits have been developed that have improved performance characteristics as compared to some traditional complementary metal-oxide semiconductor (CMOS) circuits. Examples of such dynamic logic circuits include domino logic circuits, skew tolerant domino circuits, latched domino circuits, differential domino circuits, and the like.
However, as the operational speed of the circuits within an integrated circuit is increased, race conditions within the integrated circuit may occur. To address the problems associated with race conditions, a clock is often used to synchronize the operation of circuits within the integrated circuit relative to each other. Examples of such circuits are summarized in Chapter Four of "Low-Voltage CMOS VLSI Circuits," by Luo et al. (1999) and Chapter 5.7 of "Circuit Design for CMOS VLSI," by John P. Uyemura (1992).
The use of a clock may reduce the risk that the output provided by a fast circuit to a slower circuit changes before the slower circuit is able to properly process the output of the faster circuit. However, the use of a clock may also regulate or delay the operation of the fastest sub-circuits within an integrated circuit. This, in turn, may not allow the faster sub-circuits within the integrated circuit to take advantage of time-borrowing (e.g., processing input signals as soon as the input signals are provided to the sub-circuit). Furthermore, a clock may also be used to hold a sub-circuit in a precharge state until the sub-circuit is allowed to process information. Thus, the use of a clock may result in a circuit being in a precharge condition during a portion of the clock cycle. This may result in the integrated circuit consuming more current, which is generally not desirable in low-power applications. Thus, there is a continuing need for better ways to improve the performance of an integrated circuit while reducing its power consumption.
BRIEF DESCRIPTION OF THE DRAWINGS
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
FIG. 1 is a schematic representation of a portion of a portable device in accordance with an embodiment of the present invention; and
FIG. 2 is a circuit in accordance with a particular embodiment of the present invention.
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity.
Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the present invention. Note that in this description a "#" symbol is used to indicate the logical complement of a signal. For example, if BL is a logic "1," then BL# is a logic "0," although this invention is not limited to any particular signaling scheme. However, the "#" symbol does not mean that the signal must be the logical complement at all times. There may be alternative operational modes where BL and BL# are substantially equal, as explained in more detail hereinafter.
Turning to FIG. 1, an embodiment 100 in accordance with the present invention is described. Embodiment 100 may comprise a portable device such as a mobile communication device (e.g., cell phone), a two-way radio communication system, a one-way pager, a two-way pager, a personal communication system (PCS), a portable computer, or the like, although it should be understood that the scope and application of the present invention is in no way limited to these examples.
Embodiment 100 includes an integrated circuit 10 that may comprise, for example, a microprocessor, a digital signal processor, a microcontroller, a memory array such as static random access memory (SRAM), or the like. However, it should be understood that the scope of the present invention is not limited to these examples. Embodiment 100 may also include a memory 15, such as Flash memory, read-only memory, or the like, that may provide integrated circuit 10 with instructions or information during the operation of embodiment 100. However, it should be understood that the scope of the present invention is not limited to the type of memory used with integrated circuit 10, nor is the scope limited so as to require that instructions or data be provided by a memory. In alternative embodiments, integrated circuit 10 may contain the instructions and/or data to be processed, or the information may be provided from an external source such as from a user, as indicated in FIG. 1 as user input.
As shown in FIG. 1, integrated circuit 10 may comprise, among other things, an input block 30 that may be used to receive input signals and provide them to a core logic 50. Integrated circuit 10 may also include an output block 35 that may be used to provide the results of the processing performed by core logic 50 of integrated circuit 10. For example, output block 35 may provide information to other portions of embodiment 100 or to an output device such as a display (not shown), although the scope of the present invention is not limited in this respect. As explained in more detail hereinafter, embodiments of the present invention attempt to reduce the amount of power that is consumed by integrated circuit 10 by placing all or portions of core logic 50 into a non-conducting, standby mode when those portions of core logic 50 are not in use.
By reducing the amount of current that is consumed by core logic 50, it may be possible to reduce the amount of power consumed by integrated circuit 10 during its operation.
Referring now to FIG. 2, a circuit in accordance with a particular embodiment of the present invention is provided. In this particular embodiment, the circuit illustrated is intended to perform an exclusive-or (XOR) logic operation based on the logic value of nine inputs (labeled I1-I9). The XOR operation is often used to provide multiply-and-accumulate (MAC) instructions in DSPs and is illustrated in this particular embodiment because the XOR logic operation is considered by many to be one of the more difficult logic operations to implement. By presenting a particular embodiment using one of the more complex logic operations, it will be apparent to those skilled in the art how alternative embodiments of the present invention may implement other logic operations such as AND, NOR, OR, NAND, and the like. Again, it should be understood that the scope of the present invention is not limited to circuits that implement the XOR operation or to circuits that have nine inputs.
The circuit shown in FIG. 2 comprises a first stage 51, a second stage 52, a third stage 53, and a fourth stage 54 that may co-operate to perform a nine-input XOR operation. Again, the circuit shown in FIG. 2 may only represent a relatively small portion of core logic 50 (see FIG. 1). In this particular embodiment, stages 51-54 comprise transistors that receive one or more input signals and provide one or more output signals. The output signals of one stage may then be used by subsequent stages to perform additional logic operations. For example, the output voltage potentials of first stage 51 may be received by the drain terminals of transistors 264-267 in second stage 52, although this is not intended as a limitation upon the scope of the present invention.
In this particular embodiment, stages 51-54 may include two stacks that are arranged to provide, at least in part, the XOR logic operation. For example, the two stacks of first stage 51 may be provided by transistors 60-69 that are adapted to receive input signals (e.g., the signals labeled I1-I3 and I1#-I3#). In this particular embodiment, a stack refers to a series of n-channel transistors (e.g., transistors 60-69) that provides a current path from the voltage of power supply 20 to the ground potential. The current path provides, at least in part, one of the output voltage potentials (e.g., OUT1 or OUT1#). Of course, in this particular embodiment the actual current path is determined, at least in part, by the logical value of input signals I1-I3 and I1#-I3#. As explained below, during normal operation, the other stack provides the other differential output voltage potential. It should be understood that the scope of the present invention is not limited by the number of stacks or by the number of transistors in a particular stack. In alternative embodiments, the stacks and the number of transistors in a stack may be arranged to provide other logic operations.
As mentioned earlier, a clock signal is often used in conventional microprocessors to control the flow of information through the microprocessor and to synchronize the operation of sub-circuits within the microprocessor. In contrast, embodiments of the present invention do not require the use of a periodic clock signal to control the operation of stages 51-54.
Rather, an enable signal may be used to selectively control the operation, at least in part, of stages 51-54. As explained in more detail below, this may allow portions of the circuits within core logic 50 to take advantage of time-borrowing and operate faster than conventional circuits. Nonetheless, as shown in FIG. 1, portions of core logic 50, such as first stage 51, may receive input signals from external sources or from other circuits that are controlled by a clock, although the scope of the present invention is not limited by the source of the input signals or by the use of a clock signal to store input data.
As shown in FIG. 2, first stage 51 may receive input signals from a latch 31 that may be a D flip-flop, or the like. Latch 31 may store the input signals received by input block 30 (see FIG. 1) that are to be provided to core logic 50. A clock signal may be used to latch the value of the input signals (e.g., INPUT1-INPUT3). The use of a latching device and a clock signal may be desirable because input signals INPUT1-INPUT3 may be available to core logic 50 for a limited amount of time. In alternative embodiments, latch 31 may provide input signals INPUT1-INPUT3, labeled I1-I3, respectively, and their logical complements as well (e.g., I1#-I3#).
To reduce the amount of power consumed by the transistors in core logic 50 (e.g., transistors 60-69), it may be desirable to design integrated circuit 10 so that it may operate in at least two different operational modes: for example, one mode where the transistors of core logic 50 are powered up and in normal operation, and another mode where all or some of the transistors are disabled so that they do not consume power (e.g., a standby mode). This may be achieved, at least in part, through the use of transistors 70-71 that are placed in series in the stack with transistors 60-69. Transistors 70-71 may be n-channel transistors, although the scope of the present invention is not limited in this respect. As shown, the gate terminals of transistors 70-71 may receive an enable signal, ENABLE, that may be used to control the operational mode of all or some of the transistors in core logic 50. ENABLE may be provided from a variety of sources, including, but not limited to, a direct request from the user to enter standby mode or a state machine that has determined that at least a portion of core logic 50 may enter standby mode.
For example, if first stage 51 is not currently in use, ENABLE may be deactivated to indicate that first stage 51 may enter into a low-power, standby mode. A sufficiently low voltage potential on the gate terminals of transistors 70-71 may disconnect the stacks of first stage 51 from the ground voltage potential and thus create an electrical "open." Consequently, the flow of current through first stage 51 may be disabled and the overall power consumption of core logic 50 may be reduced. In contrast, if first stage 51 is active and in use, the enable signal ENABLE may be asserted so that first stage 51 may return to normal operation.
First stage 51 may also include p-channel transistors 75-76 that may be used to provide, at least in part, the output signals of first stage 51 when first stage 51 is in the deactivated standby mode. As shown, the gate terminals of p-channel transistors 75-76 may also receive the ENABLE signal so that when ENABLE is deactivated (e.g., a sufficiently low voltage potential), p-channel transistors 75-76 may connect the stacks of first stage 51 to the power supply voltage (e.g., Vdd).
In this particular embodiment, first stage 51 includes inverters 77-78 that invert the voltage potential provided by transistors 60-69 and 75-76. Consequently, when ENABLE is deactivated (e.g., indicating that first stage 51 is to enter a low-power mode of operation), the output voltage potentials of first stage 51, labeled OUT1 and OUT1#, may be substantially equal and approximately equal to the ground voltage potential (e.g., zero volts or Vss).
First stage 51 may also include p-channel transistors 80-81 that may be used to connect inverters 77-78 to a power supply voltage (e.g., Vdd) through the use of the ENABLE# signal. For example, if first stage 51 is in normal operation, the enable signal ENABLE may be active and ENABLE# may be deactivated. Consequently, transistors 80-81 may be turned on so that the stacks of first stage 51 may be connected to the power supply voltage (e.g., Vdd). When first stage 51 enters a standby operational mode, ENABLE# may be activated and transistors 80-81 may be turned off. In standby mode, transistors 80-81 and the p-channel devices of inverters 77-78 may isolate the output voltage potentials of first stage 51 (e.g., OUT1 and OUT1#) from the power supply voltage.
The use of two transistors to isolate the output signals (e.g., OUT1 and OUT1#) of first stage 51 from the power supply voltage (e.g., Vdd) may significantly reduce the amount of leakage current that may flow through inverters 77-78 and the stacks of first stage 51. In addition, in particular embodiments of the present invention, stacked devices that have their gate terminals driven to the same non-conducting potential (e.g., transistors 264-267 of second stage 52) may have reduced leakage current. For example, particular embodiments of the present invention may reduce the leakage current through a stage by as much as ninety percent as compared to a circuit that does not include transistors 80-81 and stages with stacked devices.
To resume normal operation, ENABLE may be activated and ENABLE# may be deactivated so that transistors 60-69 may be connected between the power supply voltage (e.g., Vdd) and the ground voltage potential (e.g., Vss). As shown in FIG. 2, both stacks of first stage 51 include at least one transistor that is enabled by an input signal (e.g., I1-I3 and I1#-I3#). Consequently, regardless of the logic values of I1-I3, there may be a current path through transistors 60-69 that may permit one side or stack of first stage 51 to sink current. As a result, one of transistors 85-86 may be enabled so as to connect the other side or stack of first stage 51 to the power supply voltage (e.g., Vdd). Thus, during normal operation, first stage 51 may provide differential outputs (e.g., logical complements) that represent the XORing of I1-I3, although the scope of the present invention is not limited in this respect. In alternative embodiments, the arrangement of transistors 60-69 may be changed as desired to provide other logic operations.
Thus, when first stage 51 is in a low-power operational mode, its outputs (e.g., OUT1 and OUT1#) may both be driven to the ground voltage potential even though the labeling convention might suggest that the signals are always the logical complement of each other. To be clear, OUT1 and OUT1# are not the logical complement of each other when first stage 51 is in a standby mode.
Rather, both output voltage potentials are substantially equal. However, when ENABLE is activated and first stage 51 returns to normal operation, the outputs of first stage 51, namely OUT1 and OUT1#, are the logical complement of each other.
The circuit shown in FIG. 2 also comprises additional stages 52-54 that may be adapted to receive the output of the previous stage as well as other input signals (e.g., I4-I9 and their logical complements I4#-I9#). In this particular embodiment, stages 52-54 may be the same as or similar to first stage 51. For example, transistors 260-267, 360-367, and 460-467 may have the same or similar purpose as transistors 60-67 of first stage 51.
However, one notable exception is that stages 52-54 do not include an equivalent to n-channel transistors 70-71 because second stage 52, third stage 53, and fourth stage 54 may not need to be electrically isolated from the power supply voltages. This is due, at least in part, to the assumption in this particular embodiment that the input signals I4-I9, and their logical complements I4#-I9#, are provided by another stage (not shown) that is the same as or similar to first stage 51, although the scope of the present invention is not limited in this respect. Since I4-I9 and their logical complements are provided by other stages that are similar to first stage 51, it may be assumed that the outputs of those other stages are also substantially equal to the ground voltage potential when the ENABLE signal is deactivated.
Thus, when ENABLE is deactivated, it may be assumed that I4-I9 and I4#-I9# are all substantially equal to the ground voltage potential. Consequently, transistors 260-267, 360-367, and 460-467 are disabled and the flow of current through stages 52-54 may also be disabled. Furthermore, if ENABLE is disabled, then the combination of transistors 275-276, 375-376, and 475-476 with inverters 277-278, 377-378, and 477-478, respectively, sets both outputs of stages 52-54, namely OUT2, OUT2#, OUT3, OUT3#, OUT4, and OUT4#, substantially equal to the ground voltage potential.
When the ENABLE signal is activated, stages 51-54, along with the other stages that provide input signals I4-I9, are enabled so that core logic 50 (see FIG. 1) may provide the desired logic operation. In this particular embodiment, the operation of stages 51-54 is not regulated through the use of a periodic clock signal, and thus, stages 51-54 are permitted to provide their output signals without having to wait a predetermined amount of time. Thus, particular embodiments of the present invention may be able to take advantage of time-borrowing (e.g., a stage may provide its output signals as soon as the input signals are provided and processed).
In addition, particular embodiments of the present invention may also offer an advantage in power savings by selectively disabling the flow of current through some or all of the stages. When it is determined that all or a portion of core logic 50 may enter a standby mode, the ENABLE signal may be deactivated so that the outputs of a stage may be driven to a known value that reduces the flow of current through subsequent stages.
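The enable behavior described for first stage 51 can be summarized with a small behavioral model. The C sketch below illustrates the logical behavior only, not the transistor-level circuit: with ENABLE deactivated, both OUT1 and OUT1# are driven low and substantially equal; with ENABLE activated, they form a differential pair carrying the XOR of the stage inputs. All names here are chosen for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Behavioral model of an enable-gated differential XOR stage. When
 * enable is false (standby), both outputs are driven low, mimicking
 * transistors 75-76 and inverters 77-78 forcing OUT1 = OUT1# = 0.
 * When enable is true, the outputs are the XOR of the inputs and its
 * logical complement. */
struct diff_out {
    bool out;   /* OUT1  */
    bool out_n; /* OUT1# */
};

static struct diff_out first_stage(bool enable, bool i1, bool i2, bool i3)
{
    struct diff_out o = { false, false };  /* standby: substantially equal */
    if (enable) {
        o.out   = i1 ^ i2 ^ i3;            /* differential, normal mode */
        o.out_n = !o.out;
    }
    return o;
}

int main(void)
{
    struct diff_out active  = first_stage(true,  true, false, true);
    struct diff_out standby = first_stage(false, true, false, true);
    printf("active:  OUT1=%d OUT1#=%d\n", active.out, active.out_n);
    printf("standby: OUT1=%d OUT1#=%d\n", standby.out, standby.out_n);
    return 0;
}
```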
Thus, the use of a non-periodic enable signal may increase the speed at which an integrated circuit may operate and reduce the amount of power consumed when all or a portion of the integrated circuit is in a standby mode.While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
A processor includes a register to store an encoded pointer to a memory location in memory and the encoded pointer is to include an encrypted portion. The processor further includes circuitry to determine a first data encryption factor based on a first data access instruction, decode the encoded pointer to obtain a memory address of the memory location, use the memory address to access an encrypted first data element, and decrypt the encrypted first data element using a cryptographic algorithm with first inputs to generate a decrypted first data element. The first inputs include the first data encryption factor based on the first data access instruction and a second data encryption factor from the encoded pointer.
1. A method, comprising: storing, in a register, an encoded pointer to a memory location in memory, wherein the encoded pointer is to include an encrypted portion; determining a first data encryption factor based on a first data access instruction; decoding the encoded pointer to obtain a memory address of the memory location; using the memory address to access an encrypted first data element; and decrypting the encrypted first data element using a cryptographic algorithm with first inputs to generate a decrypted first data element, the first inputs including the first data encryption factor based on the first data access instruction and a second data encryption factor from the encoded pointer.
2. The method of Claim 1, wherein the encoded pointer further includes first metadata.
3. The method of Claim 2, wherein the first metadata is a memory allocation size of a data structure.
4. The method of Claim 3, wherein the memory address corresponds to a base address of the data structure.
5. The method of any one of Claims 3-4, wherein the first data encryption factor includes a first data type of the encrypted first data element inferred from the first data access instruction, wherein the data structure contains the encrypted first data element having the first data type and an encrypted second data element having a second data type.
6. The method of any one of Claims 1-5, further comprising: in response to a second data access instruction, decoding a second encoded pointer to obtain a second memory address of a second memory location; using the second memory address to access an encrypted second data element; determining a third data encryption factor based on the second data access instruction; and decrypting the encrypted second data element using the cryptographic algorithm with second inputs, the second inputs including the third data encryption factor based on the second data access instruction and a fourth data encryption factor from the second encoded pointer.
7. The method of any one of Claims 1-6, wherein the first data encryption factor and the second data encryption factor are included in a data tweak as one of the first inputs for the cryptographic algorithm to decrypt the encrypted first data element.
8. The method of any one of Claims 1-4, 6, or 7, wherein the first data encryption factor includes a first data type derived from the first data access instruction, the method further comprising: inferring the first data type based on an op code of the first data access instruction.
9. The method of any one of Claims 1-8, wherein the first data encryption factor further includes a displacement value derived from the first data access instruction.
10. The method of any one of Claims 1-4, 6, 7, or 9, further comprising: determining that the first data access instruction includes a prefix; and determining the first data encryption factor based on information included in the prefix.
11. The method of any one of Claims 1-10, wherein the memory location is in heap memory or stack memory.
12. The method of any one of Claims 1-11, wherein decoding the encoded pointer includes: decrypting the encrypted portion of the encoded pointer using a second cryptographic algorithm with third inputs, the third inputs including the first data encryption factor associated with the first data access instruction.
13. An apparatus, the apparatus comprising: a register to store an encoded pointer to a memory location in memory, wherein the encoded pointer is to include an encrypted portion; and means to perform one or more elements of the method of any one of
Claims 1-12.
14. The apparatus of Claim 13, wherein the means to perform the method comprises at least one processor and at least one memory element.
15. At least one machine readable storage medium comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of the method of any one of Claims 1-12.
TECHNICAL FIELD
This disclosure relates in general to the field of computer systems, and more particularly, to cryptographic computing.
BACKGROUND
Cryptographic computing may refer to computer system security solutions that employ cryptographic mechanisms inside of processor components to protect data stored by a computing system. The cryptographic mechanisms may be used to encrypt the data itself and/or pointers to the data using keys, tweaks, or other security mechanisms. Cryptographic computing is an important trend in the computing industry, with the very foundation of computing itself becoming fundamentally cryptographic. Cryptographic computing represents a sea change, a fundamental rethinking of systems security with wide implications for the industry.
BRIEF DESCRIPTION OF THE DRAWINGS
To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, where like reference numerals represent like parts, in which:
Fig. 1 is a simplified block diagram of an example computing device configured with secure memory access logic according to at least one embodiment of the present disclosure;
Fig. 2 is a simplified environment diagram illustrating an application of the secure memory access logic of Fig. 1 according to at least one embodiment of the present disclosure;
Fig. 3A is a flow diagram illustrating a process of binding a generalized encoded pointer to encryption of data referenced by that pointer according to at least one embodiment of the present disclosure;
Fig. 3B is a flow diagram illustrating a process of decrypting data bound to a generalized encoded pointer according to at least one embodiment of the present disclosure;
Fig. 4 is a diagram of an example pointer according to at least one embodiment of the present disclosure;
Fig. 5 is a simplified flow diagram of at least one embodiment of a process for providing security for a pointer according to an embodiment;
Fig. 6 is a simplified flow diagram of at least one embodiment of a process for verifying a previously encoded pointer according to an embodiment;
Fig. 7 is a flow diagram illustrating an example process of binding one embodiment of a cryptographically encoded pointer to the encryption of a variable referenced by that pointer according to at least one embodiment;
Fig. 8 is a simplified block diagram illustrating a compiler embedding information into compiled code according to at least one embodiment;
Fig. 9A is a flow diagram illustrating an example process of binding a cryptographically encoded pointer to the encryption of the data referenced by that pointer according to at least one embodiment;
Fig. 9B is a flow diagram illustrating an example decryption process for encrypted data that is referenced by a cryptographically encoded pointer according to at least one embodiment;
Fig. 10 is a flow diagram of an example process related to a write operation according to an embodiment;
Fig. 11 is a flow diagram of an example process related to a read operation according to an embodiment;
Fig. 12 is a block diagram illustrating an example cryptographic computing environment according to at least one embodiment;
Fig. 13 is a block diagram illustrating an example processor according to at least one embodiment;
Fig. 14A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline in accordance with certain embodiments;
Fig. 14B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor in accordance with certain embodiments;
Fig. 15 is a block diagram of an example computer architecture according to at least one embodiment; and
Fig. 16 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the present disclosure.
DETAILED DESCRIPTION
This disclosure provides various possible embodiments, or examples, for implementations of fine-grained protection in both stack and heap memory allocations using cryptographic computing. Fine-grained stack protection embodiments can include encoding pointers with variable base and bound information and using the precise bound encoding to protect sensitive variables. Another fine-grained protection for data in both stack and heap memory allocations relates to data type based encodings. In these embodiments, the data type of a variable or data element can be encoded in a pointer to the variable and can be used in the encryption and decryption of the data element (a toy sketch of this type binding follows this passage). The data type of a particular variable may be inferred from the instructions accessing and potentially manipulating the data.
At least some embodiments disclosed in this specification, including read and write operations, are related to pointer based data encryption and decryption in which a pointer to a memory location for data or code is encoded with a tag and/or other metadata (e.g., security context information) and may be used to derive at least a portion of the tweak input to data or code cryptographic (e.g., encryption and decryption) algorithms. Thus, a cryptographic binding can be created between the cryptographic addressing layer and data/code encryption and decryption. This implicitly enforces bounds since a pointer that strays beyond the end of an object (e.g., data) is likely to use an incorrect tag value for that adjacent object. In one or more embodiments, a pointer is encoded with a linear address (also referred to herein as "memory address") to a memory location and metadata. In some pointer encodings, a slice or segment of the address in the pointer includes a plurality of bits and is encrypted (and decrypted) based on a secret address key and a tweak based on the metadata. Other pointers can be encoded with a plaintext memory address (e.g., linear address) and metadata.
For purposes of illustrating the several embodiments for proactively blocking out-of-bound memory accesses while enforcing cryptographic isolation of memory regions, it is important to first understand the operations and activities associated with data protection and memory safety. Accordingly, the following foundational information may be viewed as a basis from which the present disclosure may be properly explained.
Known computing techniques (e.g., page tables for process/kernel separation, virtual machine managers, managed runtimes, etc.) have used architecture and metadata to provide data protection and isolation. For example, in previous solutions, memory controllers outside the CPU boundary support memory encryption and decryption at a coarser granularity (e.g., applications), and isolation of the encrypted data is realized via access control. Typically, a cryptographic engine is placed in a memory controller, which is outside a CPU core. In order to be encrypted, data travels from the core to the memory controller with some identification of which keys should be used for the encryption. This identification is communicated via bits in the physical address. Thus, any deviation to provide additional keys or tweaks could result in increased expense (e.g., for new buses) or additional bits being "stolen" from the address bus to allow additional indexes or identifications for keys or tweaks to be carried with the physical address.
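As a toy illustration of the data-type binding described above, consider the following Python sketch. It folds a hypothetical type identifier into the data tweak, so that a read through a wrongly typed pointer decrypts to garbage. All names here are illustrative, and SHA-256 merely stands in for the hardware tweakable cipher an actual embodiment would use:

import hashlib

def keystream(key: bytes, tweak: bytes, n: int) -> bytes:
    # Stand-in PRF: derive n keystream bytes from the key and tweak.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + tweak + ctr.to_bytes(4, "little")).digest()
        ctr += 1
    return out[:n]

def xcrypt(key: bytes, type_id: int, address: int, data: bytes) -> bytes:
    # The tweak binds the pointer's address bits and the data type together;
    # XOR with the keystream both encrypts and decrypts.
    tweak = address.to_bytes(8, "little") + type_id.to_bytes(2, "little")
    return bytes(d ^ k for d, k in zip(data, keystream(key, tweak, len(data))))

key = b"\x01" * 32
ct = xcrypt(key, type_id=7, address=0x7FFD1234, data=b"hello")
print(xcrypt(key, type_id=7, address=0x7FFD1234, data=ct))  # b'hello'
print(xcrypt(key, type_id=9, address=0x7FFD1234, data=ct))  # pseudo-random bytes

Because the type mismatch surfaces as garbled plaintext rather than as a table lookup failure, no separate type metadata table has to be consulted on the access path.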
Access control can require the use of metadata and a processor would use lookup tables to encode policy or data about the data for ownership, memory size, location, type, version, etc. Dynamically storing and loading metadata requires additional storage (memory overhead) and impacts performance, particularly for fine grain metadata (such as for function as a service (FaaS) workloads or object bounds information).
The following disclosure provides various possible embodiments, or examples, for implementation of cryptographic computing. Cryptographic computing is an important trend in the computing industry, with the very foundation of computing itself becoming fundamentally cryptographic. Cryptographic computing represents a sea change, a fundamental rethinking of systems security with wide implications for the industry.
Embodiments disclosed in this application are related to pointer based data encryption in which a pointer to a memory location for data is encoded with a tag and/or other metadata and may be used to derive at least a portion of the tweak input to data cryptographic (e.g., encryption and decryption) algorithms. Thus, a cryptographic binding is created between the cryptographic addressing layer and data/code encryption and decryption. This implicitly enforces bounds since a pointer that strays beyond the end of an object (e.g., data) is likely to use an incorrect tag value for that adjacent object. In one or more embodiments, a pointer is encoded with a linear address (also referred to herein as "memory address") to a memory location and metadata. A slice or segment of the address in the pointer includes a plurality of bits and is encrypted (and decrypted) based on a secret address key and a tweak that includes the metadata. This encrypted slice of the memory address in the pointer is also referred to herein as "ciphertext" with reference to some embodiments. Binding data encryption and the pointer can be achieved by encrypting the data at the memory location using a pointer-based tweak and secret data key. The pointer-based tweak for encrypting (and decrypting) the data can be derived from the encoded pointer and potentially additional context information. In particular, a pointer-based tweak for data can be created based, at least in part, on the encrypted slice of the address (e.g., the ciphertext) in the encoded pointer and the metadata in the encoded pointer. In other embodiments, the memory address may be decrypted and decoded to create the tweak for encrypting/decrypting the data. In at least some embodiments, context information stored separately from the pointer may also be included in the tweak.
Variations of the tweak for encrypting and decrypting a slice of the memory address to be embedded in the pointer are possible in one or more embodiments.
For example, different and/or additional context information such as various types of metadata, cryptographic context identifier, portions of the plaintext memory address, or any suitable combination thereof may be used in the tweak used to encrypt/decrypt the slice of the memory address in the pointer. Similarly, variations of the tweak for encrypting and decrypting the data referenced by the encoded pointer are also possible. In other embodiments, additional parts of the encoded pointer may be used in the pointer-based tweak or the entire encoded pointer may be used as the pointer-based tweak. Furthermore, in at least some embodiments, different and/or additional context information such as metadata, cryptographic context identifier, slices of the plaintext address, or any suitable combination thereof may also be used in the tweak used to encrypt/decrypt the data referenced by the encoded pointer.
For purposes of illustrating the several embodiments of pointer based data encryption, it is important to first understand the operations and activities associated with data protection and memory safety. Accordingly, the following foundational information may be viewed as a basis from which the present disclosure may be properly explained.
Current computing techniques (e.g., page tables for process/kernel separation, virtual machine managers, managed runtimes, etc.) have used architecture and metadata to provide data protection. For example, in previous solutions, a processor would use lookup tables to encode policy or data about the data for ownership, memory size, location, type, version, etc. Dynamically storing and loading metadata requires additional storage (memory overhead) and impacts performance, particularly for fine grain metadata (such as function as a service (FaaS) workloads or object bounds information).
Cryptographic computing can resolve many of the aforementioned issues (and more). Cryptographic computing may make redundant the legacy modes of process separation, user space, and kernel with a fundamentally new fine-grain protection model. With cryptographic computing, protections are cryptographic, with processors and accelerators alike utilizing secret keys and ciphers to provide access control and separation at increasingly finer granularities. Further, instead of virtual machine and process separation in current systems, with cryptographic computing, individual functions may become the boundary, allowing address spaces to be shared via pointers that are encrypted, with the encrypted pointers and keys providing controlled access down to individual data objects.
Cryptographic computing embodiments disclosed herein may leverage the concept of a cryptographic addressing layer where the processor decrypts software allocated memory addresses (linear/virtual address space, sometimes referred to as "pointers") based on implicit and explicit metadata (e.g., context information, a cryptographic context identifier, etc.) and/or a slice of the memory address itself (e.g., as a tweak to a tweakable block cipher, such as the XOR-encrypt-XOR-based tweaked-codebook mode with ciphertext stealing (XTS)). As used herein, a "tweak" may refer to, among other things, an extra input to a block cipher, in addition to the usual plaintext or ciphertext input and the key (e.g., secret key 116(1)). A tweak comprises one or more bits that represent a value. In one or more embodiments, a tweak may form all or part of an initialization vector (IV) for a block cipher.
When decryption of an address is performed, if the information used to create the tweak (e.g., implicit and explicit metadata, plaintext address slice of the memory address, etc.) corresponds to the original allocation of the memory address by a memory allocator (e.g., software allocation method), then the processor can correctly decrypt the address. Otherwise, decryption yields a random address, which will cause a fault and be caught by the processor (a toy sketch of this behavior follows this passage). These cryptographic addresses (or address slices) may be further used by the processor as a tweak to the data encryption cipher used to encrypt/decrypt data they refer to (data referenced by the cryptographically encoded pointer), creating a cryptographic binding between the cryptographic addressing layer and data/code encryption. It should be noted that a tweak that is used as input to a block cipher to encrypt/decrypt a memory address is also referred to herein as an "address tweak". Similarly, a tweak that is used as input to a block cipher to encrypt/decrypt data is also referred to herein as a "data tweak".
By cryptographically encoding metadata into addresses and their referenced data, cryptographic computing may reduce or remove the need for extra separate memory/storage to provide policy and context information/metadata. This can save up to billions of dollars in the computing industry (e.g., in dynamic random access memory (DRAM) expenses) due to the reduction of metadata alone. Customers can reap these savings in memory costs while still getting the security, safety and error-free functionality they want with cryptographic computing. By allowing safe speculation, the fundamentally cryptographic separation policies of cryptographic computing may allow the processor to speculate freely and provide increased performance.
In cryptographic computing, where data security is fundamentally linked to cryptographic memory addressing, processing and fine grain cryptographic access controls to data are important. Cryptographic computing transforms all compute vectors from the CPU to GPU, accelerators to FPGAs, etc. With cryptographic computing, protections may be cryptographic, where processors and accelerators alike utilize secret keys and ciphers to provide access control and separation at increasingly fine granularities. Further, instead of virtual machine and process separation, individual functions may become the boundary, address spaces are shared while pointers are encrypted, with keys providing controlled access down to individual data objects. Capabilities may thus become entwined in the cryptographic operations to provide granular access control to data objects while preventing buffer overflows, type confusion and temporal (e.g., use-after-free) vulnerabilities at every level of the system. Cryptographic code may execute natively, safely, and without the need for interpreters or managed runtimes to provide memory and type safety. Memory may move from isolated domains and containers to globally shared memory models where data is accessible based on cryptographic access control mechanisms and gone are difficult-to-scale distributed permissions, paging and associated control structures. Even files may be safely stored directly in memory (e.g., in non-volatile memory modules, such as non-volatile dual-inline memory modules (NVDIMMs)), being individually encrypted, cryptographically sized, and incorruptible from software errors. This may have implications for functional safety, reliability, and multi-tenancy, potentially allowing for more speculation for improving processing performance.
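As a toy illustration of that address-decryption behavior, the following Python sketch XORs a 32-bit address slice with a pad derived from the address key and the pointer's metadata. SHA-256 stands in for the small-block tweakable cipher a real embodiment would use, and the metadata values are hypothetical. Decrypting with mismatched metadata yields an effectively random slice, which in a sparse address space would land on an unmapped page and fault:

import hashlib

def crypt_slice(key: bytes, metadata: bytes, addr_slice: int) -> int:
    # Toy cipher: XOR the 32-bit slice with a pad derived from key and metadata.
    pad = int.from_bytes(hashlib.sha256(key + metadata).digest()[:4], "little")
    return addr_slice ^ pad

key = b"k" * 16
meta = bytes([4, 0x2A])                              # hypothetical power=4, tag=0x2A
ct = crypt_slice(key, meta, 0xBEEF)
print(hex(crypt_slice(key, meta, ct)))               # 0xbeef: round-trips
print(hex(crypt_slice(key, bytes([4, 0x2B]), ct)))   # random-looking: tag mismatch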
Cryptography continues to become faster and lighter. For instance, the Advanced Encryption Standard (AES) has been the mainstay for data encryption for decades, using a 128-bit block cipher. Meanwhile, memory addressing is typically 64 bits today. Although embodiments herein may be illustrated and explained with reference to 64-bit memory addressing for 64-bit computers, the disclosed embodiments are not intended to be so limited and can easily be adapted to accommodate 32-bit, 128-bit, or any other available bit sizes for pointers. Likewise, embodiments herein may further be adapted to accommodate various sizes of a block cipher (e.g., 64-bit, 48-bit, 32-bit, 16-bit, etc., using Simon, Speck, tweakable K-cipher, PRINCE or any other block cipher).
Lightweight ciphers suitable for pointer encryption have emerged recently. The PRINCE cipher, for example, can be implemented in 3 clocks requiring as little as 799 µm² of area in the 10 nm process, providing half the latency of AES in a tenth the silicon area. Cryptographic computing may utilize these new ciphers, as well as others, introducing novel computer architecture concepts including, but not limited to: (i) cryptographic addressing, i.e., the encryption of data pointers at the processor using, as tweaks, contextual information about the referenced data (e.g., metadata embedded in the pointer and/or external metadata), a slice of the address itself, or any suitable combination thereof; and (ii) encryption of the data itself at the core, using cryptographically encoded pointers or portions thereof, non-cryptographically encoded pointers or portion(s) thereof, contextual information about the referenced data, or any suitable combination thereof as tweaks for the data encryption. A variety of tweakable encryption modes can be used to include such metadata (e.g., counter mode (CTR) and XOR-encrypt-XOR (XEX)-based tweaked-codebook mode with ciphertext stealing (XTS)). In addition to encryption providing data confidentiality, its implicit integrity may allow the processor to determine if the data is being properly decrypted using the correct keystream and tweak (a toy sketch of this idea follows this passage). In some block cipher encryption modes, the block cipher creates a keystream, which is then combined (e.g., using an XOR operation) with an input block to produce the encrypted or decrypted block. In some block cipher modes, the keystream is fed into the next block cipher operation to perform encryption or decryption.
The "Metadata Wall" may refer to the problem of additionally fetching metadata about memory operations such as access control, object type/size, and version. Today's computer architecture requires the processor to lookup metadata, or data about data, to determine if memory accesses are allowed. The additional memory accesses for metadata can impact performance, additional storage for the metadata is required, and the metadata itself needs to be protected in order to provide security. Some current solutions that add metadata in the form of bounds tables that the hardware would use to detect buffer overflows have been shown to have up to 4X performance impact with 400% memory overheads for some workloads. Similarly, shadow stack metadata enables Control-flow Enforcement Technology, memory tagging uses metadata for versioning, and capabilities add metadata for verifying data types. Memory tagging is not suitable for mitigating type confusion and protecting against the use of uninitialized variables. In addition, although the overhead of memory tagging may be reduced using error-correcting code bits, it can nevertheless require additional devices, which can increase costs. Capability machines may also use fat pointers to embed security metadata in-line with pointers, imposing substantial memory overheads (e.g., 25% in pointer-heavy applications) due to doubling the pointer size.
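The implicit-integrity idea mentioned above can be sketched in a few lines of Python. SHA-256 stands in for the block cipher's keystream generator, and the plausibility test is a deliberately crude heuristic (a real design would use stronger entropy or pattern checks); a wrong tweak turns the plaintext into uniformly random bytes that almost surely fail the test:

import hashlib

def keystream(key: bytes, tweak: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + tweak + ctr.to_bytes(4, "little")).digest()
        ctr += 1
    return out[:n]

def xcrypt(key: bytes, tweak: bytes, data: bytes) -> bytes:
    # XOR with the tweak-derived keystream; the same call encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, keystream(key, tweak, len(data))))

def plausible(plaintext: bytes) -> bool:
    # Crude implicit-integrity check: this toy expects printable ASCII, so a
    # wrong keystream (uniformly random bytes) almost surely fails the test.
    return all(32 <= b < 127 for b in plaintext)

key = b"K" * 16
ct = xcrypt(key, b"tweak-1", b"balance=100")
print(plausible(xcrypt(key, b"tweak-1", ct)))   # True: correct tweak
print(plausible(xcrypt(key, b"tweak-2", ct)))   # almost certainly False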
In contrast, cryptographic computing may provide metadata codified as tweaks to cryptographic addressing and data, cryptographic addressing and code, or a combination thereof, removing potential performance and memory overheads caused by the inclusion of such metadata. The resulting ciphertext may need no additional protections beyond the secret key, allowing reuse of the same memory as the data. As further discussed herein, cryptographic computing may solve a myriad of vulnerabilities with the same unified mechanism, using computation instead of memory.
Fig. 1 is a simplified block diagram of an example computing device 100 configured with secure memory access logic according to at least one embodiment of the present disclosure. In the example shown, the computing device 100 includes a processor 102 having a set of secure memory access logic 150 and a number of registers 110. The secure memory access logic 150 utilizes metadata about an indirect address 114, which is encoded into unused bits of the indirect address 114 (e.g., non-canonical bits of a 64-bit address, or a range of addresses set aside, e.g., by the operating system, such that the corresponding high order bits of the address range may be used to store the metadata), in order to secure and/or provide access control to memory locations pointed to by the indirect address 114. For example, the metadata encoding and decoding provided by the secure memory access logic 150 can prevent the indirect address 114 from being manipulated to cause a buffer overflow, and/or can prevent program code from accessing memory that it does not have permission to access. Address encoding logic 152 of the secure memory access logic 150 is invoked when memory is allocated (e.g., by an operating system, in the heap) and provided to executing programs in any of a number of different ways, including by using a function such as malloc, alloc, or new; or implicitly via the loader; or statically by the compiler; etc. As a result, the indirect address 114, which points to the allocated memory, is encoded with the address metadata.
The address metadata can include valid range metadata. The valid range metadata allows executing programs to manipulate the value of the indirect address 114 within a valid range, but will corrupt the indirect address 114 if the memory is accessed using the indirect address 114 beyond the valid range. Alternatively or in addition, the valid range metadata can be used to identify a valid code range, e.g., a range of memory that program code is permitted to access (e.g., the encoded range information can be used to set explicit ranges on registers).
Other information that can be encoded in the address metadata includes access (or permission) restrictions on the indirect address 114 (e.g., whether the indirect address 114 can be used to write, execute, or read the referenced memory).
In at least some other embodiments that will be further described herein, other metadata (or context information) can be encoded in the unused bits of indirect address 114 such as a size of plaintext address slices (e.g., number of bits in a plaintext slice of a memory address embedded in the indirect address), a memory allocation size (e.g., bytes of allocated memory referenced by the indirect address), a type of the data or code (e.g., class of data or code defined by programming language), permissions (e.g., read, write, and execute permissions of the indirect address), a location of the data or code (e.g., where the data or code is stored), the memory location where the pointer itself is to be stored, an ownership of the data or code, a version of the indirect address (e.g., a sequential number that is incremented each time an indirect address is created for newly allocated memory, determines current ownership of the referenced allocated memory in time), a tag of randomized bits (e.g., generated for association with the indirect address), a privilege level (e.g., user or supervisor), a cryptographic context identifier (or crypto context ID) (e.g., randomized or deterministically unique value for each indirect address), etc. For example, in one embodiment, the address metadata can include size metadata that encodes the size of a plaintext address slice in the indirect address. The size metadata may specify a number of lowest order bits in the indirect address that can be modified by the executing program. The size metadata is dependent on the amount of memory requested by a program. Accordingly, if 16 bytes are requested, then size metadata is encoded as 4 (or 00100 in five upper bits of the pointer) and the 4 lowest bits of the pointer are designated as modifiable bits to allow addressing to the requested 16 bytes of memory. In some embodiments, the address metadata may include a tag of randomized bits associated with the indirect address to make the tag unpredictable for an adversary. An adversary may try to guess the tag value so that the adversary is able to access the memory referenced by the pointer, and randomizing the tag value may make it less likely that the adversary will successfully guess the value compared to a deterministic approach for generating the tag value. In some embodiments, the pointer may include a version number (or other deterministically different value) determining current ownership of the referenced allocated data in time instead of or in addition to a randomized tag value. Even if an adversary is able to guess the current tag value or version number for a region of memory, e.g., because the algorithm for generating the version numbers is predictable, the adversary may still be unable to correctly generate the corresponding encrypted portion of the pointer due to the adversary not having access to the key that will later be used to decrypt that portion of the pointer.
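The size (power) and tag metadata described above can be illustrated with a short Python sketch. The field widths and bit positions here are hypothetical, chosen only to mirror the 16-byte example: the power is the smallest p such that 2^p covers the allocation, and a randomized tag is packed beside it in the pointer's upper bits:

import secrets

def power_metadata(alloc_size: int) -> int:
    # Smallest p with 2**p >= alloc_size; p low-order pointer bits stay mutable.
    return max(1, (alloc_size - 1).bit_length())

def encode(base_addr: int, alloc_size: int) -> int:
    p = power_metadata(alloc_size)      # e.g., 16 bytes -> p = 4 (0b00100)
    tag = secrets.randbits(4)           # randomized tag (hypothetical width)
    # Pack power (5 bits) and tag (4 bits) into otherwise-unused upper bits.
    return (p << 59) | (tag << 55) | (base_addr & ((1 << 55) - 1))

ptr = encode(0x00007FFF12345670, 16)
print((ptr >> 59) & 0x1F)   # 4: the four lowest pointer bits form the offset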
Address decoding/decrypting logic 154 verifies the encoded metadata on memory read and write operations that utilize processor instructions such as MOV, where a general purpose register is used as a memory address to read a value from memory (e.g., load) or to write a value to memory (e.g., store), as well as on other operations that involve the "use" of memory (such as arithmetic instructions with memory operands, e.g., ADD, and control transfer instructions, e.g., CALL/JMP, etc.). These are considered memory operands, which may specify a location in memory at which the destination address for the control transfer is stored. The example secure memory access logic 150 is embodied as part of processor instructions (e.g., as part of the processor instruction set architecture), or microcode (e.g., instructions that are stored in read-only memory and executed directly by the processor 102). In other embodiments, portions of the secure memory access logic 150 may be embodied as hardware, firmware, software, or a combination thereof (e.g., as programming code executed by a privileged system component 142 of the computing device 100). For example, the secure memory access logic 150 may be embodied in software as an instruction set emulator (e.g., a binary instrumentation tool such as a PIN Tool) that emulates the instruction logic utilizing the encoded addresses as disclosed herein.
The secure memory access logic 150 is executable by the computing device 100 to provide security for indirect addresses "inline," e.g., during execution of a program (such as a user space software application) by the computing device 100. As used herein, the terms "indirect address" and "pointer" may each refer to, among other things, an address (e.g., virtual address or linear address) of a memory location at which other data or instructions are stored. In an example, a register that stores an encoded memory address of a memory location where data or code is stored may act as a pointer. As such, the indirect address 114 may be embodied as, for example, a data pointer (which refers to a location of data), a code pointer (which refers to a location of executable code), an instruction pointer, or a stack pointer. Indirect addresses may be referred to by other terminology, such as "pointer," "address pointer," or "pointer address."
As used herein, "metadata" may refer to, among other things, information about or relating to an indirect address 114, such as a valid data range, a valid code range, pointer access permissions, a size of plaintext address slice (e.g., encoded as a power in bits), a memory allocation size, a type of the data or code, a location of the data or code, an ownership of the data or code, a version of the indirect address, a tag of randomized bits, version, a privilege level of software, a cryptographic context identifier, etc.As used herein, "memory load " may refer to, among other things, a "MOV", "LOAD", or "POP" instruction or any other instruction that causes data to be read, copied, or otherwise accessed at one storage location, e.g., memory, and moved into another storage location, e.g., registers (where "memory" may refer to main memory or cache, e.g., a form of random access memory, and "register" may refer to a processor register, e.g., hardware), or any instruction that accesses or manipulates memory. Also as used herein, "memory store " may refer to, among other things, a "MOV", "STORE", or "PUSH" instruction or any other instruction that causes data to be read, copied, or otherwise accessed at one storage location, e.g., register, and moved into another storage location, e.g., memory, or any instruction that accesses or manipulates memory.However, the indirect address encoding/decoding technology disclosed herein is not limited to MOV or load/store instructions. For example, control transfer instructions such as call and jump instructions can be adapted to handle encoded indirect addresses in a similar manner as described herein with respect to MOV instructions, wherein code is to execute within a valid address range. Likewise, the instruction pointer (e.g., register) may be range bound given the encoded address specified by the control transfer instruction (e.g. JMP/CALL) results in an encoded address being used for the instruction pointer, thus restricting valid program execution to within a valid address range (effectively, the program counter can increment correctly until it reaches the end of the encoded range). Furthermore, in some architectures, any number of processor instructions may have a memory operand in the form of an indirect address (e.g. arithmetic operations such as ADD, SUB, MUL, AND, OR, XOR, etc. may have a source/destination memory reference in the form of an indirect address and/or a source/destination register operand). In other architectures, however, the format of memory operands may vary. For example, registers may be combined in some way (e.g., by addition) to produce an effective address. Additionally, other parameters may optionally be included, such as a scaling factor that multiplies one of the register values (e.g., the index) and/or a constant displacement value embedded in the instruction that is directly added. Further, it should be noted that while the illustrative embodiments refer to "instructions," such instructions may be embodied as, e.g., processor instructions, operating system routines, or other forms of computer program code.The example secure memory access logic 150 includes address encoding/encrypting logic 152 (which can include logic to perform metadata encoding and address encryption), encryption store logic 156, and decryption read logic 158. 
The example secure memory access logic 150 includes address encoding/encrypting logic 152 (which can include logic to perform metadata encoding and address encryption), encryption store logic 156, and decryption read logic 158. Illustratively, the address decoding/decrypting logic 154 (which can include logic for decrypting and forming a linear address from an encoded pointer) can be embodied in encryption store logic 156 and decryption read logic 158, but may be embodied in other processor instructions, or as a separate instruction or series of instructions, or as higher-level code executed by a privileged system component such as an operating system kernel or virtual machine monitor, or as an instruction set emulator. As described in more detail below, the address encoding logic 152 and the address decoding/decrypting logic 154 each operate on an indirect address 114 using metadata (e.g., one or more of valid range, permission metadata, size (power), memory allocation size, type, location, ownership, version, tag value, privilege level (e.g., user or supervisor), crypto context ID, etc.) and a secret key (e.g., secret key 116(1)), in order to secure the indirect address 114 at the memory allocation/access level. Also as described in more detail below, the encryption store logic 156 and decryption read logic 158 each operate on data (referenced by indirect address 114) using at least a portion of the indirect address and a secret key (e.g., secret key 116(2)), in order to secure the data at the memory location referenced by the indirect address 114 by binding the data encryption to the indirect address.
The example indirect address 114 is embodied as a register 110 (e.g., a general purpose register of the processor 102). Generally, keys 116(1)-116(N) and tweaks 117 can be handled in any suitable manner based on particular needs and architecture implementations. The keys and tweaks may be stored in registers 110 or memory 120.
The example secret keys 116(1)-116(N) may be generated by a key creation module 148 of a privileged system component 142, and stored in one of the registers 110 (e.g., a special purpose register or machine specific register (MSR)), or another memory location that is readable by the processor 102. In some embodiments, the secret keys 116(1)-116(N) may be stored in a location that is readable only by the processor. In other embodiments, the secret keys 116(1)-116(N) used to secure indirect addresses, data, and code can be stored in another memory location, such as in firmware, in a secure portion of the data storage device 126 or another data storage device, or another form of memory suitable for performing the functions described herein. In some embodiments, the secret keys 116(1)-116(N) may be transmitted across a secure communications channel and restored by an executive (such as an operating system or a virtual machine monitor, e.g., the privileged system component 142 described below). In virtualized environments in which virtual machines are migrated from one machine to another, and/or in cases in which a virtual machine, process or program running on the computing device 100 begins a sleeping/hibernating mode after an indirect address and the referenced data and/or code are secured using secret keys, and then later resumes, the secret keys will need to be recovered and restored.
In these cases, the secret keys can be stored or possibly transmitted across a (secure) communications channel prior to a sleeping/hibernating mode, and then retrieved/restored by an executive (such as an operating system or a virtual machine monitor, e.g., the privileged system component 142).
It should be noted that embodiments described herein allow for any number of secret keys to be used for a particular program. In one example, the same secret key may be used for all indirect addresses used in a program. In another example, a different secret key may be used for each indirect address associated with a different memory allocation or for each predefined group of memory addresses associated with different memory allocations. In yet further embodiments, the same secret key used for an address encryption/decryption may also be used for encrypting the data bound to that address. In other embodiments, one secret key may be used for address encryption/decryption, while a different secret key may be used for data encryption/decryption bound to that address. For ease of explanation, embodiments further described herein refer to "secret address key" or "address key" to refer to the use of a secret key in encryption and decryption operations of memory addresses and "secret data key" or "data key" to refer to the use of a secret key in operations to encrypt and decrypt data.
On (or during) a memory allocation operation (e.g., a "malloc"), memory allocation logic 146 allocates a range of memory for a buffer and returns the indirect address 114 and the metadata (e.g., one or more of range, permission metadata, size (power), memory allocation size, type, location, ownership, version, tag, privilege level, crypto context ID, etc.). For example, the memory allocation logic 146 may encode plaintext range information in the indirect address 114 (e.g., in the unused/non-canonical bits, prior to encryption), or supply the metadata as one or more separate parameters to the instruction, where the parameter(s) specify the range, code permission information, size (power), memory allocation size, type, location, ownership, version, tag, privilege level (e.g., user or supervisor), crypto context ID, or some suitable combination thereof. Illustratively, the memory allocation logic 146 is embodied in a memory manager module 144 of the privileged system component 142. The memory allocation logic 146 initiates the address encoding logic 152. The address encoding logic 152 includes metadata encoding logic 156, which encodes the indirect address 114 with the metadata (e.g., range, permission metadata, size (power), memory allocation size, type, location, ownership, version, tag value, privilege level, crypto context ID, some suitable combination thereof, etc.) and potentially an "adjustment," for example if range metadata is encoded, as described below. The address encoding logic 152 stores the metadata in an unused portion of the indirect address 114 (e.g., non-canonical bits of a 64-bit address). For some metadata or combinations of metadata, the indirect address 114 may be encoded in a larger address space (e.g., 128-bit address, 256-bit address) to accommodate the size of the metadata or combination of metadata.
To determine valid range metadata, example range rule logic selects the valid range metadata to indicate an upper limit for the size of the buffer referenced by the indirect address 114.
Address adjustment logic adjusts the valid range metadata as needed so that the upper address bits (e.g., most significant bits) of the addresses in the address range do not change as long as the indirect address 114 refers to a memory location that is within the valid range indicated by the range metadata. This enables the indirect address 114 to be manipulated (e.g., by software performing arithmetic operations, etc.) but only so long as the manipulations do not cause the indirect address 114 to go outside the valid range (e.g., overflow the buffer).
In an embodiment, address encoding/encrypting logic 152 uses the valid range metadata to select a portion (or slice) of the indirect address 114 to be encrypted. In other embodiments, the slice of the indirect address 114 to be encrypted may be known a priori (e.g., upper 32 bits, lower 32 bits, etc.). The address encoding/encrypting logic 152 encrypts the selected slice of the indirect address 114 (and the adjustment, in some embodiments), using the secret address key 116(1) and an address tweak, as described further below. On a memory access operation (e.g., a read, write, or execute operation), the address decoding/decrypting logic 154 decodes the previously-encoded indirect address 114. To do this, the address decoding/decrypting logic 154 decrypts the encrypted slice of the indirect address 114 (and in some embodiments, the encrypted adjustment) using the secret key 116(1) and the address tweak, as described further below.
The indirect address 114 is returned to its original (e.g., canonical) form, based on appropriate operations in order to restore the original value of the indirect address 114 (e.g., the true, original linear memory address). To do this in at least one possible embodiment, the address metadata encoded in the unused bits of the indirect address 114 may be removed (e.g., returning the unused bits to their original form). If the indirect address 114 decodes successfully, the memory access operation completes successfully. However, if the encoded indirect address 114 has been manipulated (e.g., by software, inadvertently or by an attacker) so that its value falls outside the valid range indicated by the range metadata (e.g., overflows the buffer), the indirect address 114 will be corrupted as a result of the decrypting process performed by the address decoding/decrypting logic 154. A corrupted indirect address will raise a fault (e.g., a general protection fault or a page fault if the address is not mapped as present from the paging structures/page tables). One condition that may lead to a fault being generated is a sparse address space. In this scenario, a corrupted address is likely to land on an unmapped page and generate a page fault. In this way, the secure memory access logic 150 enables the computing device 100 to provide indirect address security against buffer overflow attacks and similar exploits.
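The corruption-on-overflow behavior described above can be modeled with a toy Python sketch. The mutable-bit width is hypothetical and SHA-256 substitutes for the address cipher; the point is only that arithmetic confined to the mutable offset survives decoding, while a carry into the encrypted slice decodes to an unrelated address:

import hashlib

MUTABLE_BITS = 12                     # hypothetical: low 12 bits stay plaintext

def pad(key: bytes, metadata: bytes) -> int:
    # Toy stand-in for a tweakable cipher over the fixed address slice.
    return int.from_bytes(hashlib.sha256(key + metadata).digest()[:6], "little")

def transform(key: bytes, metadata: bytes, ptr: int) -> int:
    # XOR-based toy cipher: the same routine encodes and decodes.
    fixed = ptr >> MUTABLE_BITS
    offset = ptr & ((1 << MUTABLE_BITS) - 1)
    return ((fixed ^ pad(key, metadata)) << MUTABLE_BITS) | offset

key, meta = b"secret-address-key", b"range:buf0"
enc = transform(key, meta, 0x7FFD3000)
enc += 0x800                             # stays within the mutable window
print(hex(transform(key, meta, enc)))    # 0x7ffd3800: still decodes correctly
enc += 0x1000                            # carries into the encrypted slice
print(hex(transform(key, meta, enc)))    # corrupted address -> would fault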
Embodiments of the indirect address security technologies disclosed herein can also be used for software debugging purposes or as an access control mechanism to prevent software from accessing areas of memory for which the software does not have permission. Additionally, in comparison to other buffer overflow mitigation techniques, embodiments of the disclosed indirect address security technologies can operate without any additional memory reads/writes, or without any additional instructions, or without any binary modifications, or without the need to recompile legacy code. Moreover, embodiments of the disclosed technologies are resilient against adversaries that can read memory and overwrite pointer values, as well as adversaries that can create/select arbitrary pointer values. Further, embodiments of the disclosed technologies can scale from very small memory ranges to very large memory ranges, or can cascade memory ranges within other memory ranges by using different encoded pointers. Still further, embodiments of the disclosed technologies are effective with dynamic memory allocation (e.g., due to the ability to programmatically create range encoded pointers inline). Additionally, embodiments of the disclosed technologies can be extended to provide code block (code location) access controls to data. Further, embodiments of the disclosed technologies are compatible with 64-bit versions of the x86 instruction set, as well as ARM, MIPS, PowerPC and other processor architectures, including wider (e.g., greater than 64-bit) address bit architectures and smaller (e.g., 32-bit) architectures, by reserving address ranges for the metadata-containing addresses.
Some embodiments of the disclosed technologies utilize aspects of address adjustment logic and address restoration logic to support legacy code compatibility, as described below. As used herein, "legacy code" may refer to a version of computer code that was designed to work on an earlier, or now-obsolete, or no-longer-supported computer architecture. For example, legacy code may include software that was originally developed for a 32-bit processor, but which is now running on a 64-bit processor. "Legacy code" also refers to a version of computer code designed without using or being adapted to use dedicated instructions for encoding and encrypting indirect addresses as described herein. At least some embodiments disclosed herein can be implemented without using new program instructions and accordingly, without the need for recompiling legacy code.
Referring now in more detail to Fig. 1, the computing device 100 may be embodied as any type of electronic device for performing the functions described herein. For example, the computing device 100 may be embodied as, without limitation, a smart phone, a tablet computer, a wearable computing device, a laptop computer, a notebook computer, a mobile computing device, a cellular telephone, a handset, a messaging device, a vehicle telematics device, a server computer, a workstation, a distributed computing system, a multiprocessor system, a consumer electronic device, and/or any other computing device configured to perform the functions described herein. As shown in Fig. 1, the example computing device 100 includes at least one processor 102 embodied with the secure memory access logic 150.
The computing device 100 also includes memory 122, an input/output subsystem 124, a data storage device 126, a display device 128, a user interface (UI) subsystem 130, a communication subsystem 132, at least one user space application 134, and the privileged system component 142 (which, illustratively, includes the memory manager module 144 and the key creation module 148). The computing device 100 may include other or additional components, such as those commonly found in mobile and/or stationary computers (e.g., various sensors and input/output devices), in other embodiments. Additionally, in some embodiments, one or more of the example components may be incorporated in, or otherwise form a portion of, another component.
Each of the components of the computing device 100 may be embodied as software, firmware, hardware, or a combination of software and hardware.
The processor 102 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 102 may be embodied as a multi-core processor, other multiple-CPU processor or processing/controlling circuit, or multiple diverse processing units or circuits (e.g., CPU and GPU, etc.). The processor 102 has a number of registers 110, which include general purpose registers and special purpose registers. The indirect address 114 and the secret keys 116(1)-116(N) are stored in registers 110. The memory 122 of the computing device 100 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 122 may store various data and software used during operation of the computing device 100, as well as operating systems, applications, programs, libraries, and drivers.
The memory 122 is communicatively coupled to the processor 102, e.g., via the I/O subsystem 124. The I/O subsystem 124 may be embodied as circuitry and/or components to facilitate input/output operations with the processor 102, the memory 122, and other components of the computing device 100. For example, the I/O subsystem 124 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 124 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 102, the memory 122, and/or other components of the computing device 100, on a single integrated circuit chip.
The data storage device 126 may be embodied as any type of physical device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, flash memory or other read-only memory, memory devices that are combinations of read-only memory and random access memory, or other data storage devices.
The display device 128 may be embodied as any type of display capable of displaying digital information such as a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display, a cathode ray tube (CRT), or other type of display device. In some embodiments, the display device 128 may be coupled to a touch screen or other human computer interface device to allow user interaction with the computing device 100. The display device 128 may be part of the user interface (UI) subsystem 130. The user interface subsystem 130 may include a number of additional devices to facilitate user interaction with the computing device 100, including physical or virtual control buttons or keys, a microphone, a speaker, a unidirectional or bidirectional still and/or video camera, and/or others.
The user interface subsystem 130 may also include devices, such as motion sensors, proximity sensors, and eye tracking devices, which may be configured to detect, capture, and process various other forms of human interactions involving the computing device 100.
The computing device 100 further includes a communication subsystem 132, which may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the computing device 100 and other electronic devices. The communication subsystem 132 may be configured to use any one or more communication technologies (e.g., wireless or wired communications) and associated protocols (e.g., Ethernet, Bluetooth™, Wi-Fi™, WiMAX, 3G/LTE, etc.) to effect such communication. The communication subsystem 132 may be embodied as a network adapter, including a wireless network adapter.
The example computing device 100 also includes a number of computer program components, such as the user space application 134 and the privileged system component 142. The user space application 134 may be embodied as any computer application (e.g., software, firmware, hardware, or a combination thereof) that interacts directly or indirectly with an end user via, for example, the display device 128 or the UI subsystem 130. Some examples of user space applications 134 include word processing programs, document viewers/readers, web browsers, electronic mail programs, messaging services, computer games, camera and video applications, etc. Among other things, the privileged system component 142 facilitates the communication between the user space applications 134 and the hardware components of the computing device 100. Portions of the privileged system component 142 may be embodied as any operating system capable of performing the functions described herein, such as a version of WINDOWS by Microsoft Corporation, ANDROID by Google, Inc., and/or others. Alternatively or in addition, a portion of the privileged system component 142 may be embodied as any type of virtual machine monitor capable of performing the functions described herein (e.g., a type I or type II hypervisor).
The example privileged system component 142 includes a number of computer program components, such as the memory manager module 144 and the key creation module 148. Each of the components of the privileged system component 142 may be embodied as software, firmware, hardware, or a combination of software and hardware. For example, the components of the privileged system component 142 may be embodied as modules of an operating system kernel, a virtual machine monitor, or a hypervisor. The memory manager module 144 allocates portions of memory 122 to the various processes running on the computing device 100 (e.g., as ranges of virtual memory addresses). The memory manager module 144 is embodied as, for example, a loader, a memory manager service, or a heap management service. The key creation module 148 creates the secret keys 116(1)-116(N) (e.g., secret address keys and secret data keys) and writes them to a register or registers to which the processor 102 has read access (e.g., a special purpose register). To create a secret key, the key creation module 148 may execute, for example, a random number generator or another algorithm capable of generating a secret key that can perform the functions described herein.
It should be noted that a myriad of approaches could be used to generate or obtain a key for embodiments disclosed herein.
For example, although the key creation module 148 is shown as being part of computing device 100, one or more secret keys could be obtained from any suitable external source using any suitable authentication processes to securely communicate the key to computing device 100, which may include generating the key as part of those processes. Furthermore, privileged system component 142 may be part of a trusted execution environment (TEE), virtual machine, processor 102, a co-processor (not shown), or any other suitable hardware, firmware, or software in computing device 100 or securely connected to computing device 100. Moreover, the key may be "secret", which is intended to mean that its value is kept hidden, inaccessible, obfuscated, or otherwise secured from unauthorized actors (e.g., software, firmware, machines, extraneous hardware components, and humans).
Fig. 2 is a simplified environment diagram illustrating an application of the secure memory access logic 150 of Fig. 1 according to at least one embodiment of the present disclosure. In some embodiments, the computing device 100 may establish an environment 200 during operation (e.g., native and/or virtual runtime or "execution" environments). The various modules depicted in the example environment 200 may be embodied as hardware, firmware, software, or a combination thereof. In the environment 200, the user space application 134 (or the privileged system component 142, e.g., in loading a user space application 134) may, from time to time, during the operation of the computing device 100, issue a memory allocation 202. In some examples, the memory allocation 202 may be an explicit memory allocation in a program (e.g., for dynamic memory allocation) and may be translated (e.g., compiled or interpreted), as needed, by the memory allocation logic 146 of the privileged system component 142 before being passed on to the processor 102. In other scenarios, the memory allocation may be an implicit request for memory by certain instructions in a program. For example, calling a function that needs stack memory for local variables, passing parameters to a function, or declaring local variables may be implicit requests for memory to be allocated in stack for the particular object or data element needing to be stored (e.g., return address, passed parameter, local variable data).
In the processor 102, the address encoding logic 152 is executed in response to the memory allocation 202 (e.g., in place of a conventional "malloc" instruction/function call for dynamic memory allocation, or in place of implicit memory allocation operations for stack). The address encoding logic 152 encodes an indirect address 204, including metadata 205 (e.g., range and permission information, size (power), memory allocation size, type, location, ownership, version, tag, privilege level, crypto context ID or key, or any combination thereof, etc.), as described herein, and returns an encoded indirect address 206. The metadata may be embedded in the indirect address or pointer (e.g., a standard 64-bit register or enlarged register such as 128 bits or 256 bits to fit more metadata) in a plaintext format, embedded within another operand that is provided to the pointer encryption/decryption instructions and data access instructions, stored in a control register, stored in a table in memory, or provided via any combination thereof.
For example, the size (power) metadata and tag value may be embedded in the pointer and the crypto context ID may be stored in a control register.
Similarly, the user space application 134 or the privileged system component 142 may issue a memory store 220 from time to time, which may be handled by the processor 102 as a processor instruction that reads from a register 110 (or other storage unit) and writes to memory 122 or cache using indirect address 114 (e.g., a STORE or MOV instruction, or a declaration or assignment of a variable). Using the STORE instruction as an example, the encryption store logic 156 stores data when the encoded indirect address has been successfully decoded by address decoding logic (e.g., 154). Encryption store logic 156 also causes the data that is to be stored at a memory location (in heap or stack) pointed to by the indirect address 204 to be encrypted based on a data tweak and secret data key 116(2). Successful execution of address decoding logic 154 is based on successful decryption of ciphertext in the indirect address, where the decryption uses an address tweak and secret address key 116(1) to decrypt the encrypted ciphertext of the encoded indirect address 206.
Similarly, the user space application 134 or the privileged system component 142 may issue a memory load 230 from time to time, which may be handled by the processor 102 as a processor instruction that reads from memory 122 (e.g., heap for load, stack for pop) and writes to a register 110 using an indirect address 114 (e.g., a LOAD, MOV, or POP instruction). Using the LOAD instruction as an example, the decryption read logic 158 performs the memory access only after successfully executing the address decoding logic (e.g., 154) to decode the encoded indirect address 206. Successful execution of address decoding logic 154 is based on successful decryption of ciphertext in the indirect address, where the decryption uses an address tweak and secret address key 116(1) to decrypt the encrypted ciphertext of the encoded indirect address 206. Once the indirect address 204 is obtained and memory 122 is accessed to load data from the memory location pointed to by the indirect address 204, the loaded data may be decrypted by decryption read logic 158 based on a data tweak and secret data key 116(2). Successful decryption depends on whether the portions of the indirect address used to create a data tweak to decrypt the data, and the additional metadata (if any) used to create the data tweak, correspond to the original allocation of the memory location pointed to by the indirect address.
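A toy Python model of this store/load binding is sketched below. The decode step, key handling, and metadata layout are all hypothetical simplifications, and SHA-256 stands in for the data cipher; what it shows is that a load through a pointer whose encoding differs from the one used at store time derives a different data tweak and therefore decrypts garbage:

import hashlib

def keystream(key: bytes, tweak: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + tweak + ctr.to_bytes(4, "little")).digest()
        ctr += 1
    return out[:n]

def decode_address(encoded_ptr: int) -> int:
    # Stand-in for address decoding logic: strip the toy metadata bits.
    return encoded_ptr & ((1 << 48) - 1)

memory = {}
data_key = b"D" * 16

def store(encoded_ptr: int, data: bytes) -> None:
    tweak = encoded_ptr.to_bytes(8, "little")     # pointer-derived data tweak
    ks = keystream(data_key, tweak, len(data))
    memory[decode_address(encoded_ptr)] = bytes(d ^ k for d, k in zip(data, ks))

def load(encoded_ptr: int) -> bytes:
    tweak = encoded_ptr.to_bytes(8, "little")
    ct = memory[decode_address(encoded_ptr)]
    return bytes(c ^ k for c, k in zip(ct, keystream(data_key, tweak, len(ct))))

good = (0x1 << 48) | 0x7000      # metadata 0x1, linear address 0x7000
forged = (0x2 << 48) | 0x7000    # different metadata, same linear address
store(good, b"secret-value")
print(load(good))                # b'secret-value'
print(load(forged))              # pseudo-random bytes: tweak no longer matches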
It should be understood that the address decoding/decrypting logic 154 can be incorporated into the instruction logic (e.g., of an instruction set architecture) or can be embodied as a separate set of instructions or multiple sets of instructions. Further, it should be understood that the address decoding/decrypting logic 154 can be incorporated into or referenced by other types of instructions, alternatively or in addition to the LOAD, STORE, MOV, and POP instructions (e.g., arithmetic instructions with memory operands, call, JMP, etc.). For example, control transfer instructions such as call and JMP can load the encoded pointer address for the code to execute into the processor's program counter register (e.g., the instruction pointer, RIP in 64-bit code). The instruction pointer register can then be queried by a program and, as a result, the current program counter address will be the encoded form (offset to the current program counter location).
If the address decoding/decrypting logic 154 successfully decodes the encoded indirect address 206, which includes the successful decryption of the encrypted ciphertext in the encoded indirect address, the original indirect address 204 is returned to the privileged system component 142 and the memory access is completed, or program execution begins at the new program counter location (in the case of control flow changes). If the encoded indirect address 206 does not successfully decode, a fault is raised. Based on the successful completion or failure of memory store 220, an appropriate verification or fault signal 213 is returned to the user space application 134. Similarly, based on the successful completion or failure of memory load 230, an appropriate verification or fault signal 222 is returned to the user space application 134.
Fig. 3A is a simplified flow diagram illustrating a general process 300A of cryptographic computing based on embodiments of an encoded pointer 310 (which can also be referred to as an encoded indirect address). Process 300A illustrates storing (e.g., writing, pushing) data to a memory region at a memory address indicated by encoded pointer 310, where encryption and decryption of the data is bound to the contents of the pointer according to at least one embodiment. At least some portions of process 300A may be executed by hardware, firmware, and/or software of the computing device 100. In the example shown, encoded pointer 310 is an example of indirect address 114 and is embodied as an encoded linear address including a metadata portion. The metadata portion is some type of context information (e.g., size/power metadata, tag, version, etc.) and the linear address may be encoded in any number of possible configurations, at least some of which are described herein.
Encoded pointer 310 may have various configurations according to various embodiments. For example, encoded pointer 310 may be encoded with a plaintext linear address or may be encoded with some plaintext linear address bits and some encrypted linear address bits. Encoded pointer 310 may also be encoded with different metadata depending on the particular embodiment. For example, metadata encoded in encoded pointer 310 may include, but is not necessarily limited to, one or more of size/power metadata, a tag value, or a version number.
Generally, process 300A illustrates a cryptographic computing flow in which the encoded pointer 310 is used to obtain a memory address for a memory region of memory 320 where data is to be stored, and to encrypt the data to be stored based, at least in part, on a tweak derived from the encoded pointer 310. First, address cryptography unit 302 decodes the encoded pointer 310 to obtain a decoded linear address 312. The decoded linear address 312 may be used to obtain a physical address 314 in memory 320 using a translation lookaside buffer 304 or page table (not shown). A data tweak 317 is derived, at least in part, from the encoded pointer 310.
For example, the data tweak 317 may include the entire encoded pointer, one or more portions of the encoded pointer, a portion of the decoded linear address, the entire decoded linear address, encoded metadata, and/or external context information (e.g., context information that is not encoded in the pointer).

Once the tweak 317 has been derived from encoded pointer 310, a cryptographic computing engine 370 can compute encrypted data 324 by encrypting unencrypted data 322 based on a data key 316 and the data tweak 317. In at least one embodiment, the cryptographic computing engine 370 includes an encryption algorithm such as a keystream generator, which may be embodied as an AES-CTR mode block cipher 372, at a particular size granularity (any suitable size). In this embodiment, the data tweak 317 may be used as an initialization vector (IV) and a plaintext offset of the encoded pointer 310 may be used as the counter value (CTR). The keystream generator can encrypt the data tweak 317 to produce a keystream 376 and then a cryptographic operation (e.g., a logic function 374 such as an exclusive-or (XOR), or other more complex operations) can be performed on the unencrypted data 322 and the keystream 376 in order to generate encrypted data 324. It should be noted that the generation of the keystream 376 may commence while the physical address 314 is being obtained from the encoded pointer 310. Thus, the parallel operations may increase the efficiency of encrypting the unencrypted data. It should be noted that the encrypted data may be stored to cache (e.g., 170) before or, in some instances, instead of being stored to memory 320.

Fig. 3B is a simplified flow diagram illustrating a general process 300B of cryptographic computing based on embodiments of encoded pointer 310. Process 300B illustrates obtaining (e.g., reading, loading, fetching, popping) data stored in a memory region at a memory address that is referenced by encoded pointer 310, where encryption and decryption of the data is bound to the contents of the pointer according to at least one embodiment. At least some portions of process 300B may be executed by hardware, firmware, and/or software of the computing device 100.

Generally, process 300B illustrates a cryptographic computing flow in which the encoded pointer 310 is used to obtain a memory address for a memory region of memory 320 where encrypted data is stored and, once the encrypted data is fetched from the memory region, to decrypt the encrypted data based, at least in part, on a tweak derived from the encoded pointer 310. First, address cryptography unit 302 decodes the encoded pointer 310 to obtain the decoded linear address 312, which is used to fetch the encrypted data 324 from memory, as indicated at 332. Data tweak 317 is derived, at least in part, from the encoded pointer 310. In this process 300B for loading/reading data from memory, the data tweak 317 is derived in the same manner as in the converse process 300A for storing/writing data to memory.

Once the tweak 317 has been derived from encoded pointer 310, the cryptographic computing engine 370 can compute decrypted (or unencrypted) data 322 by decrypting encrypted data 324 based on the data key 316 and the data tweak 317. As previously described, in this example, the cryptographic computing engine 370 includes an encryption algorithm such as a keystream generator embodied as AES-CTR mode block cipher 372, at a particular size granularity (any suitable size). In this embodiment, the data tweak 317 may be used as an initialization vector (IV) and a plaintext offset of the encoded pointer 310 may be used as the counter value (CTR). The keystream generator can encrypt the data tweak 317 to produce keystream 376 and then a cryptographic operation (e.g., the logic function 374 such as an exclusive-or (XOR), or other more complex operations) can be performed on the encrypted data 324 and the keystream 376 in order to generate decrypted (or unencrypted) data 322. It should be noted that the generation of the keystream may commence while the encrypted data is being fetched at 332. Thus, the parallel operations may increase the efficiency of decrypting the encrypted data.
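By way of illustration only, the following C sketch shows the store-path flow of process 300A in simplified form: a data tweak is derived from the encoded pointer, a keystream is generated from the tweak and the data key, and the logic function (here an XOR, per 374) combines the keystream with the plaintext. The primitive keystream64 is a hypothetical stand-in for the AES-CTR mode block cipher 372, and the choice to use the entire encoded pointer as the tweak is merely one of the options noted above; because XOR is its own inverse, the same flow with the same tweak performs the decryption of process 300B.

    #include <stdint.h>

    /* Hypothetical stand-in for keystream generator 372: produces one
     * 64-bit unit of keystream from the data key and the data tweak
     * (used as the IV/counter input). Not a real API. */
    extern uint64_t keystream64(uint64_t data_key, uint64_t data_tweak);

    /* Simplified store path: derive tweak 317 from encoded pointer 310,
     * generate keystream 376, and apply logic function 374 (XOR). */
    uint64_t cc_encrypt_for_store(uint64_t encoded_ptr, uint64_t data_key,
                                  uint64_t plaintext)
    {
        uint64_t data_tweak = encoded_ptr;            /* tweak derivation */
        uint64_t keystream = keystream64(data_key, data_tweak);
        return plaintext ^ keystream;                 /* logic function 374 */
    }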
Pointer encoding for cryptographic computing has typically been applied to heap, where the whole memory is encrypted and decrypted with the same encryption key (or data key). Pointer encodings for heap memory, however, do not adequately support memory accesses in stack. Whereas pointer encodings for heap do not precisely encode boundary information of a particular memory target, stack can include sensitive data for which precise boundary encoding is needed. For example, one representative encoding of a 64-bit heap pointer for a particular memory location involves taking a plaintext input pointer and encoding a power value that determines how many bits of a linear address, which is encoded in the pointer, can be adjustable (e.g., mutable) as an offset of the linear address. Some other bits of the linear address may be fixed and another portion of the linear address may be encrypted. Other metadata (e.g., version number) may also be included in the pointer. For example, a power field in the pointer having a value of 3 could cause the pointer to have 2^3, or 8, adjustable (mutable) bits as an offset of the linear address. The power encoding, however, does not provide the precise upper and lower boundary to ascertain the exact size of a particular object in memory. Accordingly, a different type of pointer encoding is described herein to enable pointer based cryptographic encoding to be used for stack.

Stack is defined by certain properties that distinguish it from heap. For instance, stack can be allocated when a thread is created, while heap is typically allocated at application startup. The size of stack varies but is much smaller than heap and is maintained by a program. For example, some default stack sizes are 8 Megabytes (e.g., Linux operating system) and some default stack sizes are 1 Megabyte (e.g., Microsoft® Windows). Stack can store various types of data, and some examples include local variable data, return addresses for active function calls, and parameters that are used for passing information between functions and/or between a function and the main program.

Vulnerabilities or weaknesses in operating systems, memory, and/or computer architecture are often targeted with exploits known as stack buffer overflows (or overruns) and stack smashing. Stack buffer overflows (or overruns) occur when a program writes to a memory address on the program's call stack outside the intended data structure, which can occur in response to programming error or malware. Such bugs usually result in corrupted data and can cause a program to crash. When a program runs with special privileges or accepts data from an untrusted source (e.g., another function), then a stack buffer overflow bug can become a potential security vulnerability.
For example, when a running program calls a function, the program notifies the CPU of which function to run, and a return address for the program may be loaded onto the stack by the CPU. The called function may be given the return address to use once it finishes executing and issues a return instruction to return to the program. If the called function is malicious, it could load untrusted executable code onto the stack, and this code could be injected into the running program and compromise the security of data and code associated with the program. Accordingly, securing data stored in stack can be highly desirable to protect against malware and against inadvertent or unintentional programming errors that can arise.

Fine-grained stack pointer encoding can solve these issues using new encoding techniques for stack pointers. Because stack has different properties than the heap (e.g., it is limited in size), fewer offset bits are required to represent the entire stack memory for a program. Accordingly, additional bits can be used in the pointer to create a strong encoding for the pointer. One or more embodiments described herein offer precise bounds control of an object stored in stack. Since the stack size is more limited, some of the upper address bits are fixed and do not change for different stack addresses. Therefore, a smaller number of bits can be used to represent an offset (e.g., 23 bits) and the upper address bits can be stored in memory or a register. In addition, some of the upper address bits that are fixed, along with precise size information of the memory allocation referenced by the pointer, can be encrypted using a tweak and an address key (also referred to herein as a "pointer key"). The tweak can include the variable base address (which can be fixed offset bits in the pointer) for a stack frame. The address key input for the encryption algorithm can be a dedicated address key generated for the particular running program (i.e., process) associated with the stack.

Fig. 4 is a diagram of an example pointer 410 for an object stored in stack according to at least one embodiment of the present disclosure. Stack objects are typically local variables that are used for a short amount of time during the execution of a program. Examples of data stored in stack can include, but are not necessarily limited to, local variable data, return addresses, and parameters for a function. Memory can be allocated for stack for a program when a program is initialized to run. A stack pointer can be generated and points to the top of the stack. Upper address bits in the stack pointer do not change during the program runtime, but the lower address bits (e.g., offset bits) can be changed depending on which stack frame is active. A new frame in the stack can be allocated for each function or subroutine that needs to use stack for its operations. A stack frame can store its own function or subroutine state information, local variables, and return address information for its caller. In one example, the linear address for a particular stack frame (which may be encoded in a stack frame pointer) can include upper address bits that do not change during the program runtime, and lower address bits that point to the top of the stack frame, for example.

Fig. 4 shows a cryptographically encoded 64-bit pointer (address) for an object stored in a stack frame in its base format, using exponent (power) metadata.
In the example shown, the encoded pointer includes a power size (exponent) metadata portion 412 (e.g., 5 bits in the example shown) indicating a size of an offset portion 418 (e.g., 6 bits in the example shown) of the pointer 410 (e.g., a number of low order address bits that comprise the offset portion 418 of the pointer 410; these bits may be manipulated freely by software for pointer arithmetic). In some embodiments, the power size metadata portion 412 may indicate a number of the offset bits based on a power of 2.

As shown in Fig. 4, the power size metadata portion 412 may indicate the number of bits that compose the immutable (or fixed) offset portion 416 and the mutable (or adjustable) offset portion 418. For stack, the total number of bits in full offset portion 415, which includes fixed offset 416 and mutable offset 418, may be a fixed number depending on the particular implementation and architecture. In one example, the offset portion includes 23 bits. The fixed offset 416 does not change for a function or program to which the stack frame belongs, while the mutable offset 418 may change depending on which object the linear address references.

In the encoded pointer 410, the total number of bits that make up the fixed offset portion 416 and the mutable offset portion 418 may be constant, with the sizes of the respective portions being dictated by the power size metadata portion 412. For example, if the power metadata value is 0, there are no mutable offset bits. In this case, all 23 bits compose the fixed offset 416. As a further example, if the power metadata value is 1, then there is one bit of mutable offset portion 418; if the power metadata value is 2, then there are 2 bits of mutable offset portion 418; and so on, up to the total number of offset bits 415 (e.g., 23 bits) of mutable offset, resulting in no fixed offset bits. The mutable offset 418 may be manipulated by software, e.g., for pointer arithmetic or other operations. An address in which all of the mutable offset bits are zero is the starting address for the power-of-two-aligned slot specified by the pointer. Other addresses with some non-zero mutable offset bits are addresses within the slot.

The ciphertext portion 414 (e.g., 32 bits in the example shown) of the pointer 410 may be encrypted with a small tweakable block cipher (e.g., a SIMON, SPECK, or tweakable K-cipher at a 32-bit block size, or other variable bit size tweakable block cipher). In one or more embodiments, the fixed offset portion 416 can be used as a tweak to generate ciphertext portion 414 from at least a portion of the upper address bits (e.g., 9 bits in the upper bits of the linear address, where this portion of the upper address bits is also called 'first upper address bits' herein) and a memory allocation size (e.g., 23 bits) for the object referenced by pointer 410. Ciphertext portion 414 can be adjacent to and include more significant bits relative to the fixed offset portion 416.

Some address bits compose the fixed offset portion 416 (e.g., 17 bits in the example shown) and may be used as part of the tweak for a tweakable block cipher used to encrypt the ciphertext portion 414. While these bits are also a plaintext (non-encrypted) portion of the address, they cannot be modified by software (e.g., pointer arithmetic) like the bits of mutable offset 418 without causing the ciphertext portion 414 to decrypt incorrectly.
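As a non-limiting illustration, the following C sketch interprets the power size metadata of a pointer formatted like pointer 410 and splits the 23-bit offset 415 into its fixed (416) and mutable (418) parts. The bit positions assumed below (offset 415 in bits 22:0, power size metadata 412 in bits 59:55) are chosen only for this example and are not mandated by the format.

    #include <stdint.h>

    #define OFFSET_BITS 23   /* full offset portion 415 in this example */

    /* Split the offset of an encoded stack pointer into its fixed (416)
     * and mutable (418) parts, as dictated by the 5-bit power size
     * metadata 412. Field positions are illustrative assumptions. */
    void split_offset(uint64_t encoded_ptr,
                      uint64_t *fixed_offset, uint64_t *mutable_offset)
    {
        unsigned power = (encoded_ptr >> 55) & 0x1F;  /* power size 412 */
        if (power > OFFSET_BITS)
            power = OFFSET_BITS;                      /* clamp for safety */
        uint64_t offset = encoded_ptr & ((1ULL << OFFSET_BITS) - 1);
        uint64_t mut_mask = (power == 0) ? 0 : ((1ULL << power) - 1);
        *mutable_offset = offset & mut_mask;          /* low `power` bits: 418 */
        *fixed_offset   = offset & ~mut_mask;         /* remaining bits: 416 */
    }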
The base pointer format shown in Fig. 4 allows for cryptographically precisely defining the bounds of objects and their location in stack. In some cases, the exponent/power size metadata portion 412 could be provided as a separate parameter in addition to the pointer; however, in some cases (e.g., as shown) the bits of the power size metadata portion 412 may be integrated with the pointer 410 to provide legacy compatibility.

It should also be noted that, in an alternative implementation, the power size metadata portion 412 may indicate the number of bits that compose the fixed offset 416, and thus dictate the number of bits remaining to make up the mutable offset 418. For example, if the power metadata value is 0, there are no fixed offset bits (e.g., 416) and all 23 offset bits may be manipulated by software. As a further example, if the power metadata value is 1, then there is one bit of fixed offset; if the power metadata value is 2, then there are 2 bits of fixed offset; and so on, up to the maximum number of offset bits (e.g., 23 bits), resulting in no mutable offset (e.g., 418), and thus, no bits that can be manipulated by software.

Also, although pointer 410 is illustrated and described based on using 32 bits for the ciphertext portion 414, the pointer format is not intended to be so limited. The address slice to be encrypted may be selected based on readily available 32-bit block encryption ciphers. However, an encryption cipher using any other block size (e.g., 27, 16, variable, etc.) may be used instead. If the number of ciphertext bits is adjusted (upward or downward), the remaining address bits to be encoded (e.g., fixed and mutable offset portions) may be adjusted accordingly.

In one or more embodiments, power size metadata portion 412 of pointer 410 may accommodate special values to indicate how the pointer 410 is to be handled by software using the pointer. In one embodiment, special values may be defined to indicate that the pointer is to be treated as a conventional or legacy pointer (e.g., not as a cryptographically encoded pointer). For example, reserved values 11111 and 00000 may indicate the pointer is a conventional or legacy pointer (as these are the legacy non-canonical encodings for the upper linear address bits between user and supervisor space). Any other values can indicate that the pointer is encoded as a cryptographically encoded pointer. Thus, both types of pointers (e.g., conventional and cryptographically encoded) can potentially be used in the same address space. In other embodiments, one or more of the most significant bits in a cryptographically encoded pointer may be reserved to indicate whether the pointer is a legacy pointer or a cryptographically encoded pointer. For example, the two most significant bits may be encoded as reserved bits. When the reserved bits have the same value, this indicates that the pointer is a legacy pointer. In yet another embodiment, the two most significant bits may be encoded as a tag/version number (e.g., a random or deterministically different value).
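The reserved-value convention described above may be illustrated with the following C sketch, which again assumes the power size metadata 412 occupies bits 59:55; the specific field position is an assumption made only for the example.

    #include <stdint.h>
    #include <stdbool.h>

    /* Power size values of all ones (11111) or all zeroes (00000) mark a
     * conventional (legacy) pointer; any other value marks a
     * cryptographically encoded pointer. */
    bool is_legacy_pointer(uint64_t ptr)
    {
        unsigned power = (ptr >> 55) & 0x1F;    /* power size metadata 412 */
        return power == 0x1F || power == 0x00;  /* 11111 or 00000 => legacy */
    }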
When a processor is running in a cryptographic mode and accessing memory using an encoded pointer (address) (e.g., a pointer formatted in the same or similar manner to pointer 410 of Fig. 4) to get the actual linear/virtual address memory location, the processor takes the encoded address format and decrypts the ciphertext portion (e.g., 414 of Fig. 4) using the variable number of fixed offset bits (e.g., 416 in Fig. 4) determined by the power size/exponent metadata bits (e.g., 412 of Fig. 4) and a secret key. In some instances, the power size/exponent metadata 412 and/or other metadata or context information may be included as part of the tweak for decrypting the ciphertext portion 414 (also referred to herein as "address tweak"). If the address decrypts incorrectly, the processor may cause a general protection fault (#GP) or page fault due to the attempted memory access with a corrupted linear/virtual address.

As used herein, "context information" is intended to include any metadata or other information related to a memory allocation, its associated memory address, its associated pointer, the software for which the memory was allocated, and/or the contents of the allocated memory. For example, context information may include, but is not limited to, one or more of a size indicating the number of bits that compose fixed and mutable offset portions of a pointer, a tag containing randomized bits associated with the memory address, permission information indicating access permissions for the data stored in the allocated memory, a version number of a pointer that may be used for reassigning/revoking pointers that were previously assigned to a program, a type or class of the data stored in the allocated memory, a privilege level indicating a user or supervisor mode of the software for which the memory was allocated, and a crypto (cryptographic) context identifier including a randomized or deterministically unique value for a memory address. One or more pointer encoding embodiments may use any single item of context information as part of a tweak (address tweak or data tweak), or may use any suitable combination of context information items.

Context information may be stored in any type of storage, which may be based on particular needs and implementations. For example, one or more items of context information may be embedded in a standard-sized (e.g., 64-bit) pointer, such as pointer 310. In this scenario, the context information may be stored in the uppermost bits in place of, or in addition to, the power size metadata. Other example types of storage for context information include, but are not necessarily limited to, embedding the context information in a pointer that has been enlarged to fit more or bigger tweaks (e.g., a 128-bit pointer, a 256-bit pointer, etc.), embedding the context information within another operand that is provided to the pointer encryption instructions and to the data access instructions, and/or storing the context information in a control register. A control register may be automatically selected by an instruction to be used as a crypto input (e.g., if there is just one register storing that type of tweak). Otherwise, a control register may be selected using some other instruction operand such as a field in the pointer itself or in a context operand supplied with data access instructions (e.g., special load and store instructions) configured for the particular operand encoding embodiment. For example, an index field of an access instruction could be used to select a register containing a key or tweak for the data (or code). Generally, for tweaks that are only updated when switching contexts, the item(s) used for the tweak may be especially suited for storage in a register. Other tweaks that are more closely associated with a particular pointer may be more suitable for being embedded in the pointer or passed in an instruction operand.
As previously noted, however, any item of context information may be embedded or stored in any type of storage.

Referring now to Fig. 5, a flow diagram 500 is shown illustrating example operations for securing a pointer (e.g., a linear address to an object in stack). An object can be any data that can be stored in memory and manipulated by a program. Examples of objects include, but are not necessarily limited to, data structures, data composites, and data elements (e.g., which may be within a data structure or data composite), which include any type of primitives or non-primitives. Portions of the process 500 may be executed by hardware, firmware, and/or software of the computing device 100 (e.g., by the processor 102 executing the address encoding/encrypting logic 152, address cryptography unit 104, 302).

The process may begin in response to an implicit memory allocation in a program for an object to be stored in stack memory. Examples of implicit memory allocations include, but are not necessarily limited to, memory allocations for local variables, return addresses to a calling program or function, and parameters passed to functions or programs. In one example, a program may declare a local variable, and the memory can be automatically allocated on stack for the variable without an explicit request in the program code.

At 502, the size of the memory allocation is determined and can be based on the particular variable for which memory is requested. If other metadata is needed to encode the pointer, that metadata may also be obtained. At 504, a linear address to a variable base address in stack where an object is to be stored is obtained. The linear address can be obtained based upon the current stack pointer or frame pointer. The linear address can reference a location in stack, within the current (or active) stack frame.

At 506, the upper address bits of the linear address can be saved in memory. For example, the upper address bits may include all of the address bits that are more significant than the fixed offset. The upper address bits can be saved in registers or can potentially be encrypted and saved in lower level memory.

At 508, range metadata to define the number of fixed and mutable offset bits is determined. In some embodiments, the range metadata includes a power or "exponent" to determine the 2's power of the memory range size (effectively determining the number of fixed and mutable offset bits). In some cases, an "adjustment" is used to force values to the end of the 2's power range. In other embodiments, the adjustment may be used to force the buffer to the beginning of the 2's power range when buffer "underflow" needs to be addressed (as opposed to buffer "overflow"). Using the exponent metadata, any 2's power memory range can be defined (e.g., 2, 4, 8, 16, ..., 2^64).

At 510, the power metadata and the memory allocation size can be stored in the non-canonical bits of the linear address and may replace a portion of the upper address bits. For example, a predetermined number of upper address bits (e.g., 9 bits) may be used to generate the ciphertext portion 414 of an encoded pointer. Accordingly, the power metadata and memory allocation size can be stored in bits that are higher than these first upper address bits to be included in the encryption to generate the ciphertext.
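A minimal C sketch of the exponent determination at 508 follows; it simply finds the smallest 2's power range that covers the requested allocation size, and all names are illustrative.

    #include <stdint.h>

    /* Step 508 (simplified): derive the exponent ("power") such that the
     * 2's power range 2^power covers the requested allocation size,
     * effectively determining the number of mutable offset bits. */
    unsigned range_power(uint64_t alloc_size)
    {
        unsigned power = 0;
        while (power < 63 && (1ULL << power) < alloc_size)
            power++;                /* smallest power of two >= alloc_size */
        return power;
    }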
Although not shown in Fig. 4, some additional reserved bits (e.g., 2-4 bits) may be used for other purposes as previously described herein (e.g., legacy encoding, tag metadata, version metadata).

At 512, the upper address bits in the pointer along with the memory allocation size metadata may be encrypted using a secret address key and an address tweak. An address key may be a key that is defined for a particular running program (or process) to be used for pointer address encryption and decryption. The key may be created in any suitable manner as described herein.

As used herein, a "tweak" may refer to, among other things, a second input to a block cipher, in addition to the usual plaintext or ciphertext input and the key (e.g., the secret key 116(1)-116(N)). In at least some embodiments, a tweak may compose all or part of an initialization vector (IV) for a block cipher. Encrypting the memory allocation size metadata along with a portion of the upper address bits of the linear address enables the computing device 100 to detect when the pointer has been illegally changed, because the encryption algorithm will cause the illegally-changed bits to produce a random sequence of bits that are non-deterministic to an adversary, which likely results in a fault when the illegally-changed pointer is used.

In at least one embodiment, the portion of the pointer to be encrypted (e.g., the memory allocation size and some portion of upper address bits) is encrypted using a cipher mode encryption algorithm, such as a tweakable block cipher, using the fixed offset (e.g., 416) and the power metadata (e.g., 412) as a tweak. The fixed offset may be padded with zeros to provide a complete initialization vector input for the block cipher. Some examples of tweakable block ciphers include: K-cipher, XOR-encrypt-XOR (XEX), Liskov, Rivest, and Wagner (LRW), and XEX-based tweaked-codebook mode with ciphertext stealing (XTS). Other bit diffusion methods in which any single bit change in the ciphertext results in changes across the entire decrypted plaintext can be used. If desired, alternative embodiments can trade off security for performance by using non-cryptographic methods that still achieve reasonable bit diffusion analogous to a block cipher.

In some embodiments, the cipher has sufficient bit diffusion so that any bit change made to the encrypted address bits will equally affect (cascade through) all bit positions when decrypted. This ensures that any change or bounds violation produces a corrupted address. Using this method, if the adversary attempts to tamper with the metadata (e.g., the exponent or adjustment values, or the encrypted most significant bits) the resulting decoded address will be corrupted. In the 64-bit address space, address corruption will result in a fault with high probability, thus allowing the address corruption (and pointer access or bounds violation) to be caught by the privileged system component 142 (e.g., an operating system/executive/VMM/alternative mode/debug trace/management processor/subsystem, etc.).
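For illustration of the structure of the encryption at 512 only, the following C sketch implements a toy 32-bit tweakable block cipher as a four-round Feistel network in which the tweak enters every round. It is deliberately simple, offers no security, and merely stands in for a real cipher such as K-cipher, XEX, LRW, or XTS; all names are illustrative.

    #include <stdint.h>

    /* Toy round function: mixes the key, the tweak, and the round number
     * into one 16-bit half. Need not be invertible in a Feistel network. */
    static uint16_t round_fn(uint16_t half, uint64_t key, uint64_t tweak, int r)
    {
        uint32_t x = half ^ (uint16_t)(key >> (r * 16)) ^ (uint16_t)(tweak >> (r * 8));
        x = (x * 0x9E37u) & 0xFFFFu;              /* simple mixing step */
        return (uint16_t)((x >> 3) | (x << 13));  /* 16-bit rotation */
    }

    /* Toy tweakable encryption of a 32-bit pointer slice. */
    uint32_t toy_tweakable_encrypt(uint32_t block, uint64_t key, uint64_t tweak)
    {
        uint16_t l = (uint16_t)(block >> 16), r = (uint16_t)block;
        for (int i = 0; i < 4; i++) {             /* four Feistel rounds */
            uint16_t t = l ^ round_fn(r, key, tweak, i);
            l = r;
            r = t;
        }
        return ((uint32_t)l << 16) | r;
    }

    /* Inverse: runs the rounds in reverse order with the same key/tweak. */
    uint32_t toy_tweakable_decrypt(uint32_t block, uint64_t key, uint64_t tweak)
    {
        uint16_t l = (uint16_t)(block >> 16), r = (uint16_t)block;
        for (int i = 3; i >= 0; i--) {
            uint16_t t = r ^ round_fn(l, key, tweak, i);
            r = l;
            l = t;
        }
        return ((uint32_t)l << 16) | r;
    }

A real embodiment would, per the discussion above, select a cipher with strong bit diffusion so that any single-bit change to the ciphertext or the tweak cascades through all plaintext bit positions on decryption.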
At 514, once the appropriate metadata and the portion of upper address bits have been encrypted in the pointer, the resulting cryptographically encoded pointer can be returned to the memory manager to be used for accessing the object in the program. The output may be an encoded pointer that may be the same or similar to encoded pointer 410, for example.

Referring now to Fig. 6, an example process 600 for decoding an encoded pointer is shown. Portions of the process 600 may be executed by hardware, firmware, and/or software of the computing device 100 (e.g., by the processor 102 executing read or write (e.g., PUSH/STORE or POP/LOAD) instructions of a program and/or the address decoding/decrypting logic 154, address cryptography unit 104, 302). Process 600 may begin in response to an implicit memory access request in a program for an object to be stored into stack memory (e.g., pushed) or read from stack memory (e.g., popped). Examples of a memory access request include, but are not necessarily limited to, PUSH and POP instructions in program code.

At 602, the encoded pointer (e.g., the encoded address 206, which may be obtained from a register 110) to a memory location associated with the memory access request is obtained. At 604, the encrypted portion of the encoded pointer is decrypted using the same secret address key and address tweak as used to perform the encryption at 512 of Fig. 5. The decryption generates data that includes the memory allocation size for the variable referenced by the pointer, and a portion of the upper address bits of the linear address encoded in the pointer.

At 606, the decrypted portion of upper address bits is compared to the corresponding portion of upper address bits that were stored in memory when the pointer was encoded, for example at 506 of Fig. 5. If the decrypted upper address bits match the stored upper address bits, this serves as a verification that the memory allocation size metadata has not been corrupted. At 608, a determination can be made as to whether the comparison indicated a match. If the decrypted portion of upper address bits and the stored portion of upper address bits do not match, then at 610, a fault can be raised.

If it is determined that the decrypted portion of upper address bits and the stored portion of upper address bits match at 608, then at 612, a determination can be made as to whether the memory address (i.e., the linear address decoded at 604) is within the bounds allocated for the variable. For example, a check can be performed to determine whether the linear address is less than the variable base address plus the memory allocation size. This is because the variable owns the data (e.g., data element, data structure, etc.) residing from the variable base address to the variable base address plus the memory allocation size of the variable (e.g., <base address, base address + size>). In one example where the variables are not aligned, the variable base address of the memory allocation may be stored in a register as a result of a compiler-added instruction. In this example, a compiler may be modified to emit code to load the variable base address before the variable is used. Thus, when an instruction attempts to access the variable, the cryptographically encoded pointer can be decrypted and decoded to obtain the linear address and the memory allocation size. If the variables are aligned, however, the variable base address does not need to be passed to the processor during pointer decoding.

A verification that the linear address is valid can be performed at 612 before the memory access request is performed. If it is determined that the memory address is not valid (e.g., if the linear address is not less than the variable base address + memory allocation size), then at 614, a fault can be raised. Otherwise, at 616, when both the integrity check and the check on the bounds of the memory allocation succeed, the read or write (e.g., pop/load or push/store) request can be completed.
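The checks of 606 through 616 may be sketched in C as follows, where decrypted_upper is produced by the decryption at 604 and stored_upper is the copy saved at 506 of Fig. 5; all names are illustrative and the fault-raising mechanics are abstracted into a Boolean result.

    #include <stdint.h>
    #include <stdbool.h>

    /* Combined integrity check (606/608) and bounds check (612) of
     * process 600. Returns true when the access may complete (616). */
    bool access_permitted(uint64_t decrypted_upper, uint64_t stored_upper,
                          uint64_t linear_addr, uint64_t var_base,
                          uint64_t alloc_size)
    {
        if (decrypted_upper != stored_upper)      /* 606/608: integrity     */
            return false;                         /* 610: raise fault       */
        if (linear_addr < var_base ||             /* 612: bounds:           */
            linear_addr >= var_base + alloc_size) /* base <= addr < base+size */
            return false;                         /* 614: raise fault       */
        return true;                              /* 616: complete access   */
    }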
Fig. 7 is a more detailed flow diagram illustrating an example process 700 of generating a cryptographically encoded pointer 730 (also referred to herein as "encoded pointer") from an unencoded pointer 710 containing at least a portion of a memory address (or linear address) and other metadata, and binding the cryptographically encoded pointer 730 to encryption of data referenced by the pointer. Encryption of the data is bound to the contents of the pointer according to at least one embodiment. At least some portions of process 700 may be executed by hardware, firmware, and/or software of the computing device 100. In the example shown, pointer 710 is embodied as a 64-bit encoded linear address (before any cryptographic functions are performed) including a 4-bit tag/version portion 701, a 5-bit power size metadata portion 702, a 23-bit memory allocation size 704, a 9-bit first upper address bits portion 705, and a plaintext portion 706 of the memory address. Plaintext portion 706 can include a variable-bit fixed offset portion 707 and a variable-bit mutable offset portion 708. In some embodiments, the memory allocation size 704 may be made smaller than the combination of the fixed offset bits 707 and the offset bits 708 to fit a lower limit field within the pointer. The lower limit may be added to the starting address of the power-of-two-aligned slot specified by the pointer to compute the variable base address. Both the memory allocation size 704 and the lower limit may be multiplied by some power of two determined by the power size such that the maximum size and limit can be specified for large allocations.

When an encoded pointer 730 is cryptographically encoded for the first time, an instruction to encrypt and encode the pointer 730 (e.g., EncryptPtr instruction) may be used. The instruction can be configured to accept the base address of the memory allocation and the exact size of the memory allocation (e.g., memory allocation size 704) as operands. The power size 702 may be derived from these operands.

In this embodiment, the encoded pointer may not have enough room to carry all of the memory address bits. Therefore, upper address bits 715 (which do not change for the stack memory) of the memory address may be stored in a register or other memory to be combined with fixed offset bits 707 and offset 708 when encoded pointer 730 is decoded to form a linear address that can be used for memory accesses. Upper address bits 715 include first upper address bits 705 and second upper address bits 703. The first upper address bits 705 are also stored in unencoded pointer 710 and are encrypted to form part of the encrypted pointer slice 732 of encoded pointer 730. The first upper address bits 705 that are encrypted as part of encrypted pointer slice 732 may be used as an integrity check during memory access operations to verify the integrity of the encrypted pointer slice 732 by comparing the decrypted first upper address bits with the corresponding first upper address bits 705 stored in memory. By verifying the integrity of encrypted pointer slice 732, the integrity of memory allocation size 704 can also be verified.

Generally, pointer 710 can be used to generate a cryptographically encoded pointer having a similar configuration to other cryptographically encoded pointers described herein (e.g., 410).
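As a non-limiting illustration of the field arrangement of unencoded pointer 710, the following C sketch packs the five fields into a 64-bit value (4 + 5 + 23 + 9 + 23 = 64 bits); the exact bit positions are assumptions made for the example.

    #include <stdint.h>

    /* Pack the fields of unencoded pointer 710 (Fig. 7). Positions are
     * illustrative: tag/version 701 in bits 63:60, power size 702 in
     * 59:55, allocation size 704 in 54:32, first upper address bits 705
     * in 31:23, and the 23-bit plaintext offset (707 + 708) in 22:0. */
    uint64_t pack_pointer_710(uint64_t tag, uint64_t power, uint64_t alloc_size,
                              uint64_t first_upper, uint64_t offset)
    {
        return ((tag         & 0xF)      << 60) |   /* 701 */
               ((power       & 0x1F)     << 55) |   /* 702 */
               ((alloc_size  & 0x7FFFFF) << 32) |   /* 704 */
               ((first_upper & 0x1FF)    << 23) |   /* 705 */
                (offset      & 0x7FFFFF);           /* 706 (707 + 708) */
    }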
However, pointer 710 includes a tag/version portion 701, which may be a random or deterministically different value. In other embodiments, the four upper bits may be reserved bits that allow cryptographically encoded pointers to be used concurrently with legacy pointers. For example, the most significant bit can be used to indicate whether the address is located within the supervisor address space (e.g., "1") or within the user mode address space (e.g., "0"). The next most significant bit can be set to the opposite value of the supervisor bit to indicate that the pointer is cryptographically encoded or can be set to the same value of the supervisor bit to indicate that the pointer is not cryptographically encoded. In other embodiments, the legacy encoding may be achieved without dedicated reserved bits. Instead, legacy encoding can be achieved by encoding particular values in the power size metadata portion 702 (e.g., all 1s, all 0s). If the pointer 710 includes the tag/version portion 701, then these bits may also be encoded with the particular values (e.g., all 1s, all 0s) to allow legacy and conventional encoding to be used concurrently. In yet other embodiments, legacy encoding may be eliminated entirely if, for example, the concepts are not implemented to be compatible with legacy programs.

It should be noted that the power size metadata portion 702 may not be encrypted as it is used to determine the number of bits in the mutable and fixed plaintext portions of the pointer and, therefore, the number of bits used in the address tweak (e.g., fixed offset portion 707). The tag/version portion 701, however, is not used to determine the size of the address tweak. Therefore, the tag/version portion 701 may alternatively be included as part of the encrypted portion of the address (i.e., ciphertext 732) as long as the tag/version portion 701 is not used in the address tweak. In this alternative embodiment, the block cipher would have a correspondingly larger block size to fit the tag/version portion, or the address bits included in the ciphertext would be reduced and a corresponding number of address bits would be included in the plaintext portion (i.e., 707 and 708). Additionally, it should be noted that, although the process 700 is illustrated with the encoding shown in pointer 710, which includes a tag/version (or reserved bits) portion 701, process 700 could be performed with other pointer encodings having a power size metadata portion, such as pointer 410, which does not include a tag/version (or reserved bits) portion, or which includes different metadata. In this scenario, the tag/version (or reserved bits) portion may simply be eliminated from the address tweak.

The operations of process 700 are identified in three phases: address encryption 770A (Phase I), pointer encoding 770B (Phase II), and data encryption 770C (Phase III). In Phase I, a portion of the unencoded pointer 710 (also referred to herein as "pointer slice") may be encrypted. In this example, the memory allocation size 704 and the first upper address bits 705 embedded in the unencoded pointer 710 are encrypted by a cryptographic algorithm such as a tweakable block cipher 720 using an address key 718 and an address tweak 716. The address tweak 716 can comprise multiple address encryption factors. In one example, a first address encryption factor could include the power size metadata portion 702, and a second address encryption factor could include fixed offset portion 707, which may be padded with zeros.
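A minimal C sketch of assembling such an address tweak follows; zeroing the mutable offset positions implements the zero padding, and the placement of the power size bits within the tweak is an arbitrary illustrative choice.

    #include <stdint.h>

    /* Assemble an address tweak in the style of tweak 716: the power size
     * metadata concatenated with the fixed offset, where the mutable
     * offset positions are padded with zeros. All layout choices here
     * are assumptions for illustration. */
    uint64_t make_address_tweak(uint64_t power, uint64_t full_offset)
    {
        if (power > 23)
            power = 23;                            /* offset is 23 bits */
        uint64_t mut_mask = (power == 0) ? 0 : ((1ULL << power) - 1);
        uint64_t fixed = full_offset & ~mut_mask;  /* zero the mutable bits */
        return (power << 32) | fixed;              /* power || padded offset */
    }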
It should be apparent that other context information could also be used in one or more embodiments as additional address encryption factors and may be added as part of address tweak 716 or as a separate input for the cryptographic algorithm.

In some embodiments, the address tweak 716 can also include bits of tag/version portion 701. The power size metadata portion 702 is used to determine the number of bits in fixed offset portion 707 and the number of bits in mutable offset portion 708, which equals the number of bits for zeroes padding in the address tweak 716. In at least some embodiments, an additional one or more items of variable length metadata may also be used as part of address tweak 716 for the tweakable block cipher 720. For example, the variable length metadata may include other context information or metadata (e.g., permissions, privilege level, type, location, ownership, etc.) as previously described herein. In yet further embodiments, a crypto context identifier register may be used as part of address tweak 716. The crypto context identifier register may contain a unique value (e.g., randomly or deterministically generated) associated with a particular functional group (e.g., processes, subset of processes, virtual machines (VM), subset of VMs, etc.). The block cipher 720 may be any suitable encryption algorithm (e.g., a tweakable version of a 32-bit block size cipher such as SIMON, SPECK, K-cipher, or other variable block size cipher, or, for larger addresses, PRINCE, XTS-AES block cipher, LRW, AES-CTR mode, etc. may be used) as noted herein.

When a ciphertext portion (encrypted pointer slice) 732 has been generated by encrypting selected portions of the unencoded pointer 710 (e.g., memory allocation size 704 and the first upper address bits 705), then an encoded linear address (or encoded pointer) 730 can be formed in Phase II at 770B. In at least one embodiment, the uppermost bits (e.g., tag/version portion 701 and power size portion 702) can be set to the same bit value (e.g., 0 or 1). In addition, the bits of the fixed offset portion 707 and mutable offset portion 708 make up the lower bits of the encoded pointer 730. Generally, the cryptographically encoded pointer 730 has a similar configuration to other cryptographically encoded pointers described herein (e.g., 310). However, as previously described, encoded pointer 730 optionally includes a tag/version portion 701, which may be a random or deterministically different value.

In at least one embodiment, the cryptographically encoded pointer 730 can be used as a data tweak for data 746 to be encrypted and stored on stack. Data 746 could include any type of data such as data elements, data structures, data composites, objects, arrays, linked lists, integers, shorts, longs, floating point values, and any other value that can be stored and manipulated by program code.

The data 746 to be pushed to stack is encrypted by a cryptographic algorithm such as keystream generator 750. In at least one embodiment, keystream generator 750 can be implemented as an AES-CTR mode block cipher, at a particular size granularity (any suitable size). In one example, inputs to the keystream generator 750 can include a data key and a data tweak. The data tweak 744 can comprise multiple data encryption factors. In one example, a data encryption factor could include at least a portion (and possibly all) of the encoded pointer 730, which references the data 746 to be encrypted.
In this embodiment, the contents of the cryptographically encoded pointer are used as the initialization vector (IV) or data tweak 744, with the mutable offset (e.g., 708) being used as the counter value (CTR). Keystream generator 750 encrypts data tweak 744 based on a data key 742 to generate a keystream 751. Data encryption may be indirectly bound to the values of the modified mutable offset bits, since those bits may be incorporated in the tweak used to generate an encrypted pointer slice (ciphertext) 732.

If the data to be encrypted crosses one or more block-aligned boundaries, the keystream generator 750 may be re-invoked for the subsequent blocks with the data tweak 744 being increased by an amount equal to the block size each time that it is re-invoked. A suffix of the generated keystream 751 may be unneeded and thus discarded. A logic function 752 (e.g., an XOR operation or other suitable operations or combinations thereof) may then be performed on keystream 751 and an input data block (or cache line) 746 selected from the data in a processor register. The granularity of the input data block 746 matches the keystream 751 output of the keystream generator 750, and the logic function 752 produces an encrypted output data block 762.

The encrypted data 762 can be written (e.g., stored, pushed, copied, moved, transferred, etc.) to memory based on the linear address encoded in the cryptographically encoded pointer 730. Thus, while the cryptographically encoded pointer is being generated, the decoded linear address may be stored in a register, for example, until the write operation is completed. The stored, encrypted data 762 can subsequently be retrieved from memory by decoding the cryptographically encoded pointer 730 to obtain the decoded linear address, and then using the decoded linear address to load/pop/read the encrypted data 762. The encrypted data 762 can then be decrypted using the same data key 742 and data tweak 744 that was used during encryption.

When a read operation is performed, the same operations shown in Fig. 7 can be performed on an encoded pointer (instead of unencoded pointer 710) and encrypted data (instead of unencrypted data 746) to achieve an opposite result. The encrypted pointer slice 732 can be decrypted by tweakable block cipher 720 using address key 718 and a tweak that includes fixed offset bits 707 and power size 702, both from the encoded pointer 730. The resulting decrypted first upper address bits can be combined with second upper address bits 703 stored in memory, the fixed offset bits 707, and the offset 708 to form a decoded linear address. The decoded linear address can be used to fetch encrypted data referenced by the linear address. The encrypted data can be read from cache/memory and the same operations can be performed. The encoded pointer 730 (or a portion thereof) can be used as a tweak input into keystream generator 750, along with data key 742. Keystream generator 750 can produce a keystream output 751, and the encrypted data from a processor register may be XORed with the keystream output 751 for the encoded pointer 730 (or other appropriate logic functions may be performed), and the resulting decrypted data loaded into a register.
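The block-crossing behavior described above may be sketched in C as follows. The primitive keystream_block is a hypothetical stand-in for keystream generator 750; the tweak is increased by the block size on each re-invocation, any unneeded keystream suffix is simply not used, and, because the logic function is an XOR, the same routine serves both the store (encrypt) and load (decrypt) paths.

    #include <stdint.h>
    #include <stddef.h>

    #define BLOCK_SIZE 16   /* e.g., AES block size in bytes */

    /* Hypothetical stand-in for keystream generator 750: fills one block
     * of keystream from the data key and the current tweak value. */
    extern void keystream_block(uint64_t data_key, uint64_t tweak,
                                uint8_t out[BLOCK_SIZE]);

    /* XOR a buffer with a tweak-derived keystream, re-invoking the
     * generator per block with the tweak increased by the block size. */
    void ctr_xor(uint64_t data_key, uint64_t tweak, uint8_t *data, size_t len)
    {
        uint8_t ks[BLOCK_SIZE];
        for (size_t off = 0; off < len; off += BLOCK_SIZE) {
            keystream_block(data_key, tweak, ks);
            tweak += BLOCK_SIZE;                      /* advance per block */
            size_t n = (len - off < BLOCK_SIZE) ? (len - off) : BLOCK_SIZE;
            for (size_t i = 0; i < n; i++)            /* unused keystream  */
                data[off + i] ^= ks[i];               /* suffix discarded  */
        }
    }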
Fig. 8 is a simplified block diagram that illustrates a compiler flow 800 for embedding an instruction into compiled code according to at least one embodiment. As shown in flow 800, in one or more embodiments, a compiler 820 can be enhanced to pass a variable base address that is stored in stack.

In compiler flow 800, software programming code 810 may be provided to compiler 820. The programming language used to produce the programming code may be any suitable programming language based on particular needs and implementations, including, for example, C++, Rust, Swift, etc. Compiler 820 knows a priori the variable base addresses that are allocated in the programming code 810 and its associated functions, such as Function B code 812. Compiler 820 can determine where variable data, such as object X, is accessed in the programming code 810 or its functions and can emit code 822 to load the variable base address of the object X before object X is accessed by another instruction in the code.

An example is illustrated for Function B code 812 and operations that may be performed at 830 when the compiled code is executed. For example, if Function B code 812 declares object X as an integer and subsequent code uses object X in an arithmetic operation, the compiler 820 may emit a load instruction (e.g., 822) into the compiled code 812, just prior to the arithmetic instruction, to load the variable base address of object X into a register. When Function B code 812 is executed, an instruction that declares a variable (e.g., for object X) causes the creation of a cryptographically encoded pointer 832 that can be used to access object X at the variable base address. Subsequently, just prior to another instruction that uses object X, a load instruction that was added by compiler 820 may load the variable base address of object X into a register 834. The subsequent instruction that accesses object X can use the variable base address stored in register 834 to perform a check on the bounds for object X to ensure that the linear address encoded in the cryptographically encoded pointer is valid (e.g., 612).

In other embodiments, the code emitted by compiler 820 may include a store instruction to store the variable base address to memory, rather than a register, or to other memory in the memory hierarchy.
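By way of illustration only, the following annotated C fragment marks where an enhanced compiler in flow 800 might emit the additional instructions; the comments describe hypothetical compiler behavior, and no actual instruction names or intrinsics are implied.

    /* Illustrative source in the style of Function B code 812; comments
     * mark where compiler 820 may emit extra code. All behavior shown in
     * the comments is hypothetical. */
    int function_b(void)
    {
        int x = 5;    /* declaration: a cryptographically encoded pointer
                       * to x may be created (e.g., at 832) */
        /* compiler-emitted load (e.g., 822): place the variable base
         * address of x in a register (e.g., 834) before x is used */
        return x + 1; /* the access uses the base address in the register
                       * to check the bounds for x */
    }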
One or more embodiments of pointer encodings disclosed herein can provide fine-grained protection in both stack and heap memory allocations. For example, data structures containing multiple data elements can be allocated in heap and/or stack. Arrays, linked lists, and composite structures containing subfields are a few nonlimiting examples of data structures that can contain many data elements for which individual memory access may be desirable. Within a given data structure, multiple types of data may be defined. For example, individual data elements within a data structure may include an integer, followed by a character, followed by a floating point number, and so on. In some scenarios, it may be desirable to access and protect individual data elements that are primitive data types such as characters, integers, short, long, float, double, etc. Current cryptographic pointer encoding can bind a pointer to the data structures using various pointer encodings, at least some of which are described herein, and more broadly described with reference to Figs. 3A-3B. However, fine-grained access to, and protection of, the individual data elements using cryptographically encoded pointers to those individual data elements may be desired in some scenarios and implementations.

One or more embodiments using data type based encoding can enable fine-grained access and protection using cryptographically encoded pointers to variables within a data structure in either stack or heap allocated memory. The data access instructions that are used to access data can be leveraged to determine how the pointer to the data and/or the data itself gets encrypted and decrypted. This can be achieved when different data types are implicit in the instructions that access the data. For example, some instructions for primitive data types have variances for different data types, and those variances can be used to infer the data type of a data element being accessed or stored. For example, a move instruction (e.g., MOV) and arithmetic instructions (e.g., ADD, SUB, etc.) of Intel® x86-64 and IA-32 Architectures implicitly indicate a data type. In particular, the op code of an ADD instruction can be different depending on the type of data (e.g., short, int, and long) that is being accessed, where the types of data are differentiated by size. The default size specified by the opcode may also be modified by an operand size prefix in the instruction encoding. Other architectures may also specify implicit data sizes with instruction encodings. A 2-byte short variable may be added using an addw (add word) instruction, while a 4-byte integer variable may be added using an addl (add long) instruction, and an 8-byte long variable may be added using an addq (add quadword) instruction. Accordingly, when an object is accessed in a way that requires it to be moved out of or into memory, the particular instruction being used to perform the operation can be identified and, based on the op code of that particular instruction, a data type of the data element being accessed can be inferred. The inferred data type can then be used for decrypting/encrypting the pointer and/or for decrypting/encrypting the accessed data. Various different instruction set architectures (ISAs) use different op codes depending on the data type of the data being accessed, any of which can be leveraged to implement data type based encoding as disclosed herein.

In another embodiment, prefixes can be added to some instructions and the prefixes can contain more precise data type information than what can be derived from an instruction itself. In addition, a prefix may also be used in some scenarios to override pointer encryption/decryption and/or data encryption/decryption. Thus, the source from which information is derived to perform pointer encryption and decryption and/or to perform data encryption and decryption can be expanded by one or more embodiments disclosed herein beyond a key and information derived from an encoded pointer to the data. One or more embodiments add a new source for tweaks to pointer encryption and decryption and/or to data encryption and decryption, where the source includes information derived from an instruction that is actually accessing the data.

It should be noted that data type based pointer encoding can also enable secure access and protection to any objects residing in heap or stack memory. As previously noted, as used herein, the term 'objects' is intended to include, but is not necessarily limited to, data structures (e.g., arrays, records, maps, unions, linked lists, etc.), data composites, data elements (which can include primitives or data structures or composites, etc.), data elements within a data structure or composite, and primitives (e.g., Boolean, characters, floating point numbers, fixed-point numbers, integers, pointers, handles, enumerated types, etc.).
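To illustrate the implicit typing discussed above, the following C fragment shows three additions that differ only in operand type; a compiler may emit a differently sized add instruction for each (e.g., addw, addl, and addq in AT&T x86-64 syntax), and it is from such instruction differences that the data type of the accessed element may be inferred. The instruction names given in the comments are typical examples rather than guaranteed compiler output.

    /* Three semantically parallel additions; the operand type implies the
     * size, and thus the op code, of the add a compiler may emit. */
    short add_short(short a, short b) { return (short)(a + b); } /* e.g., addw */
    int   add_int(int a, int b)       { return a + b; }          /* e.g., addl */
    long  add_long(long a, long b)    { return a + b; }          /* e.g., addq */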
Fig. 9A is a more detailed flow diagram illustrating an example process 900A of generating a cryptographically encoded pointer 930 (also referred to herein as "encoded pointer") from an unencoded pointer 910 containing at least a portion of a memory address (i.e., a linear address) and other metadata, and binding the contents of the cryptographically encoded pointer 930 to encryption of data referenced by the pointer. Embodiments of encoded pointer 930 can be used to reference data stored in any available memory including both heap and stack. At least some portions of process 900A may be executed by hardware, firmware, and/or software of the computing device 100 (e.g., by the processor 102 executing the address encoding/encrypting logic 152, encryption store logic 156, address cryptography unit 104, 302, cryptographic computing engine 108). In the example shown, pointer 910 is embodied as a 64-bit encoded linear address including a magic/other value 902, upper address bits 904, other metadata 906, fixed offset bits 907, and a mutable offset 908.

Generally, power metadata (e.g., 702) or other size metadata (e.g., 704) may not be used in some embodiments when data type is bound to the encoded pointer. Binding the data type of an object being referenced to an encoded pointer that references that object can enable detection of both malicious attempts to access data with the incorrect instruction and inadvertent programming errors when the wrong instruction is used for the wrong variable. Removing the power (or other size) metadata from the encoded pointer frees some bits in which other types of metadata can be encoded. For example, other metadata 906 may be added to the unencoded pointer 910. One example of other metadata is permissions, which can be useful for data pointers (when the encoded pointer references data) to indicate the permissions attributed to the executing code for performing certain accesses to the referenced data (e.g., read vs. write accesses). Although permissions metadata may offer some useful benefits particularly with code pointers, it is not the only option for encoding additional metadata and it should be apparent that any other type of constant metadata (e.g., a unique identifier) may be encoded.

In some embodiments, even when data type is cryptographically bound to an encoded pointer, it may still be beneficial to include size metadata (e.g., power size metadata or memory allocation size) in the encoded pointer, as will be further discussed below. In particular, a size associated with a data structure that contains multiple variables of different types may be advantageously included in the encoded pointer and bound to the pointer encryption and decryption and/or the data encryption and decryption.

Memory address bits are also encoded in the pointer. In this example, upper address bits 904, fixed offset bits 907, and a mutable offset 908 may be included. The upper address bits 904 and fixed offset bits 907 are separated by other metadata 906 in this example. It should be apparent that other arrangements and configurations are possible. For example, placement of the upper address bits 904 and other metadata 906 may be switched. Additional upper address bits may be stored in a register or other memory. Magic/other value 902 can also be provided to distinguish between different encoding types (e.g., stack pointer encoding, heap pointer encoding, legacy pointer, etc.), in at least some embodiments.
In other encodings, the magic/other value 902 can hold different metadata or information. One example is a tag/version number.

The operations of process 900A are identified in three phases: address encryption (Phase I 970A), pointer encoding (Phase II 970B), and data encryption (Phase III 970C). In Phase I, a portion of the unencoded pointer 910 (also referred to herein as "pointer slice") may be encrypted. The portions of the unencoded pointer 910 to be encrypted can include any suitable combination of constant bits (i.e., ones that are not changed as the encoded pointer is used), including at least some of the memory address bits. In this example, the pointer slice to be encrypted includes the upper address bits 904 and other metadata 906 (e.g., permissions, unique identifier, size of data structure, or other context information). This pointer slice may be encrypted by a cryptographic algorithm such as a tweakable block cipher 920 using an address key 918 and an address tweak 916. The address tweak 916 can comprise multiple address encryption factors.

In one example, a first address encryption factor could be a numeric identifier for a data type 912, which is the data type associated with data stored at the memory address to be encoded in the encoded pointer 930. This may be used to prevent different data elements of different types from being accessed by an incorrectly typed instruction (e.g., a short instruction attempting to access an integer, a character instruction attempting to access a floating point number, etc.). For example, the memory address (or linear address) formed from upper address bits 904, fixed offset bits 907, and offset 908 corresponds to a particular memory location, and the data stored at that memory location is defined as a particular data type. When encoded pointer 930 is encoded for the first time, the data type 912 may be passed to the processor via an EncryptPtr instruction to indicate the data type for data referenced by that pointer. Subsequently, as the encoded pointer is decrypted and re-encrypted during data accesses, the data type 912 may be derived from the data access instruction that is using the encoded pointer 930 to access data at that memory location. In some instances, the data type can be inferred from an op code of the data access instruction. In other instances, the data type may be provided in a prefix to the instruction.

A possible second address encryption factor could include a displacement value 914. Displacement can come from the way memory addresses are constructed in certain architectures (e.g., Intel® X86 architecture). Memory addresses may be composed of a scale index base (SIB) form of operands. A register serves as the base, and another register serves as an index. The registers can be combined in a single memory address that can be used for accessing data structures such as arrays, in which a base register points to the beginning of the array and an index register specifies the index within the array. The index can be scaled by a factor (e.g., 1, 2, 4, 8, etc.) depending on the size of the array element. For a data structure that has multiple fields, a displacement may represent the offset of a particular field within the structure. Some memory operands may use an implicit value for one or more of those memory operand components, e.g., a displacement of 0 or an index of 0. To compute the final memory address that gets accessed, the displacement can be added to the initial base register and the scaled index if the structure is in an array.
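A minimal C sketch of this address construction follows; the names are illustrative and the scale is limited to the factors noted above.

    #include <stdint.h>

    /* Scale-index-base (SIB) effective address: base + index * scale +
     * displacement, where the displacement is encoded directly in the
     * instruction stream rather than held in a register or memory. */
    uint64_t sib_address(uint64_t base, uint64_t index, unsigned scale,
                         int32_t displacement)
    {
        return base + index * scale + (int64_t)displacement; /* scale: 1,2,4,8 */
    }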
A third address encryption factor could be the fixed offset bits 907, which may be padded with zeroes. It should be apparent that other context information could also be used in one or more embodiments as additional address encryption factors and may be added as part of address tweak 916 or as a separate input for the cryptographic algorithm.

The encryption of the pointer slice (e.g., 904 and 906) can be achieved by a cryptographic algorithm (e.g., tweakable block cipher) with inputs that include address key 918 and address tweak 916. In one embodiment, the cryptographic algorithm may include a block cipher 920 that implements any suitable encryption algorithm (e.g., a tweakable version of a 32-bit block size cipher such as SIMON, SPECK, K-cipher, or another variable block size cipher, or, for larger addresses, PRINCE, XTS-AES block cipher, LRW, AES-CTR mode, etc. may be used).

When a ciphertext portion (encrypted pointer slice) 932 has been generated by encrypting selected portions of the unencoded pointer 910 (e.g., upper address bits 904, other metadata 906), then an encoded linear address (or encoded pointer) 930 can be formed in Phase II 970B. In at least one embodiment, the uppermost bits (e.g., magic/other value 902) can be set to the same bit value (e.g., 0 or 1). In addition, the bits of the fixed offset 907 and mutable offset 908 make up the lower bits of the encoded pointer 930.

In at least one embodiment, the cryptographically encoded pointer 930 can be used as a data tweak for data 960 to be encrypted and stored in heap or stack memory. Data 960 could include any type of data such as data elements, data structures, data composites, objects, arrays, linked lists, integers, shorts, longs, floating point values, and any other value that can be stored and manipulated by program code.

The data 960 to be stored is encrypted by a cryptographic algorithm such as a keystream generator 950. In at least one embodiment, keystream generator 950 can be implemented as an AES-CTR mode block cipher, at a particular size granularity (any suitable size). In one example, inputs to the keystream generator 950 can include a data key and a data tweak. The data tweak 944 can comprise multiple data encryption factors.

In one example, a first data encryption factor could include a data type (e.g., data type 912) and a second data encryption factor could include a displacement value (e.g., 914), both of which were previously described herein with reference to address encryption factors for address tweak 916. In addition, for data encryption (and decryption) a third data encryption factor could include at least a portion (and possibly all) of the encoded pointer 930, which references the data 960 to be encrypted. These data encryption factors (e.g., 912, 914, and 930) may be combined (e.g., concatenated) into a data tweak 944 as a single tweak input for the keystream generator 950 (e.g., tweakable block cipher).
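Purely as an illustration of the concatenation just described, the sketch below packs a data type identifier, a displacement, and an encoded pointer into one byte string usable as a single tweak input; the field widths are assumptions.

```python
# Illustrative sketch: composing the data tweak 944 by concatenating the data
# encryption factors (data type 912, displacement 914, encoded pointer 930).
# The chosen widths (2-byte type id, 4-byte displacement, 8-byte pointer) are
# assumptions for explanatory purposes.

import struct

def compose_data_tweak(data_type_id: int, displacement: int,
                       encoded_pointer: int) -> bytes:
    """Concatenate (type id, displacement, encoded pointer) into one tweak."""
    return struct.pack("<HiQ", data_type_id, displacement, encoded_pointer)

tweak = compose_data_tweak(data_type_id=7, displacement=8,
                           encoded_pointer=0x1122334455667788)
assert len(tweak) == 2 + 4 + 8  # 14-byte combined tweak
```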
In other implementations, the data encryption factors may be provided as two or more tweak inputs into the keystream generator.

In one embodiment, the contents of the cryptographically encoded pointer 930 and the additional data encryption factors (e.g., 912, 914) are used as part of the initialization vector (IV) or data tweak 944 for keystream generator 950, with the mutable offset 908 being used as the counter value (CTR) for the block cipher. Keystream generator 950 encrypts data tweak 944 based on a data key 942 to generate a keystream 951. The value of data tweak 944 may be adjusted to be congruent to 0 (modulo the block size of the keystream generator 950) prior to being used as an input to the keystream generator. The value of the data tweak 944 may have some suitable number of least significant bits set to 0 to satisfy this requirement, and a prefix of the keystream 951 may be discarded to account for that adjustment. The number of bytes of the keystream 951 to discard may be computed by subtracting the adjusted value of the data tweak 944 from the unadjusted value of the data tweak 944. This adjustment may modify the values of fixed offset bits 907 in pointers to objects smaller than the block size. However, the data encryption may be indirectly bound to the values of the modified fixed offset bits, since those bits may be incorporated in the address tweak used to generate ciphertext 932.

If the data to be encrypted crosses one or more block-aligned boundaries, the keystream generator 950 may be re-invoked for the subsequent blocks with the data tweak 944 being increased by an amount equal to the block size each time that it is re-invoked. A suffix of the generated keystream 951 may be unneeded and thus discarded. A logic function 952 (e.g., an XOR operation or other suitable operations or combinations thereof) may then be performed on keystream 951 and an input data block (or cache line) 946 selected from the data in a processor register. The granularity of the input data block 946 matches the keystream 951 output from the keystream generator 950, and the logic function 952 produces an encrypted output data block 962.

The encrypted data 962 can be written (e.g., stored, pushed, copied, moved, transferred, etc.) to memory based on the linear address encoded in the cryptographically encoded pointer 930. Thus, while the cryptographically encoded pointer is being generated, the decoded linear address may be stored in a register, for example, until the write operation is completed.

It should be noted that, in some implementations, data type 912 and displacement value 914 may be used as both address encryption factors for the address tweak 916 and as data encryption factors for the data tweak 944. In other implementations, data type 912 and displacement value 914 may be used in either address tweak 916 or data tweak 944. In yet further implementations, either data type 912 or displacement value 914 is used in the address tweak 916 and/or the data tweak 944. Generally, any combination of this additional information from the data access instruction encoding can be used as a second source of input to bind encryption of one or both of encoded pointer 930 and the encrypted data 962 it references.
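The alignment adjustment and prefix/suffix discarding described above can be illustrated with the following sketch. A SHA-256-based counter construction stands in for the AES-CTR mode keystream generator 950 so the sketch runs with only the Python standard library; it is not a real cipher, and all names and sizes are assumptions.

```python
# Illustrative sketch only: the tweak is rounded down to a block boundary and
# the keystream prefix covering the rounding is discarded, mirroring the
# adjustment described above. SHA-256 in counter mode is a toy stand-in for
# the keystream generator 950.

import hashlib

BLOCK = 16  # block size in bytes (assumption)

def keystream(key: bytes, tweak: int, nbytes: int) -> bytes:
    """Generate nbytes of keystream starting at byte offset `tweak`."""
    aligned = tweak - (tweak % BLOCK)        # congruent to 0 mod BLOCK
    skip = tweak - aligned                   # prefix bytes to discard
    out = b""
    counter = aligned // BLOCK
    while len(out) < skip + nbytes:          # re-invoke per block-aligned chunk
        out += hashlib.sha256(key + counter.to_bytes(16, "little")).digest()[:BLOCK]
        counter += 1
    return out[skip:skip + nbytes]           # drop unneeded prefix and suffix

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"data-key-942...."  # hypothetical 16-byte data key
ct = xor_bytes(b"secret!", keystream(key, tweak=0x1003, nbytes=7))
assert xor_bytes(ct, keystream(key, tweak=0x1003, nbytes=7)) == b"secret!"
```

Note that re-invoking the generator with the counter advanced by one block per chunk mirrors the re-invocation behavior described above for data crossing block-aligned boundaries.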
Fig. 9B is another detailed flow diagram illustrating an example process 900B of obtaining the data referenced by cryptographically encoded pointer 930, where encryption of the pointer and the data was described with reference to Fig. 9A . The data is bound to the contents of the encoded pointer 930 and to some additional information from a data access instruction according to at least one embodiment. At least some portions of process 900B may be executed by hardware, firmware, and/or software of the computing device 100 (e.g., by the processor 102 executing the address decoding/decrypting logic 154, decryption store logic 158, address cryptography unit 104, 302, cryptographic computing engine 108).

The operations of process 900B are identified in three phases: address decryption (Phase I 980A), address formation (Phase II 980B), and data decryption (Phase III 980C). In Phase I 980A, the linear address embedded in pointer 930 is decrypted. Specifically, the encrypted pointer slice 932 of encoded pointer 930 is decrypted using a cryptographic algorithm, such as a tweakable block cipher 920, using the same address key 918 and address tweak 916 that were used to encrypt the unencrypted pointer slice (e.g., 904 and 906) in address encryption 970A of Fig. 9A .

When the encrypted pointer slice of encoded pointer 930 has been decrypted by the tweakable block cipher 920, the output includes the upper address bits 904 and the other metadata 906. The decrypted upper address bits 904 may be used to form a decoded linear address 990 in Phase II 980B. In at least one embodiment, the uppermost bits (e.g., sign extension 901) of decoded linear address 990 can be set to the same bit value (e.g., 0 or 1). In addition, the fixed offset bits 907 and the mutable offset 908 can make up the lower bits of the decoded linear address 990.

In some embodiments, the processor may check whether the decrypted pointer slice (e.g., with upper address bits 904 and other metadata 906) has an expected value as an indication of whether the decrypted upper address bits 904 were decrypted incorrectly. For example, in some paging modes, some number of upper address bits are required to all have the same value (i.e., all 0s or all 1s). If the corresponding bits in the decrypted pointer slice have differing values, then that indicates that the decrypted upper address bits 904 were decrypted incorrectly. Some embodiments may generate a fault in that case. Some other embodiments may rely on existing canonicality checks to generate a fault in that case when the decoded linear address 990 is used. Even if the upper bits do all have the same value, that may not conclusively indicate that the decrypted upper address bits (e.g., upper address bits 904) were decrypted correctly. Some embodiments may perform the aforementioned checks for expected bit values for both the minimum and maximum addresses to be accessed in the current operation so that a fault will likely be generated if any portion of the access is out-of-bounds. Other embodiments may only require that a particular portion of the access, e.g., the first byte, be within the bounds of the pointer, and thus only perform the aforementioned checks for expected bit values on the pointer for that portion of the access. Other embodiments may check both the minimum and maximum addresses for write operations but only check a single pointer value for reads, relying on data cryptography to likely prevent partially out-of-bounds reads from returning correct plaintext.
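A canonicality-style check of the kind described above can be sketched as follows; the bit positions and the fault type are assumptions for illustration only.

```python
# Illustrative sketch: upper address bits must be all 0s or all 1s; differing
# values indicate that the pointer slice decrypted incorrectly (e.g., a
# corrupted or forged pointer), in which case a fault may be raised.

UPPER_MASK = ((1 << 16) - 1) << 48   # hypothetical: bits 63..48 must match

def check_canonical(address: int) -> None:
    upper = address & UPPER_MASK
    if upper != 0 and upper != UPPER_MASK:
        raise RuntimeError("fault: non-canonical address")

check_canonical(0x0000_7FFF_1234_5678)      # all-zero upper bits: passes
check_canonical(0xFFFF_8000_0000_0042)      # all-one upper bits: passes
try:
    check_canonical(0x00AB_7FFF_1234_5678)  # mixed upper bits: faults
except RuntimeError:
    pass
```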
The decoded linear address 990 is used to find the memory location of the encrypted data to be decrypted in Phase III 980C. The encrypted data can be decrypted by the same cryptographic algorithm (e.g., keystream generator 950) that was used to encrypt it. In addition, the same data key 942 and same data tweak 944 may be used as inputs to the keystream generator 950 to perform the decryption. In particular, in at least one embodiment, two sources provide inputs to be included as tweaks for the data decryption. One source includes the encoded pointer that references the data to be decrypted. The other source includes the data access instruction encoding stream, which can indicate various information about the data access. Such information can include, but is not limited to, the data type of the data being accessed (read from memory or written to memory) and a displacement value in the particular instruction.

As previously described, keystream generator 950 can be implemented as an AES-CTR mode block cipher, at a particular size granularity (any suitable size). In this embodiment, at least a portion of the contents of the cryptographically encoded pointer 930 is used as the initialization vector (IV) or data tweak 944, with the mutable offset (e.g., 908) being used as the counter value (CTR). Generation of keystream 951 may commence without waiting for encrypted pointer slice 932 to be decrypted.

Keystream generator 950 encrypts data tweak 944 based on a data key 942 to generate a keystream 951. In at least some scenarios, the value of data tweak 944 may be adjusted to be congruent to 0 (modulo the block size of the keystream generator 950) prior to being used as an input to the keystream generator, as previously described herein. This adjustment may modify the values of fixed offset bits 907 in pointers to objects smaller than the block size. However, the data decryption may be indirectly bound to the values of the modified fixed offset bits 907, since those bits may be incorporated in the address tweak used to generate an encrypted pointer slice (ciphertext) 932.

If the memory to be decrypted crosses one or more block-aligned boundaries, the keystream generator 950 may be re-invoked for the subsequent blocks with the data tweak 944 being increased by an amount equal to the block size each time that it is re-invoked. A suffix of the generated keystream 951 may be unneeded and thus discarded. The logic function 952 (e.g., an XOR operation or other suitable operations or combinations thereof) is then performed on keystream 951 and an encrypted input data block (or cache line) 960 selected from the memory location referenced by the decoded linear address 990. The granularity of the encrypted input data block 960 matches the keystream 951 output from the keystream generator 950, and the logic function 952 produces a decrypted output data block 964.

Regarding data accesses, when a variable of a particular data type is accessed, it will be decrypted with the data type information. For example, a variable having a character data type is decrypted using the character data type. If a first variable having a first data type overruns into the memory allocation of a second variable having a second (different) data type, then the second variable cannot be correctly accessed, because the decryption would be performed on the contents where the second variable is supposed to be stored using the second data type, but the contents include data having the first data type. Thus, buffer overruns can be prevented.
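Because the data type is part of the data tweak, reading a value with the wrong typed instruction yields garbage rather than the stored plaintext. The following toy round trip illustrates this; the keystream helper is again a SHA-256 stand-in and all names and values are hypothetical.

```python
# Illustrative sketch: type-bound encryption. Decrypting with a different
# data type in the tweak does not recover the plaintext.

import hashlib

def toy_keystream(key: bytes, tweak: bytes, n: int) -> bytes:
    return hashlib.sha256(key + tweak).digest()[:n]

def access(key: bytes, pointer: int, data_type: str, buf: bytes) -> bytes:
    """Encrypts/decrypts `buf` bound to (pointer, data_type)."""
    tweak = pointer.to_bytes(8, "little") + data_type.encode()
    ks = toy_keystream(key, tweak, len(buf))
    return bytes(a ^ b for a, b in zip(buf, ks))

key, ptr = b"data-key", 0x1000
plain = (1234).to_bytes(4, "little")
stored = access(key, ptr, "int32", plain)              # written as an int
assert access(key, ptr, "int32", stored) == plain      # correct type: recovered
wrong = access(key, ptr, "char", stored)               # wrong type: garbage
print(wrong == plain)                                  # almost certainly False
```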
This is especially useful within data structures. For example, a cryptographically encoded pointer that is generated to a single heap allocation may be free to roam anywhere within that allocation. However, this can potentially result in intra-object overflows among the multiple sub-parts (i.e., the multiple fields) within that structure. In an example scenario, one of the sub-parts may be a vulnerable string variable whose overflow overwrites adjacent variable data. Using data type based pointer encoding with data encryption bound to the cryptographically encoded pointer can mitigate these potential issues. Accordingly, hierarchical protection is provided, where the pointer provides bounds for the outer allocation, and data type based encoding provides bindings for the specific variable types within it.

In a further embodiment, hierarchical typing could be implemented. In this embodiment, typing for an outer data structure (e.g., a heap allocation, a data composite in stack) could be contained in the pointers to those data structures. Data typing information could also be enforced for the specific fields within the overall data structure.

In yet further embodiments, information relating to size can still be encoded in a pointer to a data structure. For example, the size of the entire structure in stack may be included in an encoded pointer while data type is still inferred from the op codes that are accessing the internal fields (e.g., infer that the first data element in the data structure is an integer, infer that the second data element in the same data structure is a character, etc.). The encoded pointer may contain some subset of the information needed to decrypt the data being accessed (e.g., size of data structure). Consider an example scenario. A pointer to a data structure having one 64-bit floating point number and one 64-bit integer may be encoded with size metadata indicating 128 bits for the data structure. An instruction accesses the integer data element as an integer, and thus an integer opcode is used. The encoded pointer can be decrypted using both the size information encoded in the encoded pointer and the data type inferred from the data access instruction for the data structure. The linear base address can be formed from the decrypted pointer bits and possibly other information, and this can be used to access the data.

Integrity checks can also be implemented for data structures (or particular data elements) in one or more embodiments. Thus, in addition to the data not being decrypted properly when it is not accessed with the correct tweak (e.g., data type), integrity checks can be used to perform access control for data cryptographically. When an integrity value in memory, which was previously computed for a data element, does not match a new integrity value that is computed based on the instruction that is accessing the data value, this can be used for security attack mitigation as well as debugging. For example, if a programmer treated a data element as an integer when it was actually a floating point number and the integrity check fails, it can be ascertained during debugging that the data element was accessed as the wrong data type (e.g., as an integer or a character). Thus, programmers can benefit from such integrity checks when data type based encoding is used.
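One way such an integrity value could be computed and checked is sketched below, using an HMAC over the plaintext together with its data type; the tag width, key, and type strings are assumptions for illustration, not a prescribed mechanism.

```python
# Illustrative sketch: an integrity value computed over the plaintext together
# with its data type. A later access that recomputes the tag with a different
# data type produces a mismatch, which can be detected and reported.

import hmac, hashlib

def integrity_tag(key: bytes, plaintext: bytes, data_type: str) -> bytes:
    return hmac.new(key, data_type.encode() + plaintext, hashlib.sha256).digest()[:8]

key = b"integrity-key"
value = (3).to_bytes(8, "little")
stored_tag = integrity_tag(key, value, "int64")   # computed at write time

ok = hmac.compare_digest(stored_tag, integrity_tag(key, value, "int64"))
assert ok                                          # same type: tag verifies
bad = hmac.compare_digest(stored_tag, integrity_tag(key, value, "double"))
print("wrong-type access detected:", not bad)      # different type: check fails
```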
Turning to Figs. 10 and 11 , flow diagrams 1000 and 1100 illustrate example processes associated with data access instructions in a program. Flow diagram 1000 of Fig. 10 illustrates a process related to data accesses involving writing (e.g., storing, pushing, saving, etc.) data to memory. The process can include one or more operations that may be executed by hardware, firmware, and/or software of the computing device 100 (e.g., by the processor 102 executing the address encoding/encrypting logic 152, encryption store logic 156, address cryptography unit 104, 302, cryptographic computing engine 108). For ease of description, Figs. 10 and 11 are shown and described with reference to obtaining (e.g., by inference or by prefix) a data type and displacement value to be used as a tweak (or part of a tweak) for pointer encryption and decryption and for the data encryption and decryption. However, it should be apparent that the general concepts of Figs. 10 and 11 are applicable to other data encryption factors that may be inferable, derivable, or otherwise obtainable from data access instructions, to be used as a tweak for the pointer encryption and decryption and/or for the data encryption and decryption.

At 1002, a data access instruction to write an object to memory is executed in a program. Generally, if a data access instruction includes a memory operand, then it is accessing data in memory and data type information may be inferable from the instruction. A data access request to write an object to memory can include any instruction that stores, pushes, writes, moves, transfers, copies, or otherwise causes data to be saved in the memory (e.g., 120) of the computing device (e.g., 100).

At 1004, a determination is made as to whether the pointer of the variable base address for the data access is encoded (i.e., cryptographically encoded where at least a portion of the memory address is encrypted). The variable base address may be, for example, the base (or beginning) address of a character array, or the address of the first byte in the case of multi-byte data types such as short, int, etc.

If the pointer is not encoded, then at 1006, the instruction can be checked for a prefix. A prefix added to the data access instruction may contain more precise data type information or other information that indicates a particular way of handling the pointer-based encryption and decryption. For example, in some scenarios, data type based encoding may not be a desired way of encoding a particular object. In such cases, the prefix may be used to override type based encoding, and type based encoding is not used. That is, data type is not used in the address tweak of the pointer encryption and/or is not used in the data tweak of the data encryption. When such a prefix is present, the compiler may generate executable code so that data type based encoding is skipped. Multiple prefixes may be defined to separately select between one or more of type based encoding and displacement based encoding and their combination.

If the prefix indicates that the data type based encoding should not be overridden (e.g., that it should be used in the pointer encoding), or if the instruction does not have a prefix, then at 1010, the instruction can be checked and a data type associated with the object to be written can be inferred from the op code of the instruction. The op code can indicate the data type, for example, based on the particular size of the object the op code is used to access, and it can be inferred that the object referenced by the pointer in the instruction has that data type. In other scenarios, the data type can be obtained from the prefix if the instruction has a prefix and if the prefix contains this information.

At 1012, a slice of bits in the pointer can be encrypted to generate encrypted pointer slice 932.
In one example, the slice of bits can include upper address bits 904 and other metadata 906. If data type is being used in the encryption, then power metadata may be eliminated from the pointer (e.g., as shown in Figs. 4 and 7 ), leaving room for other metadata that may be desirable such as, for example, permissions metadata. Permissions metadata could indicate the permissions associated with the encoded pointer (e.g., what data it can access, what it is allowed to do with data it accesses, etc.). In at least some implementations, this other metadata such as permissions could be included in the slice of pointer bits that is encrypted. The encryption of the pointer bits can be achieved by a cryptographic algorithm (e.g., tweakable block cipher) having inputs including an address key and an address tweak. In at least one embodiment, the address tweak can include the data type, which can be supplied by the compiler. The address tweak may also include the fixed offset portion of the linear address. When the encoded pointer is decoded and decrypted during memory accesses, the data type can be inferred from the instruction that uses the encoded pointer.

At 1014, the pointer can be encoded with any additional information that may be desirable that is not part of the encrypted pointer bits. At 1016, the object can be encrypted before the write operation is performed. The encryption of the object can be achieved by a cryptographic algorithm (e.g., tweakable block cipher) having inputs including a data key and a data tweak. In at least one embodiment, the data tweak can include the inferred data type, a displacement value, and pointer binding bits of the encoded pointer. In at least some embodiments, the entire encoded pointer may be used as part of the data tweak. If prefixes are used, and the prefix indicates that use of data type based encoding is to be overridden in the data encryption, then the data tweak may not include the data type and the displacement value.

With reference again to 1004, if the pointer of the variable base address is already encoded, then the object can be encrypted at 1016, as previously described, using the already-encoded pointer. Alternatively, if the pointer is not already encoded as determined at 1004, but the prefix is determined to override the use of data type based encoding for the pointer at 1006, then at 1008, the pointer may be encoded without using data type as part of the address tweak to encrypt the slice of bits in the pointer (e.g., upper address bits 904, other metadata 906). Once the pointer is encoded without using the data type, then the object can be encrypted at 1016, as previously described.

At 1018, a write operation can be performed to write the encrypted data generated at 1016 to the memory address (e.g., linear address) referenced by the encoded pointer.
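The write path of Fig. 10 can be summarized in code form. The following sketch is a toy model only: the dictionary-based pointer, the "no-type-encoding" prefix string, and the SHA-256-based helpers are hypothetical stand-ins for the hardware mechanisms described above, not an actual implementation.

```python
# Illustrative sketch of the Fig. 10 write path; step numbers are noted in
# comments. All names and constructions here are hypothetical.

import hashlib
from dataclasses import dataclass

@dataclass
class Instr:
    opcode: str
    prefix: str = ""   # e.g. "no-type-encoding" to override type binding

OPCODE_TYPE = {"mov32": "int32", "movb": "char"}   # op code -> inferred type

def prf(key: bytes, *parts: bytes) -> bytes:
    return hashlib.sha256(key + b"|".join(parts)).digest()

def encrypt_slice(slice_bits: int, tweak: bytes, key: bytes) -> int:
    pad = int.from_bytes(prf(key, tweak)[:4], "little")   # toy 32-bit "cipher"
    return slice_bits ^ pad

def write_object(instr: Instr, pointer: dict, obj: bytes,
                 akey: bytes, dkey: bytes) -> bytes:
    if not pointer.get("encoded"):                         # 1004
        if instr.prefix == "no-type-encoding":             # 1006
            tweak = b""                                    # 1008: no type in tweak
        else:
            tweak = OPCODE_TYPE[instr.opcode].encode()     # 1010: infer type
        pointer["slice"] = encrypt_slice(pointer["slice"], tweak, akey)  # 1012/1014
        pointer["encoded"] = True
    ks = prf(dkey, str(pointer["slice"]).encode(), instr.opcode.encode())  # 1016
    return bytes(a ^ b for a, b in zip(obj, ks))           # 1018: encrypted store

ct = write_object(Instr("mov32"), {"slice": 0xABCD, "encoded": False},
                  (7).to_bytes(4, "little"), b"akey", b"dkey")
```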
Fig. 11 illustrates a process related to data accesses involving reading (e.g., loading, popping, fetching, moving, etc.) data from memory to registers. The process can include one or more operations that may be executed by hardware, firmware, and/or software of the computing device 100 (e.g., by the processor 102 executing the address decoding/decrypting logic 154, decryption store logic 158, address cryptography unit 104, 302, cryptographic computing engine 108).

At 1102, a data access instruction to read an object from memory is executed in a program. Generally, if a data access instruction to read data includes a memory operand, then it is accessing data in memory and data type information may be inferable from the instruction. A data access request to read an object from memory can include any instruction that loads, reads, pops, moves, transfers, copies, or otherwise causes data that is in memory (e.g., 120), or in cache if it is encrypted in cache (e.g., 170), to be saved in the processor (e.g., in registers 110 or other processor memory) of the computing device (e.g., 100).

At 1104, a determination is made as to whether the pointer of the variable base address for the data access is encoded (i.e., cryptographically encoded where at least a portion of the memory address is encrypted). The variable base address may be, for example, the base (or beginning) address of a character array, or the address of the first byte in the case of multi-byte data types such as short, int, etc. If the pointer is not encoded, the read operation may be performed at 1120.

If the pointer is encoded, however, at 1106 the instruction can be checked for a prefix. A prefix added to the data access instruction may contain more precise data type information or other information that indicates a particular way of handling the pointer-based encryption and decryption. For example, in some scenarios, data type based encoding may not be a desired way of encoding a particular object. In such cases, the prefix may be used to override type based encoding, and type based encoding is not used. That is, data type is not used in the address tweak of the pointer encryption and/or is not used in the data tweak of the data encryption. Multiple prefixes may be defined to separately select between one or more of type based encoding and displacement based encoding and their combination.

If the prefix indicates that the data type based encoding should not be overridden (e.g., that it should be used in the pointer encoding), or if the instruction does not have a prefix, then at 1110, the instruction can be checked and a data type associated with the object to be read can be inferred from the op code of the instruction. The op code can indicate the data type, for example, based on the particular size of the object the op code is used to access, and it can be inferred that the object referenced by the pointer in the instruction has that data type. In other scenarios, the data type can be obtained from the prefix if the instruction has a prefix and if the prefix contains this information.

In order to execute the data access instruction to read the object, the encoded pointer is decoded to obtain the linear address, which can be used (e.g., translated to a physical address) to read the data from memory. To decode the encoded pointer, at 1112, a slice of bits in the pointer can be decrypted to generate the unencrypted slice of pointer bits. In one example, the unencrypted slice of pointer bits can include the upper address bits 904 and other metadata 906 that were previously encrypted to generate the encrypted pointer slice 932. The decryption of the pointer bits can be achieved by a cryptographic algorithm (e.g., tweakable block cipher) having inputs including an address key and an address tweak. In at least one embodiment, the address tweak can include the inferred data type. The address tweak may also include the fixed offset portion of the linear address.

At 1114, the linear base address for the object can be formed by using the decrypted upper address bits and the fixed offset bits. If additional address bits (e.g., most significant address bits) are stored in a register, for example, they may also be added to the decrypted upper address bits and the fixed offset bits. In addition, the mutable offset bits can be added to derive the address of the particular object being fetched, which may be within a larger data structure, for example.
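Step 1114 can be illustrated as follows; the bit positions used here are assumptions for explanatory purposes.

```python
# Illustrative sketch of step 1114: form the linear base address from the
# decrypted upper address bits and the fixed offset bits, then add the
# mutable offset to reach the particular object. Bit widths are assumptions.

def form_linear_address(upper_bits: int, fixed_offset: int,
                        mutable_offset: int) -> int:
    """upper_bits occupy bits 47..32, fixed_offset bits 31..16 (assumed layout)."""
    base = (upper_bits << 32) | (fixed_offset << 16)
    return base + mutable_offset

addr = form_linear_address(upper_bits=0x7FFF, fixed_offset=0x1234,
                           mutable_offset=0x0040)
assert addr == 0x7FFF_1234_0040
```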
At 1116, a read operation can be performed to read the encrypted data (object) from memory at the memory address (e.g., linear address) referenced by the encoded pointer. At 1118, the object can be decrypted. The decryption of the object can be achieved by a cryptographic algorithm (e.g., tweakable block cipher) having inputs including a data key and a data tweak. In at least one embodiment, the data tweak can include the inferred data type, a displacement value, and pointer binding bits of the encoded pointer. In at least some embodiments, the entire encoded pointer may be used as part of the data tweak. If prefixes are used, and the prefix indicates that use of data type based encoding is to be overridden in the data decryption, then the data tweak may not include the data type and the displacement value.

With reference again to 1106, if the pointer is encoded as determined at 1104, but the prefix is determined to override the use of data type based encoding for the pointer at 1106, then at 1108, the pointer may be decoded without using data type as part of the address tweak to decrypt the slice of encrypted bits in the pointer (e.g., upper address bits 904, other metadata 906). Accordingly, at 1108, the encrypted pointer bits can be decrypted without using type metadata. Then at 1114, the linear base address can be formed, and the flow can continue to perform the read operation at 1116 to read the data from memory, and then decrypt the data at 1118, as previously described.
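As an illustration of the op-code based inference with prefix override described in steps 1106 and 1110, consider the following sketch; the op-code names and type identifiers are hypothetical.

```python
# Illustrative sketch: inferring the data type from the op code, with a prefix
# taking precedence when present. The mapping below is hypothetical.

OPCODE_TYPE = {"movzx8": "char", "mov32": "int32", "movsd": "double"}

def infer_data_type(opcode: str, prefix_type=None) -> str:
    """Prefix information, when present, takes precedence over op-code inference."""
    return prefix_type if prefix_type is not None else OPCODE_TYPE[opcode]

assert infer_data_type("mov32") == "int32"                     # from op code
assert infer_data_type("mov32", prefix_type="char") == "char"  # prefix overrides
```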
For instructions in an instruction set architecture that are not currently configured to differentiate between data types (e.g., based on the op code), extensions may be implemented to extend the capability of inferring data type to those instruction op codes. For example, highly optimized code may use Single Instruction/Multiple Data (SIMD) instructions for memory read and write operations that do not have implicit data type information. In particular, data type information may not be inferable from SSE instructions and AVX instructions in Intel® 64 and IA-32 Architectures. In particular examples, the following SSE instructions do not have implicit data type information:

Data transfer instructions: MOVA*S (e.g., movaps, movhps, etc.);
Packed arithmetic instructions: ADDPS, SUBPS, etc.; and
Logical, compare, and conversion instructions.

In one or more embodiments of data type based pointer encoding, extensions can be created for the above instructions (and others) to enable inferences of the data type of a data element being accessed using the extension. Compilers can be allowed to emit data type dependent instructions, and this optimization can be implemented in micro-code. Particular extension examples that could be added to the instruction set architecture include, but are not necessarily limited to:

MOVAPS xmm1, xmm2/m128 - for moving integers;
MOVAPSB xmm1, xmm2/m128 - for moving bytes; and
MOVAPSW xmm1, xmm2/m128 - for moving words.

Enabling data type inferences can be implemented in various ways. In some scenarios, extensions (e.g., new instructions) may be created as described above. In other scenarios, a prefix may be defined for the existing instructions. In yet other embodiments, the existing instructions could be modified to use an immediate operand (e.g., a number) that gets encoded directly into the instruction itself. A prefix or other addition to an instruction could be configured so that the behavior of the instruction does not change, but the data encryption and decryption could be changed based on the data type indicated in the addition to the instruction. For example, the compiler could add a number (e.g., a prefix or operand) that indicates to the processor that a character is being moved by a 64-bit instruction, for example. Accordingly, the addition could instantiate the cryptographic operations to encrypt or decrypt the data element based on its data type. For example, if a prefix value is used, then the prefix value and the cryptographically encoded pointer to the data element are both used (e.g., as separate tweaks or as a combined tweak) to determine how the data is encrypted and decrypted for the particular instruction op code.

Other extensions may be implemented for memory (mem*) and string (str*) operations performed in a library function. Some string instructions (e.g., rep stosb) may be used for faster copying. For example, STOSB, STOSW, and STOSD may be used for 8 bits (byte), 16 bits (word), and 32 bits (double word), respectively. Thus, extensions may be implemented to enable the optimized fast string copy for 64 bits (quad word), and different instructions may be implemented for other data types such as float, double, etc.

Typically, memory moves, such as a memory block copy (e.g., a movq instruction), are type independent. Some functions are also type-independent including, for example, memset, memmove, etc. However, if string operations are used, the data type still needs to be differentiated when there are any changes to the data. Accordingly, the CPU pipeline can be enhanced to implement type-independent operations. For example, memset can be used to zero out memory. A marker can be added in a pointer, and memory content can be reset to a universal constant. This type binding is thus a selective override. Without the indicator, the processor may try to bind the cryptography to the data types as previously described herein. With the marker, however, the processor is told not to encrypt/decrypt based on data type, as doing so would result in correctness errors.

Figs. 12-16 below provide some example computing devices, computing environments, hardware, software or flows that may be used in the context of embodiments as described herein.

Fig. 12 is a block diagram illustrating an example cryptographic computing environment 1200 according to at least one embodiment. In the example shown, a cryptographic addressing layer 1210 extends across the example compute vectors: central processing unit (CPU) 1202, graphics processing unit (GPU) 1204, artificial intelligence (AI) 1206, and field programmable gate array (FPGA) 1208. For example, the CPU 1202 and GPU 1204 may share the same virtual address translation for data stored in memory 1212, and the cryptographic addresses may build on this shared virtual memory. They may share the same process key for a given execution flow, and compute the same tweaks to decrypt the cryptographically encoded addresses and decrypt the data referenced by such encoded addresses, following the same cryptographic algorithms.

Combined, the capabilities described herein may enable cryptographic computing.
Memory 1212 may be encrypted at every level of the memory hierarchy, from the first level of cache through the last level of cache and into the system memory. Binding the cryptographic address encoding to the data encryption may allow extremely fine-grain object boundaries and access control, enabling fine grain secure containers down to even individual functions and their objects for function-as-a-service. Cryptographically encoding return addresses on a call stack (depending on their location) may also enable control flow integrity without the need for shadow stack metadata. Thus, any of data access control policy and control flow can be performed cryptographically, simply dependent on cryptographic addressing and the respective cryptographic data bindings.

Figs. 13-14 are block diagrams of exemplary computer architectures that may be used in accordance with embodiments disclosed herein. Generally, any computer architecture designs known in the art for processors and computing systems may be used. In an example, system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, tablets, engineering workstations, servers, network devices, appliances, network hubs, routers, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, smart phones, mobile devices, wearable electronic devices, portable media players, hand held devices, and various other electronic devices are also suitable for embodiments of computing systems described herein. Generally, suitable computer architectures for embodiments disclosed herein can include, but are not limited to, configurations illustrated in Figs. 13-15 .

Fig. 13 is an example illustration of a processor according to an embodiment. Processor 1300 is an example of a type of hardware device that can be used in connection with the implementations shown and described herein (e.g., processor 102). Processor 1300 may be any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a multi-core processor, a single core processor, or other device to execute code. Although only one processor 1300 is illustrated in Fig. 13 , a processing element may alternatively include more than one of processor 1300 illustrated in Fig. 13 . Processor 1300 may be a single-threaded core or, for at least one embodiment, the processor 1300 may be multi-threaded in that it may include more than one hardware thread context (or "logical processor") per core.

Fig. 13 also illustrates a memory 1302 coupled to processor 1300 in accordance with an embodiment. Memory 1302 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. Such memory elements can include, but are not limited to, random access memory (RAM), read only memory (ROM), logic blocks of a field programmable gate array (FPGA), erasable programmable read only memory (EPROM), and electrically erasable programmable ROM (EEPROM).

Processor 1300 can execute any type of instructions associated with algorithms, processes, or operations detailed herein.
Generally, processor 1300 can transform an element or an article (e.g., data) from one state or thing to another state or thing.

Code 1304, which may be one or more instructions to be executed by processor 1300, may be stored in memory 1302, or may be stored in software, hardware, firmware, or any suitable combination thereof, or in any other internal or external component, device, element, or object where appropriate and based on particular needs. In one example, processor 1300 can follow a program sequence of instructions indicated by code 1304. Each instruction enters a front-end logic 1306 and is processed by one or more decoders 1308. The decoder may generate, as its output, a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals that reflect the original code instruction. Front-end logic 1306 also includes register renaming logic 1310 and scheduling logic 1312, which generally allocate resources and queue the operation corresponding to the instruction for execution.

Processor 1300 can also include execution logic 1314 having a set of execution units 1316a, 1316b, 1316n, etc. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. Execution logic 1314 performs the operations specified by code instructions.

After completion of execution of the operations specified by the code instructions, back-end logic 1318 can retire the instructions of code 1304. In one embodiment, processor 1300 allows out of order execution but requires in order retirement of instructions. Retirement logic 1320 may take a variety of known forms (e.g., re-order buffers or the like). In this manner, processor 1300 is transformed during execution of code 1304, at least in terms of the output generated by the decoder, hardware registers and tables utilized by register renaming logic 1310, and any registers (not shown) modified by execution logic 1314.

Although not shown in Fig. 13 , a processing element may include other elements on a chip with processor 1300. For example, a processing element may include memory control logic along with processor 1300. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches. In some embodiments, non-volatile memory (such as flash memory or fuses) may also be included on the chip with processor 1300.

Fig. 14A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to one or more embodiments of this disclosure. Fig. 14B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to one or more embodiments of this disclosure. The solid lined boxes in Figs. 14A-14B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Fig.
14A, a processor pipeline 1400 includes a fetch stage 1402, a length decode stage 1404, a decode stage 1406, an allocation stage 1408, a renaming stage 1410, a scheduling (also known as a dispatch or issue) stage 1412, a register read/memory read stage 1414, an execute stage 1416, a write back/memory write stage 1418, an exception handling stage 1422, and a commit stage 1424.

Fig. 14B shows processor core 1490 including a front end unit 1430 coupled to an execution engine unit 1450, and both are coupled to a memory unit 1470. Processor core 1490 and memory unit 1470 are examples of the types of hardware that can be used in connection with the implementations shown and described herein (e.g., processor 102, memory 120). The core 1490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like. In addition, processor core 1490 and its components represent example architecture that could be used to implement logical processors and their respective components.

The front end unit 1430 includes a branch prediction unit 1432 coupled to an instruction cache unit 1434, which is coupled to an instruction translation lookaside buffer (TLB) unit 1436, which is coupled to an instruction fetch unit 1438, which is coupled to a decode unit 1440. The decode unit 1440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1440 or otherwise within the front end unit 1430). The decode unit 1440 is coupled to a rename/allocator unit 1452 in the execution engine unit 1450.

The execution engine unit 1450 includes the rename/allocator unit 1452 coupled to a retirement unit 1454 and a set of one or more scheduler unit(s) 1456. The scheduler unit(s) 1456 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1456 is coupled to the physical register file(s) unit(s) 1458. Each of the physical register file(s) units 1458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers (GPRs).
In at least some embodiments described herein, register units 1458 are examples of the types of hardware that can be used in connection with the implementations shown and described herein (e.g., registers 110). The physical register file(s) unit(s) 1458 is overlapped by the retirement unit 1454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1454 and the physical register file(s) unit(s) 1458 are coupled to the execution cluster(s) 1460. The execution cluster(s) 1460 includes a set of one or more execution units 1462 and a set of one or more memory access units 1464. The execution units 1462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. Execution units 1462 may also include an address generation unit to calculate addresses used by the core to access main memory (e.g., memory unit 1470) and a page miss handler (PMH).

The scheduler unit(s) 1456, physical register file(s) unit(s) 1458, and execution cluster(s) 1460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1464 is coupled to the memory unit 1470, which includes a data TLB unit 1472 coupled to a data cache unit 1474 coupled to a level 2 (L2) cache unit 1476. In one exemplary embodiment, the memory access units 1464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1472 in the memory unit 1470. The instruction cache unit 1434 is further coupled to the level 2 (L2) cache unit 1476 in the memory unit 1470. The L2 cache unit 1476 is coupled to one or more other levels of cache and eventually to a main memory.
In addition, a page miss handler may also be included in core 1490 to look up an address mapping in a page table if no match is found in the data TLB unit 1472.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1400 as follows: 1) the instruction fetch unit 1438 performs the fetch and length decoding stages 1402 and 1404; 2) the decode unit 1440 performs the decode stage 1406; 3) the rename/allocator unit 1452 performs the allocation stage 1408 and renaming stage 1410; 4) the scheduler unit(s) 1456 performs the scheduling stage 1412; 5) the physical register file(s) unit(s) 1458 and the memory unit 1470 perform the register read/memory read stage 1414, and the execution cluster 1460 performs the execute stage 1416; 6) the memory unit 1470 and the physical register file(s) unit(s) 1458 perform the write back/memory write stage 1418; 7) various units may be involved in the exception handling stage 1422; and 8) the retirement unit 1454 and the physical register file(s) unit(s) 1458 perform the commit stage 1424.

The core 1490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology). Accordingly, in at least some embodiments, multi-threaded enclaves may be supported.

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1434/1474 and a shared L2 cache unit 1476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Fig. 15 illustrates a computing system 1500 that is arranged in a point-to-point (PtP) configuration according to an embodiment. In particular, Fig. 15 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces.
Generally, one or more of the computing systems or computing devices described herein may be configured in the same or similar manner as computing system 1500.

Processors 1570 and 1580 may be implemented as single core processors 1574a and 1584a or multi-core processors 1574a-1574b and 1584a-1584b. Processors 1570 and 1580 may each include a cache 1571 and 1581 used by their respective core or cores. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. It should be noted that one or more embodiments described herein could be implemented in a computing system, such as computing system 1500. Moreover, processors 1570 and 1580 are examples of the types of hardware that can be used in connection with the implementations shown and described herein (e.g., processor 102).

Processors 1570 and 1580 may also each include integrated memory controller logic (IMC) 1572 and 1582 to communicate with memory elements 1532 and 1534, which may be portions of main memory locally attached to the respective processors. In alternative embodiments, memory controller logic 1572 and 1582 may be discrete logic separate from processors 1570 and 1580. Memory elements 1532 and/or 1534 may store various data to be used by processors 1570 and 1580 in achieving operations and functionality outlined herein.

Processors 1570 and 1580 may be any type of processor, such as those discussed in connection with other figures. Processors 1570 and 1580 may exchange data via a point-to-point (PtP) interface 1550 using point-to-point interface circuits 1578 and 1588, respectively. Processors 1570 and 1580 may each exchange data with an input/output (I/O) subsystem 1590 via individual point-to-point interfaces 1552 and 1554 using point-to-point interface circuits 1576, 1586, 1594, and 1598. I/O subsystem 1590 may also exchange data with a high-performance graphics circuit 1538 via a high-performance graphics interface 1539, using an interface circuit 1592, which could be a PtP interface circuit. In one embodiment, the high-performance graphics circuit 1538 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. I/O subsystem 1590 may also communicate with a display 1533 for displaying data that is viewable by a human user. In alternative embodiments, any or all of the PtP links illustrated in Fig. 15 could be implemented as a multi-drop bus rather than a PtP link.

I/O subsystem 1590 may be in communication with a bus 1510 via an interface circuit 1596. Bus 1510 may have one or more devices that communicate over it, such as a bus bridge 1518, I/O devices 1514, and one or more other processors 1515. Via a bus 1520, bus bridge 1518 may be in communication with other devices such as a user interface 1522 (such as a keyboard, mouse, touchscreen, or other input devices), communication devices 1526 (such as modems, network interface devices, or other types of communication devices that may communicate through a computer network 1560), audio I/O devices 1524, and/or a storage unit 1528. Storage unit 1528 may store data and code 1530, which may be executed by processors 1570 and/or 1580.
In alternative embodiments, any portions of the bus architectures could be implemented with one or more PtP links.

Program code, such as code 1530, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system may be part of computing system 1500 and includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code (e.g., 1530) may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Fig. 16 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of this disclosure. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Fig. 16 shows that a program in a high level language 1602 may be compiled using an x86 compiler 1604 to generate x86 binary code 1606 that may be natively executed by a processor with at least one x86 instruction set core 1616. The processor with at least one x86 instruction set core 1616 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1604 represents a compiler that is operable to generate x86 binary code 1606 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1616. Similarly, Fig.
16 shows the program in the high level language 1602 may be compiled using an alternative instruction set compiler 1608 to generate alternative instruction set binary code 1610 that may be natively executed by a processor without at least one x86 instruction set core 1614 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1612 is used to convert the x86 binary code 1606 into code that may be natively executed by the processor without an x86 instruction set core 1614. This converted code is not likely to be the same as the alternative instruction set binary code 1610 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1612 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1606.One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the one or more of the techniques described herein. Such representations, known as "IP cores" may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.Accordingly, embodiments of the present disclosure also include non-transitory, tangible machine readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.The computing system depicted in Fig. 15 is a schematic illustration of an embodiment of a computing system that may be utilized to implement various embodiments discussed herein. It will be appreciated that various components of the system depicted in Fig. 
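As an illustration only (and not the converter 1612 itself), the following minimal Python sketch shows the general shape of a table-driven static binary translator: each instruction of a hypothetical source instruction set maps to one or more instructions of a hypothetical target instruction set. All mnemonics are invented for this example; a real converter operates on binary encodings and must also handle control flow, registers, and side effects.

```python
# Toy static binary translator: maps a hypothetical "src" ISA to a
# hypothetical "tgt" ISA. Purely illustrative.

# Translation table: one source mnemonic -> one or more target instructions.
# (All instruction names here are hypothetical.)
TRANSLATION_TABLE = {
    "LOAD":  ["tgt.ldr"],
    "STORE": ["tgt.str"],
    "ADD":   ["tgt.add"],
    # A source instruction with no single equivalent expands to a sequence.
    "INC":   ["tgt.movi 1", "tgt.add"],
}

def translate(source_program: list[str]) -> list[str]:
    """Translate a list of source instructions into target instructions."""
    target_program = []
    for instr in source_program:
        mnemonic = instr.split()[0]
        if mnemonic not in TRANSLATION_TABLE:
            raise ValueError(f"cannot translate {instr!r}")
        target_program.extend(TRANSLATION_TABLE[mnemonic])
    return target_program

print(translate(["LOAD", "INC", "STORE"]))
# ['tgt.ldr', 'tgt.movi 1', 'tgt.add', 'tgt.str']
```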
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform one or more of the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the present disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

The computing system depicted in Fig. 15 is a schematic illustration of an embodiment of a computing system that may be utilized to implement various embodiments discussed herein. It will be appreciated that various components of the system depicted in Fig. 15 may be combined in a system-on-a-chip (SoC) architecture or in any other suitable configuration capable of achieving the functionality and features of the examples and implementations provided herein.

Although this disclosure has been described in terms of certain implementations and generally associated methods, alterations and permutations of these implementations and methods will be apparent to those skilled in the art. For example, the actions described herein can be performed in a different order than as described and still achieve the desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing may be advantageous. Other variations are within the scope of the following claims.

The architectures presented herein are provided by way of example only and are intended to be non-exclusive and non-limiting. Furthermore, the various parts disclosed are intended to be logical divisions only and need not necessarily represent physically separate hardware and/or software components. Certain computing systems may provide memory elements in a single physical memory device, and in other cases, memory elements may be functionally distributed across many physical devices. In the case of virtual machine managers or hypervisors, all or part of a function may be provided in the form of software or firmware running over a virtualization layer to provide the disclosed logical function.

Note that with the examples provided herein, interaction may be described in terms of a single computing system. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a single computing system. Moreover, the system is readily scalable and can be implemented across a large number of components (e.g., multiple computing systems), as well as in more complicated or sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the computing system as potentially applied to a myriad of other architectures.

As used herein, unless expressly stated to the contrary, use of the phrase 'at least one of' refers to any combination of the named items, elements, conditions, or activities. For example, 'at least one of X, Y, and Z' is intended to mean any of the following: 1) at least one X, but not Y and not Z; 2) at least one Y, but not X and not Z; 3) at least one Z, but not X and not Y; 4) at least one X and at least one Y, but not Z; 5) at least one X and at least one Z, but not Y; 6) at least one Y and at least one Z, but not X; or 7) at least one X, at least one Y, and at least one Z.

Additionally, unless expressly stated to the contrary, the terms 'first', 'second', 'third', etc., are intended to distinguish the particular nouns (e.g., element, condition, module, activity, operation, claim element, etc.) they modify, but are not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified noun.
For example, 'first X' and 'second X' are intended to designate two separate X elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements.

References in the specification to "one embodiment," "an embodiment," "some embodiments," etc., indicate that the embodiment(s) described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any embodiments or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.

Similarly, the separation of various system components and modules in the embodiments described above should not be understood as requiring such separation in all embodiments. It should be understood that the described program components, modules, and systems can generally be integrated together in a single software product or packaged into multiple software products.

Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of this disclosure.
Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.

OTHER NOTES AND EXAMPLES

Example AA1 provides a processor comprising: a register to store an encoded pointer to a variable in stack memory, the encoded pointer including: an encrypted portion; and a fixed plaintext portion of a memory address corresponding to the variable; and circuitry to: in response to a memory access request associated with the variable, decrypt the encrypted portion of the encoded pointer to obtain first upper address bits of the memory address and a memory allocation size for the variable; decode the encoded pointer to obtain the memory address; verify the memory address is valid based, at least in part, on the memory allocation size; and in response to determining that the memory address is valid, allow the memory access request.
Example AA2 comprises the subject matter of Example AA1, and the circuitry is further to: receive a memory allocation request for the variable; determine the memory allocation size for the stack memory; obtain the memory address for the variable based, at least in part, on a stack pointer; store the first upper address bits of the memory address in a memory location; and encrypt the memory allocation size and the first upper address bits of the memory address based on an address key and an address tweak.
Example AA3 comprises the subject matter of any one of Examples AA1-AA2, and the circuitry is further to store second upper address bits with the first upper address bits in a memory location, wherein the first upper address bits and the second upper address bits are fixed for the stack memory.
Example AA4 comprises the subject matter of Example AA3, and to verify the memory address is valid is to include determining that the first upper address bits obtained from decrypting the encrypted portion of the encoded pointer match the first upper address bits stored in the memory location.
Example AA5 comprises the subject matter of Example AA4, and the encoded pointer includes power metadata that indicates, as a power of two, a first number of bits in the encoded pointer that represents a fixed offset and a second number of bits in the encoded pointer that represents a mutable offset.
Example AA6 comprises the subject matter of any one of Examples AA1-AA5, and the circuitry is further to decrypt the encrypted portion of the encoded pointer with a block cipher using an address key and an address tweak as inputs.
Example AA7 comprises the subject matter of Example AA6, and the encoded pointer includes power metadata that indicates, as a power of two, a first number of bits in the encoded pointer that represents a fixed offset and a second number of bits in the encoded pointer that represents a mutable offset.
Example AA8 comprises the subject matter of Example AA7, and the address tweak includes the fixed offset and the power metadata.
Example AA9 comprises the subject matter of any one of Examples AA1-AA8, and the memory address is to be decoded from the encoded pointer based, in part, on the fixed plaintext portion and the first upper address bits.
Example AA10 comprises the subject matter of Example AA9, and the circuitry is further to: load first data stored in the variable of the stack memory based on the memory address decoded from the encoded pointer; and decrypt the first data based on a first data key and a data tweak derived, at least in part, from the encoded pointer.
Example AA11 comprises the subject matter of Example AA9, and the circuitry is further to: encrypt first data based on a first data key and a data tweak derived, at least in part, from the encoded pointer; and use the memory address decoded from the encoded pointer to store the encrypted first data in the variable corresponding to the memory address.

Example AM1 provides a method comprising: storing, in a register, an encoded pointer to a variable in stack memory, wherein the encoded pointer includes an encrypted portion and a fixed plaintext portion of a memory address corresponding to the variable; in response to a memory access request associated with the variable, decrypting the encrypted portion of the encoded pointer to obtain first upper address bits of the memory address and a memory allocation size for the variable; decoding the encoded pointer to obtain the memory address; verifying the memory address is valid based, at least in part, on the memory allocation size; and allowing the memory access request based on verifying that the memory address is valid.
Example AM2 comprises the subject matter of Example AM1, and the method further includes: receiving a memory allocation request for the variable; determining the memory allocation size for the stack memory; obtaining the memory address for the variable based, at least in part, on a stack pointer; storing the first upper address bits of the memory address in a memory location; and encrypting the memory allocation size and the first upper address bits of the memory address based on an address key and an address tweak.
Example AM3 comprises the subject matter of any one of Examples AM1-AM2, and the method further includes storing second upper address bits with the first upper address bits in a memory location, wherein the first upper address bits and the second upper address bits are fixed for the stack memory.
Example AM4 comprises the subject matter of Example AM3, and the verifying the memory address is valid includes determining that the first upper address bits obtained from decrypting the encrypted portion of the encoded pointer match the first upper address bits stored in the memory location.
Example AM5 comprises the subject matter of Example AM4, and the verifying the memory address is valid further includes determining whether the memory address is less than a sum of the memory allocation size and a variable base address of the variable.
Example AM6 comprises the subject matter of any one of Examples AM1-AM5, and the method further includes decrypting the encrypted portion of the encoded pointer with a block cipher using an address key and an address tweak as inputs.
Example AM7 comprises the subject matter of Example AM6, and the encoded pointer includes power metadata that indicates, as a power of two, a first number of bits in the encoded pointer that represents a fixed offset and a second number of bits in the encoded pointer that represents a mutable offset.
Example AM8 comprises the subject matter of Example AM7, and the address tweak includes the fixed offset and the power metadata.
Example AM9 comprises the subject matter of any one of Examples AM1-AM8, and the memory address is decoded from the encoded pointer based, in part, on the fixed plaintext portion and the first upper address bits.
Example AM10 comprises the subject matter of Example AM9, and the method further includes: loading first data stored in the variable of the stack memory based on the memory address decoded from the encoded pointer; and decrypting the first data based on a first data key and a data tweak derived, at least in part, from the encoded pointer.
Example AM11 comprises the subject matter of Example AM9, and the method further includes: encrypting first data based on a first data key and a data tweak derived, at least in part, from the encoded pointer; and using the memory address decoded from the encoded pointer to store the encrypted first data in the variable corresponding to the memory address.

Example BA1 provides a processor comprising: a register to store an encoded pointer to a memory location in memory, and the encoded pointer is to include an encrypted portion; and circuitry to: determine a first data encryption factor based on a first data access instruction; decode the encoded pointer to obtain a memory address of the memory location; use the memory address to access an encrypted first data element; and decrypt the encrypted first data element using a cryptographic algorithm with first inputs to generate a decrypted first data element, the first inputs including the first data encryption factor based on the first data access instruction and a second data encryption factor from the encoded pointer.
Example BA2 comprises the subject matter of Example BA1, and the encoded pointer further includes first metadata.
Example BA3 comprises the subject matter of Example BA2, and the first metadata includes permissions.
Example BA4 comprises the subject matter of Example BA2, and the first metadata is a memory allocation size of a data structure.
Example BA5 comprises the subject matter of Example BA4, and the memory address corresponds to a base address of the data structure.
Example BA6 comprises the subject matter of any one of Examples BA4-BA5, and the first data encryption factor includes a first data type of the encrypted first data element inferred from the first data access instruction, and the data structure contains the encrypted first data element having the first data type and an encrypted second data element having a second data type.
Example BA7 comprises the subject matter of any one of Examples BA2-BA6, and the first metadata is a memory allocation size of the encrypted first data element, and the memory address corresponds to a first byte of the encrypted first data element.
Example BA8 comprises the subject matter of any one of Examples BA1-BA7, and the circuitry is further to: in response to a second data access instruction, decode a second encoded pointer to obtain a second memory address of a second memory location; use the second memory address to access an encrypted second data element; determine a third data encryption factor based on the second data access instruction; and decrypt the encrypted second data element using the cryptographic algorithm with second inputs, the second inputs including the third data encryption factor based on the second data access instruction and a fourth data encryption factor from the second encoded pointer.
Example BA9 comprises the subject matter of any one of Examples BA1-BA8, and the first data encryption factor and the second data encryption factor are included in a data tweak as one of the first inputs for the cryptographic algorithm to decrypt the encrypted first data element.
Example BA10 comprises the subject matter of any one of Examples BA1-BA9, and the first data encryption factor includes a first data type derived from the first data access instruction.
Example BA11 comprises the subject matter of Example BA10, and to derive the first data type from the first data access instruction is to infer the first data type based on an opcode of the first data access instruction.
Example BA12 comprises the subject matter of Example BA10, and the first data encryption factor for the cryptographic algorithm to decrypt the encrypted first data element further includes a displacement value derived from the first data access instruction.
Example BA13 comprises the subject matter of any one of Examples BA1-BA12, and the circuitry is further to: determine that the first data access instruction includes a prefix; and determine the first data encryption factor based on information included in the prefix.
Example BA14 comprises the subject matter of any one of Examples BA1-BA13, and the memory location is in heap memory or stack memory.
Example BA15 comprises the subject matter of any one of Examples BA1-BA14, and to decode the encoded pointer is to include decrypting the encrypted portion of the encoded pointer using a second cryptographic algorithm with third inputs, the third inputs including the first data encryption factor associated with the first data access instruction.
Example BA16 comprises the subject matter of any one of Examples BA1-BA15, and the circuitry is further to, in response to determining that the decrypted first data element is not a valid result of the cryptographic algorithm, block the first data access instruction.
Example BA17 comprises the subject matter of any one of Examples BA1-BA16, and the first data access instruction is associated with a read operation for the first encrypted data element.

Example BM1 provides a method comprising: storing, in a register, an encoded pointer to a memory location in memory, wherein the encoded pointer includes an encrypted portion; determining a first data encryption factor based on a first data access instruction; decoding the encoded pointer to obtain a memory address of the memory location; using the memory address to access an encrypted first data element; and decrypting the encrypted first data element using a cryptographic algorithm with first inputs to generate a decrypted first data element, the first inputs including the first data encryption factor based on the first data access instruction and a second data encryption factor from the encoded pointer.
Example BM2 comprises the subject matter of Example BM1, and the encoded pointer further includes first metadata.
Example BM3 comprises the subject matter of Example BM2, and the first metadata is permissions.
Example BM4 comprises the subject matter of Example BM2, and the first metadata is a memory allocation size of a data structure.
Example BM5 comprises the subject matter of Example BM4, and the memory address corresponds to a base address of the data structure.
Example BM6 comprises the subject matter of any one of Examples BM4-BM5, and the first data encryption factor includes a first data type of the encrypted first data element inferred from the first data access instruction, and the data structure contains the encrypted first data element having the first data type and an encrypted second data element having a second data type.
Example BM7 comprises the subject matter of any one of Examples BM2-BM6, and the first metadata is a memory allocation size of the encrypted first data element, and the memory address corresponds to a first byte of the encrypted first data element.
Example BM8 comprises the subject matter of any one of Examples BM1-BM7, and the method further includes: in response to a second data access instruction, decoding a second encoded pointer to obtain a second memory address of a second memory location; using the second memory address to access an encrypted second data element; determining a third data encryption factor based on the second data access instruction; and decrypting the encrypted second data element using the cryptographic algorithm with second inputs, the second inputs including the third data encryption factor based on the second data access instruction and a fourth data encryption factor from the second encoded pointer.
Example BM9 comprises the subject matter of any one of Examples BM1-BM8, and the first data encryption factor and the second data encryption factor are included in a data tweak as one of the first inputs for the cryptographic algorithm to decrypt the encrypted first data element.
Example BM10 comprises the subject matter of any one of Examples BM1-BM9, and the first data encryption factor includes a first data type derived from the first data access instruction.
Example BM11 comprises the subject matter of Example BM10, and deriving the first data type from the first data access instruction includes inferring the first data type based on an opcode of the first data access instruction.
Example BM12 comprises the subject matter of Example BM10, and the first data encryption factor for the cryptographic algorithm to decrypt the encrypted first data element further includes a displacement value derived from the first data access instruction.
Example BM13 comprises the subject matter of any one of Examples BM1-BM12, and the method further includes: determining that the first data access instruction includes a prefix; and determining the first data encryption factor based on information included in the prefix.
Example BM14 comprises the subject matter of any one of Examples BM1-BM13, and the memory location is in heap memory or stack memory.
Example BM15 comprises the subject matter of any one of Examples BM1-BM14, and decoding the encoded pointer includes decrypting the encrypted portion of the encoded pointer using a second cryptographic algorithm with third inputs, the third inputs including the first data encryption factor associated with the first data access instruction.
Example BM16 comprises the subject matter of any one of Examples BM1-BM15, and the method further includes, in response to determining that the decrypted first data element is not a valid result of the cryptographic algorithm, blocking the first data access instruction.
Example BM17 comprises the subject matter of any one of Examples BM1-BM16, and the first data access instruction is associated with a read operation for the first encrypted data element.

Example G1 includes an apparatus comprising means to perform one or more elements of a method of any one of Examples AM1-AM11 or BM1-BM17.
Example G2 includes the subject matter of Example G1, and the means for performing the method comprise at least one processor and at least one memory element.
Example G3 includes the subject matter of any one of Examples G1-G2, and the apparatus is one of a computing system, a system-on-a-chip, a multi-chip package device, or a die.
Example G4 includes one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method of any one of Examples AM1-AM11 or BM1-BM17.
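To make the encode/decrypt/verify flow of Examples AA1 and BA1 concrete, the following is a minimal, illustrative Python sketch. It is not the claimed circuitry: the "block cipher" here is a toy XOR pad derived with HMAC (a real design would use a small-block tweakable cipher as contemplated in the examples above), and the bit-field widths, key value, and field layout are arbitrary choices made only for this illustration.

```python
import hashlib
import hmac

ADDR_KEY = b"\x01" * 16  # demo address key; a real key lives in hardware

def _toy_cipher(block: int, tweak: bytes) -> int:
    """Toy 32-bit 'tweakable block cipher': XOR with an HMAC-derived pad.
    Stand-in only; XOR is involutive, so encrypt and decrypt coincide."""
    pad = hmac.new(ADDR_KEY, tweak, hashlib.sha256).digest()[:4]
    return block ^ int.from_bytes(pad, "big")

def encode_pointer(addr: int, alloc_size: int) -> int:
    """Build a 64-bit encoded pointer: a 32-bit encrypted slice holding
    upper address bits and the allocation size (as a power of two), plus
    a 32-bit fixed plaintext slice of the address."""
    upper = (addr >> 32) & 0xFFFF                    # 16 upper address bits
    size_log2 = max(alloc_size - 1, 0).bit_length()  # allocation size, log2
    plaintext_slice = addr & 0xFFFFFFFF              # fixed plaintext portion
    tweak = plaintext_slice.to_bytes(4, "big")       # address tweak (simplified)
    enc = _toy_cipher((upper << 8) | size_log2, tweak)
    return (enc << 32) | plaintext_slice

def decode_pointer(ptr: int, expected_upper: int) -> int:
    """Decrypt the encrypted slice, verify the upper address bits, and
    rebuild the full memory address (size_log2 is available for a bounds
    check against the allocation, omitted here for brevity)."""
    plaintext_slice = ptr & 0xFFFFFFFF
    tweak = plaintext_slice.to_bytes(4, "big")
    dec = _toy_cipher(ptr >> 32, tweak)
    upper, size_log2 = dec >> 8, dec & 0xFF
    if upper != expected_upper:          # corrupted or forged pointer detected
        raise PermissionError("pointer failed validation; access blocked")
    return (upper << 32) | plaintext_slice

addr = 0x0000_7FFC_DEAD_BEE0
ptr = encode_pointer(addr, alloc_size=64)
assert decode_pointer(ptr, expected_upper=(addr >> 32) & 0xFFFF) == addr
```

Because the tweak includes the plaintext slice of the pointer, flipping any plaintext bit changes the keystream, so the decrypted upper bits no longer match and the access is blocked, which is the behavior Examples AA1/AA4 describe.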
Techniques for improving security of an electronics device are disclosed. In one aspect of the present disclosure, security of a device may be improved by generating a working key based on a hardware secret key and at least one security parameter of the device, e.g., with a key derivation function. The security parameter(s) may be related to software to be authenticated on the device and/or to other aspects of security for the device. The security parameter(s) may indicate whether the software is authorized and/or at least one operating function authorized for the software. At least one security function may be performed for the device based on the working key. For example, the working key may be used to encrypt, sign, decrypt, or verify data for the device. The working key may be used directly or indirectly by the software for the at least one security function.
CLAIMS
What is claimed is:
1. A method of providing security, comprising:
generating a working key based on at least one security parameter and a secret key of a device, the at least one security parameter being related to software to be authenticated on the device; and
performing at least one security function for the device based on the working key, the working key being used directly or indirectly by the software for the at least one security function.
2. The method of claim 1, the at least one security parameter determining whether the software is authorized for execution on the device.
3. The method of claim 1, the at least one security parameter determining at least one operating function authorized for the software on the device.
4. The method of claim 1, the at least one security parameter comprising a public key used to determine whether the software is authorized for the device.
5. The method of claim 4, the public key corresponding to a private key used to sign the software.
6. The method of claim 1, the secret key being loaded onto the device by a first entity, and the at least one security parameter being loaded onto the device by a second entity different from the first entity.
7. The method of claim 1, the secret key and the at least one security parameter being loaded onto the device at different times.
8. The method of claim 1, the generating a working key comprising generating the working key based on the at least one security parameter and the secret key of the device with a key derivation function.
9. The method of claim 1, the performing at least one security function comprising encrypting or signing data for the device with the working key.
10. The method of claim 1, the performing at least one security function comprising decrypting or verifying data for the device with the working key.
11. The method of claim 1, the performing at least one security function comprising performing the at least one security function under control of the software.
12. The method of claim 1, further comprising:
executing the software on the device without authenticating the software via a secure mechanism.
13. The method of claim 1, further comprising:
storing the secret key in a secure memory on the device; and
storing the at least one security parameter in the secure memory or an unsecure memory on the device.
14. An apparatus comprising:
means for generating a working key based on at least one security parameter and a secret key of a device, the at least one security parameter being related to software to be authenticated on the device; and
means for performing at least one security function for the device based on the working key, the working key being used directly or indirectly by the software for the at least one security function.
15. The apparatus of claim 14, the secret key and the at least one security parameter being loaded onto the device by different entities, or at different times, or both.
16. The apparatus of claim 14, the means for performing at least one security function comprising means for performing the at least one security function prior to activation of a secure mechanism to authenticate the software.
17. An apparatus comprising:
a memory configured to store software for a device; and
a processor coupled to the memory and configured to:
generate a working key based on at least one security parameter and a secret key of the device, the at least one security parameter being related to authentication of the software; and
perform at least one security function for the device based on the working key, the working key being used directly or indirectly by the software for the at least one security function.
18. The apparatus of claim 17, the secret key and the at least one security parameter being loaded onto the device by different entities, or at different times, or both.
19. The apparatus of claim 17, the processor being configured to perform the at least one security function prior to activation of a secure mechanism to authenticate the software.
20. A computer program product, comprising:
a computer-readable medium comprising:
code for causing at least one computer to generate a working key based on at least one security parameter and a secret key of a device, the at least one security parameter being related to software to be authenticated on the device; and
code for causing the at least one computer to perform at least one security function for the device based on the working key, the working key being used directly or indirectly by the software for the at least one security function.
21. The computer program product of claim 20, the secret key and the at least one security parameter being loaded onto the device by different entities, or at different times, or both.
22. The computer program product of claim 20, the code for causing the at least one computer to perform at least one security function comprising code for causing the at least one computer to perform the at least one security function prior to activation of a secure mechanism to authenticate the software.
GENERATION OF WORKING SECURITY KEY BASED ON SECURITY PARAMETERS

BACKGROUND

I. Field

[0001] The present disclosure relates generally to electronics, and more specifically to techniques for providing security on an electronics device.

II. Background

[0002] An electronics device (e.g., a cellular phone or a smartphone) typically operates based on software that controls the operation of hardware on the device and supports various functions of the device. A security mechanism (e.g., a secure boot) may be employed to ensure that only software that has been authorized for the device can be executed on the device. However, the device may be vulnerable to malicious attack (e.g., during manufacturing) prior to activation of the security mechanism on the device. During this vulnerable time period, unauthorized software may be maliciously loaded onto the device and executed by the device to access security information (e.g., security keys) on the device and/or to manipulate data using the security information.

SUMMARY

[0003] Techniques for improving security of an electronics device are disclosed herein. In an aspect of the present disclosure, security of a device may be improved by generating a working key based on a hardware secret key as well as at least one security parameter of the device. The working key (instead of the hardware secret key) may be used to perform security functions (e.g., encrypt and decrypt data) on the device.

[0004] In an exemplary design, a working key may be generated based on at least one security parameter and a secret key of a device, e.g., with a key derivation function. The at least one security parameter may be related to software to be authenticated on the device and/or to other aspects of security for the device. At least one security function may be performed for the device based on the working key. For example, the working key may be used to encrypt, sign, decrypt, or verify data for the device. The working key may be used directly or indirectly by the software for the at least one security function.

[0005] The at least one security parameter may control various aspects of security for the device. For example, the at least one security parameter may determine whether the software is authorized for execution on the device, whether at least one operating function is authorized for the software, etc. In one design, the at least one security parameter may include a public key used to determine whether the software is authorized for the device. The public key may correspond to a private key used to sign the software. The at least one security parameter may also comprise other types of information.

[0006] Various aspects and features of the disclosure are described in further detail below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] FIG. 1 shows a block diagram of a wireless device.
[0008] FIG. 2 shows an exemplary manufacturing process for the wireless device.
[0009] FIG. 3A shows a process for storing security information on the wireless device.
[0010] FIG. 3B shows a process for performing secure boot of the wireless device.
[0011] FIG. 4 shows a process for encrypting and decrypting data based on a hardware secret key.
[0012] FIG. 5 shows a process for encrypting and decrypting data based on a working key.
[0013] FIG. 6 shows a process for providing security for a device.

DETAILED DESCRIPTION

[0014] The security key generation techniques disclosed herein may be used for various electronics devices such as wireless communication devices, handheld devices, game devices, computing devices, consumer electronics devices, computers, etc. For clarity, the techniques are described below for a wireless communication device.

[0015] FIG. 1 shows a block diagram of an exemplary design of a wireless communication device 100 capable of implementing the security key generation techniques disclosed herein. Wireless device 100 may be a cellular phone, a smartphone, a tablet, a wireless modem, a personal digital assistant (PDA), a handheld device, a laptop computer, a smartbook, a netbook, a cordless phone, a wireless local loop (WLL) station, a Bluetooth device, etc. Wireless device 100 may support bi-directional communication with one or more wireless communication systems.

[0016] For data transmission, a digital module 120 may process (e.g., encode and modulate) data to be transmitted and provide an output baseband signal to a transmitter (TMTR) 114. Transmitter 114 may amplify, filter, and upconvert the output baseband signal to generate an output radio frequency (RF) signal, which may be transmitted via an antenna 112 to base stations.

[0017] For data reception, antenna 112 may receive signals from base stations and/or other transmitter stations and may provide a received RF signal to a receiver (RCVR) 116. Receiver 116 may downconvert the received RF signal from RF to baseband, filter and amplify the downconverted signal, and provide an input baseband signal to digital module 120. Digital module 120 may process (e.g., demodulate and decode) the input baseband signal to recover data sent to wireless device 100.

[0018] Digital module 120 may include various processing, interface, and memory units to support digital processing for wireless device 100. In the design shown in FIG. 1, digital module 120 includes a modem processor 122, a central processing unit (CPU)/reduced instruction set computer (RISC) processor 124, a main controller 126, an internal memory 130, a secure memory 140, a memory controller 148, and an input/output (I/O) controller 158, all of which may communicate with one another via one or more data buses 160.

[0019] Modem processor 122 may perform processing for data transmission and reception, e.g., encoding, modulation, demodulation, decoding, etc. CPU/RISC processor 124 may perform general-purpose processing for wireless device 100, e.g., processing for audio, video, graphics, and/or other applications. Main controller 126 may direct the operation of various units within digital module 120. Internal memory 130 may store software 132 and/or data used by the processors and controllers within digital module 120. Memory 130 may be implemented with static random access memory (SRAM) or other types of memory.
[0020] Secure memory 140 may store security keys 142, security parameters 144, boot code 146, and/or other secure information. Security keys 142 may be used for security functions on wireless device 100, e.g., to encrypt data sent by wireless device 100, decrypt encrypted data sent to wireless device 100, authenticate software loaded into internal memory 130, etc. Security parameters 144 may control various aspects related to security of wireless device 100. Boot code 146 may perform secure boot to authenticate software loaded onto wireless device 100. Memory 140 may be implemented with a read-only memory (ROM), one-time programmable (OTP) elements, and/or other types of memory.

[0021] Memory controller 148 may facilitate transfer of data between an external memory 150 and digital module 120. External memory 150 may provide mass/bulk storage for the processing units within digital module 120. For example, memory 150 may store software 152 that can be loaded into digital module 120 for execution, data, etc. Memory 150 may comprise (i) bulk nonvolatile memory such as NAND Flash and/or NOR Flash memory, (ii) bulk volatile memory such as synchronous dynamic random access memory (SDRAM) or dynamic random access memory (DRAM), and/or (iii) other types of memory. I/O controller 158 may allow wireless device 100 to communicate with secure servers and/or other entities.

[0022] FIG. 1 shows an exemplary design of digital module 120. In general, digital module 120 may include any number of processing, interface, and memory units. Digital module 120 may also be implemented with one or more digital signal processors (DSPs), micro-processors, RISC processors, etc. Digital module 120 may be implemented on one or more application specific integrated circuits (ASICs) and/or other integrated circuits (ICs).

[0023] An electronics device, such as wireless device 100, typically goes through a series of manufacturing steps. The electronics device may be vulnerable to security attack during one or more manufacturing steps.

[0024] FIG. 2 shows an exemplary manufacturing process 200 for wireless device 100 (or any electronics device). A key provisioning entity may securely provision a hardware (HW) secret key on wireless device 100 (step 1). The key provisioning entity may be an integrated circuit (IC) chip manufacturer of an IC chip (e.g., an ASIC) used in wireless device 100 (as shown in FIG. 2) or some other entity. The hardware secret key may be stored in secure memory 140 on wireless device 100.

[0025] A device manufacturer may manufacture or build wireless device 100 in an unsecured manufacturing environment where access to manufactured devices cannot be limited to only trusted employees. The device manufacturer may be an original device manufacturer (ODM) (as shown in FIG. 2), an original equipment manufacturer (OEM), or any entity that builds, assembles, and provisions wireless device 100. The device manufacturer typically loads software, loads security parameters, and enables security functions on wireless device 100 (step 2).

[0026] A secure server may provision wireless device 100 with secret data using the hardware secret key (step 3). Secure data provisioning is typically performed in a secure facility to load secret data onto wireless device 100. For secure data provisioning, the secure server may securely exchange data with wireless device 100 using the hardware secret key. The provisioned secret data may be stored in secure memory 140 on wireless device 100.
[0027] As shown in FIG. 2, security parameters may be loaded onto wireless device 100 during the manufacturing process. The security parameters may control various aspects of security on wireless device 100 and may include one or more of the following:
• Information related to a root-of-trust (RoT) of the device,
• Information that controls which software can execute on the device and/or how software can operate on the device,
• Information that controls whether certain security features can be enabled or disabled on the device, and/or
• Other security-related information.

[0028] The security parameters may include information related to the root-of-trust of wireless device 100. The root-of-trust may be the foundation of all security mechanisms on wireless device 100. The root-of-trust related information may include one or more public root keys corresponding to one or more private root keys, one or more certificates for the public root key(s), etc. A private root key may be used to sign data sent to wireless device 100. A corresponding public root key may be used to authenticate data that has been signed with the private root key. For example, the public root key may be used in secure boot to authenticate software loaded onto wireless device 100, as described below.

[0029] The security parameters may control which software can execute on wireless device 100 and/or how software can operate on wireless device 100. For example, the security parameters may include a public key used to authenticate software authorized for execution on wireless device 100. The software may be signed based on a private key corresponding to the public key and may be stored on wireless device 100. The software may be authenticated based on the public key prior to execution on wireless device 100, as described below.

[0030] The security parameters may control whether certain security features can be enabled or disabled on wireless device 100. For example, the security parameters may control whether secure boot of wireless device 100 is enabled, whether debug capability of wireless device 100 can be disabled to allow access to internal states of wireless device 100 during testing or debug, etc.

[0031] Some security parameters may serve multiple purposes. For example, the public root key may serve both as the root-of-trust of wireless device 100 and as a control on which software can execute on wireless device 100.

[0032] The security parameters may be stored in secure memory 140 on wireless device 100. For example, the security parameters may be stored using OTP elements on an IC chip of a processor for wireless device 100. The OTP elements may be implemented with fuses that can be blown once during manufacturing to permanently store data via the state of the fuses.

[0033] Software and security information may be stored on wireless device 100 in a manner that allows wireless device 100 to authenticate the software prior to execution of the software. An exemplary security mechanism for authenticating software stored on wireless device 100 is described below.

[0034] FIG. 3A shows an exemplary design of a process 300 for storing security information on wireless device 100 to support authentication of software loaded onto wireless device 100. Process 300 may be performed by a secure server or some other entity.
[0035] At the secure server, a sign function 320 may generate a digital signature SR over a public key X' and possibly other information using a private root key R. Signature SR may be used to authenticate a source entity, which is the secure server. Sign function 320 may implement an RSA (Rivest, Shamir, and Adleman) algorithm, a Digital Signature Algorithm (DSA), or some other cryptographic (digital signature or encryption) algorithm. A certificate generator 322 may form a certificate CR containing public key X', signature SR, and possibly other information such as an identifier of the source entity, a cryptographic algorithm selected for use, an expiration date of the certificate, etc. This certificate may be stored as an X.509 certificate in secure memory 140 (or some other memory) on wireless device 100. Public root key R' may be made available to wireless device 100 in a secure manner and may be stored in secure memory 140 (e.g., an OTP memory or a ROM) on wireless device 100.

[0036] A secure hash function 330 may hash software loaded onto wireless device 100 and may provide a hash digest S. Secure hash function 330 may implement SHA-1, SHA-2, MD-5, or some other secure hash algorithm. A sign function 332 may generate a digital signature SX over digest S using private key X. Signature SX may be stored in memory 150. Sign function 332 may implement the RSA, DSA, or some other cryptographic algorithm. The software may be stored in memory 150 (or some other memory) on wireless device 100.

[0037] FIG. 3B shows an exemplary design of a process 350 for secure boot of wireless device 100. Process 350 may be performed by wireless device 100, as described below. At wireless device 100, a verify function 370 may receive certificate CR and public root key R' from secure memory 140. Verify function 370 may extract signature SR and public key X' from certificate CR, verify signature SR with public root key R', and provide public key X' if signature SR is verified. Any tampering with certificate CR by a third party can be easily detected by signature SR not verifying.

[0038] A secure hash function 380 may receive software from memory 150, hash the software, and provide a hash digest S'. Secure hash function 380 may implement the same secure hash algorithm used by secure hash function 330 in FIG. 3A. A verify function 390 may receive digest S' from secure hash function 380, digital signature SX from memory 150, and public key X' from verify function 370. Verify function 390 may verify digital signature SX with public key X' and digest S' and may indicate whether or not digital signature SX is verified. Public key X' is authenticated with public root key R'. Hence, any tampering with digital signature SX and/or the software by a third party can be easily detected by digital signature SX not verifying. If digital signature SX is verified, then the software may be provided for use. Otherwise, an error message may be provided.

[0039] FIG. 3A shows an exemplary secure boot software signing process. FIG. 3B shows an exemplary secure boot software authentication process. Secure boot may also be implemented in other manners.
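As an illustration of the two-stage verification in FIGS. 3A and 3B, the minimal Python sketch below signs a software image with a key X whose public half X' is in turn signed by a root key R, then verifies the chain on the device side. It uses the third-party "cryptography" package with RSA-PSS and SHA-256 (one of several algorithm choices the description permits), and it stands in a bare signed public-key blob for the X.509 certificate CR; it is a simplification, not the exact process above.

```python
# Illustrative two-stage secure-boot check (simplified; no X.509 parsing).
# Requires the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Provisioning side (secure server): private root key R and software key X.
root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # R
sw_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)    # X

# "Certificate": public key X' signed by R (stands in for certificate CR).
x_pub_bytes = sw_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
sig_r = root_key.sign(x_pub_bytes, pss, hashes.SHA256())                   # SR

software = b"boot image contents"
sig_x = sw_key.sign(software, pss, hashes.SHA256())                        # SX

# Device side: only the public root key R' is trusted a priori.
def secure_boot(root_pub, x_pub_bytes, sig_r, software, sig_x) -> bool:
    try:
        root_pub.verify(sig_r, x_pub_bytes, pss, hashes.SHA256())  # check X'
        x_pub = serialization.load_der_public_key(x_pub_bytes)
        x_pub.verify(sig_x, software, pss, hashes.SHA256())        # check software
        return True                                # software authenticated
    except InvalidSignature:
        return False                               # tampering detected

assert secure_boot(root_key.public_key(), x_pub_bytes, sig_r, software, sig_x)
```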
The secure boot may ensure that only software that has been authorized for wireless device 100 can be executed on wireless device 100.

[0041] Hardware secret keys are commonly provisioned on ASICs such as system-on-chip (SoC) ICs and are used to encrypt and decrypt data stored in memories external to the ASICs. This security mechanism is also known as a secure file system or an encrypted file system. Hardware secret keys are typically separate from public/private keys. A hardware secret key is typically a symmetric key used to encrypt and decrypt secrets in a device. For example, a hardware secret key may be used to encrypt data before storing the encrypted data in a non-protected data storage such as a solid state disk (SSD), a MultiMediaCard (MMC), an eMMC, etc. Many OEMs do not trust their manufacturing floor employees or ODM employees. Hence, most security implementations do not allow software to have access to a hardware secret key on an ASIC. However, these security implementations typically allow the hardware secret key to be used indirectly by software. This usage may include decryption or encryption of data.

[0042] Software may be considered to be trusted after it has been authenticated and verified by an authentication mechanism that is tied to a root-of-trust (RoT) of the ASIC. This authentication mechanism is typically referred to as secure boot. However, secure boot may not be available during the manufacturing process.

[0043] It is common practice to provide an OEM/ODM with a generic ASIC in which a hardware secret key is provisioned, secure boot is not enabled, and root-of-trust is not provisioned. The hardware secret key may be provisioned in a secure memory on a device or the ASIC. At this stage, prior to enabling secure boot and/or other security mechanisms that can protect the integrity of software on the device, unauthorized software may be loaded onto the device and executed by the device. Any security key provisioned on the device may be manipulated by the unauthorized software. This opens the door for untrusted ODM/OEM manufacturing employees to manipulate data using the hardware secret key, expose confidential information, or compromise the integrity of data protected by the hardware secret key.

[0044] Wireless device 100 may thus be vulnerable to attack from (i) the time when the hardware secret key is provisioned on wireless device 100, e.g., by an IC chip manufacturer in step 1 in FIG. 2, to (ii) the time when security is locked on wireless device 100, e.g., by the OEM in step 3 in FIG. 2. During this vulnerable time period, unauthorized software may be maliciously loaded onto wireless device 100 and executed by the wireless device to (i) access the hardware secret key and/or (ii) manipulate data using the hardware secret key, e.g., in cases where the hardware secret key is not accessible by software on wireless device 100.

[0045] In an aspect of the present disclosure, security of a device may be improved (and the security weakness described above may be effectively addressed) by generating a working key based on a hardware secret key as well as at least one security parameter, which may be related to software authorized for the device. The working key (instead of the hardware secret key) may be used to encrypt and/or decrypt data on the device.

[0046] FIG. 4 shows a process 400 for encrypting and decrypting data in a conventional manner based on a hardware secret key.
At a secure server 410 (which may belong to an OEM), a crypto engine 430 may encrypt data with a hardware secret key 442 of a device 450 to obtain encrypted data. Crypto engine 430 may operate as directed by software 440 in secure server 410. The encrypted data may be sent to device 450.

[0047] At device 450, a crypto engine 470 may receive the encrypted data from secure server 410 and may decrypt the encrypted data with hardware secret key 442 of device 450. Crypto engine 470 may operate as directed by software 480 in device 450. As noted above, software 480 may be insecure prior to enabling of secure boot on device 450. In this case, malicious software may be loaded onto device 450 and may be executed to (i) direct crypto engine 470 to decrypt the encrypted data and/or (ii) manipulate the decrypted data.

[0048] FIG. 5 shows an exemplary design of a process 500 for encrypting and decrypting data in a novel manner based on a working key. At a secure server 510, a one-way key derivation function (KDF) 522 may generate a working key for a device 550 based on a hardware secret key 542 and at least one security parameter 544, which may be related to software authorized on device 550. A crypto engine 530 may encrypt data with the working key to obtain encrypted data, which may be sent to device 550.

[0049] At device 550, key derivation function 522 may generate the working key for device 550 based on hardware secret key 542 and at least one security parameter 544 of device 550. A crypto engine 570 may receive the encrypted data from secure server 510 and may decrypt the encrypted data with the working key to obtain decrypted data.

[0050] At secure server 510, hardware secret key 542 and/or security parameters 544 may be stored in a secure storage 541 within secure server 510. Key derivation function 522 and crypto engine 530 may be implemented with hardware, software, and/or firmware and may be implemented by (e.g., executed on) a processor 521 within secure server 510.

[0051] At device 550, hardware secret key 542 and/or security parameters 544 may be stored in a secure memory 540 of device 550. For example, secure memory 540 may comprise an OTP memory, and hardware secret key 542 and/or security parameters 544 may be stored by blowing fuses of the OTP memory. Key derivation function 522 and crypto engine 570 may be implemented with hardware, software, and/or firmware and may be implemented by (e.g., executed on) a processor 520 within device 550. Device 550 may be one exemplary design of wireless device 100 in FIG. 1. Secure memory 540 may correspond to secure memory 140 within wireless device 100 in FIG. 1. Processor 520 may correspond to processor 122 or 124 within wireless device 100 in FIG. 1.

[0052] Various key derivation functions may be used for key derivation function 522 at secure server 510 and device 550. A key derivation function may utilize one or more cryptographic hash functions such as SHA-1 (Secure Hash Algorithm), SHA-2 (which includes SHA-224, SHA-256, SHA-384 and SHA-512), MD-4 (Message Digest), MD-5, etc. A secure hash algorithm has cryptographic properties so that the function between an input message and an output digest (which is a pseudo-random bit string) is irreversible and the likelihood of two input messages mapping to the same digest is very small. Key derivation function 522 may be implemented as described in NIST 800-108, which is publicly available.
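As one concrete illustration of key derivation function 522 (the patent leaves the choice of KDF open), the Python sketch below implements a counter-mode KDF in the style of NIST 800-108 with HMAC-SHA-256 as the pseudo-random function. The label, context, and placeholder key values are hypothetical.

    # Illustrative counter-mode KDF per NIST SP 800-108 (not the patent's code),
    # deriving a working key from a hardware secret key and security parameters.
    # HMAC-SHA-256 as the PRF is an assumption; the disclosure leaves the KDF open.
    import hashlib
    import hmac

    def derive_working_key(hardware_secret_key: bytes,
                           security_params: bytes,
                           out_len: int = 32) -> bytes:
        """K(i) = HMAC(K_I, [i]_32 || Label || 0x00 || Context || [L]_32)."""
        label = b"working-key"                     # hypothetical label
        context = security_params                  # e.g., public root key, secure-boot flags
        length_bits = (out_len * 8).to_bytes(4, "big")
        output = b""
        counter = 1
        while len(output) < out_len:
            msg = counter.to_bytes(4, "big") + label + b"\x00" + context + length_bits
            output += hmac.new(hardware_secret_key, msg, hashlib.sha256).digest()
            counter += 1
        return output[:out_len]

    # The same inputs on the secure server and on the device yield the same working
    # key; tampered security parameters yield a different key and useless ciphertext.
    hw_key = bytes(32)                             # placeholder hardware secret key
    params = b"root-key-hash||secure-boot=1"       # placeholder security parameters
    working_key = derive_working_key(hw_key, params)
    assert working_key != derive_working_key(hw_key, b"tampered-params")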
[0053] As shown in FIG. 2, security parameters (e.g., root-of-trust related security information and secure boot related security information) may be provisioned in a secure memory of a wireless device as part of a manufacturing process. The provisioning of security parameters is typically done after a hardware secret key is already provisioned on the wireless device. The security parameters are typically not secret and may be provisioned by unauthorized entities (e.g., manufacturing employees).

[0054] As shown in FIG. 5, a key derivation function may be used to generate a working key based on the hardware secret key and at least one security parameter provisioned on the device. The security parameter(s) may relate to software authorized for the device. The security parameter(s) may also determine a system security level and/or a specific root-of-trust on the device. The working key may be properly generated after the security parameter(s) have been provisioned on the device, e.g., by an OEM. The working key may be used by the OEM to protect secret data. Unauthorized software may be maliciously loaded onto the device prior to provisioning of security parameters on the device. However, the unauthorized software would not be able to generate the correct working key without the right set of security parameters. Furthermore, incorrect security parameters may be loaded onto the device by an unauthorized entity, e.g., an untrusted employee. However, the correct working key would not be generated without the right set of security parameters, and data would still be protected. In any case, software unable to utilize the correct working key would be unable to properly encrypt or decrypt data on the device.

[0055] FIG. 6 shows an exemplary design of a process 600 for providing security. Process 600 may be performed by a device, a secure server, or some other entity. A working key may be generated based on at least one security parameter and a secret key (e.g., a hardware secret key) of the device (e.g., with a key derivation function) (block 612). The at least one security parameter may be related to software to be authenticated on the device and/or other aspects of security for the wireless device. At least one security function may be performed for the device based on the working key (block 614). The working key may be used directly or indirectly by the software for the at least one security function. The at least one security parameter and/or the secret key may be stored in a secure memory on the device, e.g., in OTP elements.

[0056] The at least one security parameter may control various aspects of security for the device. In one design, the at least one security parameter may determine whether the software (or which software) is authorized for execution on the device. In another design, the at least one security parameter may determine at least one operating function authorized for the software on the device (or how software can be used on the device). In yet another design, the at least one security parameter may comprise a public key used to determine whether the software is authorized for the device. The public key may correspond to a private key used to sign the software, e.g., as shown in FIGS. 3A and 3B. The at least one security parameter may also comprise other types of information.
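Continuing the names from the KDF sketch above, the following illustrates block 614 under the assumption that the at least one security function is authenticated encryption. AES-256-GCM via the third-party cryptography package is an illustrative choice, not one prescribed by the disclosure, and the data values are placeholders.

    # Illustrative use of the working key (not the hardware secret key) for block
    # 614; assumes AES-256-GCM and reuses derive_working_key(), hw_key and params
    # from the KDF sketch above.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    working_key = derive_working_key(hw_key, params)   # 32-byte key from the KDF
    aead = AESGCM(working_key)

    nonce = os.urandom(12)                             # unique per message
    secret = b"OEM-provisioned secret data"
    ciphertext = aead.encrypt(nonce, secret, None)     # encrypt on the secure server

    # On the device, the working key is re-derived from the hardware secret key and
    # the provisioned security parameters; software that cannot reproduce the
    # correct working key cannot decrypt the data.
    assert AESGCM(derive_working_key(hw_key, params)).decrypt(nonce, ciphertext, None) == secret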
[0057] In one design, the secret key may be loaded onto the device by a first entity (e.g., an IC chip manufacturer, during the manufacturing of an IC chip). The at least one security parameter may be loaded onto the device by a second entity (e.g., an OEM or an ODM device manufacturer), which may be different from the first entity. In one design, the secret key and the at least one security parameter may be loaded onto the device at different times. The secret key and the at least one security parameter may have other characteristics that are different.

[0058] In one design of block 614, data for the device may be encrypted or signed with the working key. In another design of block 614, data for the device may be decrypted or verified with the working key. In one design, the at least one security function may be performed under control of the software.

[0059] The at least one security function may be performed by the software prior to activation of a secure mechanism (e.g., a secure boot) to authenticate the software. The use of the working key may enable the software to be executed on the device without authenticating the software via the secure mechanism.

[0060] In an exemplary design, an apparatus (e.g., an ASIC, a wireless device, an electronics device, etc.) may include a memory and a processor. The memory (e.g., memory 150 in FIG. 1) may store software for a device. The processor (e.g., processor 122 or 124 in FIG. 1) may be operatively coupled to the memory (e.g., via one or more data buses). The processor may (i) generate a working key based on at least one security parameter and a secret key of the device and (ii) perform at least one security function (e.g., encryption, decryption, signature, verification, etc.) for the device based on the working key. The processor may perform the at least one security function prior to activation of a secure mechanism (e.g., a secure boot) to authenticate the software. The at least one security parameter may be related to authentication of the software stored in the memory. The working key may be used directly or indirectly by the software for the at least one security function. The secret key and the at least one security parameter may be loaded onto the device by different entities and/or at different times. A first entity may load the secret key onto the device, and a second entity may later load the at least one security parameter onto the device. The first entity may be an IC chip manufacturer, and the second entity may be an OEM or ODM. Alternatively, the first entity may be a trusted employee, and the second entity may be a non-trusted employee, e.g., on the same manufacturing floor or at different locations. The apparatus may further include a secure memory that stores the secret key and/or the at least one security parameter. The at least one security parameter may also be stored in an unsecure memory on the apparatus, as long as the integrity of the at least one security parameter is protected by the memory.

[0061] The security key generation techniques disclosed herein may provide various advantages. The techniques may prevent unauthorized software from utilizing a hardware secret key or manipulating data during a vulnerable time period in manufacturing prior to activation of secure boot. This may relieve an OEM/ODM from having to implement various processes to secure a manufacturing floor. There may be other advantages provided by the techniques disclosed herein.

[0062] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0063] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0064] The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0065] The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0066] In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general purpose or special purpose computer.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0067] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
A method and apparatus for incremental design are described. More particularly, a text-circuit description of an integrated circuit having logic groups of respective logic instances is obtained. Area groups are created for the logic groups and correspondingly assigned. Unchanged logic groups are guided in an incremental implementation from existing guide files, and changed logic groups are re-implemented in area groups corresponding to the changed logic groups. In this manner, runtime for the unchanged logic groups is reduced by an incremental guide implementation instead of a re-implementation, while performance of such unchanged logic groups is maintained from a prior implementation. Furthermore, degrees of freedom for re-implementing are enhanced for improving a design, as all prior mapping, placing and routing within a changed area group may be stripped for re-implementation.
The invention claimed is:

1. A processor-implemented method for implementing a circuit design, comprising:
assigning portions of a first version of a circuit design to a plurality of areas of an integrated circuit, wherein each portion is assigned to one of the areas;
generating respective netlists for the portions of the first version of the circuit design;
implementing the first version of the circuit from the netlists;
inputting netlists representing a second version of the circuit design, wherein portions of the second version of the circuit design correspond to the portions of the first version and are assigned to areas of the integrated circuit to which the portions of the first version are assigned;
for each of the plurality of areas, comparing a first netlist of a portion of the first version of the circuit design assigned to the area to a corresponding second netlist of the portion of the second version of the circuit design;
implementing, in place of an implementation of the first netlist, the second netlist of the second version of the design in response to the second netlist being not equal to the first netlist; and
bypassing implementing the second netlist in response to the second netlist being equal to the first netlist.

2. The method of claim 1, further comprising:
determining, for each of the plurality of areas of the integrated circuit, utilization by the assigned portions of the first version of the circuit design; and
changing an assignment of at least one portion of the first version of the circuit design from a first area to a second area in response to the utilization of the second area being outside a range.

3. The method of claim 1, further comprising:
for a first area of the plurality of areas having a first plurality of assigned portions of the first version of the design, comparing netlists of the first plurality of assigned portions of the first version to corresponding netlists of the second version of the design; and
in response to at least one of the netlists of the first plurality of assigned portions of the first version being not equal to a corresponding at least one of the netlists of the second version of the design, implementing the corresponding netlists of the second version of the design in place of an implementation of the netlists of the first plurality of portions of the first version of the design.

4. The method of claim 1, wherein the step of implementing the first version of the circuit and the step of implementing the second netlist include mapping the circuit design to resources of a programmable logic device.

5. The method of claim 4, wherein the programmable logic device is a field programmable logic device.

6.
An apparatus for implementing a circuit design, comprising:
means for assigning portions of a first version of a circuit design to a plurality of areas of an integrated circuit, wherein each portion is assigned to one of the areas;
means for generating respective netlists for the portions of the first version of the circuit design;
means for implementing the first version of the circuit from the netlists;
means for inputting netlists representing a second version of the circuit design, wherein portions of the second version of the circuit design correspond to the portions of the first version and are assigned to areas of the integrated circuit to which the portions of the first version are assigned;
means for comparing, for each of the plurality of areas, a first netlist of a portion of the first version of the circuit design assigned to the area to a corresponding second netlist of the portion of the second version of the circuit design;
means for implementing, in place of an implementation of the first netlist, the second netlist of the second version of the design in response to the second netlist being not equal to the first netlist; and
means for bypassing implementing the second netlist in response to the second netlist being equal to the first netlist.

7. An article of manufacture, comprising:
a processor-readable medium having instructions executable by at least one processor for implementing a circuit design by performing the steps of:
assigning portions of a first version of a circuit design to a plurality of areas of an integrated circuit, wherein each portion is assigned to one of the areas;
generating respective netlists for the portions of the first version of the circuit design;
implementing the first version of the circuit from the netlists;
inputting netlists representing a second version of the circuit design, wherein portions of the second version of the circuit design correspond to the portions of the first version and are assigned to areas of the integrated circuit to which the portions of the first version are assigned;
for each of the plurality of areas, comparing a first netlist of a portion of the first version of the circuit design assigned to the area to a corresponding second netlist of the portion of the second version of the circuit design;
implementing, in place of an implementation of the first netlist, the second netlist of the second version of the design in response to the second netlist being not equal to the first netlist; and
bypassing implementing the second netlist in response to the second netlist being equal to the first netlist.

8. The article of manufacture of claim 7, wherein the processor-readable medium is further configured with executable instructions for performing the steps comprising:
determining, for each of the plurality of areas of the integrated circuit, utilization by the assigned portions of the first version of the circuit design; and
changing an assignment of at least one portion of the first version of the circuit design from a first area to a second area in response to the utilization of the second area being outside a range.

9.
The article of manufacture of claim 7, wherein the processor-readable medium is further configured with executable instructions for performing the steps comprising:
for a first area of the plurality of areas having a first plurality of assigned portions of the first version of the design, comparing netlists of the first plurality of assigned portions of the first version to corresponding netlists of the second version of the design; and
in response to at least one of the netlists of the first plurality of assigned portions of the first version being not equal to a corresponding at least one of the netlists of the second version of the design, implementing the corresponding netlists of the second version of the design in place of an implementation of the netlists of the first plurality of portions of the first version of the design.

10. The article of manufacture of claim 7, wherein the instructions for implementing the first version of the circuit and the instructions for implementing the second netlist include instructions for mapping the circuit design to resources of a programmable logic device.

11. The article of manufacture of claim 10, wherein the programmable logic device is a field programmable logic device.
FIELD OF THE INVENTION

One or more aspects of the invention generally relate to incremental design, and more particularly to hierarchical and area-based guiding for incremental design.

BACKGROUND OF THE INVENTION

Programmable logic devices (PLDs) exist as a well-known type of integrated circuit (IC) that may be programmed by a user to perform specified logic functions. There are different types of programmable logic devices, such as programmable logic arrays (PLAs) and complex programmable logic devices (CPLDs). One type of programmable logic device, called a field programmable gate array (FPGA), is very popular because of a superior combination of capacity, flexibility, time-to-market, and cost.

An FPGA typically includes an array of configurable logic blocks (CLBs) surrounded by a ring of programmable input/output blocks (IOBs). The CLBs and IOBs are interconnected by a programmable interconnect structure. The CLBs, IOBs, and interconnect structure are typically programmed by loading a stream of configuration data (bitstream) into internal configuration memory cells that define how the CLBs, IOBs, and interconnect structure are configured. The configuration bitstream may be read from an external memory, conventionally an external integrated circuit memory such as an EEPROM, EPROM, PROM, or the like, though other types of memory may be used. The collective states of the individual memory cells then determine the function of the FPGA.

Conventionally, such a bitstream is generated using a hardware description language ("HDL"), such as Verilog or Very High Speed Integrated Circuit ("VHSIC") HDL ("VHDL"), to provide a textual description of circuitry ("text-circuit description") which is synthesized and implemented. Accordingly, a design described through an HDL is a functional description, which is converted into a text-circuit description using one or more synthesizer tools or programs. Mapping, placement, and routing of components/signals then implement this synthesis, from which comes the bitstream. Due to the large number of components and interconnections, mapping, placing and routing can consume a significant amount of computer time ("runtime"). A more recent addition to HDL involves use of a programming language, such as C/C++ as in SystemC from Synopsys of Mountain View, Calif., to provide a circuit synthesis.

Circuit synthesis is used in designing all types of complex integrated circuits. One use is for designing hardwired circuitry, such as in FPGAs, processors, Application Specific Standard Products (ASSPs), and the like. Another use is for designing Application Specific Integrated Circuits (ASICs), including ASIC standard cells, where a vendor or customer uses synthesis tools to programmatically configure logic built on such an integrated circuit. Another use of synthesis tools is programmatically configuring a portion of an FPGA to provide a design. For purposes of clarity, an FPGA integrated circuit is described, though it will be apparent that any integrated circuit of sufficient complexity designable with synthesis tools may be implemented.

Once a design is created and implemented for an FPGA, it may be improved for performance, debugged, or the like. The process for creating a design for an FPGA, or other circuit, can involve iterative use of synthesizing and implementing. This iterative process is sometimes referred to as "incremental design," and is described in additional detail in U.S. Pat. No. 5,867,396.
Whether an FPGA design is being altered after meeting all timing requirements to improve performance, or is being changed to improve performance to meet all timing requirements, or other goal(s), rerunning a mapper ("MAP program"), and a placer and router ("PAR program"), after each design change is time consuming. Thus, runtimes for mapping, placing and routing for implementation may significantly impact design cycle time, especially if multiple debugging iterations are done.

In incremental design of the past, if there were some percentage of change in instances of circuitry as coded, whether Verilog modules or VHDL entities for example, all unchanged circuitry would be left without any further mapping, placing and routing ("locked down"). The changed circuitry could then be re-mapped, re-placed and re-routed subject to avoidance of locked down circuitry. Classically, there was no differentiation with respect to hierarchy of logic; this was a flat instance "name-based" guide for incremental design. By "name-based," it is meant a matching or comparison of specific names used.

In U.S. Pat. No. 5,867,396, it was recognized that using name changes in a synthesis output ("netlist") as an indicator of circuit changes in a design could sometimes be misleading. Thus, the name of a logic instance may be changed without actually having a corresponding change in circuitry from a prior naming. Accordingly, a check was made to determine whether a change in name actually amounted to a change in circuitry, to potentially reduce the amount of circuitry to be re-mapped, re-placed and re-routed.

However, each of the above-described types of incremental design tends to be disfavored owing to an inability to meet performance goals. Thus, conventional incremental design re-maps, re-places and re-routes an entire design in order to meet integrated circuit performance goals at the expense of runtime. Thus, multiple debugging iterations can significantly impact design cycle time. Furthermore, as integrated circuits become more complex by having more circuits, runtimes may increase.

Today, there is hierarchical synthesis. Hierarchical synthesis allows for multiple groupings of blocks of logic. Examples of hierarchical design methodologies include: having all critical paths within a grouping of instantiated logic; having inputs or outputs (I/Os) registered at boundaries of a grouping of instantiated logic; and having a top hierarchical level ("top level") having only instantiated modules or entities, IOB logic and clock logic.

Accordingly, it would be both desirable and useful to provide means to reduce runtime for incremental design while better facilitating the meeting of performance goals.

SUMMARY OF THE INVENTION

An aspect of the invention is a method for implementing incremental design change to a circuit design by: obtaining a text-circuit description of the circuit design having logic groups of logic instances; creating area groups for at least a portion of the hierarchical groupings of logic instances; assigning at least the portion of the hierarchical groupings of logic instances to the area groups; and generating at least one guide file having membership information of the logic group assignment to the area group.

Another aspect of the invention is a method for incremental design change to a circuit design. Reference logic instances and subsequent logic instances in an area group of the circuit design are obtained.
The reference logic instances and the subsequent logic instances are compared with one another. In response to the outcome of this comparison, logic instances are tagged to be either guided or re-implemented within the area group, where the tagging is for the guiding or the implementing of the area group in its entirety.

Another aspect of the invention is a method for incremental design change to a circuit design. Reference logic instances and subsequent logic instances in an area group of the circuit design are obtained. The reference logic instances and the subsequent logic instances are compared with one another. In response to the outcome of this comparison, a selection is made of whether to implement or guide, responsive to the reference logic instances and the subsequent logic instances being different or equivalent, respectively.

Another aspect of the invention is a method for incrementally not modifying a circuit design of an integrated circuit. A plurality of logic groups is determined, and a plurality of area groups is created. A logic group of the plurality of logic groups is assigned to an area group of the plurality of area groups, where the logic group is an unmodified portion of the circuit design. Placement and routing for the logic group in the area group is maintained.

Another aspect of the invention is a method for incrementally modifying a circuit design of an integrated circuit. A plurality of logic groups is determined. A plurality of area groups is created. A logic group of the plurality of logic groups is assigned to an area group of the plurality of area groups. The logic group is replaced with a modified logic group, where the modified logic group is assigned to the area group.

BRIEF DESCRIPTION OF THE DRAWINGS

Accompanying drawing(s) show exemplary embodiment(s) in accordance with one or more aspects of the invention; however, the accompanying drawing(s) should not be taken to limit the invention to the embodiment(s) shown, but are for explanation and understanding only.

FIG. 1 is a high-level block diagram of an exemplary embodiment of an FPGA divided into Area Groups in accordance with one or more aspects of the invention.

FIG. 2 is a high-level flow diagram of an exemplary embodiment of a synthesis and implementation flow in accordance with one or more aspects of the invention.

FIG. 3 is a flow diagram of an exemplary embodiment of an Area Group generation and assignment flow in accordance with one or more aspects of the invention.

FIG. 4A is a flow diagram of an exemplary embodiment of a guide files generation flow in accordance with one or more aspects of the invention.

FIG. 4B is a flow diagram of another exemplary embodiment of a guide files generation flow in accordance with one or more aspects of the invention.

FIG. 5A is a flow diagram of an exemplary embodiment of a revised design implementation flow in accordance with one or more aspects of the invention.

FIG. 5B is a flow diagram of another exemplary embodiment of a revised design implementation flow in accordance with one or more aspects of the invention.

FIG. 6 is a block diagram of an exemplary embodiment of a computer system that may be programmed with one or more program products for the synthesis and implementation flow of FIG. 2 in accordance with one or more aspects of the invention.

DETAILED DESCRIPTION OF THE DRAWINGS

A circuit design for programming an FPGA as described below involves hierarchical design. This facilitates partitioning a design into "Area Groups" and "Logic Groups."
Thus, an "incremental design change" may be made on a "nearly completed design" without significant additional map, place and route runtime.An "Area Group" is a constraint on a design that packs logic together during a mapping processing. An "Area Group Range" constraint specifies a physical location in an FPGA for an Area Group. This physical location may be specified with x- and y-coordinates, among other coordinate systems.MAP is a programmatic tool that, among other known actions, obtains target information (FPGA device, package and speed), reads an input file (conventionally created with Verilog or VHDL), maps pads and associated logic into IOBs, maps logic into FPGA components (such as CLBs and IOBs, among other known FPGA components), creates a physical constraint file, and generates a MAP report.A "Logic Group" is a portion of a design that can be synthesized separately from other logic and can be assigned to an Area Group. A Logic Group thus conventionally may be a module in Verilog or an entity body in VHDL, or less conventionally a programming language description.The phrase "incremental design change" refers to one or more changes in a design. Examples of incremental design changes may include changes to state machines or control logic, adding registers to improve performance, and the like. Thus, an incremental design change encompasses any change in circuitry or name in a design.The phrase "nearly completed design" refers to a design that successfully runs through FPGA design implementation tools. Though timing requirements for such a design should already have been met, they need not have been met for a nearly completed design.What follows at least in part is a detailed description of an incremental design flow that significantly decreases map, place and route runtime while preserving design performance of unchanged logic when making incremental design change to a nearly completed design.Each Logic Group is constrained to occupancy of its own distinct space in an FPGA according to a corresponding Area Group, and thus there is a one-to-one correspondence between Area Groups and Logic Groups. When an incremental design change is made to a Logic Group, an incremental synthesis flow ensures that unchanged Logic Groups are not changed. Implementation tools re-map, re-place and re-route a changed Logic Group in its assigned Area Group, while unchanged Logic Groups are guided from a previous implementation. By guiding unchanged Logic Groups, performance of those unchanged Logic Groups is preserved, while map, place and route runtimes are decreased. This can significantly shorten design time when iterating a design.FIG. 1 is a high-level block diagram of an exemplary embodiment of an FPGA 10 divided into Area Groups. Though FPGA 10 is divided into six Area Groups 11A, 12A, 13A, 14A, 15A, and 16A, fewer or more Area Groups may be used. While the invention is more suited to two or more Area Groups, it is possible to have only one Area Group that is a subset of the total area of the device, such as FPGA 10, where one or more ungrouped areas 17 are for unchanged logic. Area Groups 11A, 12A, 13A, 14A, 15A, and 16A are constrained to specific areas in FPGA 10. A range for each Area Group is specified in terms of x- and y-coordinates. So, for example, the range of FPGA 10 is from a starting point (X0,Y0) to an ending point (XM,YN) for M and N integers. 
Furthermore, (X0,Y0) to (XM,YN) may range over only a portion of FPGA 10, such as when designing for an inserted core for a system-on-a-chip ("SoC"). So, for example, Area Group 16A may be designated by (X1,Y0) to (X2,Y1).

One or more instances of logic, such as modules or entities, may be in a Logic Group. Though there is a one-to-one correspondence between Logic Groups 11L, 12L, 13L, 14L, 15L, and 16L and respective Area Groups 11A, 12A, 13A, 14A, 15A, and 16A, there need not be such correspondence, as one or more Logic Groups may be in an Area Group. Moreover, a Logic Group need not be assigned to an Area Group. However, for purposes of clarity, a design synthesis and implementation is described in terms of a one-to-one correspondence between Logic and Area Groups.

Placement of a Logic Group is anywhere in an Area Group. For example, Logic Group 11L may be placed anywhere in Area Group 11A, up to and including its boundaries, though an inset dashed line is shown for Logic Groups. Accordingly, it should be understood that Area Group size and orientation (horizontal stacking, vertical stacking or otherwise) may vary, and that a Logic Group may be disposed anywhere within its Area Group, taking up all or a portion of that specified area range. So, if an Area Group range is adjusted for a smaller area, a Logic Group may have to be adjusted to stay in such an Area Group.

FIG. 2 is a high-level flow diagram of an exemplary embodiment of a synthesis and implementation flow 100. At 20, Logic Groups are created using a hierarchical synthesis of a user provided design. At 30, Area Groups are created and Logic Groups from block 20 are assigned to them.

FIG. 3 is a flow diagram of an exemplary embodiment of an Area Group generation and assignment flow 30. With continuing reference to FIG. 3 and renewed reference to FIG. 2, Area Group generation and assignment flow 30 is described in additional detail.

At 31, a name is created for an Area Group. At 32, a Logic Group from block 20 is assigned to the Area Group named at block 31 for this iteration. At 33, a physical range is assigned to the Area Group for this iteration. Thus, after block 33, area constraints for an Area Group have been generated. Notably, blocks 31 and 32 may be done in one step.

Area constraints may be influenced by other factors. For example, Area Groups may, though should not, overlap. Other guidelines for creating Area Groups are: Area Groups that communicate with one another need not, but should, be placed proximal to one another. Area Groups that communicate with I/Os need not, but should, be placed proximal to such I/Os. I/Os that communicate with an Area Group need not, but should, be placed proximal to one another. Resource utilization inside each Area Group need not, but should, be approximately the same percentage. With respect to FPGAs in particular, slice utilization inside each Area Group need not, but should, be approximately the same percentage, approximately within a spread of 20 percent, though larger deltas may be used.

At 34, a check is made as to whether there is another Logic Group to have an Area Group created and assigned. If another Area Group is to be created and assigned, another Logic Group from output of block 20 is obtained at 39.

At 35, constraints for Area Groups ("Area Group constraints") are output.
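To make blocks 31 through 33 concrete, the Python sketch below models naming an Area Group, assigning a Logic Group to it, and assigning a physical range, then emits constraint text of the general form used in Xilinx user constraints files (AREA_GROUP and RANGE statements). This is an illustration rather than the patent's tooling; the instance names and slice ranges are hypothetical, and exact constraint syntax varies by tool and device family.

    # Illustrative sketch of flow 30 (not the patent's tooling). Logic Group
    # instance names and slice ranges are hypothetical placeholders.
    logic_groups = {
        "u_11L": (0, 0, 31, 31),    # Logic Group -> (x0, y0, x1, y1) physical range
        "u_16L": (32, 0, 63, 15),
    }

    def area_group_constraints(groups: dict) -> list[str]:
        lines = []
        for inst, (x0, y0, x1, y1) in groups.items():
            ag = f"AG_{inst}"  # block 31: create a name for the Area Group
            # block 32: assign the Logic Group instance to the named Area Group
            lines.append(f'INST "{inst}" AREA_GROUP = "{ag}" ;')
            # block 33: assign a physical range to the Area Group
            lines.append(f'AREA_GROUP "{ag}" RANGE = SLICE_X{x0}Y{y0}:SLICE_X{x1}Y{y1} ;')
        return lines

    # block 35: output the Area Group constraints
    print("\n".join(area_group_constraints(logic_groups)))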
Data output block 35 indicates that Area Group constraints may be output, and thus Area Group generation and assignment flow 30 can return to synthesis and implementation flow 100.

However, optionally at 36, utilization of Area Groups may be checked for being within an acceptance range. If, at 36, utilization is outside of an acceptance range, at 37 one or more Area Group ranges may be adjusted, after which utilization checking is repeated. If utilization is not within an acceptable distribution after adjustment, then Area Group generation and assignment flow 30 may be revisited to adjust such utilization. This may involve at least in part reassignment of one or more Logic Groups. A Pinout and Area Constraints Editor (PACE) from Xilinx of San Jose, Calif., may be used to create an Area Group, as described above.

Optionally, at 38, Area Group constraints are saved for later use. Notably, a designer may desire generating different sets of Area Group constraints for different incremental design iterations for comparison of results.

Referring again to FIG. 2, after one or more constrained Area Groups are created and have respective (or one or more) Logic Groups assigned to them at 30, a user design is implemented by mapping, placing and routing, such as with MAP and PAR programs, at 40 to generate an initial set of guide files and to obtain an initial set of design outcomes ("results"). As guide files generation flow 40 is for creation of an initial set of guide files for a user design, another guide files generation flow is used for incremental synthesis. For an implementation of an HDL synthesized design, there may be one set of guide files generated for the entire design. However, by using MAP and PAR for each Area Group, separate guide files may be generated for each Area Group. Separate sets of guide files further facilitate guiding on Area Groups and not logic instances, as described below in more detail.

FIG. 4A is a flow diagram of an exemplary embodiment of a guide files generation flow 40A. At 41A, a user integrated circuit design is implemented for a device, such as an FPGA. This starts off with a design synthesis, as described above with respect to Verilog or VHDL, and, additionally, Area Group constraints, including Logic Group membership information, from Area Group constraints output from block 35 or retrieved from storage, as input to MAP and PAR programs. Furthermore, a Logic Group includes membership information with respect to logic instances that make up the Logic Group. Accordingly, implementation is done with MAP and PAR programs using these inputs to generate an initial set of guide files. Notably, this initial set of entire design guide files does not have incremental design change but does have Logic Group membership information for each Area Group.

Optionally, at 42A, a MAP report, generated from outcome of a MAP program run at 41A, may be obtained to do some verification. (A PAR report may be generated as well from outcome of a PAR program run at 41A.) Optionally, at 43A, proper reading of Area Group constraints may be verified, namely, a check is made to ensure there is a one-to-one correspondence between Logic Groups and Area Groups, ensuring each Logic Group belongs to an Area Group. Optionally, at 44A, verification that no Area Group is more than 100 percent utilized may be done.
Optionally, at 45A, similar utilization of Area Groups may be checked (verified) to determine whether utilization is (still, if decision block 36 in FIG. 3 is done) within an acceptable distribution.

Notably, if results of one or more of optional blocks 43A, 44A or 45A are not acceptable, then Area Group constraints generation may be revisited with Area Group generation and assignment flow 30 of FIG. 3. For example, at 48A-1, an optional check may be made as to whether Area Group constraints may be finalized. If results at 43A, 44A or 45A are not acceptable, then Area Group constraints may be modified at 48B-1, for example by re-executing all or a portion of Area Group generation and assignment flow 30 of FIG. 3 to obtain modified Area Group constraints for implementation at 41A.

FIG. 4B is a flow diagram of an exemplary embodiment of a guide files generation flow 40B, where Area Groups are processed individually as opposed to processing a plurality of Area Groups at a time as in guide files generation flow 40A. However, as much of the guide files generation flow 40B description is the same as in guide files generation flow 40A, redundant description is not repeated. At 41B, an Area Group, not all Area Groups, of a user integrated circuit design is implemented for a device, such as an FPGA. This starts off with a design synthesis for a Logic Group corresponding to its Area Group and, additionally, Area Group constraints, including Logic Group membership information, from Area Group constraints output from block 35 or retrieved from storage, as input to MAP and PAR programs, for such an Area Group. Accordingly, implementation is done with MAP and PAR programs using these inputs to generate an initial set of guide files for this Logic Group associated with this Area Group. Notably, this initial set of partial design guide files does not have incremental design change but does have Logic Group membership information for each Area Group. A MAP report may be obtained for such an Area Group at 42B, and optionally checks at 43B, 44B or 45B may be made as associated with such an Area Group.

At 48A-2, an optional check may be made as to whether design results for an Area Group may be finalized. If such results are not acceptable, such an Area Group may have its constraints modified at 48B-2, such as by re-executing all or a portion of Area Group generation and assignment flow 30 of FIG. 3 for the affected Area Group, for implementation with such modified Area Group constraints at 41B.

If, however, at 48A-2 design results are acceptable, then at 46, a check is made to determine if another Logic Group is to be implemented. Notably, for a one-to-one correspondence between Logic and Area Groups, decision block 46 may be based on Logic Group or Area Group. However, if such one-to-one correspondence does not exist, then Area Group is used. If another Logic Group is to be implemented, then another Area Group, or Logic Group, is tagged or obtained at 47 for subsequent processing at 41B, as described above.

Optionally, blocks 42B, 43B, 44B or 45B do not need to be within a loop back, as consolidated reports may be used, as described with FIG. 4A.

Additionally, with respect to flows 40A and 40B, Area Group constraints may be checked from outcomes of MAP program runs at design results check decision block 48 of FIG. 2, whereby blocks 48A and 48B could be omitted. Though, as mentioned above, blocks 41A and 41B are described for generating initial guide files, once initial guide files are obtained, one or more iterations for incremental design change may be used to generate one or more other sets of guide files.
With renewed reference to FIG. 2, optionally at 48, a check may be made as to whether design results are to be made final. For example, design results may be made final if no incremental design change is to be made. If design results from an implementation are acceptable, then synthesis and implementation flow 100 ends at 63.

If, however, incremental design change is to be done, then at 49 a synthesis of a design with such incremental design change ("incremental synthesis") is done. For incremental synthesis, only those Logic Groups with incremental design change have respective synthesis netlist outputs updated. For incremental synthesis, several synthesis tools may be used, such as: Leonardo Spectrum from Mentor Graphics of Wilsonville, Oreg.; FPGA Compiler II from Synopsys of Mountain View, Calif.; Synplify and Synplify Pro from Synplicity of Sunnyvale, Calif.; and Xilinx Synthesis Tool (XST) from Xilinx of San Jose, Calif.

For example, in XST, block level incremental synthesis is supported within a single project. In XST, attributes to define Logic Group boundaries are applied to each Logic Group in an XST constraints file or to hardware description language (HDL) design source code. An HDL change to a module/entity in one of a plurality of Logic Groups will only affect that changed Logic Group, namely, a synthesis netlist output for such a changed Logic Group will be written. All unchanged Logic Groups will not have their associated netlists changed, namely, unmodified portions of a design are parsed but no netlists are written. For VHDL instantiated designs, detection of modified logic is done automatically in XST. For Verilog instantiated designs, a resynthesize attribute is used.

At 50, a design with incremental design change ("revised design") is implemented with output from block 49, namely, a synthesized design with incremental design change incorporated into such synthesis.

FIGS. 5A and 5B are more detailed flow diagrams of respective exemplary embodiments of revised design implementation flows 50A and 50B. With reference to FIG. 5A, at 52 logic instances for a design and a revised design of such design are obtained. In other words, design syntheses for a reference design and a revision thereof are obtained. Notably, because incremental design change as described herein may be iterative, a reference design may include incremental design change from a prior iteration. Accordingly, a Logic Group from a prior design is replaced with a modified Logic Group from an incrementally changed design, and the modified Logic Group is assigned to the same Area Group as the Logic Group as indicated in the prior design's guide file.

At 52, logic instances of a reference design and its revision are compared for changes. This may be done by calling up by Area Group or Logic Group, and then checking logic instances within a called up Group for differences. As mentioned above with respect to U.S. Pat. No. 5,867,396, by comparing netlists of the respective syntheses for differences, actual changes in circuitry, as opposed to just in name, may be identified to provide a more accurate indication of whether a Group has an incremental design change.

At 53, a determination as to whether a Group has an incremental design change is made. If a revised design Logic Group or Area Group has had an incremental design change made to it as compared with the same Logic Group or Area Group, respectively, from a reference design, then at 59 such Group or its corresponding Group is tagged for re-implementation.
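The compare-and-tag step just described (blocks 52, 53, 58 and 59) can be sketched as follows. This is an illustration, not the patent's tooling: netlists are keyed by Area Group, content rather than names is compared, and each Area Group is tagged for guiding or re-implementation. All names and netlist strings are hypothetical, and a production tool would compare parsed netlists structurally rather than hashing raw text.

    # Illustrative sketch of blocks 52-59 (not the patent's tooling).
    import hashlib

    def netlist_fingerprint(netlist_text: str) -> str:
        # Hash the netlist content so the comparison reflects the circuit
        # description itself rather than instance names alone.
        return hashlib.sha256(netlist_text.encode()).hexdigest()

    def tag_area_groups(reference: dict, revised: dict) -> dict:
        """Map each Area Group to 'guide' or 're-implement' (decision block 53)."""
        tags = {}
        for area_group, ref_netlist in reference.items():
            changed = netlist_fingerprint(ref_netlist) != netlist_fingerprint(revised[area_group])
            tags[area_group] = "re-implement" if changed else "guide"   # blocks 59 / 58
        return tags

    # Hypothetical netlists keyed by Area Group:
    ref = {"AG_11A": "module ctrl ...", "AG_12A": "module dp ..."}
    rev = {"AG_11A": "module ctrl ...", "AG_12A": "module dp v2 ..."}
    print(tag_area_groups(ref, rev))   # {'AG_11A': 'guide', 'AG_12A': 're-implement'}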
Re-implementation, such as is done at 60 for Groups, if any, tagged for re-implementation, is a generation of incremental design results, as an Area Group is stripped of at least prior placing and routing, and may be stripped of prior mapping, placing and routing. In other words, an Area Group may be completely stripped of all prior implementation for implementation anew of logic instances of a Logic Group using a modified netlist therefor and Area Group constraints for such Area Group as input to MAP and PAR programs.

If, however, at 53, there are no changed instances, then at 58 such a Group, Area or Logic, or its corresponding Group, Logic or Area, is tagged for guiding. Notably, guide files from a reference design are used for guiding unchanged Logic Groups, if any. At 56, a check is made for another Group, Area or Logic, to be processed for tagging. If another Group is to be processed, then such other Group is obtained at 57.

If, however, all Groups have been processed at 56, then at 60 those Groups tagged for re-implementation, if any, are re-implemented and those Groups tagged for guiding, if any, are guided. Block 60 may be divided out for guiding and for re-implementing, respectively; however, available MAP and PAR tools allow for incremental modes to handle both. Additionally, it should be understood that such tools in an incremental mode will remove Logic Groups marked for re-implementation from guide files obtained for a reference implementation.

When a MAP program is run in an incremental guide mode, such a MAP program is directed to map unchanged Logic Groups as they were mapped in a reference implementation. This MAP guiding uses unchanged guide files from such reference implementation, and thus mapping results stay the same. Because each Logic Group is assigned to its own distinct area, a MAP tool can preserve mapping of unchanged Logic Groups while being able to completely re-map changed Logic Groups. As mentioned above, changed Logic Groups may be mapped anew for re-implementation.

A PAR program is run in an incremental guide mode, which uses placement and routing of all unchanged Logic Groups from a reference implementation. As mentioned above, changed Logic Groups are placed and routed anew for re-implementation. Because each Logic Group is assigned to its own distinct area, a PAR tool can preserve placement and routing of unchanged Logic Groups while being able to completely re-place and re-route changed Logic Groups.

Notably, by being able to only re-implement changed Logic Groups and keep unchanged Logic Groups as in a reference implementation, performance of an unchanged portion of a design is maintained, while improving runtime since components and signals of only changed Logic Groups are re-implemented. Furthermore, re-implementation is localized to affected Area Groups, allowing such Area Groups to be completely stripped of prior mapping, placing and routing in order to facilitate ability to optimize implementation of such affected Logic Groups.

Referring again to FIG. 2, output from MAP and PAR programs run at 50 in incremental mode is another set of guide files, namely, incremental guide files, as well as MAP and PAR report files. A PAR report file includes guide information, which may indicate which Area Groups are guided and which Area Groups are re-implemented. Notably, FIG. 5A was described in terms of a reference implementation having a set of guide files for an entire design, which may then be used to guide tagged Groups for guiding to reduce runtime.
However, as mentioned with respect to FIG. 4B, separate sets of guide files may be generated for Area Groups.

Accordingly, FIG. 5B is a flow diagram of an exemplary embodiment of revised design implementation flow 50B for use with separate sets of guide files for Area Groups. As portions of revised design implementation flows 50A of FIG. 5A and 50B of FIG. 5B are the same, repetitive description is avoided for clarity.

If there are one or more changed instances at 53, then at 54 a Logic Group in an Area Group is re-implemented using an incremental mode for MAP and PAR programs. An individual set of guide files associated with such a Logic Group is obtained, along with Area Group constraints for such an Area Group, as input to MAP and PAR programs for re-implementation. This complete stripping of a prior implementation in an Area Group facilitates re-implementing a Logic Group in such an Area Group to improve performance.

If, however, there are no changed instances at 53, then at 55 a Logic Group in an Area Group is guided using an incremental guide mode for MAP and PAR programs. An individual set of guide files associated with such a Logic Group is obtained, along with Area Group constraints for such an Area Group, as input to MAP and PAR programs for guiding. By guiding, it should be understood that a prior mapping, placing and routing is used for an implementation, thereby saving runtime and preserving performance of a reference implementation for unchanged Logic Groups.

At 56, a check is made for another Group to be processed. Accordingly, it should be understood that individual sets of guide files corresponding to Logic Groups are used in the exemplary embodiment of FIG. 5B.

Referring again to FIG. 2, optionally at 61, a check is made to determine if another incremental design change is to be implemented. If another incremental design change is to be done, then at 62 another incremental design change ("IDC") in an HDL is obtained for synthesis at 49. However, if at 61 no other IDC is to be implemented, then, optionally at 48, another check may be made to determine if design results are acceptable.

Accordingly, it should be understood that an incremental design flow that significantly reduces runtime and maintains performance in unchanged portions of a design has been described. Place and route tools are able to view a design in separate Logic Groups, where each Logic Group occupies a uniquely assigned space (an area that may extend to one or more semiconductor process levels) on a device. When a Logic Group is changed, these tools can completely re-place and re-route that Logic Group inside of its assigned space. Having an assigned space, which is open for complete re-placement and re-routing, gives place and route tools many degrees of freedom in working toward improved performance, including an optimal configuration for a changed Logic Group.

Above, a one-to-one correspondence between Logic Groups and Area Groups was described. However, a Logic Group need not be assigned its own Area Group, though departing from this can increase runtime due to having to implement around implementations of guided Logic Groups. In addition to having each Logic Group assigned its own Area Group, other Logic Group guidelines are: all logic for a circuit, except I/O logic and clock logic, should be part of a Logic Group; avoid ungrouped logic.

Notably, the above guidelines are just that, namely, guidelines; they need not always be followed.
For example, there may be instances where having ungrouped logic locked down outside of designated Area Groups is desirable.

An incremental design flow has been described where runtime is significantly improved while maintaining performance of unchanged Logic Groups by assigning each Logic Group to an Area Group. Mapping, placing and routing is unchanged for unchanged Logic Groups, while re-mapping, re-placing and re-routing is done for one or more changed Logic Groups.

Though an FPGA device has been described for purposes of clarity, it should be appreciated that the above description is not limited to FPGA devices but applies to any sufficiently complex integrated circuit that is implemented iteratively. A precursor for such implementation may be use of one or more synthesis tools, schematics, high-level language compilers, and the like for providing one or more Logic Groups for implementation. Furthermore, though use of incremental design for debugging has been described, it should be appreciated that design operations other than debugging are applicable. For example, basic design of one or more logic sections may be done using one or more corresponding Area Groups.

FIG. 6 is a block diagram of an exemplary embodiment of a computer system 110 that may be programmed with one or more program products of synthesis and implementation flow 100 in accordance with one or more aspects of the invention. Computer system 110 may be implemented using configured personal computers, workstation computers, mini computers, mainframe computers, or a distributed network of computers. For purposes of clarity, a personal computer system 110 is described below, though other computer systems may be used. Computer system 110 is configured with at least one of the following: processor 102, I/O interface 104, and memory 103. Additionally, one or more I/O devices 105, such as keyboards, displays, cursor pointing devices, and the like, may be coupled to I/O interface 104. Furthermore, I/O interface 104 may be coupled to a network for access to information.

Computer system 110 is programmed with an operating system, which may be OS/2, Java Virtual Machine, Linux, Solaris, Unix, Windows, Windows95, Windows98, Windows NT, Windows2000, WindowsME, or WindowsXP, among other known platforms. At least a portion of an operating system may be disposed in memory 103. Memory 103 may include one or more of the following: random access memory, read-only memory, magneto-resistive read/write memory, optical read/write memory, cache memory, magnetic read/write memory, and the like.

As will be described in detail below, an aspect of the invention is implemented as a program product for use with a computer system such as, for example, computer system 110, where computer system 110 may be programmed with one or more of synthesis, MAP and PAR tools. Program(s) of the program product define functions of embodiments and can be contained on a variety of signal-bearing media, which include, but are not limited to: (i) information permanently stored on non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM or DVD-RAM disks readable by a CD-ROM drive or a DVD drive); (ii) alterable information stored on writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or read/writable CD or read/writable DVD); or (iii) information conveyed to a computer by a communications medium, such as through a computer or telephone network, including wireless communications.
The latter embodiment specifically includes information downloaded from the Internet and other networks. Such signal-bearing media, when carrying computer-readable instructions that direct functions of the invention, represent embodiments of the invention.

In addition to having one or more portions of synthesis and implementation flow 100 in memory 103, memory 103 may store a database 120 of sets of guide files for each Area Group. Thus, rather than checking each set of guide files, when a change is made to a Logic Group it can be tagged such that changed and unchanged guide files can be easily identified. Accordingly, in FIG. 5B, decision block 53 may be avoided, and decision block 56 would be based on changed files. Additionally, block 55 may be omitted; a prior implementation of an Area Group, which would otherwise have been guided, is used instead. This avoids having to guide unchanged Area Groups.

Another embodiment of the invention (see Appendix A, which is herein incorporated by reference in its entirety) includes a method for incrementally modifying a circuit design on an integrated circuit. The method includes determining one or more logic groups, where a logic group has a hardware description language (HDL) description of at least a portion of the circuit design. Next, one or more area groups are created, where an area group identifies a physical area on the integrated circuit. The logic group is assigned to an area group. Lastly, the logic group is replaced with a modified logic group (i.e., a re-designed logic group), wherein the modified logic group is assigned to the same area group.

An example of an integrated circuit is an FPGA. Logic groups in this embodiment are HDL parts of the circuit design that can be synthesized separately. Examples of logic groups include Verilog modules or VHDL entities instantiated at the top level. Each logic group is assigned an area group, where each area group is associated with a non-overlapping physical area on the FPGA.

In this embodiment, the HDL for the circuit design is synthesized using the logic groups. Next, non-overlapping physical areas identified by area groups are reserved on the FPGA. Then, each logic group is assigned an area group. Each logic group is placed and routed in its reserved physical area as identified by its associated area group. When there is a design change, the appropriate modified logic group(s) is identified. The modified logic group(s) is re-synthesized. Using the same area group(s) associated with the modified logic group(s), this area group(s) is re-placed and re-routed. The other logic groups, which were not modified, keep their placement and routing. Thus the performance of the unchanged design is kept, while place and route time is reduced because only the area group(s) associated with the modified logic group(s) need be re-placed and re-routed.

While the foregoing describes exemplary embodiment(s) in accordance with one or more aspects of the invention, other and further embodiment(s) in accordance with the one or more aspects of the invention may be devised without departing from the scope thereof, which is determined by the claim(s) that follow and equivalents thereof. Claim(s) listing steps do not imply any order of the steps.
A system for performing a scan test of a processor core includes a scan test module and a processor including a processor core and an input/output die, where the input/output die is coupled to the processor core. The scan test module transmits, in parallel to the input/output die, scan test input data. A serializer/deserializer module of the input/output die receives the input data, serializes the input data, and transmits the serialized input data to the processor core. A serializer/deserializer module of the processor core receives the serialized scan test input data, deserializes the input data, receives result data generated in dependence upon the input data, serializes the result data, and transmits the serialized result data to the input/output die. The input/output die serializer/deserializer module receives the result data, deserializes the result data, and provides the result data to the scan test module. Error detection can be carried out through redundancy.
CLAIMS
What is claimed is:
1. A method of performing a scan test of a processor core, the method comprising: receiving, in parallel by a serializer/deserializer module of an input/output die of a processor, scan test input data; serializing, by the serializer/deserializer module of the input/output die, the scan test input data; and transmitting, by the serializer/deserializer module of the input/output die to a processor core of the processor, the serialized scan test input data.
2. The method of claim 1, wherein transmitting the serialized scan test input data to the processor core further comprises encoding the serialized scan test input data with data strobe encoding.
3. The method of claim 1, further comprising: receiving, by a serializer/deserializer module of the processor core, the serialized scan test input data; deserializing, by the serializer/deserializer module of the processor core, the serialized scan test input data; responsive to receiving result data generated in dependence upon the deserialized scan test input data, serializing the result data; and transmitting, by the serializer/deserializer module of the processor core to the input/output die, the serialized result data.
4. The method of claim 3, further comprising: receiving, by the serializer/deserializer module of the input/output die, the serialized result data; deserializing, by the serializer/deserializer module of the input/output die, the serialized result data; and providing, by the input/output die to a testing module, the deserialized result data.
5. The method of claim 3, wherein transmitting the serialized result data to the input/output die further comprises encoding the serialized result data with data strobe encoding.
6. The method of claim 4, wherein: receiving, by a serializer/deserializer module of the processor core, the serialized scan test input data further comprises holding the scan test input data until an expiration of an indeterminacy window, the indeterminacy window comprising a period of time during which a symbol transmitted between serializer/deserializer modules is indeterminate; deserializing, by the serializer/deserializer module of the processor core, the serialized scan test input data further comprises releasing the held scan test input data responsive to expiration of the indeterminacy window; receiving, by the serializer/deserializer module of the input/output die, the serialized result data further comprises holding the result data until an expiration of the indeterminacy window; and deserializing, by the serializer/deserializer module of the input/output die, the serialized result data further comprises releasing the held result data responsive to expiration of the indeterminacy window.
7. The method of claim 3, wherein the serializer/deserializer module of the input/output die and the serializer/deserializer module of the processor core comprise at least two lanes, wherein each lane carries a redundant transmission and the method further comprises detecting an error in a transmission by comparing the redundant transmissions of the lanes.
8. The method of claim 1, wherein links carrying transmissions between the input/output die and the processor core are direct-coupled.
9. The method of claim 1, wherein the input/output die comprises a monolithic data communications interface for the processor.
10. The method of claim 1, wherein the serializer/deserializer modules operate only for testing.
11. The method of claim 3, wherein the serializer/deserializer modules comprise synthesizable digital logic.
12. The method of claim 1, wherein performing a scan test comprises performing the scan test at a time after packaging of the processor, prior to installation of the processor in a computer system.
13. The method of claim 1, wherein the processor further comprises a plurality of processor cores, wherein each processor core is coupled independently to the input/output die and each processor core is prohibited from data communications external to the processor except through the input/output die.
14. A processor for performing a scan test of a processor core, the processor comprising: a processor core; and an input/output die, wherein the input/output die is coupled to the processor core and the processor is configured to carry out the steps of: receiving, in parallel by a serializer/deserializer module of the input/output die, scan test input data; serializing, by the serializer/deserializer module of the input/output die, the scan test input data; and transmitting, by the serializer/deserializer module of the input/output die to a processor core of the processor, the serialized scan test input data.
15. The processor of claim 14, wherein transmitting the serialized scan test input data to the processor core further comprises encoding the serialized scan test input data with data strobe encoding.
16. The processor of claim 14, wherein the processor is further configured to carry out the steps of: receiving, by a serializer/deserializer module of the processor core, the serialized scan test input data; deserializing, by the serializer/deserializer module of the processor core, the serialized scan test input data; responsive to receiving result data generated in dependence upon the deserialized scan test input data, serializing the result data; and transmitting, by the serializer/deserializer module of the processor core to the input/output die, the serialized result data.
17. The processor of claim 16, wherein the processor is further configured to carry out the steps of: receiving, by the serializer/deserializer module of the input/output die, the serialized result data; deserializing, by the serializer/deserializer module of the input/output die, the serialized result data; and providing, by the input/output die to a testing module, the deserialized result data.
18. The processor of claim 16, wherein transmitting the serialized result data to the input/output die further comprises encoding the serialized result data with data strobe encoding.
19. The processor of claim 17, wherein: receiving, by a serializer/deserializer module of the processor core, the serialized scan test input data further comprises holding the scan test input data until an expiration of an indeterminacy window, the indeterminacy window comprising a period of time during which a symbol transmitted between serializer/deserializer modules is indeterminate; deserializing, by the serializer/deserializer module of the processor core, the serialized scan test input data further comprises releasing the held scan test input data responsive to expiration of the indeterminacy window; receiving, by the serializer/deserializer module of the input/output die, the serialized result data further comprises holding the result data until an expiration of the indeterminacy window; and deserializing, by the serializer/deserializer module of the input/output die, the serialized result data further comprises releasing the held result data responsive to expiration of the indeterminacy window.
20. The processor of claim 14, wherein links carrying transmissions between the input/output die and the processor core are direct-coupled.
21. The processor of claim 14, wherein the input/output die comprises a monolithic data communications interface for the processor.
22. The processor of claim 14, wherein the serializer/deserializer modules operate only for testing.
23. The processor of claim 16, wherein the serializer/deserializer modules comprise synthesizable digital logic.
24. The processor of claim 14, wherein performing a scan test comprises performing the scan test at a time after packaging of the processor, prior to installation of the processor in a computer system.
25. The processor of claim 14, wherein the processor further comprises a plurality of processor cores, wherein each processor core is coupled independently to the input/output die and each processor core is prohibited from data communications external to the processor except through the input/output die.
26. A system for performing a scan test of a processor core, the system comprising: a scan test module; and a processor comprising a processor core and an input/output die, the input/output die coupled to the processor core, wherein: the scan test module transmits, in parallel to the processor via the input/output die, scan test input data; a serializer/deserializer module of the input/output die: receives the scan test input data; serializes the scan test input data; and transmits the serialized scan test input data to the processor core of the processor; a serializer/deserializer module of the processor core: receives the serialized scan test input data; deserializes the serialized scan test input data; receives result data generated in dependence upon the deserialized scan test input data; serializes the result data; and transmits the serialized result data to the input/output die; and the serializer/deserializer module of the input/output die: receives the serialized result data; deserializes the serialized result data; and provides the deserialized result data to the scan test module.
27. The system of claim 26, wherein: the serializer/deserializer module of the input/output die transmits the serialized scan test input data to the processor core by encoding the serialized scan test input data with data strobe encoding; the serializer/deserializer module of the processor core transmits the serialized result data to the input/output die by encoding the serialized result data with data strobe encoding; the serializer/deserializer module of the processor core receives the serialized scan test input data by decoding the serialized scan test input data with data strobe decoding; and the serializer/deserializer module of the input/output die receives the serialized result data by decoding the serialized result data with data strobe decoding.
PERFORMING SCAN DATA TRANSFER INSIDE MULTI-DIE PACKAGE WITH SERDES FUNCTIONALITY

BACKGROUND ART

[0001] After packaging a processor or a system on chip that includes a processor, testing is often performed to ensure that various components of the processor operate within defined parameters. Some components of the processor are inaccessible in normal operation by external equipment except through other components of the processor. Consider, for example, a core or core complex of a processor. A core may have no direct connection to external testing equipment except through various input/output interface components of the processor. To access such a core for testing, some processors are implemented with test-only components. Such test-only components, however, increase the cost of the processor while also reducing the utilizable die area for components of the processor used in typical operation. Further, the test data itself, which is input into the processor for testing the core after packaging, has continued to grow in size.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] Figure 1 sets forth a block diagram of an example processor in which a scan test of a processor core is carried out according to embodiments of the present disclosure.
[0003] Figure 2 sets forth a functional block diagram of an example system for performing a scan test of a processor core in accordance with embodiments of the present disclosure.
[0004] Figure 3 sets forth a flow chart illustrating an example method of performing a scan test of a processor core in accordance with embodiments of the present disclosure.
[0005] Figure 4 sets forth a flow chart illustrating another example method of performing a scan test of a processor core in accordance with embodiments of the present disclosure.
[0006] Figure 5 sets forth a flow chart illustrating another example method of performing a scan test of a processor core in accordance with embodiments of the present disclosure.
[0007] Figure 6 sets forth a flow chart illustrating another example method of performing a scan test of a processor core in accordance with embodiments of the present disclosure.
[0008] Figure 7 sets forth a functional block diagram of an example system for performing a scan test of a processor core with error detection in accordance with embodiments of the present disclosure.

DESCRIPTION OF EMBODIMENTS

[0009] Example methods, apparatus, and products for performing a scan test of a processor core are described in this specification. Various embodiments of performing a scan test of a processor core are described with regard to the figures below in greater detail. A processor in the following examples includes one or more cores (or, synonymously, core complexes) that are normally inaccessible by testing apparatus external to the processor once packaged. The processor also includes an input/output (I/O) die. The I/O die is a monolithic data communications interface for the processor. That is, the I/O die operates as a communications interface between the processor and external components. The I/O die, during normal operation of the processor within a computer system (such as a server), provides memory interfaces such as DDR (Double Data Rate) interfaces or bus interfaces such as PCI-type (Peripheral Component Interconnect) interfaces. The I/O die in such a processor is coupled on a point-to-point basis to each of the processor cores within the processor through one or more internal links.
Some or all of the links coupling the I/O die to the processor cores are direct coupled (DC coupling).

[0010] In such a processor, performing a scan test of a processor core includes receiving, in parallel by a serializer/deserializer (‘SERDES’) module of an input/output die of a processor from a testing module, scan test input data. The SERDES module of the input/output (‘I/O’) die serializes the scan test input data. The SERDES module of the I/O die transmits, to a processor core of the processor, the serialized scan test input data. Such transmission is carried out over a subset of the above-mentioned DC coupled links after encoding the transmission with data strobe encoding.

[0011] A SERDES module of the processor core receives the serialized scan test input data, deserializes the serialized scan test input data, and provides the deserialized test input data to a device under test within the processor core. The device under test performs the test utilizing the input data and generates result data which is passed along to the SERDES module of the processor core. Responsive to receiving the result data, the SERDES module of the processor core serializes the result data and transmits the serialized result data to the I/O die. Such transmission is carried out over another subset of the above-mentioned DC coupled links after encoding the transmission with data strobe encoding.

[0012] The SERDES module of the I/O die receives the serialized result data, deserializes the serialized result data, and provides the deserialized result data to a testing module.

[0013] With multiple links, the SERDES modules of the I/O die and core enable large parallel packets of data to be transmitted over a finite and, in some cases, limited number of links. Further, such links are not ‘test only’ links but are utilized during normal operation of the processor outside of the testing phase. The data strobe encoding and decoding, along with the DC coupled nature of the links, eliminates the need for a phase lock loop to be implemented within the processor for clock recovery purposes during testing. Such phase lock loop circuitry is typically implemented with analog components within a processor that are costlier, consume more die area, and consume more power than digital equivalents.

[0014] For further explanation, Figure 1 sets forth a block diagram of an example processor in which a scan test of a processor core is carried out according to embodiments of the present disclosure. The example of Figure 1 includes a processor (102). The processor (102) includes a number of cores (104a, 104b, 104c, 104d). Each of the cores (104a, 104b, 104c, 104d) is coupled to an I/O die (106). The cores are coupled to the I/O die through a number of DC coupled links. None of the cores can perform data communications with components external to the processor (102) except through the I/O die (106).

[0015] The I/O die (106) in the example processor (102) of Figure 1 couples the cores (104), as well as other components not depicted here, to different types of components external to the processor for data communications. In some embodiments, the I/O die (106) provides PCIe, DDR bus, and other types of interfaces.

[0016] In the example of Figure 1, the I/O die (106) includes a SERDES module (108a, 108b, 108c, 108d) for each core (104). As described below in greater detail, each SERDES module (108) of the I/O die (106) includes a transmit component and a receive component.
Each of the SERDES modules (108) of the I/O die (106) is coupled for communications through the links to a separate SERDES module (112a, 112b, 112c, 112d) included in each core.

[0017] The processor (102) in the example of Figure 1 is coupled to a testing module (110) which provides scan test input data for the cores (104) to utilize in performing one or more various tests. Examples of such data include Automatic Test Pattern Generation (‘ATPG’) data.

[0018] In the example of Figure 1, the testing module (110) provides, to the I/O die (106) of the processor (102), scan test input data. The scan test input data is received, by any one or more of the SERDES modules (108) of the I/O die (106), in a parallel fashion.

[0019] The SERDES modules (108) that receive the scan test input data serialize the scan test input data and transmit the serialized test input data to the processor core (104) to which the SERDES module (108) is coupled. More specifically, the SERDES module (108) of the I/O die (106) provides the serialized scan test input data to the SERDES module (112) of the core (104).

[0020] The SERDES module (112) of the core (104) receives the serialized scan test input data, deserializes the serialized scan test input data, and the processor core (104) utilizes the scan test input data to generate result data. The SERDES module (112) of the core (104) serializes the generated result data and transmits the serialized result data to the coupled SERDES module (108) of the I/O die (106).

[0021] The SERDES module (108) of the I/O die (106) receives the serialized result data, deserializes the result data, and provides the deserialized result data to the testing module (110). Once the testing is complete, the SERDES modules of the core and I/O die are disabled. The processor then utilizes, for normal operation within a computing system, the same links coupling the I/O die to the cores that were utilized to carry test data. That is, the links utilized to perform the scan test are not ‘test-only’ links, but are instead utilized for primary operation of the processor.

[0022] For further explanation, Figure 2 sets forth a functional block diagram of an example system for performing a scan test of a processor core in accordance with embodiments of the present disclosure. The system of Figure 2 includes a processor (102) coupled through external links (212) to a testing module (110). The processor (102) includes a core (104) coupled to an I/O die (106) through a number of DC coupled links (210). The core (104) includes a SERDES module (112) that, in turn, includes a receive component (202) and a transmit component (204). The transmit component (204) of the core SERDES module (112) is coupled through one subset of the DC coupled links (210) to a receive component (208) of a SERDES module (108) of the I/O die (106). Likewise, the receive component (202) of the core SERDES module (112) is coupled through another subset of the DC coupled links (210) to a transmit component (206) of the I/O SERDES module (108).

[0023] In the example of Figure 2, the core (104) is prohibited from being accessed by any external component (such as the testing module (110)) except through the I/O die (106).
Rather than implement a test-only set of circuitry or logic, the processor (102) utilizes the existing DC coupled links and external links of the I/O die (106) to enable the testing module (110) to access the core (104) for testing.

[0024] In this example, the testing module (110) transmits, to the I/O die (106), scan test input data in the form of an ATPG test bit pattern. The transmit component (206) of the I/O SERDES module (108) serializes the test bit pattern, encodes the test bit pattern with data strobe encoding, and transmits serialized portions of the test bit pattern over pairs of the DC coupled links (210a) to the receive component (202) of the core SERDES module (112). Each pair of DC coupled links (210a) includes a link that carries the data portion of a serial transmission and a link that carries the strobe portion of the data transmission. In data strobe encoding, the clock is effectively encoded into the data and strobe signals. Further, because the links are DC coupled, a phase lock loop or other similar circuitry is not necessary for clock recovery at the receive components of the SERDES modules.

[0025] The receive component (202) of the core SERDES module (112) receives the serialized scan test input data, decodes the data and strobe transmission, deserializes the data and passes the deserialized data to a device under test (214) within the core (104). The device under test (214) utilizes the deserialized data to generate test results. The test results are then passed to the transmit component (204) of the core SERDES module (112). The transmit component (204) serializes the result data, encodes the result data with data strobe encoding, and transmits the data and strobe signals over pairs of DC coupled links (210b) to the receive component (208) of the I/O SERDES module (108). The receive component (208) decodes the received data and strobe signals, deserializes the result data, and passes the result data to the testing module (110).

[0026] For further explanation, Figure 3 sets forth a flow chart illustrating an example method of performing a scan test of a processor core in accordance with embodiments of the present disclosure. The method of Figure 3 includes receiving (302), in parallel by a serializer/deserializer module (108) of an input/output die (106) of a processor (102), scan test input data (308). Receiving such input data includes, in one example, storing the input data in one or more first-in-first-out (‘FIFO’) buffers.

[0027] The method of Figure 3 also includes serializing (304), by the serializer/deserializer module (108) of the input/output die (106), the scan test input data (310). Serializing (304) the scan test input data is carried out in some examples by clocking in each bit of the test data at a rate higher than the rate at which the parallel data is received.

[0028] The method of Figure 3 also includes transmitting (306), by the serializer/deserializer module (108) of the input/output die (106) to a processor core (104) of the processor (102), the serialized scan test input data (310). In the example of Figure 3, transmitting (306) the serialized scan test input data (310) also includes encoding the serialized scan test input data (310) with data strobe encoding. Data strobe encoding utilizes a pair of signal lines - one carries the ‘Data’ and one carries the ‘Strobe.’ During transmission either ‘Data’ or ‘Strobe’ changes logical value in one clock cycle, but both are prohibited from doing so during the same clock cycle.
Generally, the ‘Data’ line transmits the data as-is while the ‘Strobe’ changes logical value if and only if the data stays constant between two data bits.

[0029] For further explanation, Figure 4 sets forth another example method of performing a scan test of a processor core in accordance with embodiments of the present disclosure. The method of Figure 4 is similar to the method of Figure 3 in that the method of Figure 4 is carried out in a system that includes a processor core (104), an I/O die (106) and a testing module (110). The method of Figure 4 is also similar to the method of Figure 3 in that the method of Figure 4 also includes: receiving (302), in parallel by a serializer/deserializer module (108) of an input/output die (106) of a processor, scan test input data; serializing (304), by the serializer/deserializer module (108) of the input/output die (106), the scan test input data; and transmitting (306), by the serializer/deserializer module (108) of the input/output die (106) to a processor core (104) of the processor (102), the serialized scan test input data (310).

[0030] The method of Figure 4 differs from the method of Figure 3, however, in that the method of Figure 4 includes receiving (402), by a serializer/deserializer module (112) of the processor core (104), the serialized scan test input data (310). The serializer/deserializer module (112) receives the serialized scan test input data (310) by buffering the received data in one or more FIFO buffers.

[0031] The method of Figure 4 also includes deserializing (404), by the serializer/deserializer module (112) of the processor core (104), the serialized scan test input data (310). The core module (112) provides the deserialized scan test input data (406) to a device under test (408) which generates result data (410) based on the input data (406). Responsive to receiving the result data (410), the core SERDES module (112) serializes (412) the result data (410) and transmits the serialized result data (416) to the I/O die (106). In the method of Figure 4, transmitting (414) the serialized result data (416) includes encoding the serialized result data with data strobe encoding.

[0032] For further explanation, Figure 5 sets forth another example method of performing a scan test of a processor core in accordance with embodiments of the present disclosure. The method of Figure 5 is similar to the method of Figure 4 in that the method of Figure 5 is carried out in a system that includes a processor core (104), an I/O die (106) and a testing module (110). The method of Figure 5 is also similar to the method of Figure 4 in that the method of Figure 5 also includes: receiving (302) scan test input data; serializing (304) the scan test input data; and transmitting (306) the serialized scan test input data to the processor core (104).
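To make the encoding concrete, the following behavioral C sketch models data strobe encoding under the assumption that both lines idle at 0; it illustrates the general technique only, not the circuit of the SERDES modules described herein.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Behavioral sketch of data strobe encoding over one data/strobe line pair.
   The data line carries each bit as-is; the strobe toggles only when two
   consecutive data bits are equal, so exactly one of the two lines changes
   in every bit period. */
static void ds_encode(const uint8_t *bits, size_t n,
                      uint8_t *data, uint8_t *strobe)
{
    uint8_t prev = 0, s = 0;          /* both lines assumed to idle at 0 */
    for (size_t i = 0; i < n; i++) {
        data[i] = bits[i];
        if (bits[i] == prev)          /* data holds its value ...        */
            s ^= 1u;                  /* ... so the strobe changes       */
        strobe[i] = s;
        prev = bits[i];
    }
}

int main(void)
{
    uint8_t bits[] = { 1, 0, 0, 1, 1, 1, 0, 0 };
    uint8_t data[8], strobe[8];
    ds_encode(bits, sizeof bits, data, strobe);

    /* Because exactly one line changes per bit period, data XOR strobe
       flips every period: the receiver recovers a clock edge from each
       flip and samples the data line directly, with no PLL. */
    for (size_t i = 0; i < sizeof bits; i++)
        printf("bit=%u data=%u strobe=%u clk=%u\n",
               (unsigned)bits[i], (unsigned)data[i],
               (unsigned)strobe[i], (unsigned)(data[i] ^ strobe[i]));
    return 0;
}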
The method of Figure 5 also includes receiving (402) the serialized scan test input data; deserializing (404) the serialized scan test input data; serializing (412) the result data; and transmitting (414) the serialized result data to the I/O die (106).

[0033] The method of Figure 5 differs from the method of Figure 4, however, in that the method of Figure 5 includes receiving (502), by the serializer/deserializer module (108) of the input/output die (106), the serialized result data (504); deserializing (506), by the serializer/deserializer module (108) of the input/output die (106), the serialized result data; and providing (510), by the input/output die (106) to a testing module (110), the deserialized result data (508).

[0034] For further explanation, Figure 6 sets forth another example method of performing a scan test of a processor core in accordance with embodiments of the present disclosure. The method of Figure 6 is similar to the method of Figure 5 in that the method of Figure 6 is carried out in a system that includes a processor core (104), an I/O die (106) and a testing module (110) and also includes:
• receiving (302) scan test input data;
• serializing (304) the scan test input data;
• transmitting (306) the serialized scan test input data to the processor core (104);
• receiving (402) the serialized scan test input data;
• deserializing (404) the serialized scan test input data;
• serializing (412) the result data;
• transmitting (414) the serialized result data to the I/O die (106);
• receiving (502) the serialized result data (504);
• deserializing (506) the serialized result data; and
• providing (510) the deserialized result data (508) to a testing module (110).

[0035] The method of Figure 6 differs from the method of Figure 5, however, in that in the method of Figure 6, receiving (402), by a SERDES module (112) of the processor core (104), the serialized scan test input data includes holding (606) the scan test input data until an expiration of an indeterminacy window, and deserializing (404), by the SERDES module (112) of the processor core (104), the serialized scan test input data includes releasing (608) the held scan test input data responsive to expiration of the indeterminacy window. The term ‘indeterminacy window’ as utilized here refers to a period of time (expressed in some instances in terms of clock cycles) during which the reception of the test data results is indeterminate. Test results are indeterminate for a variety of reasons. From die to die, even with the same architecture, there is clock crossing uncertainty, die-to-die data and strobe routing skew, and intra-die clock skew. Such variances introduce indeterminate transmission of signals into and out of the processor core. However, tolerances for these variances are often known and quantifiable. As such, the time that a symbol of a signal is indeterminate is both known and finite. This time is referred to here as an indeterminacy window. To that end, the data is held during a period in which the symbols are indeterminate and released after that period. Such holding and releasing is carried out in buffers through the use of roll-over counters that increment upon each clock cycle until the number of clock cycles making up an indeterminacy window is reached.
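The hold-and-release mechanism can be pictured with the following C sketch; it is a minimal model assuming a hypothetical eight-cycle window, whereas a real window length is device-specific and set by the skew tolerances just described.

#include <stdint.h>
#include <stdio.h>

#define WINDOW_CYCLES 8u   /* hypothetical indeterminacy window, in clocks */

typedef struct {
    uint8_t  slots[WINDOW_CYCLES];
    unsigned count;        /* roll-over counter: one increment per clock */
} hold_buffer;

/* Clock one received symbol into the buffer. Returns 1 and copies the
   held symbols to out[] when the counter rolls over, i.e., when the
   indeterminacy window has expired and the data may be released. */
static int hold_clock(hold_buffer *hb, uint8_t sym, uint8_t out[WINDOW_CYCLES])
{
    hb->slots[hb->count++] = sym;
    if (hb->count == WINDOW_CYCLES) {
        for (unsigned i = 0; i < WINDOW_CYCLES; i++)
            out[i] = hb->slots[i];
        hb->count = 0;     /* counter rolls over; buffer is reused */
        return 1;
    }
    return 0;
}

int main(void)
{
    hold_buffer hb = { {0}, 0 };
    uint8_t released[WINDOW_CYCLES];
    for (unsigned cycle = 0; cycle < 16; cycle++)
        if (hold_clock(&hb, (uint8_t)cycle, released))
            printf("window expired at cycle %u; releasing symbols %u..%u\n",
                   cycle, (unsigned)released[0],
                   (unsigned)released[WINDOW_CYCLES - 1]);
    return 0;
}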
Once the counter resets, the held data is released from the buffers.

[0036] Also in the method of Figure 6, receiving (502), by the SERDES module (108) of the input/output die (106), the serialized result data includes holding (602) the result data until an expiration of an indeterminacy window. Deserializing (506), by the SERDES module (108) of the I/O die (106), the serialized result data in the method of Figure 6 includes releasing (604) the held result data responsive to expiration of the indeterminacy window. Although shown in the example of Figure 6 as being carried out in the serial domain - before deserialization - holding of the data until expiration of the indeterminacy window in some embodiments is carried out in the parallel domain post deserialization.

[0037] For further explanation, Figure 7 sets forth a functional block diagram of an example system for performing a scan test of a processor core with error detection in accordance with embodiments of the present disclosure. The example system of Figure 7 is similar to the example set forth in Figure 2. The example system of Figure 7 differs from that of Figure 2 in that the system of Figure 7 implements error detection. Such error detection is carried out through the use of an additional, otherwise unused lane of the SERDES connections between the I/O die (106) and the core (104). In Figure 7, the core SERDES module (112) includes a two-lane SERDES connection to the I/O SERDES module (108). The transmit logic (204) of the core SERDES module (112) includes two transmit blocks (204a, 204b) that each provide a lane for traffic between the core SERDES module (112) and the I/O SERDES module (108). In Figure 7, one transmit block (204a) provides data and strobe signals in the form of serialized output data to the receive logic (208) of the I/O SERDES module (108). Instead of providing the data (706) and strobe (708) signals to a single receive block, the transmit block (204a) of the first lane is coupled to the receive blocks (208a, 208b) of both lanes of the receive logic (208). The second lane receive block (208b) would otherwise be unused. Instead, the second lane receive block (208b) as well as the first lane receive block (208a) receives the data and strobe output signals. In this way, the data from both receive blocks (both lanes), which should match, is compared, and errors between the two copies can be detected.

[0038] In some embodiments, including the example of Figure 7, the same concept of providing a copy of the data and strobe on a second, otherwise unused SERDES lane to detect errors through comparison of the two copies is also applied to the input data. In Figure 7, for example, the transmit logic (206) of the I/O die (106) includes two different transmit blocks (one for each lane) (206a, 206b). The transmit block for a first lane (206a) may send a data (702) and strobe (704) signal for the input test data received from the testing module (110) to two different receive blocks (one for each SERDES lane) (202a, 202b) of the receive logic (202) of the core SERDES (112). The core SERDES (112) or other module may compare the two copies of the input data to determine whether any discrepancies, and thus data errors, exist.

[0039] Readers of skill in the art will recognize that the links providing communication between the I/O die and the processor core (or core complex, which is also referred to here as a core) are repurposed during testing for testing purposes and then utilized in normal operation after testing is complete.
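A minimal C sketch of such a redundancy check follows; the lane buffers and the function name are hypothetical, and a hardware implementation would compare the two copies in logic rather than in software.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* The same data is driven onto two lanes; the receive side compares the
   copies symbol by symbol. Returns the index of the first mismatch, or
   -1 if the copies agree (no error detected). */
static long compare_lanes(const uint8_t *lane0, const uint8_t *lane1, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (lane0[i] != lane1[i])
            return (long)i;   /* transmission error detected here */
    return -1;
}

int main(void)
{
    uint8_t lane0[] = { 1, 0, 1, 1, 0 };
    uint8_t lane1[] = { 1, 0, 1, 0, 0 };   /* one symbol corrupted in transit */
    long err = compare_lanes(lane0, lane1, sizeof lane0);
    if (err >= 0)
        printf("error detected at symbol %ld\n", err);
    return 0;
}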
Further, the SERDES modules provide a means by which a very large amount of parallel test data is input into a core that cannot otherwise access testing apparatus external to the processor when a finite number of links (fewer than those needed to carry the parallel data in its entirety) are available. The SERDES module is also composed of synthesizable digital logic, which can be expressed at the register-transfer level (RTL) of design abstraction. Such RTL is implemented in a low cost manner, utilizing existing structures and components in the die design of a processor. Further, the area within which such digital logic is implemented is less than that of analog components that operate in a similar manner. Further, the synthesizable digital logic consumes less power than analog components. The RTL, in some embodiments, is utilized in a hardware description language to simulate the operation of the processor during scan testing.

[0040] No PLL is required due to the links between the I/O die and the cores being DC coupled and the data strobe encoding described above. Removing the need for a PLL further reduces cost, power consumption, and modifications needed to an architecture of a processor. The test results are also deterministic due to the holding and releasing of results until the period during which the symbols are indeterminate passes.
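The width conversion at the heart of this scheme can be sketched behaviorally in C as follows; the 32-bit word width and least-significant-bit-first ordering are assumptions made for illustration, not parameters of the processors described.

#include <stdint.h>
#include <stdio.h>

/* Shift a parallel word out one bit per (faster) serial clock ... */
static void serialize_word(uint32_t word, uint8_t bits[32])
{
    for (int i = 0; i < 32; i++)
        bits[i] = (uint8_t)((word >> i) & 1u);   /* LSB first, by assumption */
}

/* ... and reassemble it on the far side of the link. */
static uint32_t deserialize_word(const uint8_t bits[32])
{
    uint32_t word = 0;
    for (int i = 0; i < 32; i++)
        word |= (uint32_t)bits[i] << i;
    return word;
}

int main(void)
{
    uint8_t lane[32];
    serialize_word(0xC0DE1234u, lane);
    printf("round trip: 0x%08X\n", (unsigned)deserialize_word(lane));
    return 0;
}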
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber optic cable), or electrical signals transmitted through a wire.

[0044] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.

[0045] Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.[0046] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to some embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.[0047] These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein is an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.[0048] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.[0049] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which includes one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
A method for flexibly configuring default values of a network device through an EEPROM interface is disclosed. A header is received from an EEPROM through the EEPROM interface, and it is determined from the header whether any default value of the network device should be updated and, if any, how many should be updated. At least one configuration instruction is fetched from the EEPROM when it is determined that the network device should be updated. The at least one configuration instruction is interpreted, and a register default value of the default values corresponding to the interpreted at least one configuration instruction is changed.
1. A method for flexibly configuring default values of a network device through an EEPROM interface, comprising:
receiving a header from an EEPROM through the EEPROM interface;
determining from the header whether any default value of the network device should be updated;
fetching at least one configuration instruction from the EEPROM when the determining step determines that the network device should be updated;
interpreting said at least one configuration instruction; and
changing a register default value of said default values corresponding to said interpreted at least one configuration instruction.
2. The method according to claim 1, wherein said method further comprises monitoring a reset signal to determine whether the default values of the network device should be updated.
3. The method according to claim 1, wherein said step of determining from the header whether any default value of the network device should be updated comprises determining from the header a number of the default values of the network device that should be updated.
4. The method according to claim 3, wherein said step of fetching at least one configuration instruction from the EEPROM comprises fetching a number of configuration instructions from the EEPROM equal to the number of the default values of the network device that should be updated.
5. The method according to claim 1, wherein said step of determining from the header whether any default value of the network device should be updated comprises determining a key value from said header and comparing said key value with a magic number pre-defined inside the network device to determine whether any default value of the network device should be updated.
6. The method according to claim 1, wherein said at least one configuration instruction comprises a plurality of configuration instructions and the step of fetching at least one configuration instruction from the EEPROM is repeated until all of the plurality of configuration instructions have been fetched.
7. A network device, having default values, that is flexibly configurable, comprising:
an EEPROM interface;
a register file containing the default values for the network device; and
a configuration instruction interpreter;
wherein the EEPROM interface is configured to receive configuration instructions, with each configuration instruction of said configuration instructions being composed of an address index and a corresponding value, and wherein the configuration instruction interpreter is configured to interpret the received configuration instructions such that the corresponding values are mapped to corresponding default values of the register file.
8. The network device according to claim 7, wherein said configuration instruction interpreter is configured to monitor a reset signal to determine if the default values should be updated.
9. The network device according to claim 7, wherein said configuration instruction interpreter is configured to determine from the header a number of the default values of the network device that should be updated.
10. The network device according to claim 9, wherein said configuration instruction interpreter is configured to fetch a number of configuration instructions from the EEPROM equal to the number of the default values of the network device that should be updated.
11. The network device according to claim 7, wherein the configuration instruction interpreter is configured to receive a header from the EEPROM interface containing a key value and configured to compare said key value with a pre-defined magic number to determine
whether any default value of said default values should be updated.The network device according to claim 7, wherein the configuration instruction interpreter is configured to repeatedly fetch configuration instructions from the EEPROM until all of the configuration instructions have been fetched.
BACKGROUND OF THE INVENTION
FIELD OF INVENTION
The present invention relates to a method and apparatus for selectively configuring a network device using an Electrically Erasable Programmable Read Only Memory (EEPROM). More specifically, the method and apparatus allow for the use of dynamic configuration settings in the EEPROM interface, which increases flexibility, has fewer limitations, and is a low cost alternative.
DESCRIPTION OF RELATED ART
Many types of network devices are necessary to allow a network to function properly. These network devices are composed of chips, which allow for the control and monitoring of data through the network device. Chip vendors may pre-set some register default values inside a network device, such as a switch/hub chip, to provide a low cost switch and hub application. That means it is not necessary for system integrators to change the internal register default values to build a workable system. The preconfigured chips allow the network devices to be set up and to function quickly for a majority of system integrators.
Sometimes, these pre-set register default values might not be suitable for some system integrators. Chip vendors should therefore provide methods by which system integrators can change some register values instead of using the default values. Some chip vendors provide a microprocessor interface (SPI, I2C, or PCI) to allow system integrators to change all writeable registers. However, built-in microprocessors on the chip boards increase system costs and may not be needed by many customers.
Another alternative method to allow users to change the default values is to provide an Electrically Erasable Programmable Read Only Memory (EEPROM) interface. With an EEPROM interface, a system integrator can change some register default values using a very low cost EEPROM. Most chip vendors have provided an EEPROM interface for a low cost switch and hub application.
Fig. 1 provides an example of a low cost pre-programmed EEPROM that is used to change some default values of a network switch/hub chip. When the external control signal (RESET) goes inactive, the network switch/hub chip starts to change some of its register default values by downloading the contents of the EEPROM. The network switch/hub chip then starts its normal operation after the download phase has finished.
When the RESET signal goes inactive, the network switch/hub chip starts to fetch data from the external EEPROM automatically. Most network switch/hub chips will fetch data from EEPROM address 00h (the first entry), and fetch the other data in sequence. In order to change some register default values or set the chip configuration, the chip vendor will provide a register set (a part of the chip register file) that is downloadable from the EEPROM. Each entry of the EEPROM is pre-defined and directly maps to one (or more) entries of the register set inside the network switch/hub chip, as described in Fig. 2.
However, in this kind of scenario, two major drawbacks may occur. First, different system integrators may want to change different registers, and it is not necessary for system integrators to configure all downloadable registers. However, even if system integrators only want to configure some of the chip's downloadable registers, it is still necessary to fill all the contents of the downloadable register set into the EEPROM. Secondly, some register default values of the network switch/hub chip are changeable via the microprocessor interface but are not downloadable via the EEPROM.
In this case, the only option for the system integrator is to build a microprocessor onto his PCB instead of using a very low cost EEPROM.
Thus, there is a need for a mechanism and a process, to be used with a network device, that allow a system integrator to make changes to the default settings of the network device without being costly or cumbersome. Additionally, there is also a need for such a mechanism to change only certain defaults on a network device without the limitations imposed by the prior art processes and devices.
SUMMARY OF THE INVENTION
It is an object of this invention to overcome the drawbacks of the above-described conventional network devices and methods. The present invention provides a new approach for chip vendors to offer system integrators dynamic configuration using a low cost EEPROM. With this approach, system integrators have the flexibility to change the default values of all configurable registers inside a network device, such as a switch/hub chip.
According to one aspect of this invention, a method for flexibly configuring default values of a network device through an EEPROM interface is provided. A header is received from an EEPROM through the EEPROM interface, and it is determined from the header whether any default value of the network device should be updated. At least one configuration instruction is fetched from the EEPROM when it is determined that the network device should be updated. The at least one configuration instruction is interpreted, and a register default value of the default values corresponding to the interpreted at least one configuration instruction is changed.
Additionally, the method can include monitoring a reset signal to determine whether the default values of the network device should be updated. The method can also determine the number of default values of the network device that need to be updated. Also, determining whether any default value of the network device should be updated can include determining a key value from the header and comparing the key value with a magic number pre-defined inside the network device. The at least one configuration instruction can also be a plurality of configuration instructions, and the step of fetching at least one configuration instruction from the EEPROM can be repeated until all of the plurality of configuration instructions have been fetched.
In another aspect of the invention, a network device, having default values, that is flexibly configurable is also disclosed. The device includes an EEPROM interface, a register file containing the default values for the network device, and a configuration instruction interpreter. The EEPROM interface is configured to receive configuration instructions, with each configuration instruction being composed of an address index and a corresponding value, and the configuration instruction interpreter is configured to interpret the received configuration instructions such that the corresponding values are mapped to corresponding default values of the register file.
Also, the network device may have a configuration instruction interpreter that is configured to monitor a reset signal to determine if the default values should be updated.
The configuration instruction interpreter may also be configured to receive a header from the EEPROM interface containing a key value and to compare the key value with a pre-defined magic number to determine whether any default value of the default values should be updated. Similarly, the configuration instruction interpreter may be configured to repeatedly fetch configuration instructions from the EEPROM until all of the configuration instructions have been fetched.
These and other objects of the present invention will be described in or be apparent from the following description of the preferred embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
For the present invention to be easily understood and readily practiced, preferred embodiments will now be described, for purposes of illustration and not limitation, in conjunction with the following figures:
Fig. 1 illustrates a network device that interfaces with an EEPROM;
Fig. 2 illustrates how the contents of the EEPROM map into the register file using the chip register map;
Fig. 3 illustrates an embodiment of the present invention where the EEPROM has a dynamic configuration;
Fig. 4 illustrates the operation of the system of the present invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
At the heart of the present invention is the change of the contents of the EEPROM to a set of configuration instructions instead of configuration values only. Each configuration instruction is composed of an address index and its corresponding desired value. An indirect mapping mechanism is used to map the EEPROM contents to their corresponding registers inside the network switch/hub chip, instead of the original direct mapping method. In addition, a header, encapsulating a specific key value and the total number of configuration instructions, is filled into the first entry of the EEPROM contents. This header is designed as an identifier during the EEPROM download cycle. One embodiment of the present invention is illustrated in Fig. 3.
To achieve this flexible configuration apparatus, the network switch/hub chip vendor builds a circuit (called the Configuration Instruction Interpreter, or CII) into the chip to interpret configuration instructions. When the RESET signal goes inactive, the CII of the network switch/hub chip starts to fetch the header (the first entry) from the external EEPROM automatically, and the key is obtained. If the key value does not match the magic number pre-defined inside the network switch/hub chip, it is not necessary to change any chip default value, and the download sequence can be skipped. When the key matches, the CII fetches configuration instructions from the EEPROM one after another and, by interpreting each instruction, changes the corresponding register default value (identified by the address index of the configuration instruction) to the desired value. This process is repeated until all instructions have been downloaded. Additionally, since the number of default values needing to be updated is known from the start, the time needed to perform the update is less than for the equivalent update performed by the prior art methods and systems.
This process is illustrated in Fig. 4. The process continually checks whether the RESET signal has gone inactive. Once the RESET signal is inactive, the header of the EEPROM is read. A key is determined and compared with a magic number inside the chip. If there is a match, then instructions are read from the EEPROM and the corresponding register default values are changed. If the instruction just read was the last, the process ends; if not, the next instruction is read from the EEPROM and the default value of the corresponding register is changed.
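By way of illustration only, the download flow just described can be sketched in C as follows. The 16-bit entry layout (key and instruction count packed into the header; address index and value packed into each configuration instruction), the magic number, and all names in this sketch are assumptions chosen for the example; the invention does not prescribe any particular encoding.

    #include <stdint.h>
    #include <stdio.h>

    #define MAGIC_NUMBER 0xA5u  /* assumed key pre-defined inside the chip */

    /* Hypothetical 16-bit EEPROM image: entry 0 is the header (key, count);
     * each subsequent entry is one configuration instruction
     * (address index, desired value). */
    static const uint16_t eeprom[] = {
        (MAGIC_NUMBER << 8) | 2u, /* header: key = magic number, 2 instructions */
        (0x10u << 8) | 0x3Fu,     /* set register-file entry 0x10 to 0x3F */
        (0x2Cu << 8) | 0x01u,     /* set register-file entry 0x2C to 0x01 */
    };

    static uint8_t register_file[256]; /* stand-in for the chip's register set */

    /* Configuration Instruction Interpreter: invoked once when RESET goes inactive. */
    static void cii_download(void)
    {
        uint8_t key   = (uint8_t)(eeprom[0] >> 8); /* fetch the header (first entry) */
        uint8_t count = (uint8_t)(eeprom[0] & 0xFFu);

        if (key != MAGIC_NUMBER) /* key mismatch: no defaults need changing, */
            return;              /* so the download sequence is skipped      */

        for (unsigned i = 1; i <= count; i++) {  /* fetch each instruction */
            uint8_t addr_index = (uint8_t)(eeprom[i] >> 8);
            uint8_t value      = (uint8_t)(eeprom[i] & 0xFFu);
            register_file[addr_index] = value;   /* indirect mapping to the register */
        }
        /* normal chip operation begins after the download phase completes */
    }

    int main(void)
    {
        cii_download();
        printf("reg 0x10 = 0x%02X, reg 0x2C = 0x%02X\n",
               register_file[0x10], register_file[0x2C]);
        return 0;
    }

In this sketch the EEPROM image holds only the header plus the two instructions the integrator actually cares about, illustrating how the indirect mapping lets a smaller EEPROM suffice when only a few defaults are changed.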
With this new configuration-instruction format for the EEPROM contents, it is not necessary for the chip vendor to provide a pre-defined downloadable set. A system integrator can use a very low cost EEPROM to change any downloadable register default value, can decide which register default values to change without undue limitation, and can also decide how many registers to change. As a result, a smaller-capacity EEPROM can be used when only a few register default values are changed.
The above-discussed configuration of the invention is, in one embodiment, embodied on a semiconductor substrate, such as silicon, with appropriate semiconductor manufacturing techniques and based upon a circuit layout which would, based upon the embodiments discussed above, be apparent to those skilled in the art. A person of skill in the art with respect to semiconductor design and manufacturing would be able to implement the various modules, interfaces, and components, etc., of the present invention onto a single semiconductor substrate, based upon the architectural description discussed above. It would also be within the scope of the invention to implement the disclosed elements of the invention in discrete electronic components, thereby taking advantage of the functional aspects of the invention without maximizing the advantages through the use of a single semiconductor substrate.
Although the invention has been described based upon these preferred embodiments, it would be apparent to those skilled in the art that certain modifications, variations, and alternative constructions would be apparent, while remaining within the spirit and scope of the invention. In order to determine the metes and bounds of the invention, therefore, reference should be made to the appended claims.
A contact to a source or drain region is disclosed. The contact has a conductive material, but that conductive material is separated from the source or drain region by an insulator.
CLAIMS I claim:
1. A device, comprising: a transistor with a source region and a drain region; a first interlayer dielectric layer adjacent the transistor; a trench through the first interlayer dielectric layer to the source region; and a conductive source contact in the trench, the source contact being separated from the source region by an insulating layer.
2. The device of claim 1, wherein the transistor is a multigate transistor including a fin.
3. The device of claim 2, wherein the insulating layer is on a top surface and side walls of the fin.
4. The device of claim 1, wherein the insulating layer has a thickness of about 4 nanometers or less.
5. The device of claim 1, further comprising: a second interlayer dielectric layer; a first metallization layer adjacent the second interlayer dielectric layer and having a plurality of conductive vias and a plurality of conductive lines; a third interlayer dielectric layer over the second interlayer dielectric layer; a second metallization layer adjacent the third interlayer dielectric layer and having a plurality of conductive vias and a plurality of conductive lines; and wherein at least some of the plurality of conductive vias and the plurality of conductive lines of the first metallization layer and at least some of the plurality of conductive vias and the plurality of conductive lines of the second metallization layer are conductively connected to the conductive source contact.
6. The device of claim 1, wherein the conductive source contact has a thickness of less than 100 nanometers.
7. The device of claim 6, further comprising a fill conductor on the conductive source contact and substantially filling the trench.
8. The device of claim 1, wherein the transistor is a P-type transistor and the conductive source contact comprises a metal with a workfunction above about 5 eV.
9. The device of claim 1, wherein the transistor is an N-type transistor and the conductive source contact comprises a metal with a workfunction below about 3.2 eV.
10. The device of claim 1, wherein the conductive source contact comprises Al or Ni.
11. A method to make a contact, comprising: depositing a dielectric layer on a substrate having a transistor; etching a first opening in the dielectric layer that extends to a source region; forming an insulator on the source region; forming a contact metal on the insulator, the insulator separating the contact metal from the source region; and filling substantially all of the first opening, wherein the contact metal remains separated from the source region after the first opening is filled.
12. The method of claim 11, wherein the insulator has a thickness of about 4 nanometers or less.
13. The method of claim 12, wherein the insulator has a thickness of about 1 nanometer or less.
14. The method of claim 12, wherein forming the insulator comprises forming a conformal layer of the insulator.
15. The method of claim 11, wherein the transistor is a multigate transistor, wherein the insulator is formed on a top and on side walls of a fin of the multigate transistor to result in an insulator top and insulator side walls, and wherein the contact metal is formed on the insulator top and on the insulator side walls.
16. A device, comprising: a transistor with a source region and a drain region; a source contact, wherein the source contact is not directly adjacent the source region, and wherein a first insulating layer separates the source contact from the source region; and a drain contact, wherein the drain contact is not directly adjacent the drain region, and wherein a second insulating layer separates the drain contact from the drain region.
17. The device of claim 16, wherein the transistor is a multigate transistor, the source region has a top and side walls, and the drain region has a top and side walls.
18. The device of claim 16, wherein neither the source contact nor the drain contact comprises a silicide.
19. The device of claim 16, wherein the first and second insulating layers comprise HfO2.
20. The device of claim 16, wherein the transistor has a channel region that comprises a group III-V material.
METAL-INSULATOR-SEMICONDUCTOR TUNNELING CONTACTS
BACKGROUND
In the manufacture of integrated circuits, devices such as transistors are formed on a wafer and connected together using multiple metallization layers. The metallization layers include vias and interconnects, as are well known in the art, that function as electrical pathways to interconnect the devices. Contacts connect the vias and interconnects to the devices.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a cross sectional side view that illustrates a device having an electrical contact where the conductive contact material is separated by an insulator from the region being contacted. Figure 2 is a flow chart that illustrates one method by which the device shown in Figure 1 may be fabricated. Figure 3 is a cross sectional side view that illustrates the first ILD layer deposited on the transistor. Figure 4 is a cross sectional side view that illustrates trenches formed in the first ILD layer. Figure 5 is a cross sectional side view that illustrates the insulating layer deposited in the trenches. Figure 6 is a cross sectional side view that illustrates the conductive layer deposited on the insulating layer. Figure 7 is a cross sectional side view that illustrates the fill material. Figure 8 is a cross sectional side view that illustrates additional ILD and conductive layers. Figure 9 is an isometric view that illustrates a multiple gate transistor. Figure 10 is a cross sectional side view cut through the source region portion of the fin, and that illustrates the first ILD layer. Figure 11 is a cross sectional side view that illustrates a trench formed in the first ILD layer. Figure 12 is a cross sectional side view that illustrates the insulating layer formed on the top surface and side walls of the source region of the fin, the conductive layer 116 formed on the insulating layer, and the fill material that substantially fills the remaining volume of the trench. Figure 13 is a cross sectional side view that illustrates an embodiment that lacks fill material. Figure 14 is a cross sectional side view that illustrates a first transistor and a second transistor on the same substrate.
DETAILED DESCRIPTION
Various embodiments of a contact to a semiconductor device with an insulator separating a conductive contact from the device are discussed in the following description. One skilled in the relevant art will recognize that the various embodiments may be practiced without one or more of the specific details, or with other replacement and/or additional methods, materials, or components. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of various embodiments of the invention. Similarly, for purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the invention. Nevertheless, the invention may be practiced without these specific details. Furthermore, it is understood that the various embodiments shown in the figures are illustrative example representations and are not necessarily drawn to scale. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, but does not denote that it is present in every embodiment.
Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily referring to the same embodiment of the invention. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments. Various additional layers and/or structures may be included and/or described features may be omitted in other embodiments.
Various operations will be described as multiple discrete operations in turn, in a manner that is most helpful in understanding the invention. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. Operations described may be performed in a different order, in series or in parallel, than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.
Figure 1 is a cross sectional side view that illustrates a device 100 having an electrical contact where the conductive contact material 116 is separated by an insulator 114 from the region 106, 108 being contacted. In an embodiment, the device 100 is a transistor. The transistor includes a source region 106 and a drain region 108. There are contacts to the source and drain regions 106, 108. These contacts include a conductive material 116 that is separated from the source and drain regions 106, 108 by an insulating material 114. Such an arrangement avoids the need for a silicide or germanide contact common to transistors. By avoiding the use of a silicide or germanide contact, some embodiments of the device 100 may allow the use of conformal contact-formation processes, which allows contact formation in smaller trenches, enabling device 100 scaling to small dimensions. Some embodiments of the device 100 are easier to fabricate, as the ultra-pure metal deposition needed for a silicide or germanide is not required. Further, as devices 100 get ever-smaller, there is less semiconductor material available to form a silicide or germanide. Some embodiments of the device 100 avoid the issue of excessive consumption of the semiconductor material that forms a portion of the device 100 by not using a silicide or germanide. Also, it is possible for the formation of silicides and the like to impart strain to the device, or limit the strain it is possible to induce by other structures and materials. By omitting the silicide, it may be possible to increase the available strain modification possibilities and thus allow a better performing device 100.
In the illustrated example, the device 100 includes a substrate 102. This substrate 102 may comprise any material that may serve as a foundation upon which a semiconductor device may be built. In one example, substrate 102 is a silicon containing substrate, although other materials may be used in other examples. The substrate 102 may be formed using a bulk silicon or a silicon-on-insulator substructure. In other implementations, the substrate 102 may be formed using alternate materials, which may or may not be combined with silicon, that include but are not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, gallium antimonide, or other Group III-V materials. The substrate 102 may be a single material, or have multiple layers and/or multiple structures.
Although a few examples of materials from which the substrate 102 may be formed are described here, any material that may serve as a foundation upon which a device may be built falls within the spirit and scope of the present invention.
The device 100 in the illustrated example includes a transistor. The transistor includes a gate 104, a source region 106, and a drain region 108. The transistor may include several other regions and structures, but these are omitted for the sake of simplicity and clarity. While illustrated as a planar transistor as is typically found on a silicon substrate, the transistor may be a multigate transistor and may be on different types of materials (such as a III-V material); the contacts described herein are not limited to a particular type of device 100 or transistor.
There is a first interlayer dielectric (ILD) layer 110 on the transistor in the illustrated example. Contacts to the source region 106 and the drain region 108 are formed in trenches through the first ILD layer 110. Note that for clarity, contacts to the gate 104 are not shown herein, but would normally be present. Contacts to the gate 104 similar to the illustrated and described contacts to the source and drain regions 106, 108 may be used in various embodiments. The contacts described herein are not limited to use for source and drain regions 106, 108, but can be used with the gate 104 or other components. The contacts allow operation of the transistor, and electrical communication between various transistors, and between the device 100 and external devices.
The contact includes an insulating layer 114 that is conformal to the trench and is adjacent the source and drain regions 106, 108 in the illustrated embodiment. Adjacent the insulating layer 114 is a conducting layer 116. The insulating layer 114 separates the conducting layer 116 from the source and drain regions 106, 108 (or from whatever component the contact is for). While the conducting layer 116 is not in direct contact with the source and drain regions 106, 108, it still functions as an electrical contact. This may occur by the insulating layer 114 wholly or partially depinning the metal Fermi level from the semiconductor source or drain region 106, 108. Thus, the inclusion of an insulating layer 114 between the conducting layer 116 and the source or drain region 106, 108 may actually reduce the resistance of the contact compared to a situation where a conductor is in direct contact with the source or drain region 106, 108. Such contacts may allow a specific contact resistivity, ρc, of approximately 1 × 10^-7 ohm-µm^2 (ohm-micrometers squared) or less on low-doped (doping level ≈ 1 × 10^17 atoms/cm^3) silicon in some embodiments, which is 5X-10X less than traditional silicide contacts (e.g., NiSi, TiSi2, CoSi2) on Si of the same doping level. This type of contact may also allow the tuning of the Schottky barrier height and contact resistance as desired for optimal device 100 performance.
In the illustrated embodiment, there is a fill material 118 that substantially fills the rest of the volume of the trench through the first ILD layer 110 not taken up by the insulating layer 114 and conductor layer 116. The fill material 118 may be a metal or other conductor, or may be another type of material. In some embodiments, there is not a separate fill material 118. Rather, the conductor layer 116 may substantially fill the rest of the volume of the trench through the first ILD layer 110 not taken up by the insulating layer 114.
Figure 2 is a flow chart 200 that illustrates one method by which the device 100 shown in Figure 1 may be fabricated. Other methods are possible in other embodiments. At the start of this example method, the transistor, including the gate 104, source 106, and drain 108, has been formed on the substrate 102. The first ILD layer 110 is deposited 202 on the transistor. Figure 3 is a cross sectional side view that illustrates the first ILD layer 110 deposited 202 on the transistor, according to one embodiment of the present invention. The first ILD layer 110 may be formed using materials known for their applicability in dielectric layers for integrated circuit structures, such as low-k dielectric materials. Such dielectric materials include, but are not limited to, oxides such as silicon dioxide (SiO2) and carbon doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. The dielectric first ILD layer 110 may include pores or other voids to further reduce its dielectric constant.
Returning to Figure 2, an opening is formed 204 in the first ILD layer 110. Figure 4 is a cross sectional side view that illustrates trenches 112 formed 204 in the first ILD layer 110. Any suitable method, such as one or more wet or dry etches, may be used to form 204 the trenches 112. As illustrated, the trenches 112 are only to the source and drain regions 106, 108. However, trenches 112 and contacts to the gate 104 may also be formed, although they are not specifically shown and described herein.
As shown in Figure 2, after the trenches 112 are formed 204, an insulating layer 114 may be deposited 206 in the trenches 112. Figure 5 is a cross sectional side view that illustrates the insulating layer 114 deposited 206 in the trenches 112. In some embodiments, the insulating layer 114 may be deposited 206 by a conformal deposition process such as chemical vapor deposition (CVD) or atomic layer deposition (ALD), may be formed 206 by a thermal growth process (such as thermal growth of an oxide, nitride, or oxynitride of the substrate material), or may be formed 206 by another suitable deposition process. The insulating layer 114 may comprise a dielectric material such as HfO2, AlO, ZrO, Si3N4, SiO2, SiON, or another insulating dielectric material. In some embodiments, the thickness of the insulating layer 114 is chosen to allow unpinning of the Fermi level of the subsequently-deposited conductor. The insulating layer 114 may be very thin to accomplish this in some embodiments, such as less than about 4 nanometers, less than about 3 nanometers, or about 1 nanometer or less in various embodiments. In an embodiment, the insulating layer 114 is between about 5 and 10 Angstroms. Other thicknesses of the insulating layer 114 may also be used. Note that while the insulating layer 114 is illustrated as being conformally deposited, this is not a requirement. In some embodiments, such as embodiments with a thermally-grown insulating layer 114, the insulating layer 114 may be formed non-conformally.
Referring again to Figure 2, a conductive layer 116 is deposited 208 on the insulating layer 114. Figure 6 is a cross sectional side view that illustrates the conductive layer 116 deposited 208 on the insulating layer 114.
The conductive layer 116 may be deposited 208 by a conformal deposition process such as chemical vapor deposition (CVD), atomic layer deposition (ALD), electroless plating, or another suitable deposition process. In some embodiments, such as embodiments where the conductive layer 116 is to fill the remaining volume of the trenches 112 (Figure 13 is a cross sectional side view that illustrates such an embodiment) or where the trenches 112 are large enough, nonconformal deposition techniques such as PVD may be used to deposit 208 the conductive layer. The conductive layer 116 may be a metal or contain a metal in some embodiments. Various metals may be used. In some embodiments, the material of the conductive layer 116 may be chosen based on an appropriate workfunction for the type of transistor (a high workfunction metal for a PMOS transistor, a low workfunction metal for an NMOS transistor, with "high" workfunction being above about 5 eV and "low" workfunction being about 3.2 eV or lower), although this is not necessary. Materials used for the conductive layer 116 include aluminum, nickel, magnesium, copper, or other metals. Conductive metal carbides, nitrides, or other materials may also be used for the conductive layer 116. Any suitable thickness may be used for the conductive layer 116. In some embodiments, the conductive layer 116 is greater than 100 Angstroms thick, with the conductive layer 116 being considerably thicker than 100 Angstroms in some embodiments.
In some embodiments, the gate 104 may be a sacrificial gate that is removed and a new gate formed after the first ILD layer 110 is deposited. In such an embodiment, the new gate may be formed with the same processes and at the same time as the conductive layer 116.
The formation of the insulating layer 114 and conductive layer 116 as described herein may allow formation of contacts in trenches 112 that are very narrow. The processes used to form the extremely pure metal used in silicides and germanides may cause problems when used with trenches 112 that are very narrow. Thus, by using the conductor-on-insulator contact as described herein, it may be possible to scale the trenches 112 to smaller dimensions than if silicide or germanide contacts were used.
Referring again to Figure 2, the remaining volume of the trench 112 is filled 210. Figure 7 is a cross sectional side view that illustrates the fill material 118. This fill material 118 may be a conductive material or any other suitable material, may be a single material or multiple materials, and may be deposited by any suitable method. As mentioned previously, in some embodiments the conductive layer 116 may fill the trench. A separate fill material 118 is not used in such embodiments, as illustrated in Figure 13.
Referring back to Figure 2, additional ILD and conductive layers may then be formed 212. Figure 8 is a cross sectional side view that illustrates additional ILD and conductive layers. In Figure 8, the insulating layer 114, conductive layer 116, and fill material 118 were planarized to be substantially coplanar with a top surface of the first ILD layer 110. After planarization, the conductive layer 116 in the trench 112 to the source region 106 is not continuous with the conductive layer 116 in the trench 112 to the drain region 108. The conductive layer 116 may thus be considered to be a first conductive layer in the trench 112 on the left to the source region 106 and a second conductive layer in the trench on the right to the drain region 108.
A second ILD layer 120 has been deposited on the first ILD layer 110. Vias 122 and lines 124 in the second ILD layer 120 are conductively connected to the source and drain regions 106, 108 by the contacts in the trenches 112. A third ILD layer 126 has been deposited on the second ILD layer 120. Vias 122 and lines 124 in the third ILD layer 126 are conductively connected to the source and drain regions 106, 108 by the contacts in the trenches 112. Additional ILD layers and conductors may be present in other embodiments.
Figure 9 is an isometric view that illustrates a multiple gate transistor. While Figures 1 and 3-8 illustrated contacts formed to planar transistors, the same conductor-on-insulator contact may be used with other types of transistors as well, such as a trigate transistor. The trigate transistor illustrated in Figure 9 includes a fin 130. There are isolation regions 138 on either side of the fin 130. There is a gate electrode 132 on the fin 130 adjacent the top and opposing side walls of the fin 130. On one side of the gate electrode 132 is a source region 134 and on another side of the gate electrode 132 is a drain region 136. Note that while Figure 9 only has arrows pointing to the top surface of the fin 130 for the source and drain regions 134, 136, the source and drain regions 134, 136 may extend along the top surface and side walls of the fin 130.
Figure 10 is a cross sectional side view cut through the source region 134 portion of the fin 130, and illustrates the first ILD layer 110 formed similarly to how a first ILD layer 110 may be formed on a planar transistor as shown in Figure 3. Figure 11 is a cross sectional side view that illustrates a trench 112 formed in the first ILD layer 110. The source region 134 is exposed by this trench 112. Figure 12 is a cross sectional side view that illustrates the insulating layer 114 formed on the top surface and side walls of the source region 134 of the fin 130, the conductive layer 116 formed on the insulating layer 114, and the fill material 118 that substantially fills the remaining volume of the trench 112. These materials may be formed similarly as described above with respect to a planar transistor. As with the planar transistor, the insulating layer 114 separates the conductive layer 116 from the source region 134, yet, via tunneling, this may allow a lower resistance contact than if a conductor were in direct contact with the source region. Also, the conformal deposition of insulator 114 and conductor 116 leaves the fin 130 substantially intact. If a silicide, germanide, or similar contact were formed, the contact would consume much of the semiconductor material of the fin 130, which might make a non-functioning device in situations where the fin 130 is quite small.
Figure 14 is a cross sectional side view that illustrates a first transistor 302 and a second transistor 304 on the same substrate 102. Transistor 304 has contacts 306 that comprise a silicide, germanide, or the like, or otherwise has a conductor in contact with the source and drain regions 106, 108. The curved line A-A indicates that the transistors 302, 304 may be separated from each other rather than right next to each other.
In some embodiments, some transistors on a substrate 102, such as transistor 302, may include the contacts with the conductor 116 separated from the source and/or drain regions 106, 108 by an insulating layer 114, while other transistors on the same substrate, such as transistor 304, may include contacts 306 formed of a silicide, germanide, or other material with a conductor in contact with the source and/or drain regions 106, 108. For example, transistor 302 with contacts having a conductor 116 separated from the source and drain regions 106, 108 by an insulator 114 may be an NMOS transistor while transistor 304 may be a PMOS transistor, or vice versa. All transistors of one type (N- or P-type) on a substrate may have one type of contact while all transistors of the opposite type may have another type of contact in an embodiment. In an alternative embodiment, some selected transistors may have contacts with the conductor 116 separated from the source and/or drain regions 106, 108 by an insulating layer 114, while the rest of the transistors may have more traditional contacts 306. These selected transistors may be of one type (N- or P-type) or may include multiple types of transistors (N- and P-type). In yet other embodiments, all transistors on a substrate 102 may have contacts with the conductor 116 separated from the source and/or drain regions 106, 108 by an insulating layer 114. In yet another embodiment, some or all of the transistors of one type may have insulating, conducting, and (if applicable) fill layers 114, 116, 118 that comprise different materials than the insulating, conducting, and (if applicable) fill layers 114, 116, 118 of transistors of the other type. For example, N-type transistors may have a first set of materials that comprise the insulating, conducting, and (if applicable) fill layers 114, 116, 118, and P-type transistors on the same substrate 102 may have a second, different set of materials that comprise the insulating, conducting, and (if applicable) fill layers 114, 116, 118.
The foregoing description of the embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. This description and the claims following include terms, such as left, right, top, bottom, over, under, upper, lower, first, second, etc., that are used for descriptive purposes only and are not to be construed as limiting. For example, terms designating relative vertical position refer to a situation where a device side (or active surface) of a substrate or integrated circuit is the "top" surface of that substrate; the substrate may actually be in any orientation so that a "top" side of a substrate may be lower than the "bottom" side in a standard terrestrial frame of reference and still fall within the meaning of the term "top." The term "on" as used herein (including in the claims) does not indicate that a first layer "on" a second layer is directly on and in immediate contact with the second layer unless such is specifically stated; there may be a third layer or other structure between the first layer and the second layer. The embodiments of a device or article described herein can be manufactured, used, or shipped in a number of positions and orientations. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above teaching.
Persons skilled in the art will recognize various equivalent combinations and substitutions for various components shown in the Figures. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
Aspects of the disclosure are related to a method for determining a touch pressure level on a touchscreen, comprising: detecting a touch event by the touchscreen; obtaining data relating to features associated with the touch event, the features comprising a capacitance value, a touch area, and/or a touch duration; and determining a touch pressure level based on one or more of the features.
1. A method for determining a touch pressure level on a touch screen, the method comprising: detecting a touch event through the touch screen; obtaining data relating to features associated with the touch event, the features including a capacitance value, a touch area, and/or a touch duration; and determining the touch pressure level of the touch event based on one or more of the features.
2. The method of claim 1, further comprising normalizing the touch area.
3. The method of claim 1, wherein determining the touch pressure level further comprises: classifying each feature using a feature classifier; and classifying the touch pressure level using a majority voting rule classifier based on the feature classifications and the weights associated with the features, wherein, for a particular user, a feature classifier and a weight are established for each feature based on training data associated with that particular user.
4. The method of claim 3, wherein the feature classifier is a Bayesian risk classifier (BRC), and the weights are established using a boosting mechanism.
5. The method of claim 4, wherein the weight of at least one feature is zero.
6. The method of claim 3, wherein a confidence level associated with a risky touch pressure level is penalized in the feature classifier.
7. The method of claim 3, wherein establishing a feature classifier for a particular user further comprises: receiving training data related to multiple touch events associated with the user; extracting feature data from the training data; and establishing the feature classifier based on the feature data.
8. The method of claim 3, further comprising: adjusting the feature classifiers based on an application description; and classifying each feature using the corresponding adjusted feature classifier.
9. The method of claim 8, wherein the application description includes a touch pressure level detection granularity.
10. The method of claim 8, wherein the application description includes a risk associated with a particular touch pressure level.
11. The method of claim 1, wherein a touch gesture is further determined using the determined touch pressure level.
12. An apparatus for determining a touch pressure level on a touch screen, the apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to: detect a touch event through the touch screen; obtain data relating to features associated with the touch event, the features including a capacitance value, a touch area, and/or a touch duration; and determine the touch pressure level of the touch event based on one or more of the features.
13. The apparatus of claim 12, wherein the processor is further configured to normalize the touch area.
14. The apparatus of claim 12, wherein, to determine the touch pressure level, the processor is further configured to: classify each feature using a feature classifier; and classify the touch pressure level using a majority voting rule classifier based on the feature classifications and the weights associated with the features, wherein, for a particular user, a feature classifier and a weight are established for each feature based on training data associated with that particular user.
15. The apparatus of claim 14, wherein the feature classifier is a Bayesian risk classifier (BRC), and the weights are established using a boosting mechanism.
16. The apparatus of claim 15, wherein the weight of at least one feature is zero.
17. The apparatus of claim 14, wherein a confidence level associated with a risky touch pressure level is penalized in the feature classifier.
18. The apparatus of claim 14, wherein, to establish a feature classifier for a particular user, the processor is further configured to: receive training data related to multiple touch events associated with the user; extract feature data from the training data; and establish the feature classifier based on the feature data.
19. The apparatus of claim 14, wherein the processor is further configured to: adjust the feature classifiers based on an application description; and classify each feature using the corresponding adjusted feature classifier.
20. The apparatus of claim 19, wherein the application description includes a touch pressure level detection granularity.
21. The apparatus of claim 19, wherein the application description includes a risk associated with a particular touch pressure level.
22. The apparatus of claim 12, wherein a touch gesture is further determined using the determined touch pressure level.
23. An apparatus for determining a touch pressure level on a touch screen, the apparatus comprising: means for detecting a touch event through the touch screen; means for obtaining data relating to features associated with the touch event, the features including a capacitance value, a touch area, and/or a touch duration; and means for determining the touch pressure level of the touch event based on one or more of the features.
24. The apparatus of claim 23, wherein the means for determining the touch pressure level further comprises: means for classifying each feature using a feature classifier; and means for classifying the touch pressure level using a majority voting rule classifier based on the feature classifications and the weights associated with the features, wherein, for a particular user, a feature classifier and a weight are established for each feature based on training data associated with that particular user.
25. The apparatus of claim 24, wherein the feature classifier is a Bayesian risk classifier (BRC), and the weights are established using a boosting mechanism.
26. The apparatus of claim 24, wherein the means for establishing a feature classifier for a particular user further comprises: means for receiving training data related to multiple touch events associated with the user; means for extracting feature data from the training data; and means for establishing the feature classifier based on the feature data.
27. A non-transitory computer-readable medium comprising code that, when executed by a processor, causes the processor to perform a method comprising: detecting a touch event through the touch screen; obtaining data relating to features associated with the touch event, the features including a capacitance value, a touch area, and/or a touch duration; and determining the touch pressure level of the touch event based on one or more of the features.
28. The non-transitory computer-readable medium of claim 27, wherein the code for determining the touch pressure level further comprises: code for classifying each feature using a feature classifier; and code for classifying the touch pressure level using a majority voting rule classifier based on the feature classifications and the weights associated with the features, wherein, for a particular user, a feature classifier and a weight are established for each feature based on training data associated with that particular user.
29. The non-transitory computer-readable medium of claim 28, wherein the feature classifier is a Bayesian risk classifier (BRC), and the weights are established using a boosting mechanism.
30. The non-transitory computer-readable medium of claim 28, wherein the code for establishing a feature classifier for a particular user further comprises: code for receiving training data related to multiple touch events associated with the user; code for extracting feature data from the training data; and code for establishing the feature classifier based on the feature data.
USING CAPACITANCE TO DETECT TOUCH PRESSURES
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of the priority of U.S. Patent Application Serial No. 14/759,699, filed on July 9, 2015, entitled USING CAPACITANCE TO DETECT TOUCH PRESSURES. The application is incorporated herein by reference.
TECHNICAL FIELD
The subject matter disclosed herein relates to electronic devices, and more specifically to methods, devices, and systems for determining touch pressure levels from touch screen data.
BACKGROUND
A touch screen implemented with a conventional touch screen controller only provides the processor of the device with the location (i.e., coordinates) and duration of a touch event.
On the other hand, some operations are inconvenient to perform on a user interface that relies on a conventional touch screen as the main input device. For example, inputting a capital letter or a letter with a diacritical mark (e.g., an accented letter) on an on-screen keyboard displayed on the touch screen may be less convenient. The user may need to press a Shift key, a Caps Lock key, or the like on the on-screen keyboard, or may need to press a character and hold it to view the character options. Entering complex non-Latin characters (such as certain Asian characters) with the on-screen keyboard on the touch screen may also be inconvenient. The user may need to enter a specific input mode to be able to input such complex characters.
As yet another example, a touch-based user interface lacks the equivalent of the mouse-over or right-click operation that is typically found on a pointer- and cursor-based user interface. Therefore, calling a context menu or help text may be inconvenient on a touch screen-based user interface.
As yet another example, a visually impaired user may find it difficult to operate the touch screen user interface when it requires entering text or selecting menu options. A conventional double-click feature (where the user interface provides an audible notification of a menu item with a single click and registers a selection only with a double click) can be helpful but is still inconvenient to use.
In addition, there is usually room for improvement in certain operations performed on electronic devices equipped with touch screens. For example, a 4-digit Personal Identification Number (PIN) commonly used with mobile devices is insecure due to its short length and small symbol set. These 4-digit PINs are prone to brute-force guessing, shoulder surfing, or guessing based on the smudges that remain on the glass surface of the touchscreen.
Therefore, methods implemented with a touch screen for improving the user experience of any of the operations described above may be desired.
SUMMARY OF THE INVENTION
An aspect of the present invention relates to a method for determining a touch pressure level on a touch screen, the method comprising: detecting a touch event through the touch screen; obtaining data related to features associated with the touch event, the features including a capacitance value, a touch area, and/or a touch duration; and determining a touch pressure level based on one or more of the features.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an embodiment of a device equipped with a touch screen in which embodiments of the present invention may be practiced.
FIG. 2 is a flowchart illustrating an exemplary method for determining a touch pressure level on a touch screen.
FIG. 3A illustrates a plot of exemplary touch screen output data corresponding to a light touch.
FIG. 3B illustrates a plot of exemplary touch screen output data corresponding to a natural touch.
FIG. 3C illustrates a plot of exemplary touch screen output data corresponding to a heavy touch.
FIG. 4 is a flowchart illustrating an exemplary method for training an individual touch pressure level classifier.
FIG. 5 is a flowchart illustrating an exemplary method of determining a touch pressure level using an individually trained touch pressure level classifier.
FIG. 6A illustrates an example user touch operation with a touch gesture of touching with the tip of a fingertip.
FIG. 6B illustrates an example user touch operation with a touch gesture of touching with the flat part of a fingertip.
FIG. 6C illustrates an example user touch operation with a touch gesture of touching with the right side of a fingertip.
DETAILED DESCRIPTION
An exemplary apparatus 100 adapted to determine a touch pressure level on a touch screen is illustrated in FIG. 1. Device 100 is shown as including hardware elements that may be electrically coupled (or may otherwise be in communication) via bus 105. The hardware components may include one or more processors 110, including but not limited to one or more general purpose processors and/or one or more dedicated processors (e.g., digital signal processing chips, graphics acceleration processors, and/or the like); and one or more input/output devices 115, which include at least one touch screen 116 and may further include, without limitation, a mouse, a keyboard, a speaker, a printer, and/or the like. The touch screen 116 may be a capacitive touch screen.
Device 100 may further include (and/or communicate with) one or more non-transitory storage devices 125, which may include, without limitation, local and/or network-accessible storage devices, and/or may include, without limitation, disk drives, drive arrays, optical storage devices, and solid state storage devices such as programmable random access memory ("RAM") and/or read-only memory ("ROM"). Such storage devices may be configured to implement any suitable data storage, including, without limitation, various file systems, database structures, and/or the like.
The device 100 may also include a communications subsystem 130, which may include, but is not limited to, a modem, a network card (wireless or wired), an infrared communications device, a wireless communications device, and/or a chipset (e.g., a Bluetooth device, an 802.11 device, a Wi-Fi device, a WiMAX device, cellular communication facilities, etc.), and/or the like. Communication subsystem 130 may permit exchange of data with a network, other computer systems/devices, and/or any other devices described herein. In many embodiments, device 100 will further include working memory 135, which may include a RAM or ROM device, as described above.
The apparatus 100 may also include software elements (illustrated as being located within the working memory 135), including an operating system 140, device drivers, executable libraries, and/or other code, such as one or more application programs 145, which may comprise and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein.
By way of example only, one or more procedures described with respect to the methods discussed below may be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions may be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.
A set of these instructions and/or code may be stored on a non-transitory computer-readable storage medium, such as the storage device 125 described above. In some cases, the storage medium may be incorporated within a computer device (e.g., device 100). In other embodiments, the storage medium may be separate from a computer device (e.g., a removable medium, such as an optical disk) and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions may take the form of executable code (executable by the computerized device 100) and/or may take the form of source and/or installable code which, upon compilation and/or installation on the device 100 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code.
Those skilled in the art will appreciate that numerous changes may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.
Embodiments of the present invention are directed to detecting a touch pressure level on a touch screen based on raw touch data, where the raw touch data may include a capacitance value associated with the touch event, a touch area of the touch event, and/or a touch duration of the touch event. Each of the capacitance value, the touch area, and the touch duration may be referred to as a feature hereinafter. For different users, different features may be better suited to determining the touch pressure level. Therefore, the features are weighted for each individual user with a specific weight set tailored to that user.
A capacitive touch screen, such as touch screen 116, includes an insulator, such as glass, covered on one side with a transparent conductor. Because the human body is also a conductor, touching the surface of the screen on the other side of the insulator causes a measurable change in capacitance. Different methods can be used to determine the location of the touch.
It should be noted that a conventional touch screen controller only provides the location (i.e., coordinates) and duration of a touch event. Thus, hardware and/or software modifications may be required to enable a device such as device 100 to receive raw touch data related to all of the features. The modifications can be made in a variety of ways. For example, in one embodiment, the digitized output of the analog front end (AFE) of touch screen 116 may be fed directly into processor 110 without the use of a conventional touch screen controller. Coupled with appropriate software or firmware, the processor 110 may then process such raw touch screen data to obtain feature data/measurements and determine touch pressure levels.
In another embodiment, the touch screen controller may be modified to provide the detected capacitance values (and possibly the area and shape of the touch area) to the processor 110 in addition to the location and duration of the touch event.

FIG. 2 is a flowchart illustrating an exemplary method 200, implemented by the apparatus 100, for determining a touch pressure level on a touch screen. At block 210, a touch event may be detected through the touch screen 116. Any physical contact that can be detected by the touch screen 116 may constitute a touch event. Next, at block 220, data relating to features associated with the touch event is obtained, the features including a capacitance value, a touch area, and/or a touch duration. The capacitance value may be a normalized average capacitance value, for example, a value indicating an average capacitance per unit area within the touch area, such as an average capacitance per pixel area within the touch area. The average capacitance value may be determined by a touch screen controller adapted to determine and provide such normalized average capacitance values. Alternatively, the average capacitance value may be determined by processor 110 of apparatus 100 by processing digitized raw touch data received from touch screen 116 or from a component coupled to touch screen 116. One method for determining the average capacitance value is to average the measured capacitance values that exceed a predetermined threshold. At block 230, a touch pressure level may be determined based on at least one or more of the features. As will be described below, the average capacitance value associated with the touch event may be positively correlated with the touch pressure level and thus may be used to determine the touch pressure level. The average capacitance may be fused with other data related to the touch event (e.g., the area of the touch area and the duration of the touch event) to determine the touch pressure level. Hereinafter, one embodiment of determining the touch pressure level based on the average capacitance, the area of the touch area, and the duration of the touch event will be described in detail. Different numbers of touch pressure levels may be distinguished in different embodiments. For example, in one embodiment, touch pressure may be classified into three levels: light, natural, and heavy.

When the user touches the touch screen 116 with different pressure levels, the average capacitance value per unit area within the touch area may differ correspondingly. This correlation may be explained by one or more of the following reasons. First, higher touch pressure can force more sweat out of the sweat ducts, thereby causing a higher average capacitance value associated with the touch event. Second, higher touch pressure may flatten the portion of the finger that is in contact with the touch screen and thus may increase the amount of skin in contact with the touch screen (e.g., the valleys between the ridges of the fingerprint). The average capacitance value associated with the touch event may therefore also increase. Accordingly, the average capacitance value can be positively correlated with the touch pressure level.

FIGS. 3A-3C illustrate plots 300A, 300B, and 300C of exemplary touch screen output data corresponding to touch events having different pressure levels. In particular, FIG. 3A illustrates a plot 300A of exemplary touch screen output data corresponding to a light touch; FIG. 3B illustrates a plot 300B of exemplary touch screen output data corresponding to a natural touch; and FIG. 3C illustrates a plot 300C of exemplary touch screen output data corresponding to a heavy touch. In addition to the significant differences in the area of the touch area, different average capacitance levels (not shown) may be observed between the touch events associated with FIGS. 3A-3C. For example, the average capacitance value associated with the heavy touch of plot 300C of FIG. 3C may be observed to be about 1.1 times the average capacitance value associated with the natural touch of plot 300B of FIG. 3B, and the latter may be observed to be about 1.2 times the average capacitance value associated with the light touch of plot 300A of FIG. 3A. Of course, the numerical relationships described above do not limit the invention.

The touch pressure level adds another dimension to touch user interface input. Inconvenient touch user interface operations can be simplified by implementing detection of the touch pressure level.

For example, different touch pressure levels may be used to facilitate text input. The device 100 may be adapted so that a particular touch pressure level invokes a pop-up menu that provides access to, for example, capital letters, one or more characters with diacritical marks, numbers, or strokes or glyphs of complex characters in some Asian languages. As a non-limiting example, the user may enter the letter "a" with the on-screen keyboard on the touch screen 116 and re-touch the letter "a" on the touch screen 116. The device 100 can then recognize the pressure level of the re-touch and create a pop-up menu that allows the user to replace the letter "a" with a capital "A". In another non-limiting example, the user may touch the letter "a" on the touch screen 116 with a light touch after entering the letter "a". The device 100 may then recognize the light touch pressure level and provide a pop-up window that allows the user to replace the letter "a" with a variant of the letter bearing a diacritical mark. In another embodiment, a particular touch pressure level may trigger a copy/cut/paste mode when the device 100 is in a text viewing/editing mode.

As yet another example, touch pressure levels may be used to enhance the security of a short numerical PIN by adding another dimension to such PINs. Each symbol of the PIN can then be a combination of a number and one or more specific touch pressure levels. For example, instead of the conventional numerical PIN 1-2-3-4, the PIN enhanced with touch pressure levels may be 1 (with heavy touch) - 2 (with light touch) - 3 (with heavy touch) - 4 (with light touch). The security of the PIN is enhanced by the increase in the size of the symbol set from which the symbols of the PIN are selected. Of course, the security of alphanumeric passwords can be similarly enhanced with touch pressure levels.

In another embodiment, touch pressure levels may be used to provide additional security for authorization or confirmation of financial transactions, purchases, or other sensitive actions. For example, the device 100 may be adapted such that a "confirm" button or similar button on the pressure-sensitive touch screen user interface (which activates completion of the sensitive action) can be pressed or activated only with a heavy touch.
To further enhance security, the fact that the user interface button can only be activated with a specific touch pressure level need not be advertised on the user interface.

As another example, context menus, help text, or other functionality may be invoked using different touch pressure levels. In a non-limiting exemplary embodiment, a user may touch an icon on a pressure-sensitive touch screen user interface with a heavy touch to invoke a context menu that provides access to, for example, move and delete functionality. In other embodiments, different touch pressure levels may be used to invoke, for example, a menu showing recently bookmarked addresses or the browser applications running on device 100. Different touch pressure levels may further be used to invoke help text or brief explanations of, for example, icons or other user interface elements, or textual alternatives where language is not used. In another embodiment, the user may use different touch pressure levels to zoom or pan an image or map.

In addition, different touch pressure levels may be used to assist a visually impaired user in using a mobile device such as device 100. For example, a visually impaired user may use a heavy touch to invoke auditory notifications and use a natural touch to select or interact with touch screen user interface elements.

In one embodiment of the invention, the touch pressure level associated with a touch event may be determined based on one or more of the measured capacitance in the touch area, the area of the touch area, and the duration of the touch event. It should be appreciated that determining a touch pressure level based on the disclosure below does not require a dedicated pressure sensor. The pressure level can be determined with different levels of granularity depending on the usage scenario. For example, in some usage scenarios, binary pressure level detection (e.g., click versus press) may be sufficient; in some other usage scenarios, three-level pressure detection (e.g., light/natural/heavy) may be desirable; and in yet other usage scenarios, it may be necessary to distinguish more than three pressure levels. For example, a drawing application can benefit from very fine-grained touch pressure level determination and can apply progressively heavier brush strokes as the user presses harder. It should be understood that there is a trade-off between the level of granularity and the level of confidence in the detection. In other words, a coarser (e.g., less sensitive) determination may have a higher level of confidence, and a more granular (e.g., more sensitive) determination may have a lower level of confidence.

Owing to differences in finger size, strength, past experience, and the like, people differ in how they touch a touch screen. Thus, touch pressure levels may be determined based on individually trained classifiers to account for the variability among people. As described above, the touch pressure level may be positively correlated with the measured average capacitance in the touch area. In addition, since a heavier touch tends to flatten the finger in contact with the touch screen and increase the area of the touch area, the touch pressure level and the area of the touch area may also be positively correlated. Further, the touch pressure level and the duration of the touch event may be positively correlated due to the psychological relationship between touch pressure and duration.
Thus, fusing the three features in the touch pressure level determination may increase the robustness of the determination against variability in touch behavior, as compared to using only one of the three features. However, it should be understood that fusing the three features does not imply that all three features must be used in determining the touch pressure level for every user. For example, for a particular user, it may be that only one of the features exhibits a strong correlation with touch pressure while the other two features show very weak or no correlation with touch pressure. Using a classifier trained for this user (discussed in detail below) to determine the touch pressure level may then cause only that one feature to actually be factored into the touch pressure level determination. For another particular user, it is possible that two of the three features exhibit a fairly strong correlation with touch pressure while the remaining feature shows very weak or no correlation with touch pressure. Using the classifier trained for this user (discussed in detail below) to determine the touch pressure level may then cause those two features to actually be factored into the determination of the touch pressure level. For a third example user, it may be that all three features exhibit a fairly strong correlation with the touch pressure level. Using a classifier trained for this user (discussed in detail below) to determine the touch pressure level may then cause all three features to actually be factored into the touch pressure level determination.

In one example, the area of the touch area may be normalized to remove the effect that the manner in which the finger contacts the touch screen has on the measured area. To normalize the area of the touch area, the width of the touch area may be kept constant and the height of the touch area adjusted so that the adjusted height-to-width ratio equals a predetermined maximum ratio; the area of the adjusted touch area is the normalized area. The normalized area of the touch area can itself be used as a feature for determining the touch pressure level. In addition, the normalized area of the touch area can be used in the calculation of the average capacitance, which in turn serves as a feature for determining the touch pressure level. A sketch of this feature extraction is given below.

Referring to FIG. 4, a flowchart illustrating an exemplary method 400, implemented with the device 100, for training an individual touch pressure level classifier is shown. At block 410, training data relating to multiple touch events associated with a user may be received at the touch screen. The user may be instructed through the user interface to touch the touch screen multiple times, each time with a particular touch pressure level (e.g., heavy, natural, or light). Each touch may generate a touch event, and data associated with the touch event may be received. At block 420, feature data relating to a plurality of features may be extracted from the training data related to the multiple touch events. Each touch event can be associated with multiple features, such as the average capacitance, the (normalized) area of the touch area, and the duration of the touch event. In other embodiments, different and/or additional features may be utilized to determine the touch pressure level. The data may be preprocessed with one or more denoising methods (e.g., normalization, smoothing, etc.) prior to feature extraction. At block 430, a feature classifier may be trained for each feature based on the feature data extracted from the training data.
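Before continuing with the training flow, the following is a minimal sketch, in Python, of the feature extraction and area normalization described above. All names and constant values (MAX_RATIO, CAP_THRESHOLD, the dictionary keys) are hypothetical illustrations chosen for this sketch, not values taken from this disclosure.

    MAX_RATIO = 1.5      # predetermined maximum height-to-width ratio (assumed value)
    CAP_THRESHOLD = 10   # per-pixel capacitance threshold for "touched" (assumed value)

    def normalized_area(width, height):
        # Keep the width constant and adjust the height so that the adjusted
        # height-to-width ratio equals the predetermined maximum ratio; the
        # area of the adjusted touch area is the normalized area (the measured
        # height is therefore replaced by width * MAX_RATIO).
        return width * (width * MAX_RATIO)

    def extract_features(capacitance_map, width, height, duration):
        # capacitance_map: 2-D list of per-pixel readings for one touch event;
        # width/height: bounding box of the touch area in pixels.
        touched = [c for row in capacitance_map for c in row if c > CAP_THRESHOLD]
        avg_cap = sum(touched) / len(touched) if touched else 0.0
        return {"avg_capacitance": avg_cap,                    # average per pixel
                "norm_area": normalized_area(width, height),   # normalized area
                "duration": duration}                          # touch duration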
Thus, the multiple features may correspond to multiple feature classifiers. In one embodiment, a feature classifier may be a Bayesian risk classifier (BRC) that minimizes the Bayesian risk. The Bayesian risk can be defined as R(p_i|x) = Σ_j L_ij · P(p_j|x), where L_ij represents the loss that may be incurred if the pressure level p_i is the determined pressure level while the true pressure level is p_j (e.g., erroneous deletion of a large amount of data), and P(p_j|x) represents the posterior probability of the pressure level p_j given the feature x. At block 440, weights may be assigned to the multiple features to take into account the variability between users. In one embodiment, a boosting mechanism may be applied to assign the weights. Each feature can be assigned an individual weight. Because, in indicating the actual touch pressure level, different features perform better for different users, and no feature consistently performs better across all users, the boosting mechanism may assign an individual weight distribution to the features based on the training data for a particular user so as to optimize the performance of that user's touch pressure level classifier. It should be appreciated that if a feature exhibits a very weak or no correlation with a particular user's touch pressure level, the feature may be assigned a zero (0) weight so that it is not factored into the determination of that user's touch pressure level. Thus, for a particular user, the touch pressure level may be determined based on any number of one or more features, according to the individual weight distribution. In one embodiment, the weights may be estimated using the SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function) algorithm, the AdaBoost (Adaptive Boosting) algorithm, the LPBoost (Linear Programming Boosting) algorithm, the TotalBoost algorithm, the BrownBoost algorithm, and so on. The algorithm used for assigning the weights does not limit the invention.

Hereinafter, the feature classifier parameters and the feature weights associated with a particular user may be collectively referred to as a user profile. To accommodate multiple users on the same device, multiple user profiles may be individually established and stored on a device (e.g., device 100).

Referring to FIG. 5, a flowchart illustrating an exemplary method 500, implemented with the apparatus 100, for determining a touch pressure level using an individually trained touch pressure level classifier is shown. The touch pressure level classifier used in method 500 may be a touch pressure level classifier trained with the method 400 of FIG. 4, as described above. If there are multiple user profiles on the device 100, the user may be prompted to select the user profile associated with himself or herself. At block 510, data relating to a touch event may be received at the touch screen. At block 520, feature data relating to a plurality of features may be extracted from the data related to the touch event. The features can include the average capacitance, the (normalized) area of the touch area, and the duration of the touch event. The data may be preprocessed with one or more denoising methods (e.g., normalization, smoothing, etc.) prior to feature extraction. At block 530, the feature classifiers trained for the user may be adjusted based on application or user interface specifications. The specifications associated with the application or user interface currently in use can be retrieved.
The application or user interface may specify, via its corresponding specification, the number of touch pressure levels that need to be detected (i.e., the pressure level granularity) and the sensitivity for each of those touch pressure levels. The application or user interface may also specify the potential losses associated with different touch pressure level determinations (e.g., erroneous deletion of a large amount of data). As described above, the feature classifiers may be adjusted to penalize the confidence levels associated with touch pressure levels that would cause large losses (i.e., risky touch pressure levels), thereby reducing the likelihood of false positives associated with these touch pressure levels as opposed to the other touch pressure levels. At block 540, each extracted feature is classified using that feature's adjusted feature classifier. Each adjusted feature classifier provides confidence levels for the different touch pressure levels and penalizes the confidence levels associated with the risky touch pressure levels. The outputs of the feature classifiers may be referred to as feature classifications hereinafter. At block 550, the results of the feature classifiers may be weighted using the weights established for the particular user at block 440 of FIG. 4. At block 560, a final classifier may be used to determine the final estimated touch pressure level. The final classifier may be a majority vote rule, for example of the form argmax_p Σ_t w_t · 1(T_t(x) = p), where T_t(x) represents a feature classifier decision and w_t its weight. The touch pressure level associated with the maximum response under the majority vote classifier may be determined to be the estimated touch pressure level. In one embodiment, a confidence level for the determined touch pressure level may also be provided.

An algorithm for determining a touch gesture using the determined touch pressure level and/or one or more features related to the touch event is also contemplated. The touch gesture may be, for example, a touch gesture of "touching with the tip of a fingertip," a touch gesture of "touching with the flat part of a fingertip," a touch gesture of "touching with the right side of a fingertip," or the like.

With reference to FIGS. 6A-6C, exemplary user touch operations with particular touch gestures are illustrated. FIG. 6A illustrates an exemplary user touch operation with a touch gesture of "touching with the tip of a fingertip"; FIG. 6B illustrates an exemplary user touch operation with a touch gesture of "touching with the flat part of a fingertip"; and FIG. 6C illustrates an exemplary user touch operation with a touch gesture of "touching with the right side of a fingertip."

Thus, the touch gesture associated with a touch event may be determined using the determined touch pressure level and/or one or more features related to the touch event.

Various embodiments of applications or systems that determine touch pressure levels on a touch screen based on features related to touch events have been described in detail above. It should be appreciated that an application or system that recognizes and/or utilizes touch pressure levels (as previously described) may be implemented as software, firmware, hardware, combinations thereof, or the like. In one embodiment, the previously described functionality may be implemented by one or more processors (e.g., processor 110) of device 100 (e.g., the method operations of FIGS. 2, 4, and 5). A sketch of the per-feature classification and the weighted vote appears below.
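As a concrete illustration of blocks 540-560, the following Python sketch combines the per-feature Bayesian risk decision defined above with the weighted majority vote. The loss matrix, posteriors, and weights shown are hypothetical placeholders; in practice the posteriors and weights would come from the per-user training of FIG. 4 and from the application specifications.

    LEVELS = ["light", "natural", "heavy"]

    def brc_decide(posteriors, loss):
        # posteriors[j] is P(p_j | x) for this feature's measurement x;
        # loss[i][j] is L_ij. Choose i minimizing R(p_i|x) = sum_j L_ij * P(p_j|x).
        risks = [sum(loss[i][j] * posteriors[j] for j in range(len(LEVELS)))
                 for i in range(len(LEVELS))]
        return min(range(len(LEVELS)), key=risks.__getitem__)

    def fused_pressure_level(per_feature_posteriors, loss, weights):
        # Weighted majority vote across the feature classifiers; a feature whose
        # trained weight is zero contributes nothing for this user.
        votes = [0.0] * len(LEVELS)
        for posteriors, w in zip(per_feature_posteriors, weights):
            votes[brc_decide(posteriors, loss)] += w
        return LEVELS[max(range(len(LEVELS)), key=votes.__getitem__)]

    # Example: three features (average capacitance, normalized area, duration)
    # with illustrative weights, and a loss matrix penalizing distant errors.
    loss = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
    weights = [0.5, 0.3, 0.2]
    posteriors = [[0.2, 0.5, 0.3], [0.1, 0.6, 0.3], [0.3, 0.4, 0.3]]
    level = fused_pressure_level(posteriors, loss, weights)   # -> "natural"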
The various uses of touch pressure levels described above improve the general usability of touch screen devices and provide a better user experience on these devices.

The example methods, apparatus, or articles of manufacture presented herein may be implemented, in whole or in part, for use with or in conjunction with a mobile communication device. As used herein, "mobile device," "mobile communication device," "handheld device," "tablet," and the like, or the plural form of such terms, may be used interchangeably and may refer to any kind of special-purpose computing platform or device that may communicate through wireless transmission or receipt of information over a suitable wireless network in accordance with one or more communication protocols, and that may from time to time have a position or location that changes. By way of illustration, such a mobile communication device may include, for example, a cellular telephone, a satellite telephone, a smart phone, a heat map or radio map generation tool or device, an observed signal parameter generation tool or device, a personal digital assistant (PDA), a laptop computer, a personal entertainment system, an e-book reader, a tablet personal computer (PC), a personal audio or video device, a personal navigation unit, a wearable device, or the like. However, it should be appreciated that these are merely illustrative examples of mobile devices that may be utilized to facilitate or support one or more of the processes or operations described herein.

The methodologies described herein may be implemented in different ways and with different configurations depending upon the particular application. For example, such methodologies may be implemented in hardware, firmware, and/or combinations thereof, along with software. In a hardware implementation, for example, a processing unit may be implemented within one or more application-specific integrated circuits ("ASICs"), digital signal processors ("DSPs"), digital signal processing devices ("DSPDs"), programmable logic devices ("PLDs"), field-programmable gate arrays ("FPGAs"), processors, controllers, microcontrollers, microprocessors, electronic devices, other device units designed to perform the functions described herein, and/or combinations thereof.

The storage media described herein may include primary, secondary, and/or tertiary storage media. Primary storage media may include memory such as random access memory and/or read-only memory, for example. Secondary storage media may include mass storage such as a magnetic or solid-state hard disk drive. Tertiary storage media may include removable storage media such as a magnetic or optical disk, magnetic tape, a solid-state storage device, and the like. In certain implementations, the storage media, or portions thereof, may be operatively receptive of, or otherwise configurable to couple to, other components of a computing platform, such as a processor.

In at least some implementations, one or more portions of the storage media described herein may store signals representative of data and/or information as expressed by a particular state of the storage media. For example, an electronic signal representative of data and/or information may be "stored" in a portion of the storage media (e.g., memory) by affecting or changing the state of such portions of the storage media to represent the data and/or information as binary information (e.g., ones and zeros).
As such, in particular implementations, such a change of state of the portion of the storage media to store a signal representative of data and/or information constitutes a transformation of the storage media to a different state or thing.

In the preceding detailed description, numerous specific details have been set forth to provide a thorough understanding of the claimed subject matter. However, it will be understood by those skilled in the art that the claimed subject matter may be practiced without these specific details. In other instances, methods and apparatuses that would be known to one of ordinary skill have not been described in detail so as not to obscure the claimed subject matter.

Some portions of the preceding detailed description have been presented in terms of algorithms or symbolic representations of operations on binary digital electronic signals stored within a memory of a specific apparatus or special-purpose computing device or platform. In the context of this particular specification, the term specific apparatus or the like includes a general-purpose computer once it is programmed to perform particular functions pursuant to instructions from program software. Algorithmic descriptions or symbolic representations are examples of techniques used by those of ordinary skill in the signal processing or related arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of operations or similar signal processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, or otherwise manipulated as electronic signals representing information. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals, information, or the like. It should be understood, however, that all of these or similar terms are to be associated with appropriate physical quantities and are merely convenient labels.

Unless specifically stated otherwise, as apparent from the preceding discussion, it is appreciated that throughout this specification discussions utilizing terms such as "processing," "computing," "calculating," "identifying," "determining," "establishing," "obtaining," and/or the like refer to actions or processes of a specific apparatus, such as a special-purpose computer or a similar special-purpose electronic computing device. In the context of this specification, therefore, a special-purpose computer or a similar special-purpose electronic computing device is capable of manipulating or transforming signals, typically represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the special-purpose computer or similar special-purpose electronic computing device.
In the context of this particular patent application, the term "specific apparatus" may include a general-purpose computer once it is programmed to perform particular functions pursuant to instructions from program software.

Reference throughout this specification to "one example," "an example," "certain examples," or "exemplary implementation" means that a particular feature, structure, or characteristic described in connection with the feature and/or example may be included in at least one feature and/or example of the claimed subject matter. Thus, the appearances of the phrases "in one example," "in an example," "in certain examples," or other similar phrases in various places throughout this specification are not necessarily all referring to the same feature, example, and/or limitation. Furthermore, the particular features, structures, or characteristics may be combined in one or more examples and/or features.

While there has been illustrated and described what are presently considered to be example features, it will be understood by those skilled in the art that various other modifications may be made, and equivalents may be substituted, without departing from the claimed subject matter. Additionally, many modifications may be made to adapt a particular situation to the teachings of the claimed subject matter without departing from the central concept described herein. Therefore, it is intended that the claimed subject matter not be limited to the particular examples disclosed, but that such claimed subject matter may also include all aspects falling within the scope of the appended claims, and equivalents thereof.
A method for ensuring reliable clock recovery in a client device in a mobile display digital interface (MDDI) communication system, the method comprising the steps of: asserting a data enable signal in the client device; generating a recovered clock in the client device using a strobe signal and a data signal; receiving an all zero 1 field sent by a host; de-asserting the data enable signal in the client device; and generating the recovered clock in the client device solely from the strobe signal. A corresponding system is also provided.
1. A method for ensuring reliable clock recovery in a client device in a mobile display digital interface (MDDI) communication system, the method comprising the steps of:
asserting a data enable signal in the client device;
generating a recovered clock in the client device using a strobe signal and a data signal;
receiving an all zero 1 field sent by a host;
de-asserting the data enable signal in the client device; and
generating the recovered clock in the client device solely from the strobe signal.
2. The method of claim 1, further comprising the step of sending a turnaround 1 field by the host.
3. A system for ensuring reliable clock recovery in a client device in a mobile display digital interface (MDDI) communication system, the system comprising:
means for asserting a data enable signal in the client device;
means for generating a recovered clock in the client device using a strobe signal and a data signal;
means for receiving an all zero 1 field sent by a host;
means for de-asserting the data enable signal in the client device; and
means for generating the recovered clock in the client device solely from the strobe signal.
4. The system of claim 3, further comprising means for sending a turnaround 1 field by the host.
5. A computer program product, comprising:
a computer readable medium comprising:
code for causing reliable clock recovery in a client device in a mobile display digital interface (MDDI) communication system, the code comprising:
code for causing a data enable signal to be asserted in the client device;
code for causing a recovered clock to be generated in the client device using a strobe signal and a data signal;
code for causing an all zero 1 field, sent by a host, to be received by the client device;
code for causing the data enable signal to be de-asserted in the client device; and
code for causing the recovered clock to be generated in the client device solely from the strobe signal.
6. The computer program product of claim 5, further comprising a turnaround 1 field to be sent by the host.
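For orientation on the claims above: in the DATA-STB encoding detailed later in this description (see FIGS. 40-41), the strobe line toggles whenever two successive data bits are equal, so that the exclusive-OR of DATA and STB yields one transition per bit period and can serve as the recovered clock; when an all-zero field is received, DATA is constant, and the clock follows the strobe alone. The following Python fragment is a behavioral sketch only, assuming both lines idle low; it is an illustration, not the normative encoder of the specification.

    def encode_stb(data_bits):
        # STB toggles whenever two successive DATA bits are equal, so that
        # exactly one of DATA and STB changes in every bit period.
        stb, prev_data, prev_stb = [], 0, 0
        for d in data_bits:
            prev_stb = prev_stb ^ 1 if d == prev_data else prev_stb
            stb.append(prev_stb)
            prev_data = d
        return stb

    def recovered_clock(data_bits, stb_bits):
        # The recovered clock is DATA XOR STB; it toggles once per bit period.
        return [d ^ s for d, s in zip(data_bits, stb_bits)]

    data = [1, 0, 0, 1, 1, 1, 0] + [0] * 8   # payload, then an all-zero field
    stb = encode_stb(data)
    clk = recovered_clock(data, stb)         # alternates 1,0,1,0,... throughout;
                                             # over the all-zero field, clk == stb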
The present Application for Patent claims priority to Provisional Application No. 60/527,996 entitled "Switchable Threshold Differential Interface" filed December 8, 2003, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.

BACKGROUND

Field

Embodiments of the invention in this disclosure relate to a digital signal protocol and process for communicating or transferring signals between a host device and a client device at high data rates. More specifically, the disclosure relates to a technique for transferring multimedia and other types of digital signals from a host or controller device to a client device, for presentation or display to an end user, using a low power, high data rate transfer mechanism having internal and external device applications.

Background

Computers, electronic game related products, and various video technologies (for example, DVDs and High Definition VCRs) have advanced significantly over the last few years to provide for presentation of increasingly higher resolution still, video, video-on-demand, and graphics images, even when including some types of text, to end users of such equipment. These advances in turn mandated the use of higher resolution electronic viewing devices such as high definition video monitors, HDTV monitors, or specialized image projection elements. Combining such visual images with high-definition or quality audio data, such as when using CD type sound reproduction, DVDs, surround-sound, and other devices also having associated audio signal outputs, is used to create a more realistic, content rich, or true multimedia experience for an end user. In addition, highly mobile, high quality sound systems and music transport mechanisms, such as MP3 players, have been developed for audio-only presentations to end users. This has resulted in increased expectations for typical users of commercial electronic devices, from computers to televisions and even telephones, who are now accustomed to and expect high or premium quality output.

In a typical video presentation scenario involving an electronics product, video data is typically transferred using current techniques at a rate that could best be termed slow or medium, being on the order of one to tens of kilobits per second. This data is then either buffered or stored in transient or longer-term memory devices for delayed (later) playback on a desired viewing device. For example, images may be transferred "across" or using the Internet using a program resident on a computer having a modem or other type of Internet connection device, to receive or transmit data useful in digitally representing an image. A similar transfer can take place using wireless devices such as portable computers equipped with wireless modems, wireless Personal Data Assistants (PDAs), or wireless telephones.

Once received, the data is stored locally in memory elements, circuits, or devices, such as RAM or flash memory, including internal or external storage devices such as small hard drives, for playback. Depending on the amount of data and the image resolution, the playback might begin relatively quickly, or be presented with a longer-term delay. That is, in some instances, image presentation allows for a certain degree of real time playback for very small or low resolution images not requiring much data, or using some type of buffering, so that after a small delay, some material is presented while more material is being transferred.
Provided there are no interruptions in the transfer link, or interference from other systems or users relative to the transfer channel being used, once the presentation begins the transfer is reasonably transparent to the end user of the viewing device. Naturally, where multiple users share a single communication path, such as a wired Internet connection, transfers can be interrupted or slower than desired.

The data used to create either still images or motion video are often compressed using one of several well known techniques, such as those specified by the Joint Photographic Experts Group (JPEG), the Motion Picture Experts Group (MPEG), and other well known standards organizations or companies in the media, computer, and communications industries, to speed the transfer of data over a communication link. This allows transferring images or data faster by using a smaller number of bits to transfer a given amount of information.

Once the data is transferred to a "local" device such as a computer having a storage mechanism such as memory, or magnetic or optical storage elements, or to other recipient devices, the resulting information is un-compressed (or played using special decoding players), decoded if needed, and prepared for appropriate presentation based on the corresponding available presentation resolution and control elements. For example, a typical computer video resolution in terms of a screen resolution of X by Y pixels typically ranges from as low as 480x640 pixels, through 600x800, to 1024x1024, although a variety of other resolutions are generally possible, either as desired or needed.

Image presentation is also affected by the image content and the ability of given video controllers to manipulate the image in terms of certain predefined color levels or color depth (bits per pixel used to generate colors) and intensities, and any additional overhead bits being employed. For example, a typical computer presentation would anticipate anywhere from around 8 to 32, or more, bits per pixel to represent various colors (shades and hues), although other values are encountered.

From the above values, one can see that a given screen image is going to require the transfer of anywhere from 2.45 Megabits (Mb) to around 33.55 Mb of data over the range from the lowest to highest typical resolutions and color depths, respectively. When viewing video or motion type images at a rate of 30 frames per second, the amount of data required is around 73.7 to 1,006 Megabits of data per second (Mbps), or around 9.21 to 125.75 Megabytes per second (MBps). In addition, one may desire to present audio data in conjunction with images, such as for a multimedia presentation, or as a separate high resolution audio presentation, such as CD quality music. Additional signals dealing with interactive commands, controls, or signals may also be employed. Each of these options adds even more data to be transferred. Furthermore, newer transmission techniques involving High Definition (HD) television and movie recordings may add even more data and control information.
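The figures quoted above follow directly from the stated resolutions, color depths, and frame rate; as a worked check (in LaTeX notation):

    \[ 480 \times 640 \times 8 = 2{,}457{,}600\ \text{bits} \approx 2.45\ \text{Mb}, \qquad 1024 \times 1024 \times 32 = 33{,}554{,}432\ \text{bits} \approx 33.55\ \text{Mb} \]
    \[ 2.45\ \text{Mb} \times 30\ \text{fps} \approx 73.7\ \text{Mbps} \approx 9.21\ \text{MBps}, \qquad 33.55\ \text{Mb} \times 30\ \text{fps} \approx 1{,}006\ \text{Mbps} \approx 125.75\ \text{MBps} \]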
In any case, when one desires to transfer high quality or high resolution image data and high quality audio information or data signals to an end user to create a content rich experience, a high data transfer rate link is required between presentation elements and the source or host device that is configured to provide such types of data.

Data rates of around 115 Kilobytes per second (KBps), or 920 Kilobits per second (Kbps), can be routinely handled by some modern serial interfaces. Other interfaces, such as USB serial interfaces, can accommodate data transfers at rates as high as 12 Mbps, and specialized high speed transfers, such as those configured using the Institute of Electrical and Electronics Engineers (IEEE) 1394 standard, can occur at rates on the order of 100 to 400 Mbps. Unfortunately, these rates fall short of the desired high data rates discussed above, which are contemplated for use with future wireless data devices and other services for providing high resolution, content rich output signals for driving portable video displays or audio devices. This includes computers for business and other presentations, gaming devices, and so forth. In addition, these interfaces require the use of a significant amount of host or system and client software to operate. Their software protocol stacks also create an undesirably large amount of overhead, especially where mobile wireless devices or telephone applications are contemplated. Such devices have severe memory and power consumption limitations, as well as already taxed computational capacity. Furthermore, some of these interfaces utilize bulky cables which are too heavy and unsatisfactory for highly aesthetic oriented mobile applications, complex connectors which add cost, or simply consume too much power.

There are other known interfaces such as the Analog Video Graphics Adapter (AVGA), Digital Video Interactive (DVI), or Gigabit Video Interface (GVIF) interfaces. The first two of these are parallel type interfaces which process data at higher transfer rates, but also employ heavy cables and consume large amounts of power, on the order of several watts. Neither of these characteristics is amenable to use with portable consumer electronic devices. Even the third interface consumes too much power and uses expensive or bulky connectors.

For some of the above interfaces, and other very high rate data systems/protocols or transfer mechanisms associated with data transfers for fixed installation computer equipment, there is another major drawback. Accommodating the desired data transfer rates also requires substantial amounts of power and/or operation at high current levels. This greatly reduces the usefulness of such techniques for highly mobile consumer oriented products.

Generally, accommodating such data transfer rates using alternatives such as, say, optical fiber type connections and transfer elements also requires a number of additional converters and elements that introduce much more complexity and cost than desired for a truly commercial consumer oriented product. Aside from the generally expensive nature of optical systems as yet, their power requirements and complexity prevent general use for lightweight, low power, portable applications.

What has been lacking in the industry for portable, wireless, or mobile applications is a technique to provide a high quality presentation experience, whether it be audio, video, or multimedia based, for highly mobile end users.
That is, when using portable computers, wireless phones, PDAs, or other highly mobile communication devices or equipment, the current video and audio presentation systems or devices being used simply cannot deliver output at the desired high quality level. Often, the perceived quality that is lacking is the result of unobtainably high data rates needed to transfer the high quality presentation data. This can include both transfer to more efficient, advanced, or feature laden external devices for presentation to an end user, and transfer between hosts and clients internal to portable devices such as computers, gaming machines, and wireless devices such as mobile telephones.

In this latter case, there have been great strides made in adding higher and higher resolution internal video screens, and other specialty input and/or output devices and connections, to wireless devices like so-called third generation telephones and to so-called laptop computers. However, internal data busses and connections may include bridging across rotating or sliding hinges or hinge-like structures which mount or connect video screens or other elements to the main housing, where the host and/or various other control elements and output components reside, and these connections generally require high-bandwidth or high-throughput interfaces. It is very difficult to construct high throughput data transfer interfaces using prior techniques, which can require up to 90 conductors, or more, to achieve the desired throughput on, say, a wireless telephone, as one example. Current solutions typically involve employing parallel type interfaces with relatively high signal levels, which can cause the interconnection to be more costly and less reliable, and can potentially generate radiated emissions which could interfere with device functions. This presents many manufacturing, cost, and reliability challenges to overcome.

Such issues and requirements are also being seen in fixed location installations where communication or computing type devices, as one example, are added to appliances and other consumer devices to provide advanced data capabilities, Internet and data transfer connections, or built-in entertainment. Another example would be airplanes and busses where individual video and audio presentation screens are mounted in seat-backs. However, in these situations it is often more convenient, efficient, and easily serviceable to have the main storage, processing, or communication control elements located a distance from the visible screens or audio outputs, with an interconnecting link or channel for the presentation of information. This link will need to handle a significant amount of data to achieve the desired throughput, as discussed above.

Therefore, a new transfer mechanism is needed to increase data throughput between host devices providing the data and client display devices or elements presenting an output to end users.

Applicants have proposed such new transfer mechanisms in U.S. Patent Application Serial No. 10/020,520 filed December 14, 2001, now U.S. Patent No. 6,760,772, issued July 6, 2004 to Zou et al., and U.S. Patent Application Serial No. 10/236,657 filed September 6, 2002, both entitled "Generating And Implementing A Communication Protocol And Interface For High Data Rate Signal Transfer," now allowed, which are assigned to the assignee of the present invention and incorporated herein by reference. Also, U.S. Application Serial No.
10/860,116, filed on June 2, 2004, entitled "Generating and Implementing a Signal Protocol and Interface for Higher Data Rates." The techniques discussed in those applications can greatly improve the transfer rate for large quantities of data in high speed data signals. However, the demands for ever increasing data rates, especially as related to video presentations, continue to grow. Even with other ongoing developments in data signal technology, there is still a need to strive for even faster transfer rates, improved communication link efficiencies, and more powerful communication links. Therefore, there is a continuing need to develop a new or improved transfer mechanism to increase data throughput between host and client devices.

SUMMARY

The above drawbacks, and others existent in the art, are addressed by embodiments of the invention, in which a new protocol and data transfer means, method, and mechanism have been developed for transferring data between a host device and a recipient client device at high data rates.

Embodiments of the invention are directed to a Mobile Display Digital Interface (MDDI) for transferring digital data at a high rate between a host device and a client device over a communication path, which employs a plurality or series of packet structures to form a communication protocol for communicating a pre-selected set of digital control and presentation data between the host and client devices. The signal communications protocol or link layer is used by a physical layer of host or client link controllers, receivers, or drivers. At least one link controller or driver residing in the host device is coupled to the client device through the communications path or link, and is configured to generate, transmit, and receive packets forming the communications protocol, and to form digital presentation data into one or more types of data packets. The interface provides for bi-directional transfer of information between the host and client, which can reside within a common overall housing or support structure.

The implementation is generally all digital in nature, with the exception of the differential drivers and receivers, which can be easily implemented on a digital CMOS chip; requires as few as six signals; and operates at almost any data rate that is convenient for the system designer. The simple physical and link layer protocol makes it easy to integrate, and this simplicity plus a hibernation state enables the portable system to have very low system power consumption.

To aid in use and acceptance, the interface will add very little to the cost of a device, will allow for consumption of very little power while able to power displays through the interface using standard battery voltages, and can accommodate devices having a pocketable form factor. The interface is scalable to support resolutions beyond HDTV, supports simultaneous stereo video and 7.1 audio to a display device, performs conditional updates to any screen region, and supports multiple data types in both directions.

In further aspects of embodiments of the invention, at least one client link controller, receiver, device, or driver is disposed in the client device and is coupled to the host device through the communications path or link. The client link controller is also configured to generate, transmit, and receive packets forming the communications protocol, and to form digital presentation data into one or more types of data packets.
Generally, the host or link controller employs a state machine for processing data packets used in commands or certain types of signal preparation and inquiry processing, but can use a slower general purpose processor to manipulate data and some of the less complex packets used in the communication protocol. The host controller comprises one or more differential line drivers, while the client receiver comprises one or more differential line receivers coupled to the communication path.

The packets are grouped together within media frames that are communicated between the host and client devices, the media frames having a pre-defined fixed length and containing a pre-determined number of packets having different, variable lengths. The packets each comprise a packet length field, one or more packet data fields, and a cyclic redundancy check field. A Sub-frame Header Packet is transferred or positioned at the beginning of transfers of other packets from the host link controller. One or more Video Stream type packets and Audio Stream type packets are used by the communications protocol to transfer video type data and audio type data, respectively, from the host to the client over a forward link for presentation to a client device user. One or more Reverse Link Encapsulation type packets are used by the communications protocol to transfer data from the client device to the host link controller. These transfers, in some embodiments, include the transfer of data from internal controllers having at least one MDDI device to internal video screens. Other embodiments include transfers to internal sound systems, and transfers from various input devices, including joysticks and complex keyboards, to internal host devices.

Filler type packets are generated by the host link controller to occupy periods of forward link transmission that do not have data. A plurality of other packets are used by the communications protocol to transfer video information. Such packets include Color Map, Bit Block Transfer, Bitmap Area Fill, Bitmap Pattern Fill, and Transparent Color Enable type packets. User-Defined Stream type packets are used by the communications protocol to transfer interface-user defined data. Keyboard Data and Pointing Device Data type packets are used by the communications protocol to transfer data to or from user input devices associated with said client device. A Link Shutdown type packet is used by the communications protocol to terminate the transfer of data in either direction over said communication path.

The communication path generally comprises or employs a cable having a series of four or more conductors and a shield. In addition, printed wires or conductors can be used, as desired, with some residing on flexible substrates.

The host link controller requests display capabilities information from the client device in order to determine what type of data and data rates said client is capable of accommodating through said interface. The client link controller communicates display or presentation capabilities to the host link controller using at least one Client Capability type packet. Multiple transfer modes are used by the communications protocol, each allowing the transfer of a different maximum number of bits of data in parallel over a given time period, with each mode selectable by negotiation between the host and client link controllers.
These transfer modes are dynamically adjustable during transfer of data, and the same mode need not be used on the reverse link as is used on the forward link.

In other aspects of some embodiments of the invention, the host device comprises a wireless communications device, such as a wireless telephone, a wireless PDA, or a portable computer having a wireless modem disposed therein. A typical client device comprises a portable video display such as a micro-display device, and/or a portable audio presentation system. Furthermore, the host may use storage means or elements to store presentation or multimedia data to be transferred for presentation to a client device user.

In still other aspects of some embodiments, the host device comprises a controller or communication link control device with drivers, as described below, residing within a portable electronic device such as a wireless communications device, for example a wireless telephone, a wireless PDA, or a portable computer. A typical client device in this configuration comprises a client circuit or integrated circuit or module coupled to the host and residing within the same device, and coupled to an internal video display, such as a high resolution screen for a mobile phone, and/or a portable audio presentation system, or, in the alternative, some type of input system or device.

BRIEF DESCRIPTION OF THE DRAWINGS

Further features and advantages, as well as the structure and operation of various embodiments of the invention, are described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements or processing steps, and the drawing in which an element first appears is indicated by the leftmost digit(s) in the reference number.

FIG. 1A illustrates a basic environment in which embodiments of the invention might operate, including the use of a micro-display device, or a projector, in conjunction with a portable computer or other data processing device.
FIG. 1B illustrates a basic environment in which embodiments of the invention might operate, including the use of a micro-display device or a projector, and audio presentation elements, used in conjunction with a wireless transceiver.
FIG. 1C illustrates a basic environment in which embodiments of the invention might operate, including the use of internal display or audio presentation devices used in a portable computer.
FIG. 1D illustrates a basic environment in which embodiments of the invention might operate, including the use of internal display or audio presentation elements used in a wireless transceiver.
FIG. 2 illustrates the overall concept of a Mobile Display Digital Interface with a host and client interconnection.
FIG. 3 illustrates the structure of a packet useful for realizing data transfers from a client device to a host device.
FIG. 4 illustrates the use of an MDDI link controller and the types of signals passed between a host and a client over the physical data link conductors for a Type 1 interface.
FIG. 5 illustrates the use of an MDDI link controller and the types of signals passed between a host and a client over the physical data link conductors for Type 2, 3, and 4 interfaces.
FIG. 6 illustrates the structure of frames and sub-frames used to implement the interface protocol.
FIG. 7 illustrates the general structure of packets used to implement the interface protocol.
FIG. 8 illustrates the format of a Sub-frame Header Packet.
FIG. 9 illustrates the format and contents of a Filler Packet.
FIG. 10 illustrates the format of a Video Stream Packet.
FIGS. 11A-11E illustrate the format and contents for the Video Data Format Descriptor used in FIG. 10.
FIG. 12 illustrates the use of packed and unpacked formats for data.
FIG. 13 illustrates the format of an Audio Stream Packet.
FIG. 14 illustrates the use of byte-aligned and packed PCM formats for data.
FIG. 15 illustrates the format of a User-Defined Stream Packet.
FIG. 16 illustrates the format of a Color Map Packet.
FIG. 17 illustrates the format of a Reverse Link Encapsulation Packet.
FIG. 18 illustrates the format of a Client Capability Packet.
FIG. 19 illustrates the format of a Keyboard Data Packet.
FIG. 20 illustrates the format of a Pointing Device Data Packet.
FIG. 21 illustrates the format of a Link Shutdown Packet.
FIG. 22 illustrates the format of a Client Request and Status Packet.
FIG. 23 illustrates the format of a Bit Block Transfer Packet.
FIG. 24 illustrates the format of a Bitmap Area Fill Packet.
FIG. 25 illustrates the format of a Bitmap Pattern Fill Packet.
FIG. 26 illustrates the format of a Communication Link Data Channel Packet.
FIG. 27 illustrates the format of an Interface Type Handoff Request Packet.
FIG. 28 illustrates the format of an Interface Type Acknowledge Packet.
FIG. 29 illustrates the format of a Perform Type Handoff Packet.
FIG. 30 illustrates the format of a Forward Audio Channel Enable Packet.
FIG. 31 illustrates the format of a Reverse Audio Sample Rate Packet.
FIG. 32 illustrates the format of a Digital Content Protection Overhead Packet.
FIG. 33 illustrates the format of a Transparent Color Enable Packet.
FIG. 34 illustrates the format of a Round Trip Delay Measurement Packet.
FIG. 35 illustrates the timing of events during the Round Trip Delay Measurement Packet.
FIG. 36 illustrates a sample implementation of a CRC generator and checker useful for implementing the invention.
FIG. 37A illustrates the timing of CRC signals for the apparatus of FIG. 36 when sending data packets.
FIG. 37B illustrates the timing of CRC signals for the apparatus of FIG. 36 when receiving data packets.
FIG. 38 illustrates processing steps for a typical service request with no contention.
FIG. 39 illustrates processing steps for a typical service request asserted after the link restart sequence has begun, contending with link start.
FIG. 40 illustrates how a data sequence can be transmitted using DATA-STB encoding.
FIG. 41 illustrates circuitry useful for generating the DATA and STB signals from input data at the host, and then recovering the data at the client.
FIG. 42 illustrates drivers and terminating resistors useful for implementing one embodiment.
FIG. 43 illustrates steps and signal levels employed by a client to secure service from the host and by the host to provide such service.
FIG. 44 illustrates relative spacing between transitions on the Data0, other data lines (DataX), and the strobe lines (Stb).
FIG. 45 illustrates the presence of a delay in response that can occur when a host disables the host driver after transferring a packet.
FIG. 46 illustrates the presence of a delay in response that can occur when a host enables the host driver to transfer a packet.
FIG. 47 illustrates leakage current analysis.
FIG. 48 illustrates switching characteristics and relative timing relationships for host and client output enable and disable times.
FIG. 49 illustrates a high level diagram of signal processing steps and conditions by which synchronization can be implemented using a state machine.
FIG. 50 illustrates typical amounts of delay encountered for signal processing on the forward and reverse paths in a system employing the MDDI.
FIG. 51 illustrates marginal round trip delay measurement.
FIG. 52A illustrates Reverse Link data rate changes.
FIG. 52B illustrates an example of advanced reverse data sampling.
FIG. 53 illustrates a graphical representation of values of the Reverse Rate Divisor versus forward link data rate.
FIGS. 54A and 54B illustrate steps undertaken in the operation of an interface.
FIG. 55 illustrates an overview of the interface apparatus processing packets.
FIG. 56 illustrates the format of a Forward Link Packet.
FIG. 57 illustrates typical values for propagation delay and skew in a Type 1 Link interface.
FIG. 58 illustrates Data, Stb, and Clock Recovery Timing on a Type 1 Link for exemplary signal processing through the interface.
FIG. 59 illustrates typical values for propagation delay and skew in Type 2, Type 3, or Type 4 Link interfaces.
FIGS. 60A, 60B, and 60C illustrate different possibilities for the timing of two data signals and MDDI_Stb with respect to each other, being ideal, early, and late, respectively.
FIG. 61 illustrates interface pin assignments for exemplary connectors used with Type 1/Type 2 interfaces.
FIGS. 62A and 62B illustrate possible MDDI_Data and MDDI_Stb waveforms for Type 1 and Type 2 Interfaces, respectively.
FIG. 63 illustrates a high level diagram of alternative signal processing steps and conditions by which synchronization can be implemented using a state machine.
FIG. 64 illustrates exemplary relative timing between a series of clock cycles and the timing of various reverse link packet bits and divisor values.
FIG. 65 illustrates exemplary error code transfer processing.
FIG. 66 illustrates apparatus useful for error code transfer processing.
FIG. 67A illustrates error code transfer processing for code overloading.
FIG. 67B illustrates error code transfer processing for code reception.
FIG. 68A illustrates processing steps for a host initiated wake-up.
FIG. 68B illustrates processing steps for a client initiated wake-up.
FIG. 68C illustrates processing steps for host and client initiated wake-up with contention.
FIG. 69 illustrates the format of a Request VCP Feature Packet.
FIG. 70 illustrates the format of a VCP Feature Reply Packet.
FIG. 71 illustrates the format of a VCP Feature Reply List.
FIG. 72 illustrates the format of a Set VCP Feature Packet.
FIG. 73 illustrates the format of a Request Valid Parameter Packet.
FIG. 74 illustrates the format of a Valid Parameter Reply Packet.
FIG. 75 illustrates the format of a Scaled Video Stream Capability Packet.
FIG. 76 illustrates the format of a Scaled Video Stream Setup Packet.
FIG. 77 illustrates the format of a Scaled Video Stream Acknowledgement Packet.
FIG. 78 illustrates the format of a Scaled Video Stream Packet.
FIG. 79 illustrates the format of a Request Specific Status Packet.
FIG. 80 illustrates the format of a Valid Status Reply List Packet.
FIG. 81A illustrates the format of a Packet Processing Delay Parameters Packet.
FIG. 81B illustrates the format of a Delay Parameters List item.
FIG. 82 illustrates the format of a Personal Display Capability Packet.
FIG. 83 illustrates elements in the Points of Field Curvature List.
FIG. 84A illustrates the format of a Client Error Report Packet.
FIG. 84B illustrates the format of an Error Report List item.
FIG. 85 illustrates the format of a Client Identification Packet.
FIG. 86 illustrates the format of an Alternate Display Capability Packet.
FIG. 87 illustrates the format of a Register Access Packet.
FIGS. 88A-88C illustrate the use of two display buffers to reduce visible artifacts.
FIG. 89 illustrates two buffers with display refresh faster than image transfer.
FIG. 90 illustrates two buffers with display refresh slower than image transfer.
FIG. 91 illustrates two buffers with display refresh much faster than image transfer.
FIG. 92 illustrates three buffers with display refresh faster than image transfer.
FIG. 93 illustrates three buffers with display refresh slower than image transfer.
FIG. 94 illustrates one buffer with display refresh faster than image transfer.
FIG. 95 illustrates host-client connection via daisy-chain and hub.
FIG. 96 illustrates client devices connected via a combination of hubs and daisy chains.
FIG. 97 illustrates a color map.

DETAILED DESCRIPTION

I. Overview

A general intent of the invention is to provide a Mobile Display Digital Interface (MDDI), as discussed below, which results in or provides a cost-effective, low power consumption transfer mechanism that enables high- or very-high-speed data transfer over a short-range communication link between a host device and a client device, such as a display element, using a "serial" type of data link or channel. This mechanism lends itself to implementation with miniature connectors and thin flexible cables which are especially useful in connecting internal (interior to a housing or support frame) display or output elements or devices, or input devices, to a central controller or communication element or device. In addition, this connection mechanism is very useful for connecting external display elements or devices such as wearable micro-displays (goggles or projectors) or other types of visual, audible, or tactile information presentation devices to portable computers, wireless communication devices, or entertainment devices.

Although the terms Mobile and Display are associated with the naming of the protocol, it is to be understood that this is for convenience only in terms of having a standard name easily understood by those skilled in the art working with the interface and protocol, as it will relate to a VESA standard and various applications of that standard. However, it will be readily understood after a review of the embodiments presented below that many non-mobile and non-display related applications will benefit from application of this protocol, resulting interface structure, or transfer mechanism, and the MDDI label is not intended to imply any limitations to the nature or usefulness of the invention or its various embodiments.

An advantage of embodiments of the invention is that a technique is provided for data transfer that is low in complexity, low in cost, has high reliability, fits well within the environment of use, and is very robust, while remaining very flexible.

Embodiments of the invention can be used in a variety of situations to communicate or transfer large quantities of data, generally for audio, video, or multimedia applications, at a high rate from a host or source device, where such data is generated, manipulated (such as for transfer to specific devices), or otherwise processed or stored, to a client or receiving device such as a video display or projection element, audio speakers, or other presentation device.
A typical application, which is discussed below, is the transfer of data from either a portable computer or a wireless telephone or modem to a visual display device such as a small video screen or a wearable micro-display appliance, such as in the form of goggles or helmets containing small projection lenses and screens, or from a host to a client device within such components. That is, from a processor or controller to an internal screen or other presentation element, as well as from various internal or external input devices employing a client to an internally located (collocated within the same device housing or support structure) host, or connected thereto by a cable or conductors.

The characteristics or attributes of the MDDI are such that they are independent of specific display or presentation technology. This is a highly flexible mechanism for transferring data at a high rate without regard to the internal structure of that data or the functional aspects of the data or commands it implements. This allows the timing of data packets being transferred to be adjusted to adapt to the idiosyncrasies of particular client devices, such as unique display requirements for certain devices, or to meet the requirements of combined audio and video for some A-V systems, or for certain input devices such as joysticks, touch pads, and so forth. The interface is very display element or client device agnostic, as long as the selected protocol is followed. In addition, the aggregate serial link data rate can vary over several orders of magnitude, which allows a communication system or host device designer to optimize the cost, power requirements, client device complexity, and client device update rates.

The data interface is presented primarily for use in transferring large amounts of high rate data over a "wired" signal link or small cable. However, some applications may take advantage of a wireless link as well, including optical based links, provided it is configured to use the same packet and data structures developed for the interface protocol, and can sustain the desired level of transfer at low enough power consumption or complexity to remain practical.

II. Environment

A typical application can be seen in FIGS. 1A and 1B where a portable or laptop computer 100 and wireless telephone or PDA device 102 are shown communicating data with display devices 104 and 106, respectively, along with audio reproduction systems 108 and 112. In addition, FIG. 1A shows potential connections to a larger display or screen 114 or an image projector 116, which are only shown in one figure for clarity, but are connectable to wireless device 102 as well. The wireless device can be currently receiving data or have previously stored a certain amount of multimedia type data in a memory element or device for later presentation for viewing and/or hearing by an end user of the wireless device. Since a typical wireless device is used for voice and simple text communications most of the time, it has a rather small display screen and simple audio system (speakers) for communicating information to the device 102 user.

Computer 100 has a much larger screen, but a still inadequate external sound system, and still falls short of other multimedia presentation devices such as a high definition television, or movie screens. Computer 100 is used for purposes of illustration, and other types of processors, interactive video games, or consumer electronics devices can also be used with the invention.
Computer 100 can employ, but is not limited to or by, a wireless modem or other built-in device for wireless communications, or be connected to such devices using a cable or wireless link, as desired.

This makes presentation of more complex or "rich" data less than a useful or enjoyable experience. Therefore, the industry is developing other mechanisms and devices to present the information to end users and provide a minimum level of desired enjoyment or positive experience.

As previously discussed above, several types of display devices have been or are currently being developed for presenting information to end users of device 100. For example, one or more companies have developed sets of wearable goggles that project an image in front of the eyes of a device user to present a visual display. When correctly positioned, such devices effectively "project" a virtual image, as perceived by a user's eyes, that is much larger than the element providing the visual output. That is, a very small projection element allows the eye(s) of the user to "see" images on a much larger scale than possible with typical LCD screens and the like. The use of larger virtual screen images also allows the use of much higher resolution images than possible with more limited LCD screen displays. Other display devices could include, but are not limited to, small LCD screens or various flat panel display elements, projection lenses and display drivers for projecting images on a surface, and so forth.

There may also be additional elements connected to or associated with the use of wireless device 102 or computer 100 for presenting an output to another user, or to another device which in turn transfers the signals elsewhere or stores them. For example, data may be stored in flash memory, in optical form, for example using a writeable CD media, or on magnetic media such as in a magnetic tape recorder and similar devices, for later use.

In addition, many wireless devices and computers now have built-in MP3 music decoding capabilities, as well as other advanced sound decoders and systems. Portable computers utilize CD and DVD playback capabilities as a general rule, and some have small dedicated flash memory readers for receiving pre-recorded audio files. The issue with having such capabilities is that digital music files promise a highly increased feature-rich experience, but only if the decoding and playback process can keep pace. The same holds true for digital video files.

To assist with sound reproduction, external speakers 114 are shown in FIG. 1A, which could also be accompanied by additional elements such as sub-woofers, or "surround-sound" speakers for front and rear sound projection. At the same time, speakers or earphones 108 are indicated as built in to the support frame or mechanism of micro-display device 106 of FIG. 1B. As would be known, other audio or sound reproduction elements can be used, including power amplification or sound shaping devices.

In any case, as discussed above, when one desires to transfer high quality or high resolution image data and high quality audio information or data signals from a data source to an end user over one or more communication links 110, a high data rate is required. That is, transfer link 110 is clearly a potential bottleneck in the communication of data as discussed earlier, and is limiting system performance, since current transfer mechanisms do not achieve the high data rates typically desired.
As discussed above, for example, for higher image resolutions such as 1024 by 1024 pixels, with color depths of 24-32 bits per pixel and data rates of 30 fps, the data rates can approach or exceed 755 Mbps. In addition, such images may be presented as part of a multimedia presentation which includes audio data and potentially additional signals dealing with interactive gaming or communications, or various commands, controls, or signals, further increasing the quantity of data and the data rate.

It is also clear that fewer cables or interconnections required for establishing a data link means that mobile devices associated with a display are easier to use, and more likely to be adopted by a larger user base. This is especially true where multiple devices are commonly used to establish a full audio-visual experience, and more especially as the quality level of the displays and audio output devices increases.

Another typical application related to many of the above and other improvements in video screens and other output or input devices can be seen in FIGS. 1C and 1D where a portable or laptop computer 130 and wireless telephone or PDA device 140 are shown communicating data with "internal" display devices 134 and 144, respectively, along with audio reproduction systems 136 and 146.

In FIGS. 1C and 1D, small cut-away sections of the overall electronic devices or products are used to show the location of one or more internal hosts and controllers in one portion of the device, with a generalized communication link, here 138 and 148, respectively, connecting them to the video display elements or screens having the corresponding clients, across a rotating joint of some known type used throughout the electronics industry today. One can see that the amount of data involved in these transfers requires a large number of conductors to comprise links 138 and 148. It is estimated that such communication links are approaching 90 or more conductors in order to satisfy today's growing needs for utilizing advanced color and graphical interfaces and display elements on such devices, because of the types of parallel or other known interface techniques available for transferring such data.

Unfortunately, the higher data rates exceed current technology available for transferring data, both in terms of the raw amount of data needing to be transferred per unit time and in terms of manufacturing reliable, cost-effective physical transfer mechanisms.

What is needed is a technique, a structure, means, or method for transferring data at higher rates over the data transfer link or communication path between presentation elements and the data source, which allows for consistently low(er) power, light weight, and as simple and economical a cabling structure as possible. Applicants have developed a new technique, or method and apparatus, to achieve these and other goals to allow an array of mobile, portable, or even fixed location devices to transfer data to desired displays, micro-displays, or audio transfer elements, at very high data rates, while maintaining a desired low power consumption and complexity.

III. High Rate Digital Data Interface System Architecture

In order to create and efficiently utilize a new device interface, a signal protocol and system architecture has been formulated that provides a very high data transfer rate using low power signals.
The protocol is based on a packet and common frame structure, or structures linked together to form a protocol for communicating a pre-selected set of data or data types along with a command or operational structure imposed on the interface.

A. Overview

The devices connected by or communicating over the MDDI link are called the host and client, with the client typically being a display device of some type, although other output and input devices are contemplated. Data from the host to the display travels in the forward direction (referred to as forward traffic or link), and data from the client to the host travels in the reverse direction (reverse traffic or link), as enabled by the host. This is illustrated in the basic configuration shown in FIG. 2. In FIG. 2, a host 202 is connected to a client 204 using a bi-directional communication channel 206 which is illustrated as comprising a forward link 208 and a reverse link 210. However, these channels are formed by a common set of conductors whose data transfer is effectively switched between the forward and reverse link operations. This allows for greatly reduced numbers of conductors, immediately addressing one of the many problems faced with current approaches to high speed data transfer in low power environments such as mobile electronic devices.

As discussed elsewhere, the host comprises one of several types of devices that can benefit from using the present invention. For example, host 202 could be a portable computer in the form of a handheld, laptop, or similar mobile computing device. It could also be a Personal Data Assistant (PDA), a paging device, or one of many wireless telephones or modems. Alternatively, host 202 could be a portable entertainment or presentation device such as a portable DVD or CD player, or a game playing device.

Furthermore, the host can reside as a host device or control element in a variety of other widely used or planned commercial products for which a high speed communication link with a client is desired. For example, a host could be used to transfer data at high rates from a video recording device to a storage-based client for improved response, or to a high resolution larger screen for presentations. An appliance such as a refrigerator that incorporates an onboard inventory or computing system and/or Bluetooth connections to other household devices can have improved display capabilities when operating in an internet or Bluetooth connected mode, or have reduced wiring needs for in-the-door displays (a client) and keypads or scanners (clients), while the electronic computer or control systems (host) reside elsewhere in the cabinet. In general, those skilled in the art will appreciate the wide variety of modern electronic devices and appliances that may benefit from the use of this interface, as well as the ability to retrofit older devices with higher data rate transport of information utilizing limited numbers of conductors available in either newly added or existing connectors or cables.

At the same time, client 204 could comprise a variety of devices useful for presenting information to an end user, or presenting information from a user to the host. For example, a micro-display incorporated in goggles or glasses, a projection device built into a hat or helmet, a small screen or even holographic element built into a vehicle, such as in a window or windshield, or various speaker, headphone, or sound systems for presenting high quality sound or music.
Other presentation devices include projectors or projection devices used to present information for meetings, or for movies and television images. Another example would be the use of touch pads or sensitive devices, voice recognition input devices, security scanners, and so forth that may be called upon to transfer a significant amount of information from a device or system user with little actual "input" other than touch or sound from the user. In addition, docking stations for computers and car kits or desk-top kits and holders for wireless telephones may act as interface devices to end users or to other devices and equipment, and employ either clients (output or input devices such as mice) or hosts to assist in the transfer of data, especially where high speed networks are involved.

However, those skilled in the art will readily recognize that the present invention is not limited to these devices, there being many other devices on the market, and proposed for use, that are intended to provide end users with high quality images and sound, either in terms of storage and transport or in terms of presentation at playback. The present invention is useful in increasing the data throughput between various elements or devices to accommodate the high data rates needed for realizing the desired user experience.

The inventive MDDI and communication signal protocol may be used to simplify the interconnect between a host processor, controller, or circuit component (for example) and a display within a device or device housing or structure (referred to as an internal mode) in order to reduce the cost or complexity and associated power and control requirements or constraints of these connections, and to improve reliability, not just for connection to or for external elements, devices, or equipment (referred to as an external mode).

The aggregate serial link data rate on each signal pair used by this interface structure can vary over many orders of magnitude, which allows a system or device designer to easily optimize cost, power, implementation complexity, and the display update rate for a given application or purpose. The attributes of MDDI are independent of display or other presentation device (target client) technology. The timing of data packets transferred through the interface can be easily adjusted to adapt to idiosyncrasies of particular clients such as display devices, sound systems, memory and control elements, or combined timing requirements of audio-video systems. While this makes it possible to have a system with very small power consumption, the various clients are not required to have frame buffers in order to make use of the MDDI protocol at least at some level.

B. Interface Types

The MDDI is contemplated as addressing at least four, and potentially more, somewhat distinct physical types of interfaces found in the communications and computer industries. These are labeled simply as Type 1, Type 2, Type 3, and Type 4, although other labels or designations may be applied by those skilled in the art depending upon the specific applications they are used for or the industry they are associated with.
For example, simple audio systems use fewer connections than more complex multimedia systems, and may reference features such as "channels" differently, and so forth.

The Type 1 interface is configured as a 6-wire, or other type of conductor or conductive element, interface which makes it suitable for mobile or wireless telephones, PDAs, electronic games, and portable media players, such as CD players or MP3 players, and similar devices or devices used with similar types of consumer electronic technology. In one embodiment, an interface can be configured as an 8-wire (conductor) interface which is more suitable for laptop, notebook, or desktop personal computers and similar devices or applications that do not require rapid data updates and which do not have a built-in MDDI link controller. This interface type is also distinguishable by the use of an additional two-wire Universal Serial Bus (USB) interface, which is extremely useful in accommodating existing operating systems or software support found on most personal computers.

Type 2, Type 3, and Type 4 interfaces are suitable for high performance clients or devices and use larger, more complex cabling with additional twisted-pair type conductors to provide the appropriate shielding and low loss transfers for data signals.

The Type 1 interface passes signals which can comprise display, audio, control, and limited signaling information, and is typically used for mobile clients or client devices that do not require high-resolution full-rate video data. A Type 1 interface can easily support SVGA resolution at 30 fps plus 5.1 channel audio, and in a minimum configuration might use only three wire pairs total, two pairs for data transmission and one pair for power transfer. This type of interface is primarily intended for devices, such as mobile wireless devices, where a USB host is typically not available within the device for connection and transfer of signals. In this configuration, the mobile wireless device is an MDDI host device, and acts as the "master" that controls the communication link from the host, which generally sends data to the client (forward traffic or link) for presentation, display, or playback.

In this interface, a host enables receipt of communication data at the host from the client (reverse traffic or link) by sending a special command or packet type to the client that allows it to take over the bus (link) for a specified duration and send data to the host as reverse packets. This is illustrated in FIG. 3, where a type of packet referred to as an encapsulation packet (discussed below) is used to accommodate the transfer of reverse packets over the transfer link, creating the reverse link. The time interval allocated for the host to poll the client for data is pre-determined by the host, and is based on the requirements of each specified application. This type of half-duplex bi-directional data transfer is especially advantageous where a USB port is not available for transfer of information or data from the client.

High-performance displays capable of HDTV type or similar high resolutions require data streams at rates around 1.5 Gbps in order to support full-motion video. The Type 2 interface supports high data rates by transmitting 2 bits in parallel, the Type 3 by transmitting 4 bits in parallel, and the Type 4 interface transfers 8 bits in parallel. Type 2 and Type 3 interfaces use the same cable and connector as Type 1, but can operate at twice and four times the data rate to support higher-performance video applications on portable devices. A Type 4 interface is suited for very high performance clients or displays and requires a slightly larger cable that contains additional twisted-pair data signals.
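The parallel-bit scaling just described can be illustrated with a small sketch that distributes consecutive bits of a byte across the available data pairs (1, 2, 4, or 8 for Types 1 through 4). The mapping shown, with bit 0 leaving first, is an illustrative assumption consistent with the LSB-first transmission order described later in this document, and is not the exact lane assignment mandated by the interface; all names are hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative bit striping across N data pairs (N = 1, 2, 4, or 8
     * for Types 1-4). Bit 0 of the byte is assumed to leave first; at
     * each bit time, N consecutive bits go out in parallel, one per pair. */
    static void stripe_byte(uint8_t byte, int num_pairs)
    {
        int bit_times = 8 / num_pairs;   /* bit times needed for one byte */
        for (int t = 0; t < bit_times; t++) {
            printf("bit time %d:", t);
            for (int p = 0; p < num_pairs; p++)
                printf(" pair%d=%d", p, (byte >> (t * num_pairs + p)) & 1);
            printf("\n");
        }
    }

    int main(void)
    {
        stripe_byte(0xA5, 2);   /* a Type 2 interface: 2 bits per bit time */
        return 0;
    }

With two pairs, a byte occupies four bit times on the link; with eight pairs it occupies one, which is where the doubling, quadrupling, and octupling of throughput across the interface Types comes from.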
The protocol used by the MDDI allows each Type 1, 2, 3, or 4 host to generally communicate with any Type 1, 2, 3, or 4 client by negotiating the highest data rate that can be used. The capabilities or available features of what can be referred to as the least capable device are used to set the performance of the link. As a rule, even for systems where the host and client are both capable of using Type 2, Type 3, or Type 4 interfaces, both begin operation as a Type 1 interface. The host then determines the capability of the target client, and negotiates a hand-off or reconfiguration operation to either Type 2, Type 3, or Type 4 mode, as appropriate for the particular application.

It is generally possible for the host to use the proper link-layer protocol (discussed further below) and step down, or again reconfigure operation, at generally any time to a slower mode to save power, or to step up to a faster mode to support higher speed transfers, such as for higher resolution display content. For example, a host may change interface types when the system switches from a power source such as a battery to AC power, or when the source of the display media switches to a lower or higher resolution format, or a combination of these or other conditions or events may be considered as a basis for changing an interface type, or transfer mode.

It is also possible for a system to communicate data using one mode in one direction and another mode in another direction. For example, a Type 4 interface mode could be used to transfer data to a display at a high rate, while a Type 1 mode is used when transferring data to a host device from peripheral devices such as a keyboard or a pointing device. It will be appreciated by one skilled in the art that hosts and clients may communicate outgoing data at different rates.

Often, users of the MDDI protocol may distinguish between an "external" mode and an "internal" mode. An external mode describes the use of the protocol and interface to connect a host in one device to a client outside of that device that is up to about 2 meters or so from the host. In this situation, the host may also send power to the external client so that both devices can easily operate in a mobile environment. An internal mode describes when the host is connected to a client contained inside the same device, such as within a common housing or support frame or structure of some kind. An example would be applications within a wireless phone or other wireless device, or a portable computer or gaming device, where the client is a display or display driver, or an input device such as a keypad or touch-pad, or a sound system, and the host is a central controller, graphics engine, or CPU element. Since a client is located much closer to the host in internal mode applications as opposed to external mode applications, there are generally no requirements discussed for the power connection to the client in such configurations.

C. Physical Interface Structure

The general disposition of a device or link controller for establishing communications between host and client devices is shown in FIGS. 4 and 5.
In FIGS. 4 and 5, an MDDI link controller 402 and 502 is shown installed in a host device 202 and an MDDI link controller 404 and 504 is shown installed in a client device 204. As before, host 202 is connected to a client 204 using a bi-directional communication channel 406 comprising a series of conductors. As discussed below, both the host and client link controllers can be manufactured as an integrated circuit using a single circuit design that can be set, adjusted, or programmed to respond as either a host controller (driver) or a client controller (receiver). This provides for lower costs due to larger scale manufacturing of a single circuit device.

In FIG. 5, an MDDI link controller 502 is shown installed in a host device 202' and an MDDI link controller 504 is shown installed in a client device 204'. As before, host 202' is connected to a client 204' using a bi-directional communication channel 506 comprising a series of conductors. As discussed before, both the host and client link controllers can be manufactured using a single circuit design.

Signals passed between a host and a client, such as a display device, over the MDDI link, or the physical conductors used, are also illustrated in FIGS. 4 and 5. As seen in FIGS. 4 and 5, the primary path or mechanism for transferring data through the MDDI uses data signals labeled as MDDI_Data0+/- and MDDI_Stb+/-. Each of these is a low voltage data signal that is transferred over a differential pair of wires in a cable. There is only one transition on either the MDDI_Data0 pair or the MDDI_Stb pair for each bit sent over the interface. This is a voltage based transfer mechanism, not current based, so static current consumption is nearly zero. The host drives the MDDI_Stb signals to the client display.

While data can flow in both the forward and reverse directions over the MDDI_Data pairs, that is, it is a bi-directional transfer path, the host is the master or controller of the data link. The MDDI_Data0 and MDDI_Stb signal paths are operated in a differential mode to maximize noise immunity. The data rate for signals on these lines is determined by the rate of the clock sent by the host, and is variable over a range of about 1 kbps up to 400 Mbps or more.
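The one-transition-per-bit behavior of MDDI_Data0 and MDDI_Stb described above is the DATA-STB encoding of FIGS. 40 and 41: the strobe toggles exactly when a new data bit repeats the previous one, so one and only one of the two lines changes per bit, and the client can recover a bit clock as Data XOR Stb. The following is an illustrative software model of that behavior, not circuitry taken from the specification; all names are hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal DATA-STB model: for every bit sent, exactly one transition
     * occurs on either the Data line or the Strobe line. Stb toggles
     * whenever the new data bit equals the previous one; otherwise Data
     * transitions and Stb holds its level. */
    int main(void)
    {
        const uint8_t bits[] = {1, 1, 0, 1, 0, 0, 0, 1};
        uint8_t data = 0, stb = 0;

        for (unsigned i = 0; i < sizeof bits; i++) {
            if (bits[i] == data)
                stb ^= 1;        /* no Data transition, so Stb toggles */
            else
                data = bits[i];  /* Data transitions, Stb holds        */
            /* data ^ stb flips once per bit: a recoverable bit clock. */
            printf("bit %u: Data=%u Stb=%u Data^Stb=%u\n",
                   i, data, stb, data ^ stb);
        }
        return 0;
    }

Because exactly one of the two lines flips per bit time, the XOR term toggles every bit, which is what lets the client recover timing without analog functions or PLLs, consistent with the link controller description below.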
The Type 2 interface contains one additional data pair or conductors or paths beyond that of the Type 1, referred to as MDDI_Data1+/-. The Type 3 interface contains two additional data pairs or signal paths beyond that of the Type 2 interface, referred to as MDDI_Data2+/- and MDDI_Data3+/-. The Type 4 interface contains four more data pairs or signal paths beyond that of the Type 3 interface, referred to as MDDI_Data4+/-, MDDI_Data5+/-, MDDI_Data6+/-, and MDDI_Data7+/-, respectively. In each of the above interface configurations, a host can send power to the client or display using the wire pair or signals designated as HOST_Pwr and HOST_Gnd. As discussed further below, power transfer can also be accommodated, if desired, in some configurations on the MDDI_Data4+/-, MDDI_Data5+/-, MDDI_Data6+/-, or MDDI_Data7+/- conductors when an interface "Type" is being used that employs fewer conductors than are available or present for the other modes. This power transfer is generally employed for external modes, there generally being no need for it in internal modes, although some applications may differ.

A summary of the signals passed between the host and client (display) over the MDDI link for the various modes is illustrated in Table I, below, in accordance with the interface type.

Table I

Type 1          Type 2          Type 3          Type 4
HOST_Pwr/Gnd    HOST_Pwr/Gnd    HOST_Pwr/Gnd    HOST_Pwr/Gnd
MDDI_Stb+/-     MDDI_Stb+/-     MDDI_Stb+/-     MDDI_Stb+/-
MDDI_Data0+/-   MDDI_Data0+/-   MDDI_Data0+/-   MDDI_Data0+/-
                MDDI_Data1+/-   MDDI_Data1+/-   MDDI_Data1+/-
                                MDDI_Data2+/-   MDDI_Data2+/-
                                MDDI_Data3+/-   MDDI_Data3+/-
Optional Pwr    Optional Pwr    Optional Pwr    MDDI_Data4+/-
Optional Pwr    Optional Pwr    Optional Pwr    MDDI_Data5+/-
Optional Pwr    Optional Pwr    Optional Pwr    MDDI_Data6+/-
Optional Pwr    Optional Pwr    Optional Pwr    MDDI_Data7+/-

Also note that the HOST_Pwr/Gnd connections for transfer from the host are provided generally for external modes. Internal applications or modes of operation generally have clients that draw power directly from other internal resources, and do not use MDDI to control power distribution, as would be apparent to one skilled in the art, so such distribution is not discussed in further detail here. However, it is certainly possible to allow power to be distributed through the MDDI to allow for certain kinds of power control, synchronization, or interconnection convenience, for example, as would be understood by one skilled in the art.

Cabling generally used to implement the above structure and operation is nominally on the order of 1.5 meters in length, generally 2 meters or less, and contains three twisted pairs of conductors, each in turn being multi-strand 30 AWG wire. A foil shield covering is wrapped or otherwise formed above the three twisted pairs, along with an additional drain wire. The twisted pairs and shield drain conductor terminate in the client connector with the shield connected to the shield for the client, and there is an insulating layer covering the entire cable, as would be well known in the art. The wires are paired as: HOST_Gnd with HOST_Pwr; MDDI_Stb+ with MDDI_Stb-; MDDI_Data0+ with MDDI_Data0-; MDDI_Data1+ with MDDI_Data1-; and so forth. However, a variety of conductors and cabling can be used, as would be understood in the art, to implement the embodiments of the invention, depending upon specific applications. For example, heavier outside coatings or even metallic layers may be used to protect the cable in some applications, while thinner, flatter conductive ribbon type structures may be well suited to other applications.

D. Data Types and Rates

To achieve a useful interface for a full range of user experiences and applications, the Mobile Digital Data Interface (MDDI) provides support for a variety of clients and display information, audio transducers, keyboards, pointing devices, and many other input or output devices that might be integrated into or working in concert with a mobile display device, along with control information, and combinations thereof. The MDDI is designed to be able to accommodate a variety of potential types of streams of data traversing between the host and client in either the forward or reverse link directions using a minimum number of cables or conductors. Both isochronous streams and asynchronous streams (updates) are supported. Many combinations of data types are possible as long as the aggregate data rate is less than or equal to the maximum desired MDDI link rate, which is limited by the maximum serial rate and the number of data pairs employed.
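The aggregate-rate constraint just stated can be sketched in a few lines: link capacity is taken here as the per-pair serial rate (up to about 400 Mbps per the discussion above) times the number of data pairs, and a set of streams is accepted only if its sum fits. The stream figures are those listed in Table II below; the function and variable names are illustrative assumptions rather than anything defined by the protocol.

    #include <stdio.h>

    /* Checks a set of candidate stream rates against the link capacity,
     * per the rule that the aggregate rate must not exceed the maximum
     * serial rate times the number of data pairs (1, 2, 4, or 8). */
    int main(void)
    {
        double serial_rate_mbps = 400.0;       /* per-pair rate, from text */
        int    data_pairs       = 1;           /* Type 1 interface         */
        double link_capacity    = serial_rate_mbps * data_pairs;

        double streams[] = {124.5, 1.4, 115.2, 1.0};  /* Table II, Mbps    */
        double total = 0.0;
        for (unsigned i = 0; i < sizeof streams / sizeof streams[0]; i++)
            total += streams[i];

        printf("aggregate %.1f Mbps vs capacity %.1f Mbps -> %s\n",
               total, link_capacity,
               total <= link_capacity ? "fits" : "does not fit");
        return 0;
    }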
These could include, but are not limited to, those items listed in Tables II and III below.

Table II

isochronous video data                   720x480, 12 bit, 30 f/s            ~124.5 Mbps
isochronous stereo audio data            44.1 kHz, 16 bit, stereo           ~1.4 Mbps
asynchronous graphics data               800x600, 12 bit, 10 f/s, stereo    ~115.2 Mbps
asynchronous control                     minimum                            << 1.0 Mbps

Table III

isochronous voice data                   8 kHz, 8 bit                       << 1.0 Mbps
isochronous video data                   640x480, 12 bit, 24 f/s            ~88.5 Mbps
asynchronous status, user input, etc.    minimum                            << 1.0 Mbps

The interface is not fixed but extensible so that it can support the transfer of a variety of information "types," including user-defined data, for future system flexibility. Specific examples of data to be accommodated are: full-motion video, either in the form of full or partial screen bitmap fields or compressed video; static bitmaps at low rates to conserve power and reduce implementation costs; PCM or compressed audio data at a variety of resolutions or rates; pointing device position and selection; and user-definable data for capabilities yet to be defined. Such data may also be transferred along with control or status information to detect device capability or set operating parameters.

Embodiments of the invention advance the art for use in data transfers that include, but are not limited to: watching a movie (video display and audio); using a personal computer with limited personal viewing (graphics display, sometimes combined with video and audio); playing a video game on a PC, console, or personal device (motion graphics display, or synthetic video and audio); "surfing" the Internet, using devices in the form of a video phone (bi-directional low-rate video and audio), a camera for still digital pictures, or a camcorder for capturing digital video images; using a phone, computer, or PDA docked with a projector to give a presentation, or docked with a desktop docking station connected to a video monitor, keyboard, and mouse; and for productivity enhancement or entertainment use with cell phones, smart phones, or PDAs, including wireless pointing devices and keyboard data.

The high speed data interface as discussed below is presented in terms of providing large amounts of A-V type data over a communication or transfer link which is generally configured as a wire-line or cable type link. However, it will be readily apparent that the signal structure, protocols, timing, or transfer mechanism could be adjusted to provide a link in the form of an optical or wireless media, if it can sustain the desired level of data transfer.

The MDDI signals use a concept known as the Common Frame Rate (CFR) for the basic signal protocol or structure. The idea behind using a Common Frame Rate is to provide a synchronization pulse for simultaneous isochronous data streams. A client device can use this common frame rate as a time reference. A low CF rate increases channel efficiency by decreasing the overhead used to transmit the sub-frame header. On the other hand, a high CF rate decreases the latency and allows a smaller elastic data buffer for audio samples. The CF rate of the present inventive interface is dynamically programmable and may be set at one of many values that are appropriate for the isochronous streams used in a particular application. That is, the CF value is selected to best suit the given client and host configuration, as desired.
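Since the bytes needed per sub-frame are just the stream rate divided by eight times the CF rate, the entries tabulated in Table IV below can be reproduced with a few lines of arithmetic. The 300 Hz CF rate used here matches the Karaoke example later in this section; the helper name is an illustrative assumption.

    #include <stdio.h>

    /* bytes per sub-frame = stream rate (bps) / (8 * CF rate (Hz)),
     * assuming a 300 Hz Common Frame rate as in the Karaoke example. */
    static double bytes_per_subframe(double rate_bps, double cf_rate_hz)
    {
        return rate_bps / (8.0 * cf_rate_hz);
    }

    int main(void)
    {
        const double cf = 300.0;               /* assumed CF rate, Hz      */
        double game  = 720 * 480 * 24 * 30.0;  /* 248.832 Mbps, Table IV   */
        double audio = 44100 * 16 * 2.0;       /* CD audio, 1.4112 Mbps    */
        printf("computer game: %.0f bytes/sub-frame\n",
               bytes_per_subframe(game, cf));  /* prints 103680            */
        printf("CD audio     : %.0f bytes/sub-frame\n",
               bytes_per_subframe(audio, cf)); /* prints 588               */
        return 0;
    }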
The number of bytes generally required per sub-frame, which is adjustable or programmable, for isochronous data streams that are most likely to be used with an application, such as for a video or micro-display, is shown in Table IV.

Table IV

                   X     Y     Bits   Frame or sample rate   Channels   Rate (Mbps)   Bytes per sub-frame
Computer Game      720   480   24     30                     1          248.832       103680
Computer Graphics  800   600   24     10                     1          115.200       48000
Video              640   480   12     29.97 or 30            1          221.184       92160
CD Audio           1     1     16     44100                  2          1.4112        588
Voice              1     1     8      8000                   1          0.064         26-2/3

Fractional counts of bytes per sub-frame are easily obtained using a simple programmable M/N counter structure. For example, a count of 26-2/3 bytes per CF is implemented by transferring 2 sub-frames of 27 bytes each followed by one sub-frame of 26 bytes. A smaller CF rate may be selected to produce an integer number of bytes per sub-frame. However, generally speaking, implementing a simple M/N counter in hardware should require less area within an integrated circuit chip or electronic module used to implement part or all of embodiments of the invention than the area needed for a larger audio sample FIFO buffer.

An exemplary application that illustrates the impact of different data transfer rates and data types is a Karaoke system. Karaoke is a system in which an end user, or users, sings along with a music video program. Lyrics of the song are displayed somewhere on, typically at the bottom of, a screen so the user knows the words to be sung, and roughly the timing of the song. This application requires a video display with infrequent graphics updates, and mixing of the user's voice, or voices, with a stereo audio stream.

If one assumes a common frame rate of 300 Hz, then each sub-frame will consist of: 92,160 bytes of video content and 588 bytes of audio content (based on 147 16-bit samples, in stereo) over the forward link to the client display device, and an average of 26.67 (26-2/3) bytes of voice sent back from a microphone to the mobile Karaoke machine. Asynchronous packets are sent between the host and the display, possibly head mounted. This includes at most 768 bytes of graphics data (quarter-screen height), and less than about 200 bytes for miscellaneous control and status commands.

Table V shows how data is allocated within a sub-frame for the Karaoke example. The total rate being used is selected to be about 279 Mbps. A slightly higher rate of 280 Mbps allows about another 400 bytes of data per sub-frame to be transferred, which allows the use of occasional control and status messages.

Table V

                                                    Packet overhead bytes         Data bytes per sub-frame
Music Video at 640 x 480 pixels and 30 fps          2 * 28 = 56                   92160
Lyric Text at 640 x 120 pixels and 1 fps,
  updated in 10 sub-frames, 1/30 sec.               28                            768
CD Audio at 44,100 sps, stereo, 16-bit              2 * 16 = 32                   588
Voice at 8,000 sps, mono, 8-bit                     28+8+8+(4*16)+(3*27) = 125    26.67
Sub-frame Header                                    22
Total Bytes/CF                                      263                           115815
Total Rate (Mbps)                                   (263+115815)*8*300 = 278.5872

III. (Continued) High Rate Digital Data Interface System Architecture

E. Link Layer

Data transferred using the MDDI high-speed serial data signals consists of a stream of time-multiplexed packets that are linked one after the other. Even when a transmitting device has no data to send, an MDDI link controller generally automatically sends filler packets, thus maintaining a stream of packets. The use of a simple packet structure ensures reliable isochronous timing for video and audio signals or data streams.
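As a minimal sketch of the M/N counter structure described in Section D above, under illustrative naming: an accumulator adds M = 80 on each sub-frame and sends acc/N bytes with N = 3, which averages exactly 26-2/3 bytes per sub-frame and cycles through 26, 27, 27 (a rotation of the 27, 27, 26 pattern in the text).

    #include <stdio.h>

    /* M/N counter for fractional byte counts per sub-frame. For 26-2/3
     * bytes per sub-frame, M = 80 and N = 3; the remainder carried in
     * the accumulator spreads the fractional byte over three sub-frames. */
    int main(void)
    {
        const int M = 80, N = 3;   /* 80/3 = 26-2/3 bytes per sub-frame */
        int acc = 0;
        for (int subframe = 0; subframe < 6; subframe++) {
            acc += M;
            int bytes = acc / N;   /* bytes to send this sub-frame      */
            acc -= bytes * N;      /* keep the fractional remainder     */
            printf("sub-frame %d: %d bytes\n", subframe, bytes);
        }
        return 0;
    }

A small accumulator like this replaces the larger audio sample FIFO that would otherwise be needed, which is the hardware-area advantage noted above.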
Groups of packets are contained within signal elements or structures referred to as sub-frames, and groups of sub-frames are contained within signal elements or structures referred to as a media-frame. A sub-frame contains one or more packets, depending on their respective size and data transfer uses, and a media-frame contains one or more sub-frames. The largest sub-frame provided by the protocol employed by the embodiments presented here is on the order of 2^32-1, or 4,294,967,295, bytes, and the largest media-frame size then becomes on the order of 2^16-1, or 65,535, sub-frames.

A special sub-frame header packet contains a unique identifier that appears at the beginning of each sub-frame, as is discussed below. That identifier is also used for acquiring the frame timing at the client device when communication between the host and client is initiated. Link timing acquisition is discussed in more detail below.

Typically, a display screen is updated once per media-frame when full-motion video is being displayed. The display frame rate is the same as the media-frame rate. The link protocol supports full-motion video over an entire display, or just a small region of full-motion video content surrounded by a static image, depending on the desired application. In some low-power mobile applications, such as viewing web pages or email, the display screen may only need to be updated occasionally. In those situations, it is advantageous to transmit a single sub-frame and then shut down or inactivate the link to minimize power consumption. The interface also supports effects such as stereo vision, and handles graphics primitives.

Sub-frames allow a system to enable the transmission of high-priority packets on a periodic basis. This allows simultaneous isochronous streams to co-exist with a minimal amount of data buffering. This is one advantage embodiments provide to the display process, allowing multiple data streams (high speed communication of video, voice, control, status, pointing device data, etc.) to essentially share a common channel. It transfers information using relatively few signals. It also enables display-technology-specific actions to exist, such as horizontal sync pulses and blanking intervals for a CRT monitor, or other client-technology-specific actions.

F. Link Controller

The MDDI link controller shown in FIGS. 4 and 5 is manufactured or assembled to be a completely digital implementation, with the exception of the differential line receivers which are used to receive MDDI data and strobe signals. However, even the differential line drivers and receivers can be implemented in the same digital integrated circuits with the link controller, such as when making a CMOS type IC. No analog functions or phase lock loops (PLLs) are required for bit recovery or to implement the hardware for the link controller. The host and client link controllers contain very similar functions, with the exception of the client interface, which contains a state machine for link synchronization. Therefore, the embodiments of the invention allow the practical advantage of being able to create a single controller design or circuit that can be configured as either a host or client, which can reduce manufacturing costs for the link controllers as a whole.

IV. Interface Link Protocol
A. Frame Structure

The signal protocol or frame structure used to implement the forward link communication for packet transfer is illustrated in FIG. 6. As shown in FIG. 6, information or digital data is grouped into elements known as packets. Multiple packets are in turn grouped together to form what is referred to as a "sub-frame," and multiple sub-frames are in turn grouped together to form a "media" frame. To control the formation of frames and transfer of sub-frames, each sub-frame begins with a specially predefined packet referred to as a Sub-frame Header Packet (SHP).

The host device selects the data rate to be used for a given transfer. This rate can be changed dynamically by the host device based on both the maximum transfer capability of the host, or the data being retrieved from a source by the host, and the maximum capability of the client, or other device the data is being transferred to.

A recipient client device designed for, or capable of, working with the MDDI or inventive signal protocol is able to be queried by the host to determine the maximum, or current maximum, data transfer rate it can use, or a default slower minimum rate may be used, as well as the useable data types and features supported. This information could be transferred using a Client Capability Packet, as discussed further below. The client display device is capable of transferring data or communicating with other devices using the interface at a pre-selected minimum data rate or within a minimum data rate range, and the host will perform a query using a data rate within this range to determine the full capabilities of the client devices.

Other status information defining the nature of the bitmap and video frame-rate capabilities of the client can be transferred in a status packet to the host so that the host can configure the interface to be as efficient or optimal as practical, or desired within any system constraints.

The host sends filler packets when there are no (more) data packets to be transferred in the present sub-frame, or when the host cannot transfer at a rate sufficient to keep pace with the data transmission rate chosen for the forward link. Since each sub-frame begins with a sub-frame header packet, the end of the previous sub-frame contains a packet (most likely a filler packet) that exactly fills the previous sub-frame. In the case of a lack of room for data bearing packets per se, a filler packet will most likely be the last packet in a sub-frame, or at the end of a next previous sub-frame and before a sub-frame header packet. It is the task of the control operations in a host device to ensure that there is sufficient space remaining in a sub-frame for each packet to be transmitted within that sub-frame. At the same time, once a host device initiates the sending of a data packet, the host must be able to successfully complete a packet of that size within a frame without incurring a data under-run condition.

In one aspect of embodiments, sub-frame transmission has two modes. One mode is a periodic sub-frame mode, or periodic timing epochs, used to transmit live video and audio streams. In this mode, the sub-frame length is defined as being non-zero. The second mode is an asynchronous or non-periodic mode in which frames are used to provide bitmap data to a client only when new information is available. This mode is defined by setting the sub-frame length to zero in the Sub-frame Header Packet.
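The rule above, that the last packet exactly fills its sub-frame, reduces to simple arithmetic once the filler packet overhead is fixed; the 6-byte minimum (2-byte length, 2-byte type, 2-byte CRC) comes from the Filler Packet description later in this document. The function name and error convention below are illustrative assumptions, not part of the protocol.

    #include <stdint.h>
    #include <stdio.h>

    /* How many zero-valued filler bytes pad out the remaining space in a
     * sub-frame, given that the smallest filler packet is 6 bytes
     * (packet length + packet type + CRC, with an empty Filler Bytes
     * field). */
    static int filler_bytes_needed(uint32_t space_remaining)
    {
        const uint32_t overhead = 6;   /* length + type + CRC fields    */
        if (space_remaining < overhead)
            return -1;                 /* host control operations must
                                          plan packets so this never
                                          happens                       */
        return (int)(space_remaining - overhead);
    }

    int main(void)
    {
        printf("%d filler bytes fill 25 bytes of remaining space\n",
               filler_bytes_needed(25));   /* prints 19 */
        return 0;
    }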
When using the periodic mode, sub-frame packet reception may commence when the client has synchronized to the forward link frame structure. This corresponds to the "in sync" states defined according to the state diagram discussed below with respect to FIG. 49 or FIG. 63. In the asynchronous non-periodic sub-frame mode, reception commences after the first Sub-frame Header Packet is received.

B. Overall Packet Structure

The format or structure of packets used to formulate the communication or signal protocol, or method or means for transferring data, implemented by the embodiments is presented below, keeping in mind that the interface is extensible and additional packet structures can be added as desired. The packets are labeled as, or divided into, different "packet types" in terms of their function in the interface, that is, the commands, information, values, or data they transfer or are associated with. Therefore, each packet type denotes a pre-defined packet structure for a given packet which is used in manipulating the packets and data being transferred. As will be readily apparent, the packets may have pre-selected lengths or have variable or dynamically changeable lengths depending on their respective functions. The packets could also bear differing names, although the same function is still realized, as can occur when protocols are changed during acceptance into a standard. The bytes or byte values used in the various packets are configured as multi-bit (8- or 16-bit) unsigned integers. A summary of the packets being employed, along with their "type" designations, listed in type order, is shown in Tables VI-1 through VI-4.

Each table represents a general "type" of packet within the overall packet structure for ease in illustration and understanding. There is no limitation or other impact implied or being expressed for the invention by these groupings, and the packets can be organized in many other fashions as desired.
The direction in which transfer of a packet is considered valid is also noted.

Table VI-1

Packet Name                                  Packet Type          Valid in Forward   Valid in Reverse
Sub-frame Header Packet                      15359                x
Filler Packet                                0                    x                  x
Reverse Link Encapsulation Packet            65                   x
Link Shutdown Packet                         69                   x
Interface Type Handoff Request Packet        75                   x
Interface Type Acknowledge Packet            76                                      x
Perform Type Handoff Packet                  77                   x
Round Trip Delay Measurement Packet          82                   x
Forward Link Skew Calibration Packet         83                   x

Table VI-2

Packet Name                                  Packet Type          Valid in Forward   Valid in Reverse
Video Stream Packet                          16                   x                  x
Audio Stream Packet                          32                   x                  x
Reserved Stream Packets                      1-15, 18-31, 33-55   x                  x
User-Defined Stream Packets                  56-63                x                  x
Color Map Packet                             64                   x                  x
Forward Audio Channel Enable Packet          78                   x
Reverse Audio Sample Rate Packet             79                   x
Transparent Color Enable Packet              81                   x

Table VI-3

Packet Name                                  Packet Type          Valid in Forward   Valid in Reverse
Client Capability Packet                     66                                      x
Keyboard Data Packet                         67                   x                  x
Pointing Device Data Packet                  68                   x                  x
Client Request and Status Packet             70                                      x
Digital Content Protection Overhead Packet   80                   x                  x
Request VCP Feature Packet                   128                  x
VCP Feature Reply Packet                     129                                     x
Set VCP Feature Packet                       130                  x
Request Valid Parameter Packet               131                  x
Valid Parameter Reply Packet                 132                                     x
Request Specific Status Packet               138                  x
Valid Status Reply List Packet               139                                     x
Packet Processing Delay Parameters Packet    140                                     x
Personal Display Capability Packet           141                                     x
Client Error Report Packet                   142                                     x
Scaled Video Stream Capability Packet        143                                     x
Client Identification Packet                 144                                     x
Alternate Display Capability Packet          145                                     x
Register Access Packet                       146                  x                  x

Table VI-4

Packet Name                                  Packet Type          Valid in Forward   Valid in Reverse
Bit Block Transfer Packet                    71                   x
Bitmap Area Fill Packet                      72                   x
Bitmap Pattern Fill Packet                   73                   x
Read Frame Buffer Packet                     74                   x
Scaled Video Stream Capability Packet        143                                     x
Scaled Video Stream Setup Packet             136                  x
Scaled Video Stream Acknowledgement Packet   137                                     x
Scaled Video Stream Packet                   18                   x

Something that is clear from other discussions within this text is that the Reverse Link Encapsulation Packet, Client Capability Packet, and Client Request and Status Packet are each considered very important to, or even required in, many embodiments of communication interfaces for External Mode operation, while they can be, or are more likely to be, considered optional for Internal Mode operation. This creates yet another type of MDDI protocol which allows communication of data at very high speeds with a reduced set of communication packets, and a corresponding simplification of control and timing.

Packets have a common basic structure or overall set of minimum fields comprising a Packet Length field, a Packet Type field, Data Bytes field(s), and a CRC field, which is illustrated in FIG. 7. As shown in FIG. 7, the Packet Length field contains information, in the form of a multi-bit or -byte value, that specifies the total number of bits in the packet, or its length between the packet length field and the CRC field. In one embodiment, the packet length field contains a 16-bit or 2-byte wide unsigned integer that specifies the packet length. The Packet Type field is another multi-bit field which designates the type of information that is contained within the packet. In an exemplary embodiment, this is a 16-bit or 2-byte wide value, in the form of a 16-bit unsigned integer, and specifies such data types as display capabilities, handoff, video or audio streams, status, and so forth.

A third field is the Data Bytes field, which contains the bits or data being transferred or sent between the host and client devices as part of that packet. The format of the data is defined specifically for each packet type according to the specific type of data being transferred, and may be separated into a series of additional fields, each with its own format requirements. That is, each packet type will have a defined format for this portion or field.
The last field is the CRC field, which contains the results of a 16-bit cyclic redundancy check calculated over the Data Bytes, Packet Type, and Packet Length fields, which is used to confirm the integrity of the information in the packet. In other words, it is calculated over the entire packet except for the CRC field itself. The client generally keeps a total count of the CRC errors detected, and reports this count back to the host in the Client Request and Status Packet (see further below).

Generally, these field widths and organization are designed to keep 2-byte fields aligned on an even byte boundary, and 4-byte fields aligned on 4-byte boundaries. This allows packet structures to be easily built in a main memory space of, or associated with, a host and a client without violating the data-type alignment rules encountered for most or typically used processors or control circuits.

During transfer of the packets, fields are transmitted starting with the Least Significant Bit (LSB) first, and ending with the Most Significant Bit (MSB) transmitted last. Parameters that are more than one byte in length are transmitted using the least significant byte first, which results in the same bit transmission pattern being used for a parameter greater than 8 bits in length as is used for a shorter parameter where the LSB is transmitted first. The data fields of each packet are generally transmitted in the order that they are defined in the subsequent sections below, with the first field listed being transmitted first, and the last field described being transmitted last. The data on the MDDI_Data0 signal path is aligned with bit '0' of bytes transmitted on the interface in any of the modes, Type 1, Type 2, Type 3, or Type 4.

When manipulating data for displays, the data for arrays of pixels are transmitted by rows first, then columns, as is traditionally done in the electronics arts. In other words, all pixels that appear in the same row in a bitmap are transmitted in order, with the left-most pixel transmitted first and the right-most pixel transmitted last. After the right-most pixel of a row is transmitted, the next pixel in the sequence is the left-most pixel of the following row. Rows of pixels are generally transmitted in order from top to bottom for most displays, although other configurations can be accommodated as needed. Furthermore, in handling bitmaps, the conventional approach, which is followed here, is to define a reference point by labeling the upper-left corner of a bitmap as location or offset "0,0." The X and Y coordinates used to define or determine a position in the bitmap increase in value as one approaches the right and bottom of the bitmap, respectively. The first row and first column (upper left corner of an image) start with an index value of zero. The magnitude of the X coordinate increases toward the right side of the image, and the magnitude of the Y coordinate increases toward the bottom of the image as viewed by the user of the display.

A display window is the visible portion of a bitmap, the portion of the pixels in the bitmap that can be seen by the user on the physical display medium. It is often the case that the display window and the bitmap are the same size. The upper-left corner of a display window always displays bitmap pixel location '0,0'. The width of the display window corresponds to the X axis of the bitmap, and the display window width for this embodiment is less than or equal to the width of the corresponding bitmap.
A display window is the visible portion of a bitmap, that is, the portion of the pixels in the bitmap that can be seen by the user on the physical display medium. It is often the case that the display window and the bitmap are the same size. The upper-left corner of a display window always displays bitmap pixel location '0,0'. The width of the display window corresponds to the X axis of the bitmap, and the display window width for this embodiment is less than or equal to the width of the corresponding bitmap. The height of the window corresponds to the Y axis of the bitmap, and the display window height for this embodiment is less than or equal to the height of the corresponding bitmap. The display window itself is not addressable in the protocol because it is only defined as the visible portion of a bitmap.

The relationship between bitmaps and display windows is well known in the computer, electronic, Internet communication, and other electronics-related arts. Therefore, no further discussion or illustration of these principles is provided here.

C. Packet Definitions

1. Sub-Frame Header Packet

The Sub-Frame Header Packet is the first packet of every sub-frame, and has a basic structure as illustrated in FIG. 8. The Sub-Frame Header Packet is used for host-client synchronization; every host should be able to generate this packet, while every client should be able to receive and interpret this packet. As can be seen in one embodiment in FIG. 8, this type of packet is structured to have Packet Length, Packet Type, Unique Word, Reserved 1, Sub-Frame Length, Protocol Version, Sub-Frame Count, and Media-frame Count fields, generally in that order. In one embodiment, this type of packet is generally identified as a Type 15359 (0x3bff hexadecimal) packet and uses a pre-selected fixed length of 20 bytes, not including the packet length field.

The Packet Type field and the Unique Word field each use a 2-byte value (16-bit unsigned integer). The 4-byte combination of these two fields together forms a 32-bit unique word with good autocorrelation. In one embodiment, the actual unique word is 0x005a3bff, where the lower 16 bits are transmitted first as the Packet Type, and the most significant 16 bits are transmitted afterward.

The Reserved 1 field contains 2 bytes that are reserved space for future use, and is generally configured at this point with all bits set to zero. One purpose of this field is to cause subsequent 2-byte fields to align to a 16-bit word address and to cause 4-byte fields to align to a 32-bit word address. The least significant byte is reserved to indicate whether or not a host is capable of addressing multiple client devices. A value of zero for this byte is reserved to indicate that the host is capable of operating only with a single client device.

The Sub-frame Length field contains 4 bytes of information, or values, that specify the number of bytes per sub-frame. In one embodiment, the value in this field may be set equal to zero to indicate that only one sub-frame will be transmitted by the host before the link is shut down into an idle state. The value in this field can be dynamically changed "on-the-fly" when transitioning from one sub-frame to the next. This capability is useful in order to make minor timing adjustments in the sync pulses to accommodate isochronous data streams. If the CRC of the Sub-frame Header Packet is not valid, then the link controller should use the Sub-frame Length of the previous known-good Sub-frame Header Packet to estimate the length of the current sub-frame.

The Protocol Version field contains 2 bytes that specify the protocol version used by the host. The Protocol Version field may be set to '0' to specify the first or current version of the protocol as being used. This value will change over time as new versions are created, and is already being upgraded to a value of '1' for some version fields.
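For reference, the field layout of FIG. 8 might be sketched as the following C structure. Field sizes follow the descriptions given here and continued below; the trailing CRC is inferred from the 20-byte fixed length. The structure is for field reference only, since the on-wire layout is packed and little-endian, and this sketch is not a normative definition.

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch of the Sub-Frame Header Packet fields of FIG. 8. */
    typedef struct {
        uint16_t packet_length;     /* 20: bytes after this field        */
        uint16_t packet_type;       /* 0x3bff (15359)                    */
        uint16_t unique_word;       /* 0x005a; with type forms 0x005a3bff */
        uint16_t reserved1;         /* zero; LSB 0 = single-client host  */
        uint32_t sub_frame_length;  /* bytes per sub-frame; 0 = one-shot */
        uint16_t protocol_version;
        uint16_t sub_frame_count;   /* 0 for first sub-frame of a media-frame */
        uint32_t media_frame_count;
        uint16_t crc;
    } mddi_sub_frame_header_t;

    /* The 32-bit unique word as seen on the wire, lower 16 bits
     * (the Packet Type) transmitted first.                       */
    static bool is_unique_word(uint16_t type, uint16_t uw)
    {
        return (((uint32_t)uw << 16) | type) == 0x005a3bffu;
    }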
Version values will probably or generally follow a current version number for an approved standards document which covers interfaces such as MDDI, as would be known.

The Sub-frame Count field contains 2 bytes that specify a sequence number indicating the number of sub-frames that have been transmitted since the beginning of the media-frame. The first sub-frame of the media-frame has a Sub-frame Count of zero. The last sub-frame of the media-frame has a value of n-1, where n is the number of sub-frames per media-frame. The value of the Sub-frame Count field is equal to the Sub-frame Count sent in the previous Sub-frame Header Packet plus 1, except for the first sub-frame of a media-frame, which will have a count of zero. Note that if the Sub-frame Length is set equal to zero (indicating a non-periodic sub-frame) then the Sub-frame Count is also set equal to zero.

The Media-frame Count field contains 4 bytes (32-bit unsigned integer) that specify a sequence number indicating the number of media-frames that have been transmitted since the beginning of the present media item or data being transferred. The first media-frame of a media item has a Media-frame Count of zero. The Media-frame Count increments just prior to the first sub-frame of each media-frame and wraps back to zero after the maximum Media-frame Count (for example, media-frame number 2^32 - 1 = 4,294,967,295) is used. The Media-frame Count value may be reset generally at any time by the host to suit the needs of an end application.

2. Filler Packet

A filler packet is a packet that is transferred to, or from, a client device when no other information is available to be sent on either the forward or reverse link. It is recommended that filler packets have a minimum length in order to allow maximum flexibility in sending other packets when required. At the very end of a sub-frame or a reverse link encapsulation packet (see below), a link controller sets the size of the filler packet to fill the remaining space to maintain packet integrity. The Filler Packet is useful to maintain timing on the link when the host or client has no information to send or exchange. Every host and client needs to be able to send and receive this packet to make effective use of the interface.

An exemplary embodiment of the format and contents of a Filler Packet is shown in FIG. 9. As shown in FIG. 9, this type of packet is structured to have Packet Length, Packet Type, Filler Bytes, and CRC fields. In one embodiment, this type of packet is generally identified as a Type 0, which is indicated in the 2-byte Type field. The bits or bytes in the Filler Bytes field comprise a variable number of all-zero bit values to allow the filler packet to be the desired length. The smallest filler packet contains no bytes in this field. That is, the packet consists of only the packet length, packet type, and CRC, and in one embodiment uses a pre-selected fixed length of 6 bytes or a Packet Length value of 4. The CRC value is determined for all bytes in the packet including the Packet Length, which may be excluded in some other packet types.

3. Video Stream Packet

Video Stream Packets carry video data to update typically rectangular regions of a display device. The size of this region may be as small as a single pixel or as large as the entire display. There may be an almost unlimited number of streams displayed simultaneously, limited by system resources, because all context required to display a stream is contained within the Video Stream Packet.
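Returning briefly to the Filler Packet described above before detailing the Video Stream Packet format, the following is a hedged construction sketch. It reuses the illustrative mddi_crc16() helper from the earlier sketch and is not part of the interface definition; note that, as stated above, the Filler Packet CRC covers the Packet Length field as well.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    uint16_t mddi_crc16(const uint8_t *data, size_t len); /* earlier sketch */

    /* Build a Filler Packet (Type 0) occupying total_bytes on the link.
     * The minimum is 6 bytes: Packet Length + Packet Type + CRC.        */
    static int build_filler(uint8_t *buf, size_t total_bytes)
    {
        if (total_bytes < 6)
            return -1;
        uint16_t pl = (uint16_t)(total_bytes - 2); /* excludes PL field  */
        buf[0] = (uint8_t)(pl & 0xff);             /* LSB first          */
        buf[1] = (uint8_t)(pl >> 8);
        buf[2] = 0;                                /* Packet Type = 0    */
        buf[3] = 0;
        memset(buf + 4, 0, total_bytes - 6);       /* all-zero Filler Bytes */
        uint16_t crc = mddi_crc16(buf, total_bytes - 2); /* incl. PL field */
        buf[total_bytes - 2] = (uint8_t)(crc & 0xff);
        buf[total_bytes - 1] = (uint8_t)(crc >> 8);
        return 0;
    }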
The format of one embodiment of a Video Stream Packet is shown in FIG. 10. As seen in FIG. 10, in one embodiment, this type of packet is structured to have Packet Length (2 bytes), Packet Type, bClient ID, Video Data Descriptor, Pixel Display Attributes, X Left Edge, Y Top Edge, X Right Edge, Y Bottom Edge, X and Y Start, Pixel Count, Parameter CRC, Pixel Data, and Pixel Data CRC fields. This type of packet is generally identified as a Type 16, which is indicated in the 2-byte Type field. In one embodiment, a client indicates an ability to receive a Video Stream Packet using the RGB, Monochrome, and Y Cr Cb Capability fields of the Client Capability Packet.

In one embodiment, the bClient ID field contains 2 bytes of information that are reserved for a Client ID. Since this is a newly developed communications protocol, actual client IDs are not yet known or sufficiently communicable. Therefore, the bits in this field are generally set equal to zero until such ID values are known, at which time the ID values can be inserted or used, as would be apparent to those skilled in the art. The same process will generally be true for the client ID fields discussed below.

The common frame concept discussed above is an effective way to minimize the audio buffer size and decrease latency. However, for video data it may be necessary to spread the pixels of one video frame across multiple Video Stream Packets within a media-frame. It is also very likely that the pixels in a single Video Stream Packet will not exactly correspond to a perfect rectangular window on the display. For the exemplary video frame rate of 30 frames per second, there are 300 sub-frames per second, which results in 10 sub-frames per media-frame. If there are 480 rows of pixels in each frame, each Video Stream Packet in each sub-frame will contain 48 rows of pixels. In other situations, the Video Stream Packet might not contain an integer number of rows of pixels. This is true for other video frame sizes where the number of sub-frames per media-frame does not divide evenly into the number of rows (also known as video lines) per video frame. For efficient operation, each Video Stream Packet generally must contain an integer number of pixels, even though it might not contain an integer number of rows of pixels. This is important if pixels are more than one byte each, or if they are in a packed format as shown in FIG. 12.

The format and contents employed for realizing the operation of an exemplary Video Data Descriptor field, as mentioned above, are shown in FIGS. 11A-11E. In FIGS. 11A-11E, the Video Data Format Descriptor field contains 2 bytes in the form of a 16-bit unsigned integer that specifies the format of each pixel in the Pixel Data in the present stream in the present packet. It is possible that different Video Stream Packets may use different pixel data formats, that is, use a different value in the Video Data Format Descriptor, and similarly, a stream (region of the display) may change its data format on-the-fly. The pixel data format should comply with at least one of the valid formats for the client as defined in the Client Capability Packet. The Video Data Format Descriptor defines the pixel format for the present packet only; it does not imply that a constant format will continue to be used for the lifetime of a particular video stream.

FIGS. 11A through 11D illustrate how the Video Data Format Descriptor is coded.
As used in these figures, and in this embodiment, when bits [15:13] are equal to '000', as shown in FIG. 11A, the video data consists of an array of monochrome pixels where the number of bits per pixel is defined by bits 3 through 0 of the Video Data Format Descriptor word. Bits 11 through 4 are generally reserved for future use or applications and are set to zero in this situation. When bits [15:13] are instead equal to '001', as shown in FIG. 11B, the video data consists of an array of color pixels that each specify a color through a color map (palette). In this situation, bits 5 through 0 of the Video Data Format Descriptor word define the number of bits per pixel, and bits 11 through 6 are generally reserved for future use or applications and set equal to zero. When bits [15:13] are instead equal to '010', as shown in FIG. 11C, the video data consists of an array of color pixels where the number of bits per pixel of red is defined by bits 11 through 8, the number of bits per pixel of green is defined by bits 7 through 4, and the number of bits per pixel of blue is defined by bits 3 through 0. In this situation, the total number of bits in each pixel is the sum of the number of bits used for red, green, and blue.

However, when bits [15:13] are instead equal to '011', as shown in FIG. 11D, the video data consists of an array of video data in 4:2:2 YCbCr format with luminance and chrominance information, where the number of bits per pixel of luminance (Y) is defined by bits 11 through 8, the number of bits of the Cb component is defined by bits 7 through 4, and the number of bits of the Cr component is defined by bits 3 through 0. The total number of bits in each pixel is the sum of the number of bits used for the Y, Cb, and Cr components. The Cb and Cr components are sent at half the rate of Y. In addition, the video samples in the Pixel Data portion of this packet are organized as follows: Cbn, Yn, Crn, Yn+1, Cbn+2, Yn+2, Crn+2, Yn+3, ..., where Cbn and Crn are associated with Yn and Yn+1, and Cbn+2 and Crn+2 are associated with Yn+2 and Yn+3, and so on.

Yn, Yn+1, Yn+2, and Yn+3 are luminance values of four consecutive pixels in a single row from left to right. If there is an odd number of pixels in a row (X Right Edge - X Left Edge + 1) in the window referenced by the Video Stream Packet, then the Y value corresponding to the last pixel in each row will be followed by the Cb value of the first pixel of the next row, and a Cr value is not sent for the last pixel in the row. It is recommended that windows using the Y Cb Cr format have a width that is an even number of pixels. The Pixel Data in a packet should contain an even number of pixels. It may contain an odd or even number of pixels in the case where the last pixel of the Pixel Data corresponds to the last pixel of a row in the window specified in the Video Stream Packet header, that is, when the X location of the last pixel in the Pixel Data is equal to X Right Edge.

When bits [15:13] are instead equal to '100', the video data consists of an array of Bayer pixels where the number of bits per pixel is defined by bits 3 through 0 of the Video Data Format Descriptor word. The Pixel Group Pattern is defined by bits 5 and 4, as shown in FIG. 11E. The order of pixel data may be horizontal or vertical, and the pixels in rows or columns may be sent in forward or backward order, as defined by bits 8 through 6. Bits 11 through 9 should be set to zero.
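The coding just described can be summarized by the following illustrative decoder. The enumeration names are hypothetical, and the routine is a sketch of the bit-field layout of FIGS. 11A-11E rather than a complete validator.

    #include <stdint.h>

    /* Hypothetical names for the five format families. */
    typedef enum { FMT_MONO, FMT_PALETTE, FMT_RGB, FMT_YCBCR422,
                   FMT_BAYER, FMT_UNKNOWN } pixel_format_t;

    /* Decode the Video Data Format Descriptor. Bits [15:13] select the
     * format family; bit 12 ("P") selects packed (1) vs byte-aligned (0). */
    static pixel_format_t decode_vdfd(uint16_t d, unsigned *bits_per_pixel)
    {
        switch ((d >> 13) & 0x7) {
        case 0: *bits_per_pixel = d & 0x000f; return FMT_MONO;
        case 1: *bits_per_pixel = d & 0x003f; return FMT_PALETTE;
        case 2: /* red, green, blue widths in bits [11:8], [7:4], [3:0] */
            *bits_per_pixel = ((d >> 8) & 0xf) + ((d >> 4) & 0xf) + (d & 0xf);
            return FMT_RGB;
        case 3: /* Y, Cb, Cr widths in bits [11:8], [7:4], [3:0];
                 * Cb and Cr are sent at half the rate of Y         */
            *bits_per_pixel = ((d >> 8) & 0xf) + ((d >> 4) & 0xf) + (d & 0xf);
            return FMT_YCBCR422;
        case 4: *bits_per_pixel = d & 0x000f; return FMT_BAYER;
        default: return FMT_UNKNOWN;
        }
    }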
The group of four pixels in the pixel group in the Bayer format resembles what is often referred to as a single pixel in some display technologies. However, one pixel in the Bayer format is only one of the four colored pixels of the pixel group mosaic pattern.

For all five formats shown in the figures, Bit 12, which is designated as "P", specifies whether the Pixel Data samples are packed or byte-aligned. A value of '0' in this field indicates that each pixel in the Pixel Data field is byte-aligned with an MDDI byte boundary. A value of '1' indicates that each pixel, and each color within each pixel, in the Pixel Data is packed up against the previous pixel or color within a pixel, leaving no unused bits. The difference between the Byte-Aligned and Packed Pixel data formats is shown in more detail in FIG. 12, where one can clearly see that byte-aligned data may leave unused portions of the data sub-frame, as opposed to the packed pixel format, which does not.

The first pixel in the first Video Stream Packet of a media frame for a particular display window will go into the upper-left corner of the stream window defined by an X Left Edge and a Y Top Edge, and the next pixel received is placed in the next pixel location in the same row, and so on. In this first packet of a media frame, the X Start value will usually be equal to X Left Edge, and the Y Start value will usually be equal to Y Top Edge. In subsequent packets corresponding to the same screen window, the X and Y Start values will usually be set to the pixel location in the screen window that would normally follow after the last pixel sent in the Video Stream Packet that was transmitted in the previous sub-frame.

4. Audio Stream Packets

The audio stream packets carry audio data to be played through the audio system of the client, or for a stand-alone audio presentation device. Different audio data streams may be allocated for separate audio channels in a sound system, for example: left-front, right-front, center, left-rear, and right-rear, depending on the type of audio system being used. A full complement of audio channels is provided for headsets that contain enhanced spatial-acoustic signal processing. A client indicates an ability to receive an Audio Stream Packet using the Audio Channel Capability and Audio Sample Rate fields of the Client Capability Packet. The format of Audio Stream Packets is illustrated in FIG. 13.

As shown in FIG. 13, this type of packet is structured, in one embodiment, to have Packet Length, Packet Type, bClient ID, Audio Channel ID, Reserved 1, Audio Sample Count, Bits Per Sample and Packing, Audio Sample Rate, Parameter CRC, Digital Audio Data, and Audio Data CRC fields. In one embodiment, this type of packet is generally identified as a Type 32 packet.

The bClient ID field contains 2 bytes of information that are reserved for a Client ID, as used previously. The Reserved 1 field contains 2 bytes that are reserved for future use, and is generally configured at this point with all bits set to zero.

The Bits Per Sample and Packing field contains 1 byte in the form of an 8-bit unsigned integer that specifies the packing format of audio data. The format generally employed is for Bits 4 through 0 to define the number of bits per PCM audio sample. Bit 5 then specifies whether or not the Digital Audio Data samples are packed. The difference between packed and byte-aligned audio samples, here using 10-bit samples, is illustrated in FIG. 14.
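As an illustration of the packed case of FIG. 14 (the exact semantics of the packing bit continue below), the following sketch packs n-bit samples with no unused bits between them. Filling each byte LSB first is an assumption here, made to be consistent with the LSB-first transmission order described earlier; the normative bit placement is defined by the interface specification.

    #include <stdint.h>
    #include <stddef.h>

    /* Pack 'count' samples of 'bits' bits each (e.g. 10-bit PCM) with no
     * unused bits between successive samples, LSB first within each byte. */
    static size_t pack_samples(const uint16_t *in, size_t count,
                               unsigned bits, uint8_t *out)
    {
        size_t bitpos = 0;
        for (size_t i = 0; i < count; i++) {
            for (unsigned b = 0; b < bits; b++) {
                if ((in[i] >> b) & 1)
                    out[bitpos >> 3] |= (uint8_t)(1u << (bitpos & 7));
                else
                    out[bitpos >> 3] &= (uint8_t)~(1u << (bitpos & 7));
                bitpos++;
            }
        }
        return (bitpos + 7) / 8;   /* number of bytes used */
    }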
A value of '0' indicates that each PCM audio sample in the Digital Audio Data field is byte-aligned with an MDDI byte boundary, and a value of '1' indicates that each successive PCM audio sample is packed up against the previous audio sample. This bit is generally effective only when the value defined in bits 4 through 0 (the number of bits per PCM audio sample) is not a multiple of eight. Bits 7 through 6 are reserved for future use and are generally set to a value of zero.

5. Reserved Stream Packets

In one embodiment, packet types 1 through 15, 18 through 31, and 33 through 55 are reserved for stream packets to be defined for use in future versions or variations of the packet protocols, as desired for various applications encountered. Again, this is part of making the MDDI more flexible and useful in the face of ever-changing technology and system designs as compared to other techniques.

6. User-Defined Stream Packets

Eight data stream types, known as Types 56 through 63, are reserved for use in proprietary applications that may be defined by equipment manufacturers for use with an MDDI link. These are known as User-Defined Stream Packets. Such packets may be used for any purpose, but the host and client should only employ such packets in situations where the result of such use is very well understood or known. The specific definition of the stream parameters and data for these packet types is left to the specific equipment manufacturers or interface designers implementing such packet types or seeking their use. Some exemplary uses of the User-Defined Stream Packets are to convey test parameters and test results, factory calibration data, and proprietary special use data. The format of the User-Defined Stream Packets as used in one embodiment is illustrated in FIG. 15. As shown in FIG. 15, this type of packet is structured to have Packet Length (2 bytes), Packet Type, bClient ID number, Stream Parameters, Parameter CRC, Stream Data, and Stream Data CRC fields.

7. Color Map Packets

The color map packets specify the contents of a color map look-up table used to present colors for a client. Some applications may require a color map that is larger than the amount of data that can be transmitted in a single packet. In these cases, multiple Color Map Packets may be transferred, each with a different subset of the color map, by using the offset and length fields described below. The format of the Color Map Packet in one embodiment is illustrated in FIG. 16. As shown in FIG. 16, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Color Map Item Count, Color Map Offset, Parameter CRC, Color Map Data, and Data CRC fields. In one embodiment, this type of packet is generally identified as a Type 64 packet (Video Data Format and Color Map Packet) as specified in the Packet Type field (2 bytes). A client indicates an ability to receive Color Map Packets using the Color Map Size and Color Map Width fields of the Client Capability Packet.

8. Reverse Link Encapsulation Packets

In an exemplary embodiment, data is transferred in the reverse direction using a Reverse Link Encapsulation Packet. A forward link packet is sent, and the MDDI link operation (transfer direction) is changed or turned around in the middle of this packet so that packets can be sent in the reverse direction. The format of the Reverse Link Encapsulation Packet in one embodiment is illustrated in FIG. 17.
As shown in FIG. 17, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Reverse Link Flags, Reverse Rate Divisor, Turn-Around 1 Length, Turn-Around 2 Length, Parameter CRC, All Zero 1, Turn-Around 1, Reverse Data Packets, Turn-Around 2, and All Zero 2 fields. In one embodiment, this type of packet is generally identified as a Type 65 packet. For External Mode, every host must be able to generate this packet and receive data, and every client must be able to receive and send data to the host in order to efficiently make use of the desired protocol and resulting speed. Implementation of this packet is optional for Internal Mode, but the Reverse Link Encapsulation Packet is used for the host to receive data from the client.

The MDDI link controller behaves in a special manner while sending a Reverse Link Encapsulation Packet. The MDDI has a strobe signal that is generally always driven by the host as controller of the link. The host behaves as if it were transmitting a zero for each bit of the Turn-Around and Reverse Data Packets portions of the Reverse Link Encapsulation Packet. The host toggles the MDDI_Strobe signal at each bit boundary during the two turn-around times and during the time allocated for reverse data packets. That is, the host toggles MDDI_Stb from the beginning of the All Zero 1 field to the end of the All Zero 2 field. (This is the same behavior as if it were transmitting all-zero data.)

The host disables its MDDI data signal line drivers and generally assures that they have been completely disabled prior to the last bit of the Turn-Around 1 field, and then re-enables its line drivers during the Turn-Around 2 field, generally assuring that the drivers have been completely re-enabled prior to the last bit of the Turn-Around 2 field. The client reads the Turn-Around Length parameter and drives the data signals toward the host immediately after the last bit in the Turn-Around 1 field. That is, the client clocks new data into the link on certain rising edges of the MDDI strobe, as specified in the packet contents description below and elsewhere. The client uses the Packet Length and Turn-Around Length parameters to know the length of time it has available to send packets to the host. The client may send filler packets or drive the data lines to a zero state when it has no data to send to the host. If the data lines are driven to zero, the host interprets this as a packet with a zero length (not a valid length), and the host does not accept any more packets from the client for the duration of the current Reverse Link Encapsulation Packet.

In one embodiment, the Reverse Link Request field of the Client Request and Status Packet may be used to inform the host of the number of bytes the client needs in the Reverse Link Encapsulation Packet to send data back to the host. The host attempts to grant the request by allocating at least that number of bytes in the Reverse Link Encapsulation Packet. The host may send more than one Reverse Link Encapsulation Packet in a sub-frame. The client may send a Client Request and Status Packet at almost any time, and the host will interpret the Reverse Link Request parameter as the total number of bytes requested in one sub-frame.
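As a sketch of the client-side behavior just described, the routine below fills the reverse data window with queued packets, then a Filler Packet, or drives zeros when nothing fits. The callback names are hypothetical and the logic is illustrative only.

    #include <stddef.h>

    /* Decide what the client sends in the reverse data window of one
     * Reverse Link Encapsulation Packet. A zero-length indication on
     * the data lines ends reverse transfers for this window.          */
    static void fill_reverse_window(size_t window_bytes,
                                    size_t (*next_packet_len)(void),
                                    void (*send_packet)(void),
                                    void (*send_filler)(size_t),
                                    void (*drive_zeros)(size_t))
    {
        size_t pending;
        while ((pending = next_packet_len()) != 0 &&
               pending <= window_bytes) {
            send_packet();                 /* queued reverse packet fits */
            window_bytes -= pending;
        }
        if (window_bytes >= 6)
            send_filler(window_bytes);     /* minimum filler is 6 bytes  */
        else if (window_bytes > 0)
            drive_zeros(window_bytes);     /* host sees zero length      */
    }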
9. Client Capability Packet

A host needs to know the capability of the client (display) it is communicating with in order to configure the host-to-client link in a generally optimum or desired manner. It is recommended that a display send a Client Capability Packet to the host after forward link synchronization is acquired. The transmission of such a packet is considered required when requested by the host using the Reverse Link Flags in the Reverse Link Encapsulation Packet. The Client Capability Packet is used to inform the host of the capabilities of a client. For External Mode, every host should be able to receive this packet, and every client should be able to send this packet to fully utilize this interface and protocol. Implementation of this packet is optional for Internal Mode, since the capabilities of the client, such as a display, keyboard, or other input/output device, in this situation should already be well defined and known to the host at the time of manufacture or assembly into a single component or unit of some type.

The format of the Client Capability Packet in one embodiment is illustrated in FIG. 18. As shown in FIG. 18, for this embodiment, this type of packet is structured to have Packet Length, Packet Type, cClient ID, Protocol Version, Min Protocol Version, Data Rate Capability, Interface Type Capability, Number of Alt Displays, Reserved 1, Bitmap Width, Bitmap Height, Display Window Width, Display Window Height, Color Map Size, Color Map RGB Width, RGB Capability, Monochrome Capability, Reserved 2, Y Cr Cb Capability, Bayer Capability, Alpha-Cursor Image Planes, Client Feature Capability, Max Video Frame Rate, Min Video Frame Rate, Min Sub-frame Rate, Audio Buffer Depth, Audio Channel Capability, Audio Sample Rate Capability, Audio Sample Resolution, Mic Audio Sample Resolution, Mic Sample Rate Capability, Keyboard Data Format, Pointing Device Data Format, Content Protection Type, Mfr. Name, Product Code, Reserved 3, Serial Number, Week of Mfr., Year of Mfr., and CRC fields. In an exemplary embodiment, this type of packet is generally identified as a Type 66 packet.

10. Keyboard Data Packets

A keyboard data packet is used to send keyboard data from the client device to the host. A wireless (or wired) keyboard may be used in conjunction with various displays or audio devices, including, but not limited to, a head-mounted video display/audio presentation device. The Keyboard Data Packet relays keyboard data received from one of several known keyboard-like devices to the host. This packet can also be used on the forward link to send data to the keyboard. A client indicates an ability to send and receive Keyboard Data Packets using the Keyboard Data Format field in the Client Capability Packet.

The format of a Keyboard Data Packet is shown in FIG. 19, and contains a variable number of bytes of information from or for a keyboard. As shown in FIG. 19, this type of packet is structured to have Packet Length, Packet Type, bClient ID, Keyboard Data Format, Keyboard Data, and CRC fields. Here, this type of packet is generally identified as a Type 67 packet.

The bClient ID is a reserved field, as before, and the CRC is performed over all bytes of the packet. The Keyboard Data Format field contains a 2-byte value that describes the keyboard data format. Bits 6 through 0 should be identical to the Keyboard Data Format field in the Client Capability Packet. This value must not equal 127. Bits 15 through 7 are reserved for future use and are, therefore, currently set to zero.
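For reference, the Keyboard Data Packet fields might be sketched as follows. The sizes follow the descriptions above; the structure is for field reference only, since the Keyboard Data field is variable-length (its size is implied by the Packet Length) and the on-wire layout is packed, little-endian.

    #include <stdint.h>

    /* Sketch of the Keyboard Data Packet of FIG. 19 (Type 67). */
    typedef struct {
        uint16_t packet_length;
        uint16_t packet_type;           /* 67 */
        uint16_t b_client_id;           /* reserved, set to zero */
        uint16_t keyboard_data_format;  /* bits 6:0 match the Client
                                           Capability Packet; must not
                                           equal 127; bits 15:7 zero   */
        /* variable-length Keyboard Data bytes follow, then 16-bit CRC */
    } mddi_keyboard_packet_t;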
11. Pointing Device Data Packets

A pointing device data packet is used as a method, structure, or means to send position information from a wireless mouse or other pointing device from the client to the host. Data can also be sent to the pointing device on the forward link using this packet. An exemplary format of a Pointing Device Data Packet is shown in FIG. 20, and contains a variable number of bytes of information from or for a pointing device. As shown in FIG. 20, this type of packet is structured to have Packet Length, Packet Type, bClient ID, Pointing Device Format, Pointing Device Data, and CRC fields. In an exemplary embodiment, this type of packet is generally identified as a Type 68 packet in the 2-byte type field.

12. Link Shutdown Packets

A Link Shutdown Packet is sent from the host to the client as a method or means to indicate that the MDDI data and strobe will be shut down and go into a low-power consumption "hibernation" state. This packet is useful to shut down the link and conserve power after static bitmaps are sent from a mobile communication device to the display, or when there is no further information to transfer from a host to a client for the time being. Normal operation is resumed when the host sends packets again. The first packet sent after hibernation is a Sub-frame Header Packet. The format of a Link Shutdown Packet for one embodiment is shown in FIG. 21. As shown in FIG. 21, this type of packet is structured to have Packet Length, Packet Type, CRC, and All Zeros fields. In one embodiment, this type of packet is generally identified as a Type 69 packet in the 2-byte type field.

The packet length field uses 2 bytes to specify the total number of bytes in the packet not including the packet length field. In one embodiment, the Packet Length of this packet is dependent on the Interface Type or link mode in effect at the time when the Link Shutdown Packet is sent. Therefore, the typical packet length becomes 20 bytes for Type 1 mode (22 bytes total in the packet), 36 bytes for Type 2 mode (38 bytes total in the packet), 68 bytes for a Type 3 mode link (70 bytes total in the packet), and 132 bytes for Type 4 mode (with 134 bytes total in the packet).

The All Zeros field uses a variable number of bytes to ensure that the MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering the clock using only MDDI_Stb prior to disabling the host's line drivers. The length of the All Zeros field is dependent on the Interface Type or link operating mode in effect at the time when the Link Shutdown Packet is sent. The length of the All Zeros field is intended to produce 64 pulses on MDDI_Stb for any Interface Type setting. Therefore, the All Zeros length for each interface type becomes 16 bytes for Type 1, 32 bytes for Type 2, 64 bytes for Type 3, and 128 bytes for Type 4.

The CRC field uses 2 bytes that contain a 16-bit CRC of the bytes from the Packet Length to the Packet Type.
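The All Zeros length rule just given (16 bytes for Type 1, doubling with each wider interface type so that 64 MDDI_Stb pulses result) can be written as the following one-line helper, shown purely for illustration.

    /* All Zeros field length, in bytes, for Interface Types 1 through 4:
     * 16, 32, 64, 128, producing 64 MDDI_Stb pulses in every case.      */
    static unsigned all_zeros_bytes(unsigned interface_type /* 1..4 */)
    {
        return 16u << (interface_type - 1);
    }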
In the low-power hibernation state, the MDDI_Data0 driver is disabled into a high-impedance state starting after the 16th to 48th MDDI_Stb cycle or pulse after the last bit of the All Zeros field. For Type 2, Type 3, or Type 4 links, the MDDI_Data1 through MDDI_Data7 signals are also placed in a high-impedance state at the same time that the MDDI_Data0 driver is disabled. Either the host or client may cause the MDDI link to "wake up" from the hibernation state as described elsewhere, which is a key advance of, and advantage provided by, the present invention.

As described in the definition of the All Zeros field, MDDI_Stb toggles for 64 cycles following the MSB of the CRC field of the Link Shutdown Packet to facilitate an orderly shutdown in the client controller. One cycle is a low-to-high transition followed by a high-to-low transition, or a high-to-low transition followed by a low-to-high transition. After the All Zeros field is sent, the MDDI_Stb driver in the host is disabled.

13. Client Request and Status Packets

The host needs a small amount of information from the client so it can configure the host-to-client link in a generally optimum manner. It is recommended that the client send one Client Request and Status Packet to the host each sub-frame. The client should send this packet as the first packet in the Reverse Link Encapsulation Packet to ensure that it is delivered reliably to the host. The forwarding of this packet is also accomplished when requested by a host using the Reverse Link Flags in the Reverse Link Encapsulation Packet. The Client Request and Status Packet is used to report errors and status to the host. For External Mode operation, every host should be able to receive this packet, and every client should be able to send this packet in order to properly or optimally employ the MDDI protocol. While it is also recommended that internal operations, that is, internal hosts and internal clients, support this packet, it is not required.

The format of a Client Request and Status Packet is shown in FIG. 22. As shown in FIG. 22, this type of packet is structured to have Packet Length, Packet Type, cClient ID, Reverse Link Request, Capability Change, Client Busy, CRC Error Count, and CRC fields. This type of packet is generally identified as a Type 70 packet in the 2-byte type field, and typically uses a pre-selected fixed length of 12 bytes.

The Reverse Link Request field may be used to inform the host of the number of bytes the client needs in the Reverse Link Encapsulation Packet to send data back to the host. The host should attempt to grant the request by allocating at least that number of bytes in the Reverse Link Encapsulation Packet. The host may send more than one Reverse Link Encapsulation Packet in a sub-frame in order to accommodate data. The client may send a Client Request and Status Packet at any time, and the host will interpret the Reverse Link Request parameter as the total number of bytes requested in one sub-frame. Additional details and specific examples of how reverse link data is sent back to the host are shown below.

14. Bit Block Transfer Packets

The Bit Block Transfer Packet provides a means, structure, or method to scroll regions of the display in any direction. Displays that have this capability will report the capability in bit 0 of the Display Feature Capability Indicators field of the Client Capability Packet. The format for one embodiment of a Bit Block Transfer Packet is shown in FIG. 23. As shown in FIG. 23, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Upper Left X Value, Upper Left Y Value, Window Width, Window Height, Window X Movement, Window Y Movement, and CRC fields.
This type of packet is generally identified as a Type 71 packet, and in one embodiment uses a pre-selected fixed length of 15 bytes.

The fields are used to specify the X and Y values of the coordinate of the upper-left corner of the window to be moved, the width and height of the window to be moved, and the number of pixels that the window is to be moved horizontally and vertically, respectively. Positive values for the latter two fields cause the window to be moved to the right and down, and negative values cause movement to the left and up, respectively.

15. Bitmap Area Fill Packets

The Bitmap Area Fill Packet provides a means, structure, or method to easily initialize a region of the display to a single color. Displays that have this capability will report the capability in bit 1 of the Client Feature Capability Indicators field of the Client Capability Packet. One embodiment for the format of a Bitmap Area Fill Packet is shown in FIG. 24. As shown in FIG. 24, in this case this type of packet is structured to have Packet Length, Packet Type, hClient ID, Upper Left X Value, Upper Left Y Value, Window Width, Window Height, Data Format Descriptor, Pixel Area Fill Value, and CRC fields. This type of packet is generally identified as a Type 72 packet in the 2-byte type field, and uses a pre-selected fixed length of 17 bytes.

16. Bitmap Pattern Fill Packets

The Bitmap Pattern Fill Packet provides a means or structure to easily initialize a region of the display to a pre-selected pattern. Displays that have this capability will report the capability in bit 2 of the Client Feature Capability field of the Client Capability Packet. The upper-left corner of the fill pattern is aligned with the upper-left corner of the window to be filled, unless the horizontal or vertical pattern offset is non-zero. If the window to be filled is wider or taller than the fill pattern, then the pattern may be repeated horizontally or vertically a number of times to fill the window. The right or bottom of the last repeated pattern is truncated as necessary. If the window is smaller than the fill pattern, then the right side or bottom of the fill pattern may be truncated to fit the window.

If a horizontal pattern offset is non-zero, then the pixels between the left side of the window and the left side plus the horizontal pattern offset are filled with the right-most pixels of the pattern. The horizontal pattern offset is to be less than the pattern width. Similarly, if a vertical pattern offset is non-zero, then the pixels between the top side of the window and the top side plus the vertical pattern offset are filled with the lower-most pixels of the pattern. The vertical pattern offset is to be less than the pattern height.

One embodiment for the format of a Bitmap Pattern Fill Packet is shown in FIG. 25. As shown in FIG. 25, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Upper Left X Value, Upper Left Y Value, Window Width, Window Height, Pattern Width, Pattern Height, Horizontal Pattern Offset, Vertical Pattern Offset, Data Format Descriptor, Parameter CRC, Pattern Pixel Data, and Pixel Data CRC fields. In some embodiments, this type of packet is generally identified as a Type 73 packet in the 2-byte type field.
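The pattern offset behavior described above amounts to simple modular arithmetic. The following sketch computes the source pattern pixel for a given window pixel, assuming, as stated, that each offset is smaller than the corresponding pattern dimension; it is illustrative only.

    /* Source pixel of the fill pattern for window pixel (x, y), measured
     * from the window's upper-left corner. With a non-zero horizontal
     * offset, window column 0 maps to pattern column pat_w - h_off, so
     * the left edge is filled with the right-most pattern pixels.       */
    static void pattern_source(unsigned x, unsigned y,
                               unsigned pat_w, unsigned pat_h,
                               unsigned h_off, unsigned v_off,
                               unsigned *src_x, unsigned *src_y)
    {
        *src_x = (x + pat_w - h_off) % pat_w;
        *src_y = (y + pat_h - v_off) % pat_h;
    }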
17. Communication Link Data Channel Packets

The Communication Link Data Channel Packet provides a structure, means, or method for a client with high-level computing capability, such as a PDA, to communicate with a wireless transceiver such as a cell phone or wireless data port device. In this situation, the MDDI link is acting as a convenient high-speed interface between the communication device and the computing device with the mobile display, where this packet transports data at the Data Link Layer of an operating system for the device. For example, this packet could be used if a web browser, email client, or an entire PDA were built into a mobile display. Displays that have this capability will report the capability in bit 3 of the Client Feature Capability field of the Client Capability Packet.

The format of an embodiment for a Communication Link Data Channel Packet is shown in FIG. 26. As shown in FIG. 26, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Parameter CRC, Communication Link Data, and Communication Data CRC fields. In one embodiment, this type of packet is generally identified as a Type 74 packet in the type field.

18. Interface Type Handoff Request Packets

The Interface Type Handoff Request Packet provides a means, method, or structure that enables the host to request that the client or display shift from an existing or current mode to Type 1 (serial), Type 2 (2-bit parallel), Type 3 (4-bit parallel), or Type 4 (8-bit parallel) mode. Before the host requests a particular mode, it should confirm that the client is capable of operating in the desired mode by examining bits 6 and 7 of the Display Feature Capability Indicators field of the Client Capability Packet. One embodiment for the format of an Interface Type Handoff Request Packet is shown in FIG. 27. As shown in FIG. 27, this type of packet is structured to have Packet Length, Packet Type, Interface Type, Reserved 1, and CRC fields. This type of packet is generally identified as a Type 75 packet, and uses a pre-selected fixed length of 4 bytes.

19. Interface Type Acknowledge Packets

The Interface Type Acknowledge Packet is sent by a client and provides a means, method, or structure that enables a client to confirm receipt of the Interface Type Handoff Request Packet. The requested mode, Type 1 (serial), Type 2 (2-bit parallel), Type 3 (4-bit parallel), or Type 4 (8-bit parallel), is echoed back to the host as a parameter in this packet. The format of one embodiment for an Interface Type Acknowledge Packet is shown in FIG. 28. As shown in FIG. 28, this type of packet is structured to have Packet Length, Packet Type, cClient ID, Interface Type, Reserved 1, and CRC fields. This type of packet is generally identified as a Type 76 packet, and uses a pre-selected fixed length of 4 bytes.

20. Perform Type Handoff Packets

The Perform Type Handoff Packet is a means, structure, or method for the host to command the client to hand off to the mode specified in this packet. This is to be the same mode that was previously requested and acknowledged by the Interface Type Handoff Request Packet and Interface Type Acknowledge Packet. The host and client should switch to the agreed-upon mode after this packet is sent. The client may lose and regain link synchronization during the mode change. The format of one embodiment for a Perform Type Handoff Packet is shown in FIG. 29. As shown in FIG. 29, this type of packet is structured to have Packet Length, Packet Type, Interface Type, Reserved 1, and CRC fields.
This type of packet is generally identified as a Type 77 packet in the 2-byte type field, and uses a pre-selected fixed length of 4 bytes.

21. Forward Audio Channel Enable Packets

This packet provides a structure, method, or means that allows a host to enable or disable audio channels in a client. This capability is useful so that a client (a display, for example) can power off audio amplifiers or similar circuit elements to save power when there is no audio to be output by the host. This would be significantly more difficult to implement implicitly by simply using the presence or absence of audio streams as an indicator. The default state when the client system is powered up is that all audio channels are enabled. The format of one embodiment of a Forward Audio Channel Enable Packet is shown in FIG. 30. As shown in FIG. 30, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Audio Channel Enable Mask, and CRC fields. This type of packet is generally identified as a Type 78 packet in the 2-byte type field, and uses a pre-selected fixed length of 4 bytes.

22. Reverse Audio Sample Rate Packets

This packet allows the host to enable or disable the reverse-link audio channel, and to set the audio data sample rate of this stream. The host selects a sample rate that is defined to be valid in the Client Capability Packet. If the host selects an invalid sample rate, then the client will not send an audio stream to the host, and an appropriate error, error value, or error signal may be sent to the host in the Client Error Report Packet. The host may disable the reverse-link audio stream by setting the sample rate to a value of 255. The default state assumed when the client system is initially powered up or connected is with the reverse-link audio stream disabled. The format of one embodiment for a Reverse Audio Sample Rate Packet is shown in FIG. 31. As shown in FIG. 31, this type of packet is structured to have Packet Length, Packet Type, hClient ID, Audio Sample Rate, Reserved 1, and CRC fields. This type of packet is generally identified as a Type 79 packet, and uses a pre-selected fixed length of 4 bytes.

23. Digital Content Protection Overhead Packets

This packet provides a structure, method, or means that allows a host and a client to exchange messages related to the digital content protection method being used. Presently two types of content protection are contemplated, Digital Transmission Content Protection (DTCP) and the High-bandwidth Digital Content Protection System (HDCP), with room reserved for future alternative protection scheme designations. The method being used is specified by a Content Protection Type parameter in this packet. The format of an embodiment of a Digital Content Protection Overhead Packet is shown in FIG. 32. As shown in FIG. 32, this type of packet is structured to have Packet Length, Packet Type, bClient ID, Content Protection Type, Content Protection Overhead Messages, and CRC fields. This type of packet is generally identified as a Type 80 packet.

24. Transparent Color Enable Packets

The Transparent Color Enable Packet is a structure, method, or means that is used to specify which color is transparent in a display and to enable or disable the use of a transparent color for displaying images. Displays that have this capability will report that capability in bit 4 of the Client Feature Capability field of the Client Capability Packet.
When a pixel with the value for transparent color is written to the bitmap, the color does not change from the previous value. The format of a Transparent Color Enable Packet is shown in FIG. 33. As shown in FIG. 33, in one embodiment this type of packet is structured to have Packet Length, Packet Type, hClient ID, Transparent Color Enable, Reserved 1, Alpha-Cursor Identifier, Data Format Descriptor, Transparent Pixel Value, and CRC fields. This type of packet is generally identified as a Type 81 packet in the 2-byte type field, and uses a pre-selected fixed length of 10 bytes.

25. Round Trip Delay Measurement Packets

The Round Trip Delay Measurement Packet provides a structure, method, or means that is used to measure the propagation delay from the host to a client (display) plus the delay from the client (display) back to the host. This measurement inherently includes the delays that exist in the line drivers and receivers and in the interconnect subsystem. This measurement is used to set the turn-around delay and reverse link rate divisor parameters in the Reverse Link Encapsulation Packet, described generally above. This packet is most useful when the MDDI link is running at the maximum speed intended for a particular application. The packet may be sent in Type 1 mode and at a lower data rate in order to increase the range of the round trip delay measurement. The MDDI_Stb signal behaves as though all-zero data is being sent during the following fields: both Guard Times, All Zero, and the Measurement Period. This causes MDDI_Stb to toggle at half the data rate so it can be used as a periodic clock in the client during the Measurement Period.

In one embodiment, a client generally indicates an ability to support the Round Trip Delay Measurement Packet through use of bit 18 of the Client Feature Capability Indicators field of the Client Capability Packet. It is recommended that all clients support round trip delay measurement, but it is possible for the host to know the worst-case round trip delay based on a maximum cable delay and on maximum driver and receiver delays. The host may also know the round-trip delay in advance for an MDDI link used in Internal Mode, since this is an aspect of the known design elements (conductor lengths, circuitry type, and features, and so forth) of the device in which the interface is being used.

The format of a Round Trip Delay Measurement Packet is shown in FIG. 34. As shown in FIG. 34, in one embodiment this type of packet is structured to have Packet Length, Packet Type, hClient ID, Parameter CRC, Guard Time 1, Measurement Period, All Zero, and Guard Time 2 fields. This type of packet is generally identified as a Type 82 packet, and uses a pre-selected fixed length of 159 bits.

The timing of events that take place during the Round Trip Delay Measurement Packet is illustrated in FIG. 35. In FIG. 35, the host transmits the Round Trip Delay Measurement Packet, shown by the presence of the Parameter CRC and Strobe Alignment fields followed by the All Zero 1 and Guard Time 1 fields. A delay 3502 occurs before the packet reaches the client display device or processing circuitry. As the client receives the packet, it transmits the 0xff, 0xff, and 30 bytes of 0x00 pattern as precisely as practical at the beginning of the Measurement Period as determined by the client. The actual time the client begins to transmit this sequence is delayed from the beginning of the Measurement Period from the point of view of the host.
The amount of this delay is substantially the time it takes for the packet to propagate through the line drivers and receivers and the interconnect subsystem (cables, conductors). A similar amount of delay 3504 is incurred for the pattern to propagate from the client back to the host.

In order to accurately determine the round trip delay time for signals traversing to and from the client, the host counts the number of forward link bit time periods occurring after the start of the Measurement Period until the beginning of the 0xff, 0xff, and 30 bytes of 0x00 sequence is detected upon arrival. This information is used to determine the amount of time for a round trip signal to pass from the host to the client and back again. Then, about one half of this amount is attributed to a delay created for the one-way passage of a signal to the client.

The host and client both drive the line to a logic-zero level during both guard times to keep the MDDI_Data lines in a defined state. The enable and disable times of the host and client during both guard times are such that the MDDI_Data signals are at a valid low level for any valid round-trip delay time.
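As a sketch of the host-side calculation just described, the helper below converts the counted number of forward-link bit periods into an estimated one-way delay. It is illustrative only and ignores implementation details such as clock granularity.

    /* Round-trip delay is the counted bit periods divided by the
     * forward-link bit rate; about half of it is attributed to the
     * one-way propagation delay from host to client.               */
    static double one_way_delay_s(unsigned bit_periods, double bit_rate_hz)
    {
        double round_trip_s = (double)bit_periods / bit_rate_hz;
        return round_trip_s / 2.0;
    }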
26. Forward Link Skew Calibration Packet

The Forward Link Skew Calibration Packet allows a client or display to calibrate itself for differences in the propagation delay of the MDDI_Data signals with respect to the MDDI_Stb signal. Without delay skew compensation, the maximum data rate is generally limited to account for potential worst-case variation in these delays. Generally, this packet is only sent when the forward link data rate is configured to a rate of around 50 Mbps or lower. After sending this packet to calibrate the display, the data rate may be stepped up above 50 Mbps. If the data rate is set too high during the skew calibration process, the display might synchronize to an alias of the bit period, which could cause the delay skew compensation setting to be off by more than one bit time, resulting in erroneous data clocking. The highest data rate type of interface or greatest possible Interface Type is selected prior to sending the Forward Link Skew Calibration Packet so that all existing data bits are calibrated.

One embodiment of the format of a Forward Link Skew Calibration Packet is shown in FIG. 56. As shown in FIG. 56, this type of packet is structured to have Packet Length (2 bytes), Packet Type, hClient ID, Parameter CRC, All Zero, Calibration Data Sequence, and CRC fields. This type of packet is generally identified as a Type 83 packet in the type field, and in one embodiment has a pre-selected length of 515 bytes.

Virtual Control Panel

The use of a Virtual Control Panel (VCP) allows a host to set certain user controls in a client. By allowing these parameters to be adjusted by the host, the user interface in the client can be simplified, because screens that allow a user to adjust parameters such as audio volume or display brightness can be generated by host software rather than by one or more microprocessors in the client. The host has the ability to read the parameter settings in the client and to determine the range of valid values for each control. The client generally has the capability to report back to the host which control parameters can be adjusted.

The control codes (VCP Codes) and associated data values generally specified are utilized to specify controls and settings in the client. The VCP Codes in the MDDI specification are expanded to 16 bits to preserve proper data field alignment in the packet definitions, and in the future to support supplementary values that are unique to this interface or future enhancements.

27. Request VCP Feature Packet

The Request VCP Feature Packet provides a means, mechanism, or method for the host to request the current setting of a specific control parameter or of all valid control parameters. Generally, a client responds to a Request VCP Feature Packet with the appropriate information in a VCP Feature Reply Packet. In one embodiment, the client indicates an ability to support the Request VCP Feature Packet using bit 13 of the Client Feature Capability Indicators field of the Client Capability Packet.

The format of the Request VCP Feature Packet in one embodiment is shown in FIG. 69. As seen in FIG. 69, this type of packet is structured to have Packet Length, Packet Type, hClient ID, MCCS VCP Code, and CRC fields. This type of packet is generally identified in one embodiment as a Type 128, which is indicated in the 2-byte type field. The packet length, which specifies the total number of bytes in the packet not including the packet length field, is typically fixed for this type of packet at a length of 8 bytes.

The hClient ID field is reserved for use as a Client ID in future implementations and is typically set to zero. The MCCS VCP Code field comprises 2 bytes of information that specify the MCCS VCP Control Code Parameter. A value in the range of 0 to 255 causes a VCP Feature Reply Packet to be returned with a single item in the VCP Feature Reply List corresponding to the specified MCCS code. An MCCS VCP Code of 65535 (0xffff) requests a VCP Feature Reply Packet with a VCP Feature Reply List containing a Feature Reply List Item for each control supported by the client. The values of 256 through 65534 for this field are reserved for future use and are presently not in use.

28. VCP Feature Reply Packet

The VCP Feature Reply Packet provides a means, mechanism, or method for a client to respond to a host request with the current setting of a specific control parameter or of all valid control parameters. Generally, a client sends the VCP Feature Reply Packet in response to a Request VCP Feature Packet. This packet is useful to determine the current setting of a specific parameter, to determine the valid range for a specific control, to determine whether a specific control is supported by the client, or to determine the set of controls that are supported by the client. If a Request VCP Feature Packet is sent that references a specific control that is not implemented in the client, then a VCP Feature Reply Packet is returned with a single VCP Feature Reply List item corresponding to the unimplemented control that contains the appropriate error code. In one embodiment, the client indicates an ability to support the VCP Feature Reply Packet using bit 13 of the Client Feature Capability field of the Client Capability Packet.

The format of the VCP Feature Reply Packet in one embodiment is shown in FIG. 70. As seen in FIG. 70, this type of packet is structured to have Packet Length, Packet Type, cClient ID, MCCS Version, Reply Sequence Number, VCP Feature Reply List, and CRC fields. This type of packet is generally identified in one embodiment as a Type 129, as indicated in the 2-byte type field.

The cClient ID field contains information reserved for a Client ID. This field is reserved for future use and is generally set to zero.
The MCCS Version field contains 2 bytes of information that specify the version of the VESA MCCS Specification implemented by the client.

The 2-byte Reply Sequence Number field contains information or data that specifies the sequence number of the VCP Feature Reply Packets returned by the client. The client returns one or more VCP Feature Reply Packets in response to a Request VCP Feature Packet with an MCCS Control Code value of 65535. The client may spread the feature reply list over multiple VCP Feature Reply Packets. In this case, the client assigns a sequence number to each successive packet, and the sequence numbers of the VCP Feature Reply Packets sent in response to a single Request VCP Feature Packet start at zero and increment by one. The last VCP Feature List Item in the last VCP Feature Reply Packet should contain an MCCS VCP Control Code value equal to 0xffff to identify that the packet is the last one and contains the highest sequence number of the group of packets returned. If only one VCP Feature Reply Packet is sent in response to a Request VCP Feature Packet, then the Reply Sequence Number in that single packet is zero and the VCP Feature Reply List contains a record having an MCCS VCP Control Code equal to 0xffff.

The Number of Features in List field contains 2 bytes that specify the number of VCP Feature List Items that are in the VCP Feature Reply List in this packet, while the VCP Feature Reply List field is a group of bytes that contain one or more VCP Feature Reply List Items. The format of a single VCP Feature Reply List Item in one embodiment is shown in FIG. 71.

As shown in FIG. 71, each VCP Feature Reply List Item is 12 bytes in length, and comprises the MCCS VCP Code, Result Code, Maximum Value, and Present Value fields. The 2-byte MCCS VCP Code field contains data or information that specifies the MCCS VCP Control Code Parameter associated with this list item. Only the Control Code values defined in the VESA MCCS Specification version 2 and later are considered valid for this embodiment. The 2-byte Result Code field contains information that specifies an error code related to the request for information regarding the specified MCCS VCP Control. A value of '0' in this field means there is no error, while a value of '1' means the specified control is not implemented in the client. Further values for this field of 2 through 65535 are currently reserved for future use and the implementation of other applications contemplated by the art, but are not to be used for now.

The 4-byte Maximum Value field contains a 32-bit unsigned integer that specifies the largest possible value to which the specified MCCS Control can be set. If the requested control is not implemented in the client, this value is set to zero. If the value returned is less than 32 bits (4 bytes) in length, then the value is cast into a 32-bit integer leaving the most significant (unused) bytes set to zero. The 4-byte Present Value field contains information that specifies the present value of the specified MCCS VCP Continuous (C) or Non-Continuous (NC) control. If the requested control is not implemented in the client, or if the control is implemented but is a Table (T) data type, then this value is set to zero. If the value returned is less than 32 bits (4 bytes) in length per the VESA MCCS Specification, then the value is cast into a 32-bit integer leaving the most significant (unused) bytes set to zero.
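For illustration, one 12-byte VCP Feature Reply List Item of FIG. 71 might be parsed as follows, with all fields taken least significant byte first in keeping with the transmission order defined earlier. The structure and function names are hypothetical.

    #include <stdint.h>

    /* One 12-byte VCP Feature Reply List Item per FIG. 71. */
    typedef struct {
        uint16_t mccs_vcp_code;
        uint16_t result_code;    /* 0 = no error, 1 = not implemented */
        uint32_t max_value;
        uint32_t present_value;
    } vcp_item_t;

    static vcp_item_t parse_vcp_item(const uint8_t *p)
    {
        vcp_item_t it;
        it.mccs_vcp_code = (uint16_t)(p[0] | (p[1] << 8));
        it.result_code   = (uint16_t)(p[2] | (p[3] << 8));
        it.max_value     = (uint32_t)p[4] | ((uint32_t)p[5] << 8)
                         | ((uint32_t)p[6] << 16) | ((uint32_t)p[7] << 24);
        it.present_value = (uint32_t)p[8] | ((uint32_t)p[9] << 8)
                         | ((uint32_t)p[10] << 16) | ((uint32_t)p[11] << 24);
        return it;
    }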
29. Set VCP Feature Packet The Set VCP Feature Packet provides a means, mechanism, or method for a host to set VCP control values for both continuous and non-continuous controls in a client. In one embodiment, the client indicates the ability to support the Set VCP Feature Packet using bit 13 of the Client Feature Capability field of the Client Capability Packet. The format of the Set VCP Feature Packet in one embodiment is shown in FIG. 72. As seen in FIG. 72, this type of packet is structured to have Packet Length, Packet Type, hClient ID, MCCS VCP Code, Number of Values in List, Control Value List, and CRC fields. This type of packet is generally identified as a Type 130, as indicated in the 2-byte type field, and is 20 bytes long exclusive of the Packet Length field. The hClient ID field again uses a 2-byte value to specify or act as a Client ID. This field is reserved for future use and is currently set to zero. The MCCS VCP Code field uses 2 bytes of information or values to specify the MCCS VCP Control Code Parameter to be adjusted. The 2-byte Number of Values in List field contains information or values that specify the number of 16-bit values that exist in the Control Value List. The Control Value List will usually contain one item unless the MCCS Control Code relates to a table in the client. In the case of non-table-related controls, the Control Value List will contain a value that specifies the new value to be written to the control parameter specified by the MCCS VCP Code field. For table-related controls, the format of the data in the Control Value List is specified by the parameter description of the specified MCCS VCP Code. If the list contains values that are larger than one byte, then the least-significant byte is transmitted first, consistent with the method defined elsewhere. Finally, the 2-byte CRC field contains a 16-bit CRC of all bytes in the packet including the Packet Length. 30. Request Valid Parameter Packet The Request Valid Parameter Packet is used as a means or structure useful to request that a client return a Valid Parameter Reply Packet containing a list of parameters supported by the specified non-continuous (NC) or table (T) control. This packet should only specify non-continuous controls or controls that relate to a table in the client, and should not specify an MCCS VCP Code value of 65535 (0xffff) to request all controls. If a non-supported or invalid MCCS VCP Code is specified, then an appropriate error value is returned in the Valid Parameter Reply Packet. In one embodiment, the client indicates an ability to support the Request Valid Parameter Packet using bit 13 of the Client Feature Capability field of the Client Capability Packet. The format of the Request Valid Parameter Packet in one embodiment is shown in FIG. 73. As seen in FIG. 73, this type of packet is structured to have Packet Length, Packet Type, hClient ID, MCCS VCP Code, and CRC fields. This type of packet is generally identified in one embodiment as a Type 131, as indicated in the 2-byte type field. The packet length, as indicated in the 2-byte Packet Length field, is generally set to 8, the total number of bytes in the packet not including the packet length field. The hClient ID again specifies the Client ID, but is currently reserved for future use, as would be apparent to one skilled in the art, and is set to zero. The 2-byte MCCS VCP Code field contains a value that specifies the non-continuous MCCS VCP Control Code Parameter to be queried. The value in this field should correspond to a non-continuous control that is implemented in the client. The values 256 through 65535 (0xffff) are typically reserved or considered invalid, and are treated as an unimplemented control in the error response.
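As a sketch of the host side of section 29, the following hypothetical builder fills in a Set VCP Feature Packet for a single non-table control value. The field order and the fixed 20-byte length (exclusive of the Packet Length field) follow the description above, which implies a fixed-size Control Value List area whose unused slots are zeroed here; that padding, the helper names, and the CRC routine are assumptions.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical CRC routine, assumed to be defined elsewhere. */
uint16_t mddi_crc16(const uint8_t *data, size_t len);

static void put_le16(uint8_t *p, uint16_t v) {
    p[0] = (uint8_t)(v & 0xff);   /* least-significant byte first */
    p[1] = (uint8_t)(v >> 8);
}

/* Build a Set VCP Feature Packet (Type 130) carrying one 16-bit
 * control value; buf must hold at least 22 bytes. Returns the
 * total packet size including the Packet Length field. */
size_t build_set_vcp_feature(uint8_t *buf, uint16_t vcp_code, uint16_t value) {
    put_le16(buf + 0, 20);        /* Packet Length: fixed at 20 per the text */
    put_le16(buf + 2, 130);       /* Packet Type: Set VCP Feature            */
    put_le16(buf + 4, 0);         /* hClient ID: reserved, zero              */
    put_le16(buf + 6, vcp_code);  /* MCCS VCP Code to adjust                 */
    put_le16(buf + 8, 1);         /* Number of Values in List                */
    put_le16(buf + 10, value);    /* Control Value List: first (only) item   */
    for (size_t i = 12; i < 20; i += 2)
        put_le16(buf + i, 0);     /* unused list slots, zeroed (assumption)  */
    put_le16(buf + 20, mddi_crc16(buf, 20)); /* CRC incl. the Packet Length  */
    return 22;
}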
31. Valid Parameter Reply Packet A Valid Parameter Reply Packet is sent in response to a Request Valid Parameter Packet. It is used as a means, method, or structure to identify the valid settings for a non-continuous MCCS VCP control or a control that returns the contents of a table. If the control relates to a table in the client, then the VCP Parameter Reply List simply contains the specific list of sequential table values that were requested. If the contents of the table cannot fit into a single Valid Parameter Reply Packet, then multiple packets with sequential Reply Sequence Numbers can be sent by the client. In one embodiment, a client indicates an ability to support a Valid Parameter Reply Packet using bit 13 of the Client Feature Capability field of the Client Capability Packet. A host may request the contents of a table in the following manner: the host sends a Set VCP Feature Packet containing the necessary or desired parameters, such as the read/write parameter, LUT offset, and RGB selection; then a Request Valid Parameter Packet that specifies the desired control is sent by the host; then the client returns one or more Valid Parameter Reply Packets containing the table data. This sequence of operations performs a function similar to the table reading functions described in the MCCS operation model. If a specific client parameter is not supported by the client, then in one embodiment the corresponding field of this packet will contain a value of 255. For parameters that are used in the client, the corresponding field should contain a value of the parameter in the client. The format of the Valid Parameter Reply Packet for one embodiment is shown in FIG. 74. As seen in FIG. 74, this type of packet is structured to have Packet Length, Packet Type, cClient ID, MCCS VCP Code, Response Code, Reply Sequence Number, Number of Values in List, VCP Parameter Reply List, and CRC fields. This type of packet is generally identified for one embodiment as a Type 132, as indicated in the 2-byte type field. The cClient ID field is reserved for the future Client ID, as is known from the above discussions, while the 2-byte MCCS VCP Code field contains a value that specifies a non-continuous MCCS VCP Control Code Parameter that is described by this packet. If an invalid MCCS VCP Control Code is specified by a Request Valid Parameter Packet, then the same invalid parameter value will be specified in this field with the appropriate value in the Response Code field. If the MCCS Control Code is invalid, then the VCP Parameter Reply List will have zero length. The Response Code field contains 2 bytes of information or values that specify the nature of the response related to the request for information regarding the specified MCCS VCP Control. If the value in this field is equal to 0, then no error is present and this is the last Valid Parameter Reply Packet in the sequence, it having the highest Reply Sequence Number. If the value in this field is equal to 1, then no error is present, but other Valid Parameter Reply Packets will be sent that have higher sequence numbers. If the value in this field is equal to 2, then the specified control is not implemented in the client. If the value in this field is equal to 3, then the specified control is not a non-continuous control (it is a continuous control that always has a valid set of all values from zero to its maximum value). Values for this field equal to 4 through 65535 are reserved for future use and generally are not to be used.
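The Response Code values just enumerated, restated as a hypothetical C enumeration:

/* Response Code values for the Valid Parameter Reply Packet,
 * per the enumeration above. */
typedef enum {
    VP_REPLY_OK_LAST       = 0, /* no error; last packet in the sequence        */
    VP_REPLY_OK_MORE       = 1, /* no error; packets with higher numbers follow */
    VP_REPLY_NOT_SUPPORTED = 2, /* control not implemented in the client        */
    VP_REPLY_NOT_NC        = 3  /* control is continuous, not non-continuous    */
    /* values 4 through 65535 are reserved for future use */
} ValidParamResponseCode;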
The 2-byte Reply Sequence Number field specifies the sequence number of the Valid Parameter Reply Packets returned by the client. The client returns one or more Valid Parameter Reply Packets in response to a Request Valid Parameter Packet. The client may spread the VCP Parameter Reply List over multiple Valid Parameter Reply Packets. In this latter case, the client will assign a sequence number to each successive packet, and set the Response Code to 1 in all but the last packet in the sequence. The last Valid Parameter Reply Packet in the sequence will have the highest Reply Sequence Number, and its Response Code contains a value of 0. The 2-byte Number of Values in List field specifies the number of 16-bit values that exist in the VCP Parameter Reply List. If the Response Code is not equal to zero, then the Number of Values in List parameter is zero. The VCP Parameter Reply List field contains a list of 0 to 32760 2-byte values that indicate the set of valid values for the non-continuous control specified by the MCCS Control Code field. The definitions of the non-continuous control codes are specified in the VESA MCCS Specification. Finally, in this embodiment, the CRC field contains a 16-bit CRC of all bytes in the packet including the Packet Length. Scaled Video Stream Images The MDDI or protocol mechanism, structure, means, or method provides support for scaled video stream images that allow the host to send an image to the client that is scaled larger or smaller than the original image, and the scaled image is copied to a main image buffer. An overview of the Scaled Video Stream functionality and associated protocol support is provided elsewhere. An ability to support scaled video streams is defined by or within the Scaled Video Stream Capability Packet, which is sent in response to a Request Specific Status Packet. 32. Scaled Video Stream Capability Packet The Scaled Video Stream Capability Packet defines the characteristics of the scaled video stream source image in or used by a client. The format of the Scaled Video Stream Capability Packet is shown generally in FIG. 75. As seen in FIG. 75, in one embodiment, a Scaled Video Stream Capability Packet is structured to have Packet Length, Packet Type, cClient ID, Max Number of Streams, Source Max X Size, Source Max Y Size, RGB Capability, Monochrome Capability, Reserved 1, Y Cb Cr Capability, Capability Bits, Reserved 2, and CRC fields. The packet length, in one embodiment, is selected to be a fixed 20 bytes, as shown in the length field, including the 2-byte cClient ID field, which is reserved for use as a Client ID and otherwise set to zero, and the CRC field. In one embodiment, the client indicates an ability to support the Scaled Video Stream Capability Packet using a parameter value of 143 in the Valid Parameter Reply List of the Valid Status Reply List Packet. The 2-byte Maximum Number of Streams field contains a value to identify the maximum number of simultaneous scaled video streams that may be allocated at one time. In one embodiment, a client should deny a request to allocate a scaled video stream if the maximum number of scaled video streams is already allocated. If less than the maximum number of scaled video streams is allocated, the client may also deny an allocation request based on other resource limitations in the client.
The Source Maximum X Size and Source Maximum Y Size fields (2 bytes each) specify values for the maximum width and height, respectively, of the scaled video stream source image, expressed as a number of pixels. The RGB Capability field uses values to specify the number of bits of resolution that can be displayed in RGB format. If the scaled video stream cannot use the RGB format, then this value is set equal to zero. The RGB Capability word is composed of three separate unsigned values, with: Bits 3 through 0 defining the maximum number of bits of blue (the blue intensity) in each pixel; Bits 7 through 4 defining the maximum number of bits of green (the green intensity) in each pixel; and Bits 11 through 8 defining the maximum number of bits of red (the red intensity) in each pixel; while Bits 15 through 12 are reserved for future use in future capability definitions and are generally set to zero. The 1-byte Monochrome Capability field contains a value that specifies the number of bits of resolution that can be displayed in monochrome format. If the scaled video stream cannot use the monochrome format, then this value is set to zero. Bits 7 through 4 are reserved for future use and should, therefore, be set to zero ('0') for current applications, although this may change over time, as will be appreciated by those skilled in the art. Bits 3 through 0 define the maximum number of bits of grayscale that can exist in each pixel. These four bits make it possible to specify that each pixel consists of 1 to 15 bits. If the value is zero, then the monochrome format is not supported by the scaled video stream. The Reserved 1 field (here 1 byte) is reserved for future use in providing values related to the Scaled Video Stream Packet information or data. Therefore, currently, all bits in this field are set to a logic '0'. One purpose of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address. The 2-byte Y Cb Cr Capability field contains values that specify the number of bits of resolution that can be displayed in Y Cb Cr format. If the scaled video stream cannot use the Y Cb Cr format, then this value is zero. The Y Cb Cr Capability word is composed of three separate unsigned values, with: Bits 3 through 0 defining the maximum number of bits that specify the Cr sample; Bits 7 through 4 defining the maximum number of bits that specify the Cb sample; and Bits 11 through 8 defining the maximum number of bits that specify the Y sample; with Bits 15 through 12 being reserved for future use and generally set to zero. The 1-byte Capability Bits field contains a set of flags that specify capabilities associated with the scaled video stream. The flags are defined as follows: Bit 0 indicates that pixel data in the Scaled Video Stream Packet can be in a packed format (an example of packed and byte-aligned pixel data is shown earlier in FIG. 12); Bit 1 is reserved for future use and is generally set to zero; Bit 2 is also reserved for future use and is set to zero; Bit 3 indicates that the scaled video stream can be specified in the color map data format. The same color map table is used for the scaled video streams as is used for the main image buffer and the alpha-cursor image planes. The color map is configured using the Color Map Packet described elsewhere. Bits 7 through 4 are reserved for future use and are generally set to zero.
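The RGB Capability and Y Cb Cr Capability words above share a common layout of three 4-bit component depths; a hypothetical decoding sketch:

#include <stdint.h>

/* Unpack a capability word laid out as described above: bits 3:0,
 * 7:4, and 11:8 hold per-component bit depths, and bits 15:12 are
 * reserved. For the RGB word the components are blue, green, red;
 * for the Y Cb Cr word they are Cr, Cb, Y. Names are illustrative. */
typedef struct { uint8_t c0, c1, c2; } CapabilityDepths;

static CapabilityDepths decode_capability_word(uint16_t word) {
    CapabilityDepths d;
    d.c0 = (uint8_t)(word & 0x0f);         /* bits 3:0  (blue or Cr)  */
    d.c1 = (uint8_t)((word >> 4) & 0x0f);  /* bits 7:4  (green or Cb) */
    d.c2 = (uint8_t)((word >> 8) & 0x0f);  /* bits 11:8 (red or Y)    */
    return d;
}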
The Reserved 2 field (here 1 byte) is reserved for future use in providing values related to the Scaled Video Stream Packet information or data. Therefore, currently, all bits in this field are set to a logic '0'. One purpose of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address. 33. Scaled Video Stream Setup Packet The Scaled Video Stream Setup Packet is used to define the parameters of the scaled video stream, and the client uses the information to allocate internal storage for buffering and scaling of the image. A stream may be de-allocated by sending this packet with the X Image Size and Y Image Size fields equal to zero. Scaled video streams that have been de-allocated may be reallocated later with the same or different stream parameters. In one embodiment, a client indicates an ability to support the Scaled Video Stream Setup Packet using a parameter value of 143 in the Valid Parameter Reply List of the Valid Status Reply List Packet, and by using a non-zero value in the Maximum Number of Streams field of the Scaled Video Stream Capability Packet. The format of the Scaled Video Stream Setup Packet is shown generally in FIG. 76. As seen in FIG. 76, in one embodiment, a Scaled Video Stream Setup Packet is structured to have Packet Length, Packet Type, hClient ID, Stream ID, Video Data Format Descriptor, Pixel Data Attributes, X Left Edge, Y Top Edge, X Right Edge, Y Bottom Edge, X Image Size, Y Image Size, and CRC fields. The 2-byte Packet Length field specifies the total number of bytes in the packet not including the packet length field. In one embodiment, this packet length is fixed at 24. The 2-byte Packet Type field employs a value of 136 to identify the packet as a Scaled Video Stream Setup Packet. The 2-byte hClient ID field is reserved for future use as a Client ID, and is generally set to an all-bits-at-logic-zero value for the moment, or until a protocol user determines what ID values are to be used, as would be known. The Stream ID field uses 2 bytes to specify a unique identifier for the Stream ID. This value is assigned by the host and ranges in value from zero to the maximum Stream ID value specified in the Client Capability Packet. The host must manage the use of Stream ID values carefully to ensure that each active stream is assigned a unique value, and that streams that are no longer active are de-allocated or reassigned. In one embodiment, the Video Data Format Descriptor field uses 2 bytes to specify the format of each pixel in the Pixel Data in the present stream in the present packet. The pixel data format should comply with at least one of the valid formats for the alpha-cursor image plane as defined in the Alpha-Cursor Image Capability Packet. The Video Data Format Descriptor defines the pixel format for the current packet only and does not imply that a constant format will continue to be used for the lifetime of a particular video stream. FIG. 11 illustrates an embodiment of how the Video Data Format Descriptor is coded, as discussed above for other packets. In one embodiment, the 2-byte Pixel Data Attributes field has values that are interpreted as follows, with Bits 1 and 0 selecting the display where the pixel data is to be routed.
For bit values of '11' or '00', pixel data is displayed to or for both eyes; for bit values '10', pixel data is routed only to the left eye; and for bit values '01', pixel data is routed only to the right eye. Bit 2 indicates whether or not the Pixel Data is in interlace format. When Bit 2 is 0, the Pixel Data is in the standard progressive format. The row number (pixel Y coordinate) is incremented by 1 when advancing from one row to the next. When Bit 2 is 1, the Pixel Data is in interlace format. The row number (pixel Y coordinate) is incremented by 2 when advancing from one row to the next. Bit 3 indicates whether or not the Pixel Data is in alternate pixel format. This is similar to the standard interlace mode enabled by Bit 2, but the interlacing is vertical instead of horizontal. When Bit 3 is 0, the Pixel Data is in the standard progressive format. The column number (pixel X coordinate) is incremented by 1 as each successive pixel is received. When Bit 3 is 1, the Pixel Data is in alternate pixel format. The column number (pixel X coordinate) is incremented by 2 as each pixel is received. Bit 4 indicates whether the Pixel Data is related to the display or the camera. When Bit 4 is 0, the Pixel Data is to or from the display frame buffer. When Bit 4 is 1, the Pixel Data is to or from the camera. Bit 5 is reserved for future use and is, therefore, generally set to zero. Bits 7 and 6 are the Display Update Bits that specify the frame buffer where the pixel data is to be written. The effects of the Frame Update Bits are described in more detail elsewhere. When Bits [7:6] are '01', the Pixel Data is written to the offline image buffer. When Bits [7:6] are '00', the Pixel Data is written to the image buffer used to refresh the display. When Bits [7:6] are '11', the Pixel Data is written to all image buffers. If Bits [7:6] are '10', this is treated as an invalid value; these bits are currently reserved for future use, and in this situation the Pixel Data would be ignored and not written to any of the image buffers. Bits 8 through 15 are reserved for future use and are generally set to a logic-zero level.
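Collecting the Pixel Data Attributes bit assignments just described into one hypothetical C decoder (names are illustrative only):

#include <stdint.h>
#include <stdbool.h>

/* Decoded view of the 2-byte Pixel Data Attributes field. */
typedef struct {
    uint8_t eye_routing;     /* bits 1:0 — '11'/'00' both eyes, '10' left, '01' right      */
    bool    interlaced;      /* bit 2 — row number advances by 2                            */
    bool    alternate_pixel; /* bit 3 — column number advances by 2                         */
    bool    camera_data;     /* bit 4 — data is to/from the camera, not the display         */
    uint8_t display_update;  /* bits 7:6 — '01' offline, '00' refresh, '11' all, '10' invalid */
} PixelDataAttributes;

static PixelDataAttributes decode_pixel_attributes(uint16_t attr) {
    PixelDataAttributes a;
    a.eye_routing     = (uint8_t)(attr & 0x3);
    a.interlaced      = (attr >> 2) & 0x1;
    a.alternate_pixel = (attr >> 3) & 0x1;
    a.camera_data     = (attr >> 4) & 0x1;
    a.display_update  = (uint8_t)((attr >> 6) & 0x3);
    return a;
}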
34. Scaled Video Stream Acknowledgement Packet The Scaled Video Stream Acknowledgement Packet allows a client to acknowledge the receipt of a Scaled Video Stream Setup Packet. The client can indicate an ability to support the Scaled Video Stream Acknowledgement Packet via a parameter value of 143 in the Valid Parameter Reply List of the Valid Status Reply List Packet and via a non-zero value in the Maximum Number of Streams field of the Scaled Video Stream Capability Packet. The format of the Scaled Video Stream Acknowledgement Packet is shown generally in FIG. 77. As seen in FIG. 77, in one embodiment, a Scaled Video Stream Acknowledgement Packet is structured to have Packet Length, Packet Type, cClient ID, Stream ID, Ack Code, and CRC fields. The 2-byte Packet Length field is used to specify the total number of bytes, excluding the packet length field, with a value of 10 for this packet type, while a Packet Type of 137 identifies a packet as a Scaled Video Stream Acknowledgement Packet. The 2-byte cClient ID field is reserved for future use for the Client ID, and is generally set to zero. The 2-byte Stream ID field specifies a unique identifier for the Stream ID. This is the same value assigned by the host in the Scaled Video Stream Setup Packet. The 2-byte Ack Code field provides values containing a code that describes the outcome of an attempt to update the specified scaled video stream. In one embodiment, the codes are defined as follows:
0 - the stream allocation attempt was successful.
1 - the stream de-allocation attempt was successful.
2 - invalid attempt to allocate a stream ID that has already been allocated.
3 - invalid attempt to de-allocate a stream ID that is already de-allocated.
4 - the client does not support scaled video streams.
5 - the stream parameters are inconsistent with the capability of the client.
6 - stream ID value larger than the maximum value allowed by the client.
7 - insufficient resources available in the client to allocate the specified stream.
The 2-byte CRC field contains the CRC of all bytes in the packet including the Packet Length. 35. Scaled Video Stream Packet The Scaled Video Stream Packet is used to transmit the pixel data associated with a specific scaled video stream. The size of the region referenced by this packet is defined by the Scaled Video Stream Setup Packet. The client can indicate an ability to support the Scaled Video Stream Packet via a parameter value of 143 in the Valid Parameter Reply List of the Valid Status Reply List Packet and via a successful scaled video stream allocation response in the Ack Code field of the Scaled Video Stream Acknowledgement Packet. The format of one embodiment of the Scaled Video Stream Packet is shown generally in FIG. 78. As seen in FIG. 78, a Scaled Video Stream Packet is structured to have Packet Length, Packet Type, hClient ID, Stream ID, Parameter CRC, Pixel Count, Pixel Data, and Pixel Data CRC fields. The 2-byte Packet Type field uses a value of 18 to identify a packet as a Scaled Video Stream Packet. The hClient ID field is reserved for the Client ID and is generally set to zero. As before, the 2-byte Stream ID field specifies a unique identifier for the Stream ID. This value is specified by the host in the Scaled Video Stream Setup Packet and confirmed in the Scaled Video Stream Acknowledgement Packet. The 2-byte Pixel Count field specifies the number of pixels in the Pixel Data field below. The 2-byte Parameter CRC field has the CRC of all bytes from the Packet Length to the Pixel Count. If this CRC fails to check, then the entire packet is discarded. The Pixel Data field contains the raw video information that is to be scaled and then displayed. Data is formatted in the manner described by the Video Data Format Descriptor field. The data is transmitted a row at a time as defined previously. The 2-byte Pixel Data CRC field contains a CRC of only the Pixel Data. If this CRC fails to check, then the Pixel Data can still be used, but the CRC error count is incremented. 36. Request Specific Status Packet The Request Specific Status Packet provides a means, mechanism, or method for a host to request that the client send a capability or status packet back to the host as specified in this packet. The client returns the packet of the specified type in the next Reverse Link Encapsulation Packet. The client will generally set bit 17 in the Client Feature Capability field of the Client Capability Packet if the client has the capability to respond to the Request Specific Status Packet. A convenient method for the host to use to determine all of the types of status packets to which a client can respond is to use the Valid Status Reply List Packet described elsewhere. The client can indicate an ability to respond with the Valid Status Reply List Packet using bit 21 of the Client Feature Capability field of the Client Capability Packet.
The format of one embodiment of a Request Specific Status Packet is shown generally in FIG. 79. As seen in FIG. 79, a Request Specific Status Packet is structured to have Packet Length, Packet Type, hClient ID, Status Packet ID, and CRC fields. The Packet Length field specifies the total number of bytes in the packet not including the packet length field, and is generally fixed at a value of 10 for this packet type. A Packet Type of 138 identifies the packet as a Request Specific Status Packet. The hClient ID field (2 bytes) is reserved for future use for a Client ID and is set to zero for now, while a 2-byte Status Packet ID field specifies the type of capability or status packet that the client is to send to the host. Typical packet types are:
66 - Client Capability Packet is sent by the client.
133 - Alpha-Cursor Image Capability Packet is sent by the client.
139 - Valid Status Reply List Packet is sent, which identifies the exact types of capability and status packets that the client can send.
140 - Packet Processing Delay Parameters Packet is sent by the client.
141 - Personal Display Capability Packet is sent by the client.
142 - Client Error Report Packet is sent by the client.
143 - Scaled Video Stream Capability Packet is sent by the client.
144 - Client Identification Packet is sent by the client.
Packet Types 56 through 63 can be used for manufacturer-specific capability and status identifiers. The CRC field again contains a CRC of all bytes in the packet including the Packet Length. 37. Valid Status Reply List Packet The Valid Status Reply List Packet provides the host with a structure, means, or method to obtain a list of the status and capability packets to which the client has the capability to respond. A client can indicate an ability to support the Valid Status Reply List Packet using bit 21 of the Client Feature Capability field of the Client Capability Packet. The format of one embodiment of a Valid Status Reply List Packet is shown generally in FIG. 80. As seen in FIG. 80, a Valid Status Reply List Packet is structured to have Packet Length, Packet Type, cClient ID, Number of Values in List, Valid Parameter Reply List, and CRC fields. The packet length for this type of packet is generally fixed at a value of 10, and a type value of 139 identifies the packet as a Valid Status Reply List Packet. The cClient ID field is reserved for future use as the Client ID and is generally set to zero. The 2-byte Number of Values in List field specifies the number of items in the following Valid Parameter Reply List. The Valid Parameter Reply List field contains a list of 2-byte parameters that specify the types of capability or status packets that the client can send to the host. If the client has indicated that it can respond to the Request Specific Status Packet (using bit 21 of the Client Feature Capability field in the Client Capability Packet), then it is capable of sending at least the Client Capability Packet (Packet Type = 66) and the Valid Status Reply List Packet (Packet Type = 139).
The Packet Types that can be sent by the client and may be included in this list, along with their respective assignments for purposes of the one embodiment, are:
66 - Client Capability Packet.
133 - Alpha-Cursor Image Capability Packet.
139 - Valid Status Reply List Packet, which identifies the exact types of capability and status packets that the client can send.
140 - Packet Processing Delay Parameters Packet.
141 - Personal Display Capability Packet.
142 - Client Error Report Packet.
143 - Scaled Video Stream Capability Packet.
144 - Client Identification Packet.
145 - Alternate Display Capability Packet.
Packet Types 56 through 63 can be used for manufacturer-specific capability and status identifiers. The CRC field contains a CRC of all bytes in the packet including the Packet Length. 38. Packet Processing Delay Parameters Packet The Packet Processing Delay Parameters Packet provides a set of parameters to allow the host to compute the time required to complete the processing associated with the reception of a specific packet type. Some commands sent by the host cannot be completed by the client in zero time. The host may poll the status bits in the Client Request and Status Packet to determine if certain functions have been completed by the client, or the host may compute the completion time using the parameters returned by the client in the Packet Processing Delay Parameters Packet. The client can indicate an ability to support the Packet Processing Delay Parameters Packet using a parameter value of 140 in the Valid Parameter Reply List of the Valid Status Reply List Packet. The format of one embodiment of a Packet Processing Delay Parameters Packet is shown generally in FIG. 81A. As seen in FIG. 81A, a Packet Processing Delay Parameters Packet is structured to have Packet Length, Packet Type, cClient ID, Number of List Items, Delay Parameters List, and CRC fields. The packet length for this type of packet is generally fixed at a value of 10, and a type value of 140 identifies the packet as a Packet Processing Delay Parameters Packet. The cClient ID field is reserved for future use as the Client ID and is generally set to zero. The 2-byte Number of List Items field specifies the number of items in the following Delay Parameters List. The Delay Parameters List field is a list containing one or more Delay Parameters List items. The format for one embodiment of a single Delay Parameters List item is shown in FIG. 81B, where Packet Type for Delay, Pixel Delay, Horizontal Pixel Delay, Vertical Pixel Delay, and Fixed Delay fields are shown. Each Delay Parameters List Item is generally restricted to be 6 bytes in length, and is further defined as follows. The 2-byte Packet Type for Delay field specifies the Packet Type for which the following delay parameters apply. The Pixel Delay field (1 byte) comprises an index to a delay value table. The value read from the table is multiplied by the total number of pixels in the destination field of the packet. The total number of pixels is the width times the height of the destination area of the bitmap referenced by the packet. The 1-byte Horizontal Pixel Delay field contains a value that is an index to a delay value table (same table as DPVL). The value read from the table is multiplied by the width (in pixels) of the destination field of the packet. The 1-byte Vertical Pixel Delay field contains a value that is an index to a delay value table (generally the same table as DPVL). The value read from the table is multiplied by the height (in pixels) of the destination field of the packet. The Fixed Delay field uses 1 byte as an index to a delay value table (same table as DPVL). The value read from the table is a fixed delay parameter that represents a time required to process the packet that is unrelated to any parameter values specified in the packet. The total delay, or packet processing completion time delay, is determined according to the relationship:
Delay = PacketProcessingDelay(PixelDelay) × TotalPixels + PacketProcessingDelay(HorizontalPixelDelay) × Width + PacketProcessingDelay(VerticalPixelDelay) × Height + PacketProcessingDelay(FixedDelay)
where PacketProcessingDelay(x) denotes the delay value read from the table at index x. For some packets, the Total Pixels, Width, or Height do not apply because those parameters are not referenced in the corresponding packet. In those cases, the corresponding Pixel Delay parameter is generally set to zero.
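A C sketch of that computation; the delay value table and its units are assumptions, since the document defines the indices but not the table contents:

#include <stdint.h>

/* Hypothetical delay value table indexed by the 1-byte Pixel Delay,
 * Horizontal/Vertical Pixel Delay, and Fixed Delay fields. */
extern const uint32_t delay_value_table[256];

/* Total packet processing delay per the relationship above. */
static uint32_t packet_processing_delay(uint8_t pixel_delay,
                                        uint8_t h_pixel_delay,
                                        uint8_t v_pixel_delay,
                                        uint8_t fixed_delay,
                                        uint32_t width, uint32_t height) {
    uint32_t total_pixels = width * height; /* destination area of the bitmap */
    return delay_value_table[pixel_delay]   * total_pixels
         + delay_value_table[h_pixel_delay] * width
         + delay_value_table[v_pixel_delay] * height
         + delay_value_table[fixed_delay];
}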
39. Personal Display Capability Packet The Personal Display Capability Packet provides a set of parameters that describe the capabilities of a personal display device, such as a head-mounted display or display glasses. This enables the host to customize the display information according to the specific capabilities of a client. A client, on the other hand, indicates an ability to send the Personal Display Capability Packet by using a corresponding parameter in the Valid Parameter Reply List of the Valid Status Reply List Packet. The format of one embodiment of a Personal Display Capability Packet is shown generally in FIG. 82. As seen in FIG. 82, a Personal Display Capability Packet is structured to have Packet Length, Packet Type, cClient ID, Sub-Pixel Layout, Pixel Shape, Horizontal Field of View, Vertical Field of View, Visual Axis Crossing, Left/Right Image Overlap, See Through, Maximum Brightness, Optical Capability, Minimum IPD, Maximum IPD, Points of Field Curvature List, and CRC fields. In one embodiment, the Packet Length field value is fixed at 68. A Packet Type value of 141 identifies a packet as a Personal Display Capability Packet. The cClient ID field is reserved for future use and is generally set to zero for now. The Sub-Pixel Layout field specifies the physical layout of a sub-pixel from top to bottom and left to right, using values of: 0 to indicate that a sub-pixel layout is not defined; 1 to indicate red, green, blue stripe; 2 to indicate blue, green, red stripe; 3 to indicate a quad-pixel, having a 2-by-2 sub-pixel arrangement of red at the top left, blue at the bottom right, and two green sub-pixels, one at the bottom left and the other at the top right; 4 to indicate a quad-pixel, with a 2-by-2 sub-pixel arrangement of red at the bottom left, blue at the top right, and two green sub-pixels, one at the top left and the other at the bottom right; 5 to indicate a Delta (Triad); 6 to indicate a mosaic with red, green, and blue overlaid (e.g., an LCOS display with field-sequential color); and with values 7 through 255 being generally reserved for future use.
The Pixel Shape field specifies the shape of each pixel that is composed of a specific configuration of sub-pixels, using a value of: 0 to indicate that a sub-pixel shape is not defined; 1 to indicate round; 2 to indicate square; 3 to indicate rectangular; 4 to indicate oval; 5 to indicate elliptical; and with the values 6 through 255 being reserved for future use in indicating desired shapes, as can be appreciated by one skilled in the art. A 1-byte Horizontal Field of View (HFOV) field specifies the horizontal field of view in 0.5-degree increments (e.g., if the HFOV is 30 degrees, this value is 60). If this value is zero, then the HFOV is not specified. A 1-byte Vertical Field of View (VFOV) field specifies the vertical field of view in 0.5-degree increments (e.g., if the VFOV is 30 degrees, this value is 60). If this value is zero, then the VFOV is not specified. A 1-byte Visual Axis Crossing field specifies the visual axis crossing in 0.01-diopter (1/m) increments (e.g., if the visual axis crossing is 2.22 meters, this value is 45). If this value is zero, then the Visual Axis Crossing is not specified. A 1-byte Left/Right Image Overlap field specifies the percentage of overlap of the left and right image. The allowable range of the image overlap in percent is 1 to 100. Values of 101 to 255 are invalid and are generally not to be used. If this value is zero, then the image overlap is not specified. A 1-byte See Through field specifies the see-through percentage of the image. The allowable range of see-through in percent is 0 to 100. Values of 101 to 254 are invalid and are not to be used. If this value is 255, then the see-through percentage is not specified. A 1-byte Maximum Brightness field specifies the maximum brightness in increments of 20 nits (e.g., if the maximum brightness is 100 nits, this value is 5). If this value is zero, then the maximum brightness is not specified. A 2-byte Optical Capability Flags field contains various fields that specify optical capabilities of the display. These bit values are generally assigned according to: Bits 15 through 5 are reserved for future use and are generally set to a logic-zero state. Bit 4 selects Eye Glass Focus Adjustment, with a value of '0' meaning the display has no eye glass focus adjustment, and a value of '1' meaning the display has an eye glass focus adjustment. Bits 3 through 2 select a Binocular Function according to: a value of 0 means the display is binocular and can display 2-dimensional (2D) images only; 1 means the display is binocular and can display 3-dimensional (3D) images; 2 means the display is monocular; and 3 is reserved for future use. Bits 1 through 0 select Left-Right Field Curvature Symmetry, with a value of 0 meaning field curvature is not defined. If this field is zero, then all field curvature values from A1 through E5 are set to zero, except for point C3, which specifies a focal distance of the display or is to be set to zero to indicate the focal distance is not specified. A value of 1 means the left and right displays have the same symmetry; 2 means the left and right displays are mirrored on the vertical axis (column C); and 3 is reserved for future use. The 1-byte Inter-Pupillary Distance (IPD) Minimum field specifies the minimum inter-pupillary distance in millimeters (mm). If this value is zero, then the minimum inter-pupillary distance is not specified. The 1-byte Inter-Pupillary Distance (IPD) Maximum field specifies the maximum inter-pupillary distance in millimeters (mm). If this value is zero, then the maximum inter-pupillary distance is not specified.
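Several of these fields encode physical quantities in fixed increments; a hypothetical set of conversion helpers (the caller is assumed to screen the 'not specified' sentinel values first):

#include <stdint.h>

/* Convert encoded Personal Display Capability fields to physical
 * units, per the increments described above. A raw value of zero
 * (or 255 for See Through) means 'not specified' and should be
 * handled by the caller before converting. */
static double fov_degrees(uint8_t raw)            { return raw * 0.5;  } /* 0.5-degree steps   */
static double axis_crossing_diopters(uint8_t raw) { return raw * 0.01; } /* 0.01-diopter steps */
static double max_brightness_nits(uint8_t raw)    { return raw * 20.0; } /* 20-nit steps       */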
The Points of Field Curvature List field contains a list of 25 2-byte parameters that specify the focal distance in thousandths of a diopter (1/m), with a range of 1 to 65535 (e.g., 1 is 0.001 diopters and 65535 is 65.535 diopters). The 25 elements in the Points of Field Curvature List are labeled A1 through E5, as shown in FIG. 83. The points are to be evenly distributed over the active area of the display. Column C corresponds to the vertical axis of the display, and row 3 corresponds to the horizontal axis of the display. Columns A and E correspond to the left and right edges of the display, respectively, and rows 1 and 5 correspond to the top and bottom edges of the display, respectively. The order of the 25 points in the list is: A1, B1, C1, D1, E1, A2, B2, C2, D2, E2, A3, B3, C3, D3, E3, A4, B4, C4, D4, E4, A5, B5, C5, D5, E5. The CRC field contains a CRC of all bytes in the packet including the Packet Length. 40. Client Error Report Packet The Client Error Report Packet acts as a mechanism or means for allowing a client to provide a list of operating errors to the host. The client may detect a wide range of errors in the course of its normal operation as a result of receiving certain commands from the host. Examples of these errors include: the client may have been commanded to operate in a mode that it does not support; the client may have received a packet containing certain parameters that are out of range or are beyond the capability of the client; or the client may have been commanded to enter a mode in an improper sequence. The Client Error Report Packet may be used to detect errors during normal operation, but is most useful to the system designer and integrator to diagnose problems in the development and integration of host and client systems. A client indicates its ability to send a Client Error Report Packet using a parameter value of 142 in the Valid Parameter Reply List of the Valid Status Reply List Packet. The format of one embodiment of a Client Error Report Packet is shown generally in FIG. 84A. As seen in FIG. 84A, a Client Error Report Packet is structured to have Packet Length, Packet Type, cClient ID, Number of List Items, Error Code List, and CRC fields. A Packet Type value of 142 identifies a packet as a Client Error Report Packet. The cClient ID field is reserved for future use and is generally set to zero for now. The Number of List Items field (2 bytes) specifies the number of items in the following Error Code List. The Error Code List field (here 8 bytes) is a list containing one or more Error Report List items. The format of a single Error Report List item is shown in FIG. 84B. In one embodiment, as shown in FIG. 84B, each Error Report List Item is exactly 4 bytes in length, and has a structure in one embodiment comprising a 2-byte Display Error Code field that specifies the type of error being reported and a 2-byte Error Sub-code field that specifies a greater level of detail regarding the error defined by the Display Error Code. The specific definition of each Display Error Code is defined by the manufacturer of the client. An Error Sub-code does not have to be defined for every Display Error Code, and in those cases where the Error Sub-code is not defined, the value is set to zero. The specific definition of each Error Sub-code is defined by the manufacturer of the client.
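A hypothetical C view of the 4-byte Error Report List Item just described:

#include <stdint.h>

/* One 4-byte Error Report List Item, per the description above.
 * The meaning of each code is manufacturer-defined. */
#pragma pack(push, 1)
typedef struct {
    uint16_t display_error_code; /* type of error being reported         */
    uint16_t error_sub_code;     /* finer detail, or zero if not defined */
} ErrorReportListItem;
#pragma pack(pop)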
41. Client Identification Packet The Client Identification Packet allows a client to return identifying data in response to a Request Specific Status Packet. In one embodiment, a client indicates an ability to send the Client Identification Packet using a parameter value of 144 in the Valid Parameter Reply List of the Valid Status Reply List Packet. It is useful for the host to be able to determine the client device manufacturer name and model number by reading this data from the client. The information may be used to determine if the client has special capabilities that cannot be described in the Client Capability Packet. There are potentially two methods, means, or mechanisms for reading identification information from the client. One is through use of the Client Capability Packet, which contains fields similar to those in the base EDID structure. The other method is through use of the Client Identification Packet, which contains a richer set of information compared to the similar fields in the Client Capability Packet. This allows a host to identify manufacturers that have not been assigned a 3-character EISA code, and allows serial numbers to contain alphanumeric characters. The format of one embodiment of a Client Identification Packet is shown generally in FIG. 85. As seen in FIG. 85, a Client Identification Packet is structured to have Packet Length, Packet Type, cClient ID, Week of Mfr, Year of Mfr, Length of Mfr Name, Length of Product Name, Length of Serial Number, Manufacturer Name String, Product Name String, Serial Number String, and CRC fields. The 2-byte Packet Type field contains a value that identifies the packet as a Client Identification Packet. This value is selected to be 144 in one embodiment. The cClient ID field (2 bytes) again is reserved for future use for the Client ID and is generally set to zero. The CRC field (2 bytes) contains a 16-bit CRC of all bytes in the packet including the Packet Length. A 1-byte Week of Manufacture field contains a value that defines the week of manufacture of the display. In at least one embodiment, this value is in the range of 1 to 53 if it is supported by the client. If this field is not supported by the client, then it is generally set to zero. A 1-byte Year of Manufacture field contains a value that defines the year of manufacture of the client (display). This value is an offset from the year 1990 as a starting point, although other base years could be used. Years in the range of 1991 to 2245 can be expressed by this field. Example: the year 2003 corresponds to a Year of Manufacture value of 13. If this field is not supported by the client, it should be set to a value of zero.
The Length of Mfr Name, Length of Product Name, and Length of Serial Number fields each contain 2-byte values that specify the length of the Manufacturer Name String field including any null termination or null pad characters, the length of the Product Name String field including any null termination or null pad characters, and the length of the Serial Number String field including any null termination or null pad characters, respectively. The Manufacturer Name String, Product Name String, and Serial Number String fields each contain a variable number of bytes, specified by the Length of Mfr Name, Length of Product Name, and Length of Serial Number fields, respectively, that hold an ASCII string specifying the manufacturer, product name, and alphanumeric serial number of the display, respectively. Each of these strings is terminated by at least one null character.
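Since the three strings are variable-length and each length field counts any null termination or padding, a client-side parser might walk them as sketched below (names hypothetical; buffer sizes are the caller's responsibility):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Copy one length-prefixed ASCII string field and return a pointer
 * to the next field. 'len' includes any null termination or padding,
 * so the next string starts exactly 'len' bytes later. The copy is
 * truncated to fit and always null-terminated. */
static const uint8_t *copy_id_string(const uint8_t *p, uint16_t len,
                                     char *out, size_t out_size) {
    size_t n = (len < out_size - 1) ? len : out_size - 1;
    memcpy(out, p, n);
    out[n] = '\0';
    return p + len;
}

/* Usage, assuming p points at the Manufacturer Name String:
 *   p = copy_id_string(p, len_mfr,  mfr,  sizeof mfr);
 *   p = copy_id_string(p, len_prod, prod, sizeof prod);
 *   p = copy_id_string(p, len_ser,  ser,  sizeof ser); */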
42. Alternate Display Capability Packet The Alternate Display Capability Packet indicates the capability of the alternate displays attached to the MDDI client controller. It is sent in response to a Request Specific Status Packet. When prompted, a client device sends an Alternate Display Capability Packet for each alternate display that is supported. The client can indicate an ability to send the Alternate Display Capability Packet via a parameter value of 145 in the Valid Parameter Reply List of the Valid Status Reply List Packet. For MDDI systems operated in internal mode, it may be common to have more than one display connected to an MDDI client controller. An example application is a mobile phone with a large display on the inside of the flip and a smaller display on the outside. It is not necessary for an internal mode client to return an Alternate Display Capability Packet, for two potential reasons. First, the host may already be programmed or otherwise informed of the capabilities during manufacture, since host and client are used in a common device or housing. Second, due to the assembly of the two, the client cannot easily be disconnected or separated from a connection to the host, and the host may contain a hard-coded copy of the client capabilities, or at least know that they do not change with a change in client, as otherwise might occur. The Number of Alt Displays field of the Client Capability Packet is used to report that more than one display is attached, and the Alternate Display Capability Packet reports the capability of each alternate display. The video stream packet contains 4 bits in the Pixel Data Attributes field to address each alternate display in the client device. The format of one embodiment of an Alternate Display Capability Packet is shown generally in FIG. 86. As seen in FIG. 86, an Alternate Display Capability Packet is structured to have Packet Length, Packet Type, cClient ID, Alt Display Number, Reserved 1, Bitmap Width, Bitmap Height, Display Window Width, Display Window Height, Color Map RGB Width, RGB Capability, Monochrome Capability, Reserved 2, Y Cb Cr Capability, Display Feature Capability, Reserved 3, and CRC fields. A Packet Type value of 145 identifies a packet as an Alternate Display Capability Packet. The cClient ID field is reserved for a Client ID for future use and is generally set to zero. The Alt Display Number field uses 1 byte to indicate the identity of the alternate display with an integer in the range of 0 to 15. The first alternate display is typically designated as number 0, and the other alternate displays are identified with unique Alt Display Number values, with the largest value used being the total number of alternate displays minus 1. Values larger than the total number of alternate displays minus 1 are not used. Example: a mobile phone having a primary display and a caller-ID display connected to an MDDI client has one alternate display, so the Alt Display Number of the caller-ID display is zero and the Number of Alt Displays field of the Client Capability Packet has a value of 1. The Reserved 1 field (1 byte) is reserved for future use. All bits in this field are set to zero. One purpose of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address. The Bitmap Width field uses 2 bytes that specify the width of the bitmap expressed as a number of pixels. The Bitmap Height field uses 2 bytes that specify the height of the bitmap expressed as a number of pixels. The Display Window Width field uses 2 bytes that specify the width of the display window expressed as a number of pixels. The Display Window Height field uses 2 bytes that specify the height of the display window expressed as a number of pixels. The Color Map RGB Width field uses 2 bytes that specify the number of bits of the red, green, and blue color components that can be displayed in the color map (palette) display mode. A maximum of 8 bits for each color component (red, green, and blue) can be used. Even though 8 bits of each color component are sent in the Color Map Packet, only the number of least significant bits of each color component defined in this field are used. If the display client cannot use the color map (palette) format, then this value is zero. The Color Map RGB Width word is composed of three separate unsigned values: Bits 3 through 0 define the maximum number of bits of blue in each pixel, with values of 0 to 8 being considered valid; Bits 7 through 4 define the maximum number of bits of green in each pixel, with values of 0 to 8 being considered valid; and Bits 11 through 8 define the maximum number of bits of red in each pixel, with values of 0 to 8 being considered valid. Bits 14 through 12 are reserved for future use and are generally set to zero. Bit 15 is used to indicate the ability of a client to accept Color Map pixel data in packed or unpacked format. When Bit 15 is set to a logic-one level, this indicates that the client can accept Color Map pixel data in either packed or unpacked format. If Bit 15 is set to a logic-zero, this indicates that the client can accept Color Map pixel data only in unpacked format. The RGB Capability field uses 2 bytes to specify the number of bits of resolution that can be displayed in RGB format. In one embodiment, if the client cannot use the RGB format, then this value is set equal to zero. The RGB Capability word is composed of three separate unsigned values: Bits 3 through 0 define the maximum number of bits of blue (the blue intensity) in each pixel; Bits 7 through 4 define the maximum number of bits of green (the green intensity) in each pixel; and Bits 11 through 8 define the maximum number of bits of red (the red intensity) in each pixel. Bits 14 through 12 are reserved for future use and are set to zero. Bit 15 is used to indicate the ability of a client to accept RGB pixel data in packed or unpacked format. When Bit 15 is set to a logic-one level, this indicates that the client can accept RGB pixel data in either packed or unpacked format. If Bit 15 is set to a logic-zero, this indicates that the client can accept RGB pixel data only in unpacked format.
The 1-byte Monochrome Capability field contains a value or information to specify the number of bits of resolution that can be displayed in monochrome format. If the client cannot use the monochrome format, then this value is set equal to zero. Bits 6 through 4 are reserved for future use and are generally set to zero. Bits 3 through 0 define the maximum number of bits of grayscale that can exist in each pixel. These four bits make it possible to specify that each pixel consists of 1 to 15 bits. If the value is zero, then the monochrome format is not supported by the client. Bit 7, when set to one, indicates that the client can accept monochrome pixel data in either packed or unpacked format. If Bit 7 is set to zero, this indicates that the client can accept monochrome pixel data only in unpacked format. The Reserved 2 field is a 1-byte-wide field reserved for future use, and generally has all bits in this field set to a logic-zero level. In one embodiment, one purpose of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address. A 2-byte Y Cb Cr Capability field specifies the number of bits of resolution that can be displayed in Y Cb Cr format. If the client cannot use the Y Cb Cr format, then this value is zero. The Y Cb Cr Capability word is composed of three separate unsigned values: Bits 3 through 0 define the maximum number of bits that specify the Cb sample; Bits 7 through 4 define the maximum number of bits that specify the Cr sample; Bits 11 through 8 define the maximum number of bits that specify the Y sample; and Bits 14 through 12 are reserved for future use and are set to zero. Bit 15, when set to one, indicates that the client can accept Y Cb Cr pixel data in either packed or unpacked format. If Bit 15 is set to zero, this indicates that the client can accept Y Cb Cr pixel data only in unpacked format. A 2-byte Bayer Capability field specifies the number of bits of resolution, pixel group, and pixel order that can be transferred in Bayer format. If the client cannot use the Bayer format, then this value is set to zero. The Bayer Capability field is composed of the following values: Bits 3 through 0 define the maximum number of bits of intensity that exist in each pixel; Bits 5 through 4 define the pixel group pattern that may be required; Bits 8 through 6 define a pixel order that is required; and Bits 14 through 9 are reserved for future use and are set to zero. Bit 15, when set to one, indicates that the client can accept Bayer pixel data in either packed or unpacked format. If Bit 15 is set to zero, this indicates that the client can accept Bayer pixel data only in unpacked format. The 2-byte CRC field contains a 16-bit CRC of all bytes in the packet, including the Packet Length.
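The 2-byte Bayer Capability field just described packs several subfields; a hypothetical decoder:

#include <stdint.h>
#include <stdbool.h>

/* Decode the 2-byte Bayer Capability field described above.
 * Field and type names are illustrative only. */
typedef struct {
    uint8_t intensity_bits; /* bits 3:0 — bits of intensity per pixel  */
    uint8_t pixel_group;    /* bits 5:4 — required pixel group pattern */
    uint8_t pixel_order;    /* bits 8:6 — required pixel order         */
    bool    packed_ok;      /* bit 15 — packed or unpacked accepted    */
} BayerCapability;

static BayerCapability decode_bayer_capability(uint16_t w) {
    BayerCapability b;
    b.intensity_bits = (uint8_t)(w & 0x0f);
    b.pixel_group    = (uint8_t)((w >> 4) & 0x03);
    b.pixel_order    = (uint8_t)((w >> 6) & 0x07);
    b.packed_ok      = (w >> 15) & 0x1;  /* 0 = unpacked only */
    return b;
}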
43. Register Access Packet The Register Access Packet provides either a host or a client with a means, mechanism, or method to access configuration and status registers in the opposite end of the MDDI link. These registers are likely to be unique for each display or device controller. Such registers already exist in many displays that require setting configurations and modes of operation, and that have other useful and necessary settings. The Register Access Packet allows the MDDI host or client both to write to a register and to request to read a register using the MDDI link. When the host or client requests to read a register, the opposite end should respond by sending the register data in the same packet type, but also by indicating that this is the data read from a particular register through the use of the Read/Write Info field. The Register Access Packet may be used to read or write multiple registers by specifying a register count greater than 1. A client indicates an ability to support the Register Access Packet using bit 22 of the Client Feature Capability field of the Client Capability Packet. The format of one embodiment of a Register Access Packet is shown generally in FIG. 87. As seen in FIG. 87, a Register Access Packet is structured to have Packet Length, Packet Type, bClient ID, Read/Write Flags, Register Address, Parameter CRC, Register Data List, and Register Data CRC fields. A Packet Type value of 146 identifies a packet as a Register Access Packet. The bClient ID field is reserved for future use and is generally set to zero for now. The 2-byte Read/Write Flags field specifies the specific packet as either a write, a read, or a response to a read, and provides a count of the data values. Bits 15 through 14 act as Read/Write Flags. If Bits[15:14] are '00', then this packet contains data to be written to a register addressed by the Register Address field; the data to be written to the specified registers is contained in the Register Data List field. If Bits[15:14] are '10', then this is a request for data from one or more registers addressed by the Register Address field. If Bits[15:14] are '11', then that packet contains data that was requested in response to a Register Access Packet having bits 15:14 of the Read/Write Flags set to '10'; the Register Address field contains the address of the register corresponding to the first Register Data List item, and the Register Data List field contains data that was read from the address or addresses. If Bits[15:14] are '01', this is treated as an invalid value; this value is reserved for future use and is not used at this time, but those skilled in the art will understand how to employ it for future applications. Bits 13:0 use a 14-bit unsigned integer to specify the number of 32-bit Register Data items to be transferred in the Register Data List field. If bits 15:14 equal '00', then bits 13:0 specify the number of 32-bit register data items that are contained in the Register Data List field to be written to registers starting at the register specified by the Register Address field. If bits 15:14 equal '10', then bits 13:0 specify the number of 32-bit register data items that the receiving device sends to the device requesting that the registers be read; the Register Data List field in this packet contains no items and is of zero length. If bits 15:14 equal '11', then bits 13:0 specify the number of 32-bit register data items that have been read from registers and that are contained in the Register Data List field. Bits 15:14 are not currently set equal to '01', which is considered an invalid value and is otherwise reserved for future designations or use. The Register Address field uses 4 bytes to indicate the register address that is to be written to or read from. For addressing registers whose addressing is less than 32 bits, the upper bits are set to zero. The 2-byte Parameter CRC field contains a CRC of all bytes from the Packet Length to the Register Address. If this CRC fails to check, then the entire packet is discarded.
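A hypothetical C decoder for the 2-byte Read/Write Flags field, splitting the operation in bits 15:14 from the 14-bit item count in bits 13:0:

#include <stdint.h>

/* Read/Write Flags operations, per the bit assignments above. */
enum reg_access_op {
    REG_WRITE    = 0x0, /* '00': data to be written to registers  */
    REG_INVALID  = 0x1, /* '01': reserved, treated as invalid     */
    REG_READ_REQ = 0x2, /* '10': request to read registers        */
    REG_READ_RSP = 0x3  /* '11': data returned for a read request */
};

static void decode_rw_flags(uint16_t flags, unsigned *op, unsigned *count) {
    *op    = (flags >> 14) & 0x3;  /* Read/Write Flags                     */
    *count = flags & 0x3fff;       /* number of 32-bit Register Data items */
}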
The 2-byte Parameter CRC field contains a CRC of all bytes from the Packet Length to the Register Address. If this CRC fails to check, then the entire packet is discarded.

The Register Data List field contains a list of 4-byte register data values to be written to client registers or values that were read from client device registers.

The 2-byte Register Data CRC field contains a CRC of only the Register Data List. If this CRC fails to check, then the Register Data may still be used, but the CRC error count is incremented.

D. Packet CRC

The CRC fields appear at the end of the packets, and sometimes after certain more critical parameters in packets that may have a significantly large data field and, thus, an increased likelihood of errors during transfer. In packets that have two CRC fields, the CRC generator, when only one is used, is re-initialized after the first CRC so that the CRC computations following a long data field are not affected by the parameters at the beginning of the packet.

In an exemplary embodiment, the polynomial used for the CRC calculation is known as the CRC-16, or X^16 + X^15 + X^2 + X^0. A sample implementation of a CRC generator and checker 3600 useful for implementing the invention is shown in FIG. 36. In FIG. 36, a CRC register 3602 is initialized to a value of 0x0001 just prior to transfer of the first bit of a packet, which is input on the Tx_MDDI_Data_Before_CRC line; the bytes of the packet are then shifted into the register starting with the LSB first. Note that the register bit numbers in this figure correspond to the order of the polynomial being used, and not the bit positions used by the MDDI. It is more efficient to shift the CRC register in a single direction, and this results in having CRC register bit 15 appear in bit position 0 of the MDDI CRC field, CRC register bit 14 in MDDI CRC field bit position 1, and so forth, until CRC register bit 0 appears in MDDI bit position 15.

As an example, if the packet contents for the Client Request and Status Packets are 0x000c, 0x0046, 0x0000, 0x0400, 0x00, 0x00, 0x0000 (or, represented as a sequence of bytes: 0x0c, 0x00, 0x46, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00), and are submitted using the inputs of the multiplexors 3604 and 3606 and NAND gate 3608, the resulting CRC output on the Tx_MDDI_Data_With_CRC line is 0xd9aa (or, represented as a sequence, 0xaa, 0xd9).

When CRC generator and checker 3600 is configured as a CRC checker, the CRC that is received on the Rx_MDDI_Data line is input to multiplexor 3604 and exclusive-OR (XOR) gate 3612, and is compared bit by bit with the value found in the CRC register using NOR gate 3610, NAND gate 3608, and AND gate 3614. If there are any errors, as output by AND gate 3614, the CRC error count is incremented once for every packet that contains a CRC error by connecting the output of gate 3614 to the input of register 3602. Note that the example circuit shown in the diagram of FIG. 36 can output more than one CRC error signal within a given CHECK_CRC_NOW window (see FIG. 37B). Therefore, the CRC error counter generally only counts the first CRC error instance within each interval where CHECK_CRC_NOW is active. If configured as a CRC generator, the CRC is clocked out of the CRC register at the time coinciding with the end of the packet.

The timing for the input and output signals, and the enabling signals, is illustrated graphically in FIGS. 37A and 37B.
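The serial register of FIG. 36 can be modeled in a few lines of software. The following Python sketch, offered for illustration (the function name is ours), follows the description above: the register is seeded with 0x0001, packet bytes are shifted in LSB first against the X^16 + X^15 + X^2 + X^0 polynomial, and the register is bit-reversed on output so that CRC register bit 15 lands in bit position 0 of the CRC field. It reproduces the 0xd9aa value quoted above for the example packet.

```python
def mddi_crc16(data: bytes) -> int:
    """Model of the FIG. 36 CRC register: poly x^16 + x^15 + x^2 + 1,
    register seeded with 0x0001, data bits entering LSB first."""
    crc = 0x0001                          # register value just before the first bit
    for byte in data:
        for i in range(8):                # LSB of each byte is shifted in first
            bit = (byte >> i) & 1
            feedback = bit ^ (crc >> 15)  # data bit XOR register MSB
            crc = (crc << 1) & 0xFFFF
            if feedback:
                crc ^= 0x8005             # taps at x^15, x^2, x^0
    # Register bit 15 is transmitted in CRC field bit position 0, and so on.
    return int("{:016b}".format(crc)[::-1], 2)

packet = bytes([0x0c, 0x00, 0x46, 0x00, 0x00, 0x00,
                0x00, 0x04, 0x00, 0x00, 0x00, 0x00])
assert mddi_crc16(packet) == 0xd9aa       # matches the example above
```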
The generation of a CRC and transmission of a packet of data are shown in FIG. 37A with the state (0 or 1) of the Gen_Reset, Check_CRC_Now, Generate_CRC_Now, and Sending_MDDI_Data signals, along with the Tx_MDDI_Data_Before_CRC and Tx_MDDI_Data_With_CRC signals. The reception of a packet of data and checking of the CRC value are shown in FIG. 37B, with the state of the Gen_Reset, Check_CRC_Now, Generate_CRC_Now, and Sending_MDDI_Data signals, along with the Rx_MDDI_Data and CRC error signals.

E. Error Code Overload for Packet CRC

Whenever only data packets and CRC values are being transferred between the host and client, no error codes are accommodated. The only error is a loss of synchronization; otherwise, one has to wait for the link to time out from a lack of a good data transfer path or pipeline, and then reset the link and proceed. Unfortunately, this is time consuming and somewhat inefficient.

For use in one embodiment, a new technique has been developed in which the CRC portion of packets is used to transfer error code information. This is generally shown in FIG. 65. That is, one or more error codes are generated by the processors or devices handling the data transfer which indicate specific predefined errors or flaws that might occur within the communication processing or link. When an error is encountered, the appropriate error code is generated and transferred using the bits for the CRC of a packet. That is, the CRC value is overloaded, or overwritten, with the desired error code, which can be detected on the receiving end by an error monitor or checker that monitors the values of the CRC field. For those cases in which the error code matches the CRC value for some reason, the complement of the error code is transferred to prevent confusion.

In one embodiment, to provide a robust error warning and detection system, the error code may be transferred several times, using a series of packets, generally all packets, that are transferred or sent after the error has been detected. This occurs until the point at which the condition creating the error is cleared from the system, at which point the regular CRC bits are transferred without being overloaded by another value.

This technique of overloading the CRC value provides a much quicker response to system errors while using a minimal amount of extra bits or fields.

As shown in FIG. 66, a CRC overwriting mechanism or apparatus 6600 uses an error detector or detection means 6602, which can form part of other circuitry previously described or known, to detect the presence or existence of errors within the communication link or process. An error code generator or means 6604, which can be formed as part of other circuitry or can use techniques such as look-up tables to store pre-selected error messages, generates one or more error codes to indicate specific predefined errors or flaws that have been detected as occurring. It is readily understood that devices 6602 and 6604 can be formed as a single circuit or device as desired, or as part of a programmed sequence of steps for other known processors and elements.

A CRC value comparator or comparison means 6606 is shown for checking whether the selected error code or codes are the same as the CRC value being transferred. If that is the case, then a code complement generator or generation means or device is used to provide the complement of the error codes, so that they are not mistaken for the original CRC pattern or value, which would confuse or complicate the detection scheme.
An error code selector or selection means element or device 6610 then selects the error code or value it is desired to insert or overwrite, or their respective complements as appropriate. An error code CRC over-writer or over-writing mechanism or means 6612 is a device that receives the data stream, packets, and the desired codes to be inserted, and overwrites the corresponding or appropriate CRC values, in order to transfer the desired error codes to a receiving device.

As mentioned, the error code may be transferred several times, using a series of packets, so the over-writer 6612 may utilize memory storage elements in order to maintain copies of the codes during processing, or may recall these codes from previous elements or other known storage locations which can be used to store or hold their values as needed, or as desired.

The general processing implemented by the overwriting mechanism of FIG. 66 is shown in additional detail in FIGS. 67A and 67B. In FIG. 67A, one or more errors are detected in step 6702 in the communication data or process, and an error code is selected in step 6704 to indicate this condition. At the same time, or at an appropriate point, the CRC value to be replaced is checked in a step 6706 and compared to the desired error code in step 6708. The result of this comparison, as discussed earlier, is a determination as to whether or not the desired code, or other representative value, will be the same as the CRC value present. If this is the case, then processing proceeds to step 6712, where the complement, or in some cases another representative value, as desired, is selected as the code to insert. Once it has been determined in steps 6710 and 6714 what error codes or values are to be inserted, the appropriate code is selected for insertion. These steps are illustrated as separate for purposes of clarity, but generally represent a single choice based on the output of the step 6708 decision. Finally, in step 6716, the appropriate values are overwritten in the CRC location for transfer with the packets being targeted by the process.

On the packet reception side, as shown in FIG. 67B, the packet CRC values are monitored in a step 6722. Generally, the CRC values are monitored by one or more processes within the system to determine if an error in data transfer has occurred and whether or not to request a retransmission of the packet or packets, or to inhibit further operations, and so forth, some of which is discussed above. As part of such monitoring, the information can also be used to compare values to known or pre-selected error codes, or representative values, and to detect the presence of errors. Alternatively, a separate error detection process and monitor can be implemented. If a code appears to be present, it is extracted or otherwise noted in step 6724 for further processing. A determination can be made in step 6726 as to whether or not this is the actual code or a complement, in which case an additional step 6728 is used to translate the value to the desired code value. In either case, the resulting extracted code, complement, or other recovered value is then used to detect what error has occurred from the transmitted code in step 6730.
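A minimal software sketch of the FIG. 67A/67B decision logic follows, for illustration only; the function names, the 16-bit one's-complement choice, and the known_codes set are our assumptions, since the document does not define concrete code values:

```python
from typing import Optional

def overload_crc(computed_crc: int, error_code: int) -> int:
    """FIG. 67A: transmit the error code in the CRC field, substituting its
    complement when it would collide with the computed CRC value."""
    if error_code == computed_crc:
        return error_code ^ 0xFFFF        # complement avoids a false 'good CRC'
    return error_code

def extract_error_code(received_crc: int, expected_crc: int,
                       known_codes: set) -> Optional[int]:
    """FIG. 67B: recover a transmitted code, translating a complement back.
    Returns None for a normal packet or an ordinary CRC error."""
    if received_crc == expected_crc:
        return None                       # regular CRC, no error signaled
    if received_crc in known_codes:
        return received_crc               # code transmitted directly
    if received_crc ^ 0xFFFF in known_codes:
        return received_crc ^ 0xFFFF      # complement was transmitted
    return None                           # treat as an ordinary CRC error
```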
V. Link Hibernation

The MDDI link can enter the hibernation state quickly and wake up from hibernation quickly. This responsiveness allows a communicating system or device to force the MDDI link into hibernation frequently to reduce power consumption, since it can wake up again for use very quickly.

In one embodiment, when an external mode client wakes up from hibernation for the first time, it does so at a data rate and with strobe pulse timing that are consistent with a 1 Mbps rate; that is, the MDDI_Stb pair should toggle at a 500 kHz rate. Once the characteristics of the client have been discovered by or communicated to the host, the host may wake up the link at generally any rate from 1 Mbps to the maximum rate at which the client can operate. Internal mode clients may wake up at any rate at which both the host and client can operate, and this also generally applies to the first time an internal mode client wakes up.

In one embodiment, when the link wakes up from hibernation, the host and client exchange a sequence of pulses. These pulses can be detected using low-speed line receivers that consume only a fraction of the current consumed by the differential receivers required to receive the signals at the maximum link operating speed. Either the host or the client can wake up the link, so the wake-up protocol is designed to handle the possible contention that can occur if both host and client attempt to wake up simultaneously.

During the hibernation state, the MDDI_Data and MDDI_Stb differential drivers are disabled, and the differential voltage across all differential pairs is zero volts. The differential line receivers used to detect the sequence of pulses during the wake-up from hibernation have an intentional voltage offset. In one embodiment, the threshold between a logic-one and a logic-zero level in these receivers is approximately 125 mV. This causes an un-driven differential pair to be seen as a logic-zero level during the link wake-up sequence.

In order to enter a Hibernation State, the host sends 64 MDDI_Stb cycles after the CRC of the Link Shutdown Packet. The host disables the MDDI_Data0 output of the host in the range of 16 to 56 MDDI_Stb cycles (including output disable propagation delays) after the CRC. The host finishes sending the 64 MDDI_Stb cycles after the CRC of the Link Shutdown Packet before it initiates the wake-up sequence. In one embodiment, the host-initiated wake-up is defined as requiring the host to wait at least 100 µsec after MDDI_Data0 reaches a valid logic-one level before driving pulses on MDDI_Stb. In one embodiment, the client waits at least 60 MDDI_Stb cycles after the CRC of the Link Shutdown Packet before it drives MDDI_Data0 to a logic-one level to attempt to wake up the host.

In order to "wake up" from a Hibernation State, several actions or processes are undertaken. When the client, here a display, needs data or communication service from the host, it drives the MDDI_Data0 line to a logic-one state for around 70 to 1000 µsec while MDDI_Stb is inactive, and keeps MDDI_Data0 driven to a logic-one level for about 70 MDDI_Stb cycles (over a range of 60 to 80) after MDDI_Stb becomes active, although other periods can be used as desired. The client then disables the MDDI_Data0 driver by placing it into a high-impedance state.

If MDDI_Stb is active during hibernation, although unlikely, then the client might only drive MDDI_Data0 to a logic-one state for about 70 MDDI_Stb cycles (over a range of 60 to 80). This action causes the host to start or restart data traffic on the forward link (208) and to poll the client for its status.

The host must detect the presence of the request pulse, and it begins the startup sequence by first driving MDDI_Stb to a logic-zero level and MDDI_Data0 to a logic-high level for at least around 200 nsec.
Then, while toggling MDDI_Stb, the host continues to drive MDDI_Data0 to a logic-one level for about 150 MDDI_Stb cycles (a range of 140 to 160), and then to a logic-zero level for about 50 MDDI_Stb cycles. The client should not send a service request pulse if it detects MDDI_Data0 in the logic-one state for more than 80 MDDI_Stb cycles. When the client has detected MDDI_Data0 at a logic-one level for 60 to 80 MDDI_Stb cycles, it begins to search for the interval where the host drives MDDI_Data0 to a logic-zero level for 50 MDDI_Stb cycles. After the host drives MDDI_Data0 to a logic-zero level for a duration of 50 MDDI_Stb cycles, the host starts sending packets on the link. The first packet sent is a Sub-frame Header Packet. The client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles of the 50-cycle interval. The nature of the selection of the times and tolerances of time intervals related to the hibernation processing and start-up sequence is discussed further below. (See FIGS. 68A-C below.)

The host may initiate the wake-up by first enabling MDDI_Stb and simultaneously driving it to a logic-zero level. MDDI_Stb should not be driven to a logic-one level until pulses are output as described below. After MDDI_Stb reaches a logic-zero level, the host enables MDDI_Data0 and simultaneously drives it to a logic-one level. MDDI_Data0 should not be driven to a logic-zero level during the wake-up process until the interval where it is driven to a logic-zero level for 50 MDDI_Stb pulses as described below. The host should wait at least 200 nsec after MDDI_Data0 reaches a valid logic-one level before driving pulses on MDDI_Stb. This timing relationship occurs while considering the worst-case output enable delays. This substantially guarantees that a client has sufficient time to fully enable its MDDI_Stb receiver after being awakened by a logic-one level on MDDI_Data0 that was driven by the host.

An example of the processing steps for a typical client service request event 3800 with no contention is illustrated in FIG. 38, where the events are labeled for convenience in illustration using the letters A, B, C, D, E, F, and G. The process commences at point A when the host sends a Link Shutdown Packet to the client device to inform it that the link will transition to a low-power hibernation state. In a next step, the host enters the low-power hibernation state by disabling the MDDI_Data0 driver and setting the MDDI_Stb driver to a logic-zero level, as shown at point B. MDDI_Data0 is driven to a logic-zero level by a high-impedance bias network. After some period of time, the client sends a service request pulse to the host by driving MDDI_Data0 to a logic-one level, as seen at point C. The host still asserts the logic-zero level using the high-impedance bias network, but the driver in the client forces the line to a logic-one level. Within 50 µsec, the host recognizes the service request pulse and asserts a logic-one level on MDDI_Data0 by enabling its driver, as seen at point D. The client then ceases attempting to assert the service request pulse, and the client places its driver into a high-impedance state, as seen at point E. The host drives MDDI_Data0 to a logic-zero level for 50 µsec, as shown at point F, and also begins to generate MDDI_Stb in a manner consistent with the logic-zero level on MDDI_Data0. The client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles.
After asserting MDDI_Data0 to a logic-zero level and driving MDDI_Stb for 50 µsec, the host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point G.

A similar example is illustrated in FIG. 39, where a service request is asserted after the link restart sequence has begun, and the events are again labeled using the letters A, B, C, D, E, F, and G. This represents a worst-case scenario where a request pulse or signal from the client comes closest to corrupting the Sub-frame Header Packet. The process commences at point A when the host again sends a Link Shutdown Packet to the client device to inform it that the link will transition to a low-power hibernation state. In a next step, the host enters the low-power hibernation state by disabling the MDDI_Data0 driver and setting the MDDI_Stb driver to a logic-zero level, as shown at point B. As before, MDDI_Data0 is driven to a logic-zero level by a high-impedance bias network. After a period of time, the host begins the link restart sequence by driving MDDI_Data0 to a logic-one level for 150 µsec, as seen at point C. Before 50 µsec have passed after the link restart sequence begins, the display also asserts MDDI_Data0 for a duration of 70 µsec, as seen at point D. This happens because the display has a need to request service from the host and does not recognize that the host has already begun the link restart sequence. The client then ceases attempting to assert the service request pulse, and the client places its driver into a high-impedance state, as seen at point E. The host continues to drive MDDI_Data0 to a logic-one level. The host drives MDDI_Data0 to a logic-zero level for 50 µsec, as shown at point F, and also begins to generate MDDI_Stb in a manner consistent with the logic-zero level on MDDI_Data0. After asserting MDDI_Data0 to a logic-zero level and driving MDDI_Stb for 50 µsec, the host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point G.

From the above discussion, one sees that the prior solution involved having the host go through two states as part of a wake-up sequence. For the first state, the host drives the MDDI_Data0 signal high for 150 µs, then drives the MDDI_Data0 signal low for 50 µs while activating the MDDI_Stb line, and then begins to transmit MDDI packets. This process works well to advance the state of the art in terms of data rates achievable using the MDDI apparatus and methods. However, as stated earlier, more speed in terms of reduced response time to conditions, the ability to more quickly select the next step or process, and the ability to simplify processing or elements are always in demand.

Applicants have discovered a new inventive approach to wake-up processing and timing in which the host uses clock-cycle-based timing for the signal toggling. In this configuration, the host starts toggling MDDI_Stb from 0 to 10 µsec after the host drives the MDDI_Data0 signal high at the beginning of the wake-up sequence, and does not wait until the signal is driven low. During a wake-up sequence, the host toggles MDDI_Stb as though the MDDI_Data0 signal were always at a logic-zero level.
This effectively removes the concept of time from the client side, and the host changes from the prior 150 µs and 50 µs periods for the first two states to 150 clock cycles and 50 clock cycles for these periods.

The host now becomes responsible for driving the data line high and, within 10 clock cycles, starting to transmit a strobe signal as if the data line were zero. After the host has driven the data line high for 150 clock cycles, the host drives the data line low for 50 clock cycles while continuing to transmit the strobe signal. After it has completed both of these processes, the host can begin to transmit the first Sub-frame Header Packet.

On the client side, the client implementation can now use the generated clock to calculate the number of clock cycles that the data line is first high, and then low. The number of clock cycles that need to occur is 150 for the data line being driven high and 50 for the data line being driven low. This means that for a proper wake-up sequence, the client should be able to count at least 150 continuous clock cycles of the data line being high, followed by at least 50 continuous clock cycles of the data line being low. Once these two conditions are met, the client can begin to search for the unique word of the first sub-frame. A break in this pattern is used as a basis to return the counters to an initial state, in which the client again looks for the first 150 continuous clock cycles of the data line being high.

A client implementation of the invention for host-based wake-up from hibernation is very similar to the initial start-up case, except that the clock rate is not forced to start at 1 Mbps, as discussed earlier. Instead, the clock rate can be set to resume at whatever rate was previously active when the communication link went into hibernation. If the host begins transmission of a strobe signal as described above, the client should again be able to count at least 150 continuous clock cycles of the data line being high, followed by at least 50 continuous clock cycles of the data line being low. Once these two conditions have been met, the client can begin the search for the unique word.

A client implementation of the invention for client-based wake-up from hibernation is similar to the host-based wake-up, except that it starts by having the client drive the data line. The client can asynchronously drive the data line without a clock to wake up the host device. Once the host recognizes that the data line is being driven high by the client, it can begin its wake-up sequence. The client can count the number of clock cycles generated by the host starting or during its wake-up process. Once the client counts 70 continuous clock cycles of the data line being high, it can stop driving the data line high. At this point, the host should already be driving the data line high as well. The client can then count another 80 continuous clock cycles of the data line being high to reach the 150 clock cycles of the data line being high, and can then look for 50 clock cycles of the data line being low. Once these three conditions have been met, the client can begin to look for the unique word.
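For illustration only, the client-side counting for a host-based wake-up described above can be modeled as a small state machine; this is a sketch under our own naming, and the one-sample-per-recovered-clock-cycle interface is an assumption:

```python
def wakeup_detected(data_samples) -> bool:
    """Return True once 150 continuous high samples of the data line are
    followed by 50 continuous low samples, one sample per recovered clock
    cycle; any break in the pattern resets the counters (see text above)."""
    high = low = 0
    for d in data_samples:
        if d == 1:
            if low:               # high again before 50 lows: start over
                high = low = 0
            high += 1
        elif high >= 150:         # low phase only counts after 150 highs
            low += 1
            if low >= 50:
                return True       # now search for the sub-frame unique word
        else:
            high = low = 0        # pattern broken: reset
    return False

assert wakeup_detected([1] * 150 + [0] * 50)
assert not wakeup_detected([1] * 149 + [0] * 50)
```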
An advantage of this new implementation of wake-up processing is that it removes the need for a time-measuring device. Whether this is an oscillator, a capacitor discharge circuit, or another such known device, the client no longer needs such external devices to determine the start-up conditions. This saves money and circuit area when implementing controllers, counters, and so forth on a client device board. While this may not be as advantageous to the client, for the host this technique should also potentially simplify the host in terms of the hardware description language (VHDL) code used for core circuitry. The power consumption of using the data and strobe lines as the wake-up notification and measurement source will also be lower, since no external circuitry will need to be running for the core elements to be waiting for a host-based wake-up. The number of cycles or clock periods used is exemplary, and other periods can be used, as will be apparent to one skilled in the art.

To clarify and illustrate the operation of this new technique, the timing of MDDI_Data0, MDDI_Stb, and various operations relative to the clock cycles is shown in FIGS. 68A, 68B, and 68C.

An example of the processing steps for a typical host-initiated wake-up with no contention is illustrated in FIG. 68A, where the events are again labeled for convenience in illustration using the letters A, B, C, D, E, F, and G. The process commences at point A when the host sends a Link Shutdown Packet to the client device to inform it that the link will transition to a low-power hibernation state. In a next step, point B, the host toggles MDDI_Stb for about 64 cycles (or as desired for system design) to allow processing by the client to be completed prior to stopping MDDI_Stb from toggling, which stops the recovered clock in the client device. The host also initially sets MDDI_Data0 to a logic-zero level and then disables the MDDI_Data0 output in the range of 16 to 48 cycles (generally including output disable propagation delays) after the CRC. It may be desirable to place the high-speed receivers for MDDI_Data0 and MDDI_Stb in the client in a low-power state some time after the 48 cycles after the CRC and prior to the next stage (C). The client places its high-speed receivers for MDDI_Data0 and MDDI_Stb into hibernation any time after the rising edge of the 48th MDDI_Stb cycle after the CRC of the Link Shutdown Packet. It is recommended that the client place its high-speed receivers for MDDI_Data0 and MDDI_Stb into hibernation before the rising edge of the 64th MDDI_Stb cycle after the CRC of the Link Shutdown Packet.

The host enters the low-power hibernation state at point or step C by disabling the MDDI_Data0 and MDDI_Stb drivers and placing a host controller in a low-power hibernation state. One can also set the MDDI_Stb driver to a logic-zero level (using a high-impedance bias network) or let it continue toggling during hibernation, as desired.
The client is also in a low-power hibernation state.

After some period of time, the host commences the link restart sequence at point D by enabling the MDDI_Data0 and MDDI_Stb driver outputs. The host drives MDDI_Data0 to a logic-one level and MDDI_Stb to a logic-zero level for as long as it should take for the drivers to fully enable their respective outputs. The host typically waits around 200 nanoseconds after these outputs reach the desired logic levels before driving pulses on MDDI_Stb. This allows the client time to prepare to receive.

With the host drivers enabled and MDDI_Data0 being driven to a logic-one level, the host begins to toggle MDDI_Stb for a duration of 150 MDDI_Stb cycles, as seen at point E. The host drives MDDI_Data0 to a logic-zero level for 50 cycles, as shown at point F, and the client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles. The host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point G.

An example of the processing steps for a typical client-initiated wake-up with no contention is illustrated in FIG. 68B, where the events are again labeled for convenience in illustration using the letters A, B, C, D, E, F, G, H, and I. As before, the process commences at point A when the host sends a Link Shutdown Packet to inform the client that the link will transition to the low-power state.

At point B, the host toggles MDDI_Stb for about 64 cycles (or as desired for system design) to allow processing by the client to be completed prior to stopping MDDI_Stb from toggling, which stops the recovered clock in the client device. The host also initially sets MDDI_Data0 to a logic-zero level and then disables the MDDI_Data0 output in the range of 16 to 48 cycles (generally including output disable propagation delays) after the CRC. It may be desirable to place the high-speed receivers for MDDI_Data0 and MDDI_Stb in the client in a low-power state some time after the 48 cycles after the CRC and prior to the next stage (C).

The host enters the low-power hibernation state at point or step C by disabling the MDDI_Data0 and MDDI_Stb drivers and placing a host controller in a low-power hibernation state. One can also set the MDDI_Stb driver to a logic-zero level (using a high-impedance bias network) or let it continue toggling during hibernation, as desired. The client is also in a low-power hibernation state.

After some period of time, the client commences the link restart sequence at point D by enabling the MDDI_Stb receiver, and also enabling an offset in the MDDI_Stb receiver to guarantee that the state of the received version of MDDI_Stb is a logic-zero level in the client before the host enables its MDDI_Stb driver. It may be desirable for the client to enable the offset slightly ahead of enabling the receiver to ensure the reception of a valid differential signal and inhibit erroneous signals, as desired. The client enables the MDDI_Data0 driver while driving the MDDI_Data0 line to a logic-one level. It is allowed for MDDI_Data0 and MDDI_Stb to be enabled simultaneously if the time to enable the offset and enable the standard MDDI_Stb differential receiver is less than 200 nsec.

Within about 1 msec, at point E, the host recognizes the service request pulse from the client, and the host begins the link restart sequence by enabling the MDDI_Data0 and MDDI_Stb driver outputs.
The host drives MDDI_Data0 to a logic-one level and MDDI_Stb to a logic-zero level for as long as it should take for the drivers to enable their respective outputs. The host typically waits around 200 nanoseconds after these outputs reach the desired logic levels before driving pulses on MDDI_Stb. This allows the client time to prepare to receive.

With the host drivers enabled and MDDI_Data0 being driven to a logic-one level, the host begins outputting pulses on MDDI_Stb for a duration of 150 MDDI_Stb cycles, as seen at point F. When the client recognizes the first pulse on MDDI_Stb, it disables the offset in its MDDI_Stb receiver. The client continues to drive MDDI_Data0 to a logic-one level for 70 MDDI_Stb cycles, and disables its MDDI_Data0 driver at point G. The host continues to drive MDDI_Data0 to a logic-one level for a duration of 80 additional MDDI_Stb pulses, and at point H drives MDDI_Data0 to a logic-zero level.

As seen at points G and H, the host drives MDDI_Data0 to a logic-zero level for 50 cycles, and the client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles. After driving MDDI_Stb for a duration of 50 cycles, the host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point I.

An example of the processing steps for a typical host-initiated wake-up with contention from the client, that is, where the client also wants to wake up the link, is illustrated in FIG. 68C. The events are again labeled for convenience in illustration using the letters A, B, C, D, E, F, G, H, and I. As before, the process commences at point A when the host sends a Link Shutdown Packet to inform the client that the link will transition to the low-power state, proceeds to point B where MDDI_Stb is toggled for about 64 cycles (or as desired for system design) to allow processing by the client to be completed, and then to point C, where the host enters the low-power hibernation state by disabling the MDDI_Data0 and MDDI_Stb drivers and placing a host controller in a low-power hibernation state. After some period of time, the host commences the link restart sequence at point D by enabling the MDDI_Data0 and MDDI_Stb driver outputs, and begins to toggle MDDI_Stb for a duration of 150 MDDI_Stb cycles, as seen at point E.

At up to 70 MDDI_Stb cycles after point E, here point F, the client has not yet recognized that the host is driving MDDI_Data0 to a logic-one level, so the client also drives MDDI_Data0 to a logic-one level. This occurs here because the client has a desire to request service but does not recognize that the host it is trying to communicate with has already begun the link restart sequence. At point G, the client ceases to drive MDDI_Data0 and places its driver into a high-impedance state by disabling its output. The host continues to drive MDDI_Data0 to a logic-one level for 80 additional cycles.

The host drives MDDI_Data0 to a logic-zero level for 50 cycles, as shown at point H, and the client begins to look for the Sub-frame Header Packet after MDDI_Data0 is at a logic-zero level for 40 MDDI_Stb cycles. The host begins to transmit data on the forward link by sending a Sub-frame Header Packet, as shown at point I.

VI. Interface Electrical Specifications

In the example embodiments, data in a Non-Return-to-Zero (NRZ) format is encoded using a data-strobe signal or DATA-STB format, which allows clock information to be embedded in the data and strobe signals.
The clock can be recovered without complex phase-locked loop circuitry. Data is carried over a bi-directional differential link, generally implemented using a wire-line cable, although other conductors, printed wires, or transfer elements can be used, as stated earlier. The strobe signal (STB) is carried over a uni-directional link which is driven only by the host. The strobe signal toggles value (0 or 1) whenever there is a back-to-back state, 0 or 1, that remains the same on the Data line or signal.

An example of how a data sequence such as bits "1110001011" can be transmitted using DATA-STB encoding is shown in graphical form in FIG. 40. In FIG. 40, a DATA signal 4002 is shown on the top line of a signal timing chart and a STB signal 4004 is shown on a second line, each time-aligned as appropriate (common starting point). As time passes, when there is a change of state occurring on the DATA line 4002 (signal), the STB line 4004 (signal) maintains its previous state; thus, the first '1' state of the DATA signal correlates with the first '0' state for the STB signal, its starting value. However, if or when the state, or level, of the DATA signal does not change, the STB signal toggles to the opposite state, or '1' in the present example, as is the case in FIG. 40 where the DATA is providing another '1' value. That is, there is one and only one transition per bit cycle between DATA and STB. Therefore, the STB signal transitions again, this time to '0', as the DATA signal stays at '1', and it holds this level or value as the DATA signal changes level to '0'. When the DATA signal then stays at '0', the STB signal toggles to the opposite state, or '1' in the present example, and so forth, as the DATA signal changes or holds levels or values.

Upon receiving these signals, an exclusive-OR (XOR) operation is performed on the DATA and STB signals to produce a clock signal 4006, which is shown on the bottom of the timing chart for relative comparison with the desired data and strobe signals.
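Before turning to the circuit realization, the encoding rule and clock recovery can be modeled in software for illustration; this is a sketch under our own naming, and only the rule that exactly one of DATA and STB transitions per bit period comes from the text:

```python
def encode_data_strobe(bits):
    """Model of FIG. 40: STB holds its level when DATA changes state and
    toggles when DATA repeats, so exactly one line transitions per bit.
    Starting levels (DATA '1', STB '0') follow the figure."""
    stb, prev, pairs = 0, None, []
    for d in bits:
        if prev is not None and d == prev:
            stb ^= 1                    # DATA repeated: STB toggles
        pairs.append((d, stb))
        prev = d
    return pairs

def recover_clock(pairs):
    """XOR of DATA and STB reproduces the clock (signal 4006)."""
    return [d ^ s for d, s in pairs]

pairs = encode_data_strobe([1, 1, 1, 0, 0, 0, 1, 0, 1, 1])
for (d0, s0), (d1, s1) in zip(pairs, pairs[1:]):
    assert (d0 != d1) ^ (s0 != s1)      # one and only one transition per bit
```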
An example of circuitry useful for generating the DATA and STB outputs or signals from input data at the host, and then recovering or recapturing the data from the DATA and STB signals at the client, is shown in FIG. 41. In FIG. 41, a transmission portion 4100 is used to generate and transmit the original DATA and STB signals over an intermediary signal path 4102, while a reception portion 4120 is used to receive the signals and recover the data. As shown in FIG. 41, in order to transfer data from a host to a client, the DATA signal is input to two D-type flip-flop circuit elements 4104 and 4106, along with a clock signal for triggering the circuits. The two flip-flop circuit outputs (Q) are then split into a differential pair of signals MDDI_Data0+, MDDI_Data0- and MDDI_Stb+, MDDI_Stb-, respectively, using two differential line drivers 4108 and 4110 (voltage mode). A three-input exclusive-NOR (XNOR) gate, circuit, or logic element 4112 is connected to receive the DATA signal and the outputs of both flip-flops, and generates an output that provides the data input for the second flip-flop, which in turn generates the MDDI_Stb+, MDDI_Stb- signals. For convenience, the XNOR gate has the inversion bubble placed to indicate that it is effectively inverting the Q output of the flip-flop that generates the Strobe.

In reception portion 4120 of FIG. 41, the MDDI_Data0+, MDDI_Data0- and MDDI_Stb+, MDDI_Stb- signals are received by each of two differential line receivers 4122 and 4124, which generate single outputs from the differential signals. The outputs of the amplifiers are then input to each of the inputs of a two-input exclusive-OR (XOR) gate, circuit, or logic element 4126, which produces the clock signal. The clock signal is used to trigger each of two D-type flip-flop circuits 4128 and 4130, which receive a delayed version of the DATA signal through delay element 4132, one of which (4128) generates the data '0' values and the other (4130) the data '1' values. The clock has an independent output from the XOR logic as well. Since the clock information is distributed between the DATA and STB lines, neither signal transitions between states faster than half of the clock rate. Since the clock is reproduced using the exclusive-OR processing of the DATA and STB signals, the system effectively tolerates twice the amount of skew between the input data and clock compared to the situation when a clock signal is sent directly over a single dedicated data line.

The MDDI_Data pairs and the MDDI_Stb+ and MDDI_Stb- signals are operated in a differential mode to maximize immunity from the negative effects of noise. Each differential pair is parallel-terminated with the characteristic impedance of the cable or conductor being used to transfer signals. Generally, all parallel terminations reside in the client device. This is near the differential receiver for forward traffic (data sent from the host to the client), but it is at the driving end of the cable or other conductors or transfer elements for reverse traffic (data sent from the client to the host). For reverse traffic the signal is driven by the client, reflected by the high-impedance receiver at the host, and is terminated at the client. This avoids the need for a double termination that would increase current consumption. It also functions at data rates greater than the reciprocal of the round-trip delay in the cable. The MDDI_Stb+ and MDDI_Stb- conductors or signals are only driven by the host.

An exemplary configuration of elements useful for achieving the drivers, receivers, and terminations for transferring signals as part of the inventive MDDI is shown in FIG. 42. This exemplary interface uses low-voltage sensing, here 200 mV, with less than 1-volt power swings and low power drain. The driver of each signal pair has a differential current output. While receiving MDDI packets, the MDDI_Data and MDDI_Stb pairs use a conventional differential receiver with a voltage threshold of zero volts. In the hibernation state the driver outputs are disabled and the parallel-termination resistors pull the voltage on each signal pair to zero volts. During hibernation, a special receiver on the MDDI_Data0 pair has an offset input threshold of positive 125 mV, which causes the hibernation line receiver to interpret the un-driven signal pair as a logic-zero level.

Sometimes the host or client simultaneously drive the differential pair to a logic-one level or a logic-zero level to guarantee a valid logic level on the pair when the direction of data flow changes (from host-to-client or client-to-host).
The output voltage range and output specifications are still met when simultaneously driven outputs are driven to the same logic level. In some systems it may be necessary to drive a small current into the terminated differential pair to create a small offset voltage at certain times during hibernation and when the link is waking up from the hibernation state. In those situations, the enabled offset-current bias circuits drive against the following leakage current levels: IESD-and-Rx, the leakage of the internal ESD diode and differential receiver input, with IESD-and-Rx ≤ 1 µA typically; ITx-Hi-Z, the leakage of the differential driver output in the high-impedance state, with ITx-Hi-Z ≤ 1 µA typically; and Iexternal-ESD, the leakage through the external ESD protection diodes, with Iexternal-ESD ≤ 3 µA typically.

Each of these leakage currents is illustrated in FIG. 47. The pull-up and pull-down circuits must achieve the minimum differential voltage under the worst-case leakage conditions described above when all occur simultaneously. The total leakage is ≤ 4 µA for internal mode without external ESD protection diodes and ≤ 10 µA for external mode with external ESD protection.

The electrical parameters and characteristics of the differential line drivers and line receivers are described for one exemplary embodiment in Tables VIIa-VIId. Functionally, the driver transfers the logic level on the input directly to a positive output, and the inverse of the input to a negative output. The delay from input to outputs is well matched to the differential line, which is driven differentially. In most implementations, the voltage swing on the outputs is less than the swing on the input to minimize power consumption and electromagnetic emissions. In one embodiment, there is a minimum voltage swing of around 0.5 V. However, other values can be used, as would be known by those skilled in the art, and the inventors contemplate a smaller value in some embodiments, depending on design constraints.

The differential line receivers have the same characteristic as a high-speed voltage comparator. In FIG. 41, the input without the bubble is the positive input and the input with the bubble is the negative input. The output is a logic one if (Vinput+) - (Vinput-) is greater than zero.
Another way to describe this is as a differential amplifier with very large (virtually infinite) gain, with the output clipped at the logic 0 and 1 voltage levels.

The delay skew between different pairs should be minimized to operate the differential transmission system at the highest potential speed.

Table VIIa - Host Driver Electrical Characteristics

Symbol | Description | Min | Typ | Max | Units
Voutput-Range | Allowable host driver output voltage range with respect to host ground | 0.35 | | 1.60 | V
IOD+ | Driver differential output high current (while driving the terminated transmission line) | 2.5 | | 4.5 | mA
IOD- | Driver differential output low current (while driving the terminated transmission line) | -4.5 | | -2.5 | mA
TRise-Fall | Rise and fall time (between 20% and 80% amplitude) of driver output, measured in differential mode | 425 | | Note 1 | psec
Tskew-pair | Skew between positive and negative outputs of the same differential pair (intra-pair skew) | | | 125 | psec
TDifferential-Skew | Peak delay skew between one differential pair and any other differential pair | | | See above | psec
TA | Jitter, bit boundary to center crossing | 0 | | TB - 283 | psec
TB-TP0-DRVR | Jitter, bit boundary to minimum output level | 0 | | See above | psec

Note 1: The maximum rise and fall time is either 30% of the interval to transmit one bit on one differential pair or 100 nsec, whichever is smaller.

Table VIIb - Client Driver Electrical Characteristics

Symbol | Description | Min | Typ | Max | Units
Voutput-Range-Ext | Allowable client driver output voltage range with respect to client ground (External Mode) | 0 | | 1.25 | V
Voutput-Range-Int | Allowable client driver output voltage range with respect to client ground (Internal Mode) | 0.35 | | 1.60 | V
IOD+ | Driver differential output high current (while driving the equivalent of the pull-up and pull-down circuits that exist at the host and client) | 2.5 | | 4.5 | mA
IOD- | Driver differential output low current (while driving the equivalent of the pull-up and pull-down circuits that exist at the host and client) | -4.5 | | -2.5 | mA
TRise-Fall | Rise and fall time (between 20% and 80% amplitude) of driver output, measured in differential mode | 425 | | Note 1 | psec
Tskew-pair | Skew between positive and negative outputs of the same differential pair (intra-pair skew) | | | 125 | psec
TDifferential-Skew | Peak delay skew between one differential pair and any other differential pair | | | See above | psec
TA | Jitter, bit boundary to center crossing | | | TB - 283 | psec
TB-TP4-DRVR | Jitter, bit boundary to minimum output level | | | See above | psec

Note 1: The maximum rise and fall time is 30% of the interval to transmit one bit on one differential pair or 100 nsec, whichever is smaller.

Table VIIc - Client Receiver Electrical Characteristics

Symbol | Description | Min | Typ | Max | Units
VIT+ | Receiver differential input high threshold voltage | 0 | | 50 | mV
VIT- | Receiver differential input low threshold voltage | -50 | | 0 | mV
VIT+ | Receiver differential input high threshold voltage (offset for hibernation wake-up) | 125 | | 175 | mV
VIT- | Receiver differential input low threshold voltage (offset for hibernation wake-up) | 75 | | 125 | mV
VInput-Range | Allowable receiver input voltage range with respect to client ground | 0 | | 1.65 | V
Rterm | Parallel termination resistance value | 98 | 100 | 102 | Ω
Iin | Input leakage current | -10 | | 10 | µA
Cpad | Capacitance of pad to client ground (note 1) | | | 5 | pF
Cdiff | Capacitance between the two signals of a differential pair (note 1) | | | 1 | pF
Tskew-pair-INT | Skew caused by the differential receiver between positive and negative inputs of the same differential pair (intra-pair skew), Internal Mode | | | 250 | psec
Tskew-pair-EXT | Intra-pair skew, External Mode | | | 50 | psec
TDifferential-Skew | Peak delay skew between one differential pair and any other differential pair | | | See above | psec
TA | Jitter, bit boundary to center crossing | | | TB - 38.5 | psec
TB-TP4-RCVR-INT | Jitter, bit boundary to minimum input level (Internal Mode) | 0 | | See above | psec
TB-TP4-RCVR-EXT | Jitter, bit boundary to minimum input level (External Mode) | 0 | | See above | psec

Table VIId - Host Receiver Electrical Characteristics

Symbol | Description | Min | Typ | Max | Units
VIT+ | Receiver differential input high threshold voltage (non-offset) | 0 | | 50 | mV
VIT- | Receiver differential input low threshold voltage (non-offset) | -50 | | 0 | mV
VIT+ | Receiver differential input high threshold voltage (offset for hibernation wake-up) | 125 | | 175 | mV
VIT- | Receiver differential input low threshold voltage (offset for hibernation wake-up) | 75 | | 125 | mV
VInput-Range | Allowable receiver input voltage range with respect to host ground | 0 | | 1.65 | V
Iin | Input leakage current (excluding hibernate bias) | -10 | | 10 | µA
Cpad | Capacitance of pad to host ground | | | 5 | pF
Cdiff | Capacitance between the two signals of a differential pair | | | 1 | pF
Tskew-pair | Skew caused by the differential receiver between positive and negative inputs of the same differential pair (intra-pair skew) | | | 250 | psec
Tskew-pair-EXT | Intra-pair skew, External Mode | | | 50 | psec
TA | Jitter, bit boundary to center crossing | | | TB - 38.5 | psec
TB-TP0-RCVR-INT | Jitter, bit boundary to minimum input level (Internal Mode) | | | See above | psec
TB-TP0-RCVR-EXT | Jitter, bit boundary to minimum input level (External Mode) | | | See above | psec

In FIG. 42, a host controller 4202 and a client or display controller 4204 are shown transferring packets over the communication link 4206. The host controller employs a series of three drivers 4210, 4212, and 4214 to send the host DATA and STB signals to be transferred, as well as to receive the client Data signals being transferred back, while the client employs the three drivers 4230, 4232, and 4234. The driver responsible for passage of the host DATA (4212) employs an enable signal input to allow activation of the communication link generally only when transfer from the host to the client is desired. Since the STB signal is formed as part of the transfer of data, no additional enable signal is employed for that driver (4210). The inputs of the client DATA and STB receivers (4230, 4232) each have termination impedances or resistors 4218 and 4220, respectively, placed across them. Driver 4234 in the client controller is used to prepare the data signals being transferred from the client to the host, where driver 4214, on the input side, processes the data.

The special receivers (drivers) 4216 and 4236 are coupled or connected to the DATA lines and generate or use the 125 mV voltage offset previously discussed, as part of the hibernation control discussed elsewhere. The offsets cause the hibernation line receivers to interpret un-driven signal pairs as a logic-zero level.

The above drivers and impedances can be formed as discrete components or as part of a circuit module, or as an application-specific integrated circuit (ASIC) which acts as a more cost-effective encoder or decoder solution.

It can be easily seen that power is transferred to the client device, or display, from the host device using the signals labeled HOST_Pwr and HOST_Gnd over a pair of conductors. The HOST_Gnd portion of the signal acts as the reference ground and the power supply return path or signal for the client device.
The HOST_Pwr signal acts as the client device power supply and is driven by the host device. In an exemplary configuration, for low-power applications, the client device is allowed to draw up to 500 mA. The HOST_Pwr signal can be provided from portable power sources, such as, but not limited to, a lithium-ion type battery or battery pack residing at the host device, and may range from 3.2 to 4.3 volts with respect to HOST_Gnd.

VII. Timing Characteristics

A. Overview

The steps and signal levels employed to enter a hibernation state (no service requested, desired, or required) and to secure service for a client from the host, by either host or client initiation, are illustrated in FIGS. 43a, 43b, and 43c, respectively. In FIGS. 43a, 43b, and 43c, the first part of the signals being illustrated shows a Link Shutdown Packet being transferred from the host, after which the data line is driven to a logic-zero state using the high-impedance bias circuit. No data is being transmitted by the client, or host, which has its driver disabled. A series of strobe pulses for the MDDI_Stb signal line can be seen at the bottom, since MDDI_Stb is active during the Link Shutdown Packet. Once this packet ends, the logic level changes to zero as the host drives the bias circuit and logic to zero. This represents the termination of the last signal transfer or service from the host, which could have occurred at any time in the past, and is included to show the prior cessation of service and the state of the signals prior to service commencement. If desired, such a signal can be sent just to reset the communication link to the proper state without a 'known' prior communication having been undertaken by this host device.

As shown in FIG. 43a, and as discussed for the Link Shutdown Packet above, in the low-power hibernation state the MDDI_Data0 driver is disabled into a high-impedance state starting after the 16th to 48th MDDI_Stb cycle or pulse after the last bit of the All Zeros field in the Link Shutdown Packet. For Type-2, Type-3, or Type-4 links, the MDDI_Data1 through MDDI_Data7 signals are also placed in a high-impedance state at the same time that the MDDI_Data0 driver is disabled. As described in the definition of the All Zeros field, MDDI_Stb toggles for 64 cycles (or as desired for system design) following the MSB of the CRC field of the Link Shutdown Packet to allow processing by the client to be completed and to facilitate an orderly shutdown in a client controller. One cycle is a low-to-high transition followed by a high-to-low transition, or a high-to-low transition followed by a low-to-high transition. After the All Zeros field is sent, the MDDI_Stb and MDDI_Data0 drivers in the host are disabled, and the host enters the low-power hibernation state. After some period of time, the host commences the link restart sequence as shown in FIGS. 43b and 43c, by enabling the MDDI_Data0 and MDDI_Stb lines or driver outputs and beginning to toggle MDDI_Stb, as part of either a host- or client-initiated wake-up request.

As shown in FIG. 43b, after some time passes with the signal outputs from the drivers for MDDI_Data0 and MDDI_Stb disabled, a host initiates service or wake-up from hibernation by enabling its MDDI_Stb driver for a period of time designated tstb-data-enbl, during which the line is driven to a logic-zero level until it is completely enabled, and then enabling its MDDI_Data0 driver.
The host holds MDDI_Stb at a logic-zero level after MDDI_Data0 reaches a high, or logic-one, level for a period of time designated tclient-startup. At the end of the tclient-startup period, the host then toggles the MDDI_Stb signal or line. The host drives the MDDI_Data0 line high, a logic-one level, while the client does not drive MDDI_Data0, for a period designated trestart-high, and then drives the MDDI_Data0 line low, or to a logic-zero level, for a period designated trestart-low. After this, the first forward traffic begins with a Sub-frame Header Packet, and the forward traffic packets are then transferred. The MDDI_Stb signal is active during this period and the subsequent Sub-frame Header Packet.

As shown in FIG. 43c, after some time passes with the signal outputs from the drivers for MDDI_Data0 and MDDI_Stb disabled, a client initiates a service request or wake-up from hibernation by enabling an offset in the MDDI_Stb receiver or output signal for a period of time designated tstb-data-enbl, as discussed above, before the host enables its MDDI_Stb driver. The client then enables its MDDI_Data0 driver for a period of time designated thost-detect, during which the line is driven to a logic-zero level, before the host begins MDDI_Stb toggling.

A certain amount of time passes, or may be needed, before the host detects the request during thost-detect, after which the host responds by holding MDDI_Stb at a logic-zero level for the period designated tstb-startup before the host begins toggling MDDI_Stb with a link startup sequence, driving MDDI_Data0 to a logic-one, or high, level during the trestart-high period. When the client recognizes the first pulse on MDDI_Stb, it disables the offset in its MDDI_Stb receiver. The client continues to drive MDDI_Data0 to a logic-one level for a period designated tclient-detect, until it detects the host driving the line. At this point, the client de-asserts the request and disables its MDDI_Data0 driver, so that the output from the client goes to a logic-zero level again, and the host is driving MDDI_Data0. As before, the host continues to drive MDDI_Data0 to a logic-one level for the trestart-high period, and then drives the MDDI_Data0 line low for the trestart-low period, after which the first forward traffic begins with a Sub-frame Header Packet.
The MDDI_Stb signal is active during the trestart-low period and the subsequent Sub-frame Header Packet.

Table VIII shows representative times or processing periods for the lengths of the various periods discussed above, and their relationship to exemplary minimum and maximum data rates, where:

t_BIT = 1 / Link_Data_Rate,

where Link_Data_Rate is the bit rate of a single data pair. For example, at a 400 Mbps link data rate, t_BIT = 2.5 nsec.

Table VIII - Link Restart Timing Parameters

Symbol | Description | Min | Typ | Max | Units
1/tBIT-min-perf | Link data rate for a minimum performance device | 0.001 | | 1.1 | Mbps
1/tBIT-max-perf | Maximum link data rate range for a device, external | 0.001 | | 400 | Mbps
1/tBIT-max-perf | Maximum link data rate range for a device, internal | 0.001 | | 550 | Mbps
 | Reverse Link data rate | 0.0005 | | 50 | Mbps
tBIT | Period of one forward link data bit, external mode | 2.5 | | 10^6 | nsec
tBIT | Period of one forward link data bit, internal mode | 1.8 | | 10^6 | nsec
trestart-high | Duration of host link restart high pulse | 140 | 150 | 160 | Stb clks
trestart-low | Duration of host link restart low pulse | 50 | 50 | 50 | Stb clks
tstb-data-enbl | MDDI_Stb completely enabled to MDDI_Data0 enabled, link restart sequence | 0 | | | µsec
tclient-startup | Time for host to hold MDDI_Stb at logic-zero level after MDDI_Data0 reaches logic-high level | 200 | | | nsec
thost-detect | Time from MDDI_Data0 high to MDDI_Stb toggling | 0 | | 1000 | µsec
tclient-detect | Time for client to detect MDDI_Data0 at logic-high level | 60 | | 80 | Stb clks
tstb-startup | Time for host to hold MDDI_Stb at logic-zero level before host begins toggling MDDI_Stb | 200 | | | nsec

Those skilled in the art will readily understand that the functions of the individual elements illustrated in FIGS. 41 and 42 are well known, and the function of the elements in FIG. 42 is confirmed by the timing diagrams in FIGS. 43a, 43b, and 43c. Details about the series terminations and hibernation resistors that are shown in FIG. 42 were omitted from FIG. 41 because that information is unnecessary for a description of how to perform the Data-Strobe encoding and recover the clock from it.

B. Data-Strobe Timing Forward Link

The switching characteristics for the transfer of data on the forward link from the host driver output are shown in Table IX-1, which presents the minimum and maximum desired times, versus typical times, for certain signal transitions to occur. For example, the typical length of time for a transition to occur from the start to the end of a data value (output of a '0' or '1'), a Data0 to Data0 transition, termed ttdd-(host-output), is ttbit, while the minimum time is about ttbit - 0.5 nsec and the maximum is about ttbit + 0.5 nsec. The relative spacing between transitions on the Data0, other data lines (DataX), and strobe lines (Stb) is illustrated in FIG. 44, where the Data0 to Strobe, Strobe to Strobe, Strobe to Data0, Data0 to non-Data0, non-Data0 to non-Data0, non-Data0 to Strobe, and Strobe to non-Data0 transitions are shown; these are referred to as ttds-(host-output), ttss-(host-output), ttsd-(host-output), ttddx-(host-output), ttdxdx-(host-output), ttdxs-(host-output), and ttsdx-(host-output), respectively.
44, where the Data0 to Strobe, Strobe to Strobe, Strobe to Data0, Data0 to non-Data0, non-Data0 to non-Data0, non-Data0 to Strobe, and Strobe to non-Data0 transitions are shown, which are referred to as ttds-(host-output), ttss-(host-output), ttsd-(host-output), ttddx-(host-output), ttdxdx-(host-output), ttdxs-(host-output), and ttsdx-(host-output), respectively.

Table IX-1
Symbol | Description | Min | Typ | Max | Units
ttdd-(host-output) | Data0 to Data0 transition | tBIT - 0.5 | tBIT | tBIT + 0.5 | nsec
ttds-(host-output) | Data0 to Strobe transition | tBIT - 0.8 | tBIT | tBIT + 0.8 | nsec
ttss-(host-output) | Strobe to Strobe transition | tBIT - 0.5 | tBIT | tBIT + 0.5 | nsec
ttsd-(host-output) | Strobe to Data0 transition | tBIT - 0.8 | tBIT | tBIT + 0.8 | nsec
ttddx-(host-output) | Data0 to non-Data0 transition | | tBIT | | nsec
ttdxdx-(host-output) | non-Data0 to non-Data0 transition | tBIT - 0.5 | tBIT | tBIT + 0.5 | nsec
ttdxs-(host-output) | non-Data0 to Strobe transition | | tBIT | | nsec
ttsdx-(host-output) | Strobe to non-Data0 transition | | tBIT | | nsec

The typical MDDI timing requirements for the client receiver input, for the same signals transferring data on the forward link, are shown in Table IX-2. Since the same signals are being discussed, merely time delayed, no new figure is needed to illustrate the signal characteristics or the meaning of the respective labels, as would be understood by those skilled in the art.

Table IX-2
Symbol | Description | Min | Typ | Max | Units
ttdd-(client-input) | Data0 to Data0 transition | tBIT - 1.0 | tBIT | tBIT + 1.0 | nsec
ttds-(client-input) | Data0 to Strobe transition | tBIT - 1.5 | tBIT | tBIT + 1.5 | nsec
ttss-(client-input) | Strobe to Strobe transition | tBIT - 1.0 | tBIT | tBIT + 1.0 | nsec
ttsd-(client-input) | Strobe to Data0 transition | tBIT - 1.5 | tBIT | tBIT + 1.5 | nsec
ttddx-(client-input) | Data0 to non-Data0 transition | | tBIT | | nsec
ttdxdx-(client-input) | non-Data0 to non-Data0 transition | | tBIT | | nsec
ttdxs-(client-input) | non-Data0 to Strobe transition | | tBIT | | nsec
ttsdx-(client-input) | Strobe to non-Data0 transition | | tBIT | | nsec

FIGS. 45 and 46 illustrate the delay in response that can occur when the host disables or enables the host driver, respectively. In the case of a host forwarding certain packets, such as the Reverse Link Encapsulation Packet or the Round Trip Delay Measurement Packet, the host disables the line driver after the desired packets are forwarded, such as the Parameter CRC, Strobe Alignment, and All Zero packets illustrated in FIG. 45 as having been transferred. However, as shown in FIG. 45, the state of the line does not necessarily switch from '0' to a desired higher value instantaneously, although this is potentially achievable with certain control or circuit elements present, but takes a period of time termed the Host Driver Disable Delay period to respond. While it could occur virtually instantly, such that this time period is 0 nanoseconds (nsec) in length, it could more readily extend over some longer period, with 10 nsec being a desired maximum period length, which occurs during the Guard Time 1 or Turn Around 1 packet periods.

Looking at FIG. 46, one sees the signal level change undergone when the host driver is enabled for transferring a packet such as the Reverse Link Encapsulation Packet or the Round Trip Delay Measurement Packet. Here, after the Guard Time 2 or Turn Around 2 packet periods, the host driver is enabled and begins to drive a level, here '0', which value is approached or reached over a period of time termed the Host Driver Enable Delay period, which occurs during the Driver Re-enable period, prior to the first packet being sent.

A similar process occurs for the drivers and signal transfers for the client device, here a display.
The general guidelines for the length of these periods, and their respective relationships, are shown in Table X below.

Table X
Parameter | Min | Max | Units
Host Driver Disable Delay | 0 | 10 | nsec
Host Driver Enable Delay | 0 | 2.0 | nsec
Display Driver Disable Delay | 0 | 10 | µsec
Display Driver Enable Delay | 0 | 2.0 | nsec

C. Host and Client Output Enable and Disable Times

The switching characteristics and relative timing relationships for host and client output enable and disable times or operations, relative to the Reverse Link Encapsulation Packet structure and period, are shown in FIG. 48. The driver output functions or operations are labeled as: thost-enable for the host output enable time, thost-disable for the host output disable time, tclient-enable for the client output enable time, and tclient-disable for the client output disable time. Typical times for certain signal transitions are discussed below. The minimum period for these operations would be zero nanoseconds, with typical or maximum values determined from the system design employing the interface, possibly on the order of 8 nanoseconds or more.

The general guidelines for the length of these periods (the host and client enable/disable times), and their respective relationships, are shown in Table XI below.

Table XI
Symbol | Description | Min | Max | Units
thost-enable | Host output enable time | 0 | 24·tBIT | nsec
thost-disable | Host output disable time, entire length of the Turn-Around 1 field | 0 | 24·tBIT | nsec
tclient-enable | Client output enable time, entire length of the Turn-Around 1 field | 0 | 24·tBIT | nsec
tclient-disable | Client output disable time, measured from the end of the last bit of the Turn-Around 2 field | 0 | 24·tBIT | nsec

VIII. Implementation of Link Control (Link Controller Operation)

A. State Machine Packet Processor

Packets being transferred over an MDDI link are dispatched very rapidly, typically at a rate on the order of 300 Mbps or more, such as 400 Mbps, although lower rates are certainly accommodated, as desired. This type of bus or transfer link speed is too great for currently commercially available (economical) general-purpose microprocessors or the like to control. Therefore, a practical implementation to accomplish this type of signal transfer is to utilize a programmable state machine to parse the input packet stream to produce packets that are transferred or redirected to the appropriate audio-visual subsystem for which they are intended. Such devices are well known and use circuits generally dedicated to a limited number of operations, functions, or states to achieve a desired high speed or very high speed operation.

General-purpose controllers, processors, or processing elements can be used to more appropriately act upon or manipulate some information, such as control or status packets, which have lower speed demands. When those packets (control, status, or other predefined packets) are received, the state machine should pass them through a data buffer or similar processing element to the general-purpose processor, so the packets can be acted upon to provide a desired result (effect) while the audio and visual packets are transferred to their appropriate destination for action.
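As a concrete illustration of this division of labor, the following minimal C sketch routes audio-visual packets to dedicated hardware paths while buffering lower-rate packets to a general-purpose processor. All of the type codes, structure fields, and function names here are illustrative placeholders, not values defined by the interface.

```c
#include <stdint.h>

/* Illustrative packet-type codes; placeholders, not interface-defined values. */
enum pkt_type { PT_VIDEO_STREAM, PT_AUDIO_STREAM, PT_FILLER, PT_CONTROL };

struct packet {
    enum pkt_type  type;
    const uint8_t *payload;
    uint32_t       len;
};

/* Stubs standing in for the dedicated hardware paths and for the buffer
 * feeding the slower general-purpose processor. */
static void video_path_write(const struct packet *p) { (void)p; }
static void audio_fifo_write(const struct packet *p) { (void)p; }
static void cpu_queue_push(const struct packet *p)   { (void)p; }

/* Core dispatch of the packet-processor state machine: audio-visual
 * traffic stays on dedicated high-speed paths; control and status
 * packets are buffered through to the general-purpose processor. */
static void dispatch_packet(const struct packet *p)
{
    switch (p->type) {
    case PT_VIDEO_STREAM: video_path_write(p); break;
    case PT_AUDIO_STREAM: audio_fifo_write(p); break;
    case PT_FILLER:       /* data ignored */   break;
    default:              cpu_queue_push(p);   break;
    }
}

int main(void)
{
    struct packet p = { PT_CONTROL, 0, 0 };
    dispatch_packet(&p);   /* routed to the general-purpose processor */
    return 0;
}
```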
If, in the future, microprocessors or other general-purpose controllers, processors, or processing elements are manufactured to achieve higher data rate processing capabilities, then the states or state machine discussed below might also be implemented using software control of such devices, typically as programs stored on a storage element or media.

The general-purpose processor function can be realized in some embodiments by taking advantage of the processing power or excess cycles available from microprocessors (CPUs) in computer applications, or from controllers, processors, digital signal processors (DSPs), specialized circuits, or ASICs found in wireless devices, in much the same manner as some modems or graphics processors utilize the processing power of CPUs found in computers to perform some functions and reduce hardware complexity and costs. However, this cycle sharing or usage can negatively impact the processing speed, timing, or overall operation of such elements, so in many applications dedicated circuits or elements are preferred for this general processing.

In order for image data to be viewed on a display (micro-display), or to reliably receive all packets sent by the host device, the client signal processing is synchronized with the forward link channel timing. That is, signals arriving at the client and the client circuits need to be substantially time synchronized for proper signal processing to occur. A high-level diagram of the states achieved by such signal processing for one embodiment is presented in the illustration of FIG. 49. In FIG. 49, the possible forward link synchronization "states" for a state machine 4900 are shown, categorized as one ASYNC FRAMES STATE 4904, two ACQUIRING SYNC STATES 4902 and 4906, and three IN-SYNC STATES 4908, 4910, and 4912.

As shown by starting step or state 4902, the display or client, such as a presentation device, starts in a pre-selected "no sync" state and searches for a unique word in the first sub-frame header packet that is detected. It is to be noted that this no-sync state represents the minimum communication setting or "fall-back" setting in which a Type 1 interface is selected. When the unique word is found during the search, the client saves the sub-frame length field. There is no checking of the CRC bits for processing on this first frame, or until synchronization is obtained. If this sub-frame length is zero, then sync state processing proceeds to a state 4904, labeled here as the "async frames" state, which indicates that synchronization has not yet been achieved. This step in the processing is labeled as having encountered cond 3, or condition 3, in FIG. 49. Otherwise, if the frame length is greater than zero, then the state processing proceeds to a state 4906 where the interface state is set as "found one sync frame." This step in the processing is labeled as encountering cond 5, or condition 5, in FIG. 49. In addition, if the state machine sees a frame header packet and a good CRC determination for a frame length greater than zero, processing proceeds to the "found one sync frame" state. This is labeled as meeting cond 6, or condition 6, in FIG. 49.

In each situation in which the system is in a state other than "no sync," if a packet with a good CRC result is detected, then the interface state is changed to the "in-sync" state 4908. This step in the processing is labeled as having encountered cond 1, or condition 1, in FIG. 49.
On the other hand, if the CRC in any packet is not correct, then the sync state processing proceeds or returns to the interface state 4902, the "NO SYNC FRAME" state. This portion of the processing is labeled as encountering cond 2, or condition 2, in the state diagram of FIG. 49.

B. Acquisition Time for Sync

The interface can be configured to accommodate a certain number of "sync errors" prior to deciding that synchronization is lost and returning to the "NO SYNC FRAME" state. In FIG. 49, once the state machine has reached the "IN-SYNC STATE" and no errors are found, it is continuously encountering a cond 1 result and remains in the "IN-SYNC" state. However, once one cond 2 result is detected, processing changes the state to a "one-sync-error" state 4910. At this point, if processing results in detecting another cond 1 result, then the state machine returns to the "in-sync" state; otherwise, it encounters another cond 2 result and moves to a "TWO-SYNC-ERRORS" state 4912. Again, if a cond 1 occurs, processing returns the state machine to the "IN-SYNC" state. Otherwise, another cond 2 is encountered and the state machine returns to the "no-sync" state. It is also understandable that should the interface encounter a "link shutdown packet," this will cause the link to terminate data transfers and return to the "no-sync frame" state, as there is nothing to synchronize with; this is referred to as meeting cond 4, or condition 4, in the state diagram of FIG. 49.

It is understood that it is possible for there to be a repeating "false copy" of the unique word which may appear at some fixed location within the sub-frame. In that situation, it is highly unlikely that the state machine will synchronize to the sub-frame, because the CRC on the sub-frame Header Packet must also be valid when processed in order for the MDDI processing to proceed to the "IN-SYNC" state.

The sub-frame length in the sub-frame Header Packet may be set to zero to indicate that the host will transmit only one sub-frame before the link is shut down and the MDDI is placed in, or configured into, an idle hibernation state. In this case, the client must immediately receive packets over the forward link after detecting the sub-frame Header Packet, because only a single sub-frame is sent before the link transitions to the idle state. In normal or typical operations, the sub-frame length is non-zero, and the client only processes forward link packets while the interface is in those states collectively shown as "IN-SYNC" states in FIG. 49.

An external mode client device may be attached to the host while the host is already transmitting a forward link data sequence. In this situation, the client must synchronize to the host. The time required for a client to synchronize to the forward link signal is variable, depending on the sub-frame size and the forward link data rate. The likelihood of detecting a "false copy" of the unique word as part of the random, or more random, data in the forward link is greater when the sub-frame size is larger. At the same time, the ability to recover from a false detection is lower, and the time taken to do so is longer, when the forward link data rate is slower.
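The transitions above can be summarized in a small state-transition function. The following C sketch is one plausible reading of the conditions of FIG. 49; the event names are illustrative labels for the conditions described in the text, and the state values simply reuse the figure's reference numerals.

```c
/* Sketch of the forward-link synchronization state machine of FIG. 49. */
enum sync_state {
    NO_SYNC = 4902, ASYNC_FRAMES = 4904, ONE_SYNC_FRAME = 4906,
    IN_SYNC = 4908, ONE_SYNC_ERROR = 4910, TWO_SYNC_ERRORS = 4912,
};

enum sync_event {
    GOOD_CRC,          /* cond 1: packet with a good CRC result */
    BAD_CRC,           /* cond 2: packet with a bad CRC result  */
    UW_ZERO_LENGTH,    /* cond 3: unique word, sub-frame length == 0 */
    LINK_SHUTDOWN,     /* cond 4: Link Shutdown Packet received */
    UW_NONZERO_LENGTH, /* cond 5/6: unique word, sub-frame length > 0 */
};

enum sync_state next_state(enum sync_state s, enum sync_event e)
{
    if (e == LINK_SHUTDOWN)
        return NO_SYNC;                                    /* cond 4 */
    if (s == NO_SYNC || s == ASYNC_FRAMES) {
        if (e == UW_ZERO_LENGTH)    return ASYNC_FRAMES;   /* cond 3 */
        if (e == UW_NONZERO_LENGTH) return ONE_SYNC_FRAME; /* cond 5/6 */
        return s;
    }
    if (e == GOOD_CRC)
        return IN_SYNC;                                    /* cond 1 */
    /* e == BAD_CRC: step toward loss of synchronization (cond 2). */
    switch (s) {
    case IN_SYNC:        return ONE_SYNC_ERROR;
    case ONE_SYNC_ERROR: return TWO_SYNC_ERRORS;
    default:             return NO_SYNC;
    }
}
```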
For one or more embodiments, it is recommended or understood that an MDDI host should perform certain additional steps to ensure that the MDDI reverse link is stable before it stops forward link transmission to go to a low power mode or to shut down the link completely.

One problem that can occur is that if a host uses an incorrect measurement of the round-trip delay value, all subsequently received reverse data transmissions from the client can fail, even though the forward link appears to be fine. This could happen if the host tries to send a Round Trip Delay Measurement Packet when the client is not in sync with the forward link, or due to an extreme ambient temperature change that causes a corresponding large change in the propagation delay of the differential drivers and receivers, which affects the round trip delay. An intermittent cable or connector contact failure could also cause the client to temporarily lose synchronization and then regain sync, during which time it may miss receiving a Round Trip Delay Measurement Packet. Subsequent reverse link packets would then not be decoded properly by the host.

Another type of problem occurs when the client temporarily loses sync and the host sends a Link Shutdown Packet before the client is able to regain sync. The host will then be in hibernation while the client is unable to enter the hibernation state, because it did not receive the Link Shutdown Packet and has no clock while the link is in hibernation.

One technique or embodiment useful for overcoming such problems is to have the host ensure that the client is in sync with the forward link before putting the link into the hibernation state. If the MDDI host is unable to do this or does not have such an opportunity, such as when it loses power or the link is abruptly broken or fails due to a cable, conductor, or connector separation, break, or disconnection occurring during operation, then the host should first try to ensure that the client is in sync before starting a round-trip delay measurement process or sending a Reverse Link Encapsulation Packet.

A host can observe the CRC Error Count field in a Client Request and Status Packet sent by the client to determine the forward link integrity. This packet is requested by the host from the client. However, in the event of a major link failure or disruption, this request will most likely go unanswered, since a client will not be able to properly decode the packet, or may not even receive it at all. The request for the CRC Error Count using the Client Request and Status Packet, sent in a Reverse Link Encapsulation Packet, acts as a first integrity check, a sort of first line of defense. In addition, a host can send a Round Trip Delay Measurement Packet to confirm whether or not the assumption that the client has fallen out of sync is valid. If the client does not respond to a Round Trip Delay Measurement Packet, the host will conclude that the client is out of sync and can then start the process of getting it back in sync.

Once the host concludes that the client has more than likely lost synchronization with the forward link, it waits until the next sub-frame header before attempting to send any packets other than filler packets.
This is done in order to allow the client enough time to detect or look for the unique word contained in the sub-frame header packet. Following this, the host may assume that the client would have reset itself, since it would not have found the unique word at the correct location. At this point, the host may follow the sub-frame header packet with a Round Trip Delay Measurement Packet. If the client still does not respond correctly to the Round Trip Delay Measurement Packet, the host may repeat the resynchronization process. A correct response is one in which the client sends the specified sequence back to the host in the Measurement Period of the Round Trip Delay Measurement Packet. If this sequence is not received, then attempts to receive reverse data in a Reverse Link Encapsulation Packet will fail. Continued failure of this nature may indicate some other system error, which will have to be addressed in some other manner and is not part of the link synchronization at this point.

However, if after a successful Round Trip Delay Measurement Packet the host still sees corrupted data or no response in the Reverse Link Encapsulation Packets, it should confirm that the reverse data sampling is correct by re-sending a Round Trip Delay Measurement Packet. If this is not successful after a number of attempts, it is recommended for one embodiment that the host reduce the reverse data rate by increasing the reverse rate divisor value.

The host should perform the Link Failure Detection, and possibly the Link Resynchronization steps described above, before placing the MDDI link into the hibernation state. This will generally ensure that the Round Trip Delay Measurement performed when the link is restarted later on is successful. If the host has no reason to suspect a link failure, and a correct response to a Reverse Link Encapsulation Packet and zero forward link CRC errors are being reported by the client, a host may assume that everything is operating or functioning accordingly or appropriately (no link failure, for example) and proceed with the power-down/hibernation process.

Another manner in which a host can test for synchronization is for the host to send the Round Trip Delay Measurement Packet and confirm the proper response from the client. If the proper response is received by the host, it can reasonably be assumed that the client is successfully interpreting forward link packets.

C. Initialization

As stated earlier, at the time of "start-up," the host configures the forward link to operate at or below a minimum required, or desired, data rate of 1 Mbps, and configures the sub-frame length and media-frame rate appropriately for a given application. That is, both the forward and reverse links begin operation using the Type 1 interface. These parameters are generally only going to be used temporarily while the host determines the capability or desired configuration for the client display (or other type of client device). The host sends or transfers a sub-frame Header Packet over the forward link, followed by a Reverse Link Encapsulation Packet which has bit '0' of the Request Flags set to a value of one (1), in order to request that the display or client respond with a Client Capability Packet.
Once the display acquires synchronization on (or with) the forward link, it sends a Client Capability Packet and a Client Request and Status Packet over the reverse link or channel.

The host examines the contents of the Client Capability Packet in order to determine how to reconfigure the link for an optimal or desired level of performance. The host examines the Protocol Version and Minimum Protocol Version fields to confirm that the host and client use versions of the protocol that are compatible with each other. The protocol versions generally remain as the first two parameters of the Client Capability Packet, so that compatibility can be determined even when other elements of the protocol might not be compatible or completely understood as being compatible.

In internal mode the host can know the parameters of the client in advance, without having to receive a Client Capability Packet. The link may start up at any data rate at which the host and client can both operate. In many embodiments, a system designer will most likely choose to start the link at the maximum achievable data rate to hasten data transfer; however, this is not required and may not be used in many situations. For internal mode operation, the frequency of the strobe pulses used during the link restart from hibernation sequence will usually be consistent with this desired rate.

D. CRC Processing

For all packet types, the packet processor state machine ensures that the CRC checker is controlled appropriately or properly. It also increments a CRC error counter when a CRC comparison results in one or more errors being detected, and it resets the CRC counter at the beginning of each sub-frame being processed.

E. Alternative Loss of Synchronization Check

While the above series of steps or states works to produce higher data rates or throughput speed, Applicants have discovered that an alternative arrangement, a change in the conditions the client uses to declare that there is a loss of synchronization with the host, can be used effectively to achieve even higher data rates or throughput. The new inventive embodiment has the same basic structure, but with the conditions for changing states changed. Additionally, a new counter is implemented to aid in making checks for sub-frame synchronization. These steps and conditions are presented relative to FIG. 63, which illustrates a series of states and conditions useful in establishing the operations of the method or state machine. Only the "ACQUIRING-SYNC STATES" and "IN-SYNC STATES" portions are shown for clarity. In addition, since the resulting states are substantially the same, as is the state machine itself, they use the same numbering. However, the conditions for changing states (and the state machine operation) vary somewhat, so all conditions are renumbered for clarity between the two figures (1, 2, 3, 4, 5, and 6, versus 61, 62, 63, 64, and 65), as a convenience in identifying differences. Since the ASYNC FRAME state is not considered in this discussion, one state (4904) and condition (6) are no longer used in the figure.

In FIG. 63, the system or client (for display or presentation) starts with state machine 5000 in the pre-selected "no sync" state 4902, as in FIG. 49. The first state change from the no-sync state 4902 occurs on condition 64, which is the discovery of the sync pattern.
Assuming that the CRC of the sub-frame header also passes on this packet (meeting condition 61), the state of the packet processor state machine can be changed to the in-sync state 4908. A sync error, condition 62, will cause the state machine to shift to state 4910, and a second occurrence to state 4912. However, it has been discovered that any CRC failure of an MDDI packet will cause the state machine to move out of in-sync state 4908 to the one-sync-error state 4910. Another CRC failure of any MDDI packet will cause a move to the two-sync-failure state 4912. A packet decoded with a correct CRC value will cause the state machine to return to the in-sync state 4908.

What has been changed is to utilize the CRC value or determination for 'every' packet. That is, the state machine looks at the CRC value for every packet to determine a loss of synchronization, instead of just observing sub-frame header packets. In this configuration or process, a loss of synchronization is not determined using the unique word and just the sub-frame header CRC values.

This new interface implementation allows the MDDI link to recognize synchronization failures much more quickly, and, therefore, to recover from them more quickly as well.

To make this system more robust, the client should also add or utilize a sub-frame counter. The client then checks for the presence of the unique word at the time it is expected to arrive or occur in a signal. If the unique word does not occur at the correct time, the client can recognize that a synchronization failure has occurred much more quickly than if it had to wait several (here three) packet times or periods that were greater than a sub-frame length. If the test for the unique word indicates it is not present, in other words that the timing is incorrect, then the client can immediately declare a link loss of synchronization and move to the no-sync state. The process of checking for the proper unique word presence adds a condition 65 (cond 65) to the state machine, namely that the unique word is incorrect. If a sub-frame packet is expected to be received on the client and does not match up, the client can immediately go to the no-sync state 4902, saving the additional time otherwise spent waiting for multiple sync errors (condition 62) while traversing through states 4910 and 4912.

This change uses an additional counter or counting function in the client core to count the sub-frame length, as sketched below. In one embodiment, a count-down function is used, and the transfer of any packet that is currently being processed is interrupted to check for the sub-frame unique word if the counter has expired. Alternatively, the counter can count up, with the count being compared to a desired maximum or particular desired value, at which point the current packet is checked. This process protects the client from decoding packets that are incorrectly received with extraordinarily long packet lengths. If the sub-frame length counter needed to interrupt some other packet that was being decoded, a loss of synchronization can be determined, since no packet should cross a sub-frame boundary.
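A minimal sketch of such a count-down check follows; the hook functions, and the choice to count in bit times, are illustrative assumptions rather than details fixed by this description.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hooks stubbed for illustration: compare the incoming bits against
 * the unique word, and force the state machine to no-sync (cond 65). */
static bool unique_word_at_cursor(void) { return true; }
static void declare_no_sync(void)       { }

static uint32_t subframe_counter;  /* bit times left in the sub-frame */

/* Load the counter from the sub-frame length field once sync is found. */
static void subframe_reload(uint32_t subframe_len_bits)
{
    subframe_counter = subframe_len_bits;
}

/* Called once per received bit time. If the counter expires while a
 * packet is still being decoded, or the unique word is not found at
 * the expected position, synchronization is declared lost (cond 65). */
static void subframe_tick(uint32_t subframe_len_bits, bool mid_packet)
{
    if (--subframe_counter != 0)
        return;                            /* boundary not yet reached */
    subframe_reload(subframe_len_bits);    /* reload for next sub-frame */
    if (mid_packet || !unique_word_at_cursor())
        declare_no_sync();                 /* unique word missing or late */
}
```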
IX. Packet Processing

For each type of packet discussed above that the state machine receives, it undertakes a particular processing step or series of steps to implement operation of the interface. Forward link packets are generally processed according to the exemplary processing listed in Table XII below.

Table XII
Sub-Frame Header (SH): Confirms a good packet, captures the sub-frame length field, and sends the packet parameters to a general-purpose processor.
Filler (F): Ignores the data.
Video Stream (VS): Interprets the Video Data Format Descriptor and other parameters, unpacks packed pixel data when necessary, translates pixels through the color map if necessary, and writes pixel data to the appropriate locations in the bitmap.
Audio Stream (AS): Sends the audio sample rate setting to the audio sample clock generator, separates audio samples of the specified size, unpacks audio sample data when necessary, and routes audio samples to the appropriate audio sample FIFO.
Color Map (CM): Reads the color map size and offset parameters, and writes the color map data to a color map memory or storage location.
Reverse Link Encapsulation (REL): Facilitates sending packets in the reverse direction at the appropriate time. Reverse link flags are examined, and Client Capability packets are sent as necessary. Client Request and Status packets are also sent as appropriate.
Client Capability (CC): Sends this type of packet when requested by a host using the reverse link flags field of the Reverse Link Encapsulation Packet.
Keyboard (K): Passes these packets to and from a general-purpose processor that communicates with a keyboard-type device, if one is present and its use is desired.
Pointing Device (PD): Passes these packets to and from a general-purpose processor that communicates with a pointing-type device, if one is present and its use is desired.
Link Shutdown (LS): Records the fact that the link is shut down and informs a general-purpose processor.
Client Service Request and Status (CSRS): Sends this packet as the first packet in the Reverse Link Encapsulation Packet.
Bit Block Transfer (BPT): Interprets packet parameters, such as the Video Data Format Descriptor, determines which pixels to move first, and moves pixels in the bitmap as required.
Bitmap Area Fill (BAF): Interprets packet parameters, translates pixels through the color map if necessary, and writes pixel data to the appropriate locations in the bitmap.
Bitmap Pattern Fill (BPF): Interprets packet parameters, unpacks packed pixel data if necessary, translates pixels through the color map if necessary, and writes pixel data to the appropriate locations in the bitmap.
Communication Link Channel (CLC): Sends this data directly to a general-purpose processor.
Client Service Request (CSR) during hibernation: A general-purpose processor controls the low-level functions of sending the request and detects contention with the link restarting on its own.
Interface Type Handoff Request (ITHR) and Interface Type Acknowledge (ITA): May pass these packets to and from the general-purpose processor. The logic to receive this type of packet and formulate a response with an acknowledgment is substantially minimal, so this operation could also be implemented within the packet processor state machine. The resulting handoff occurs as a low-level physical layer action and is not likely to affect the functionality or functioning of the general-purpose processor.
Perform Type Handoff (PTH): May act on such packets either directly or by transferring them to the general-purpose processor, also commanding hardware to undergo a mode change.
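A table such as Table XII maps naturally onto a function-pointer dispatch table. The C sketch below shows one such arrangement; only the Reverse Link Encapsulation (type 65) and Client Request and Status (type 70) codes are given numeric values in this description, so the remaining entries and all handler bodies are placeholders.

```c
#include <stddef.h>
#include <stdint.h>

typedef void (*pkt_handler)(const uint8_t *payload, uint32_t len);

/* Placeholder handlers mirroring a few of the Table XII actions. */
static void handle_filler(const uint8_t *p, uint32_t n)        { (void)p; (void)n; }
static void handle_rev_encaps(const uint8_t *p, uint32_t n)    { (void)p; (void)n; }
static void handle_client_status(const uint8_t *p, uint32_t n) { (void)p; (void)n; }

struct dispatch_entry { uint16_t type; pkt_handler fn; };

static const struct dispatch_entry dispatch_table[] = {
    { 65, handle_rev_encaps },     /* Reverse Link Encapsulation */
    { 70, handle_client_status },  /* Client Request and Status  */
    /* ... one entry per packet type listed in Table XII ... */
};

static pkt_handler lookup_handler(uint16_t type)
{
    for (size_t i = 0; i < sizeof dispatch_table / sizeof dispatch_table[0]; i++)
        if (dispatch_table[i].type == type)
            return dispatch_table[i].fn;
    return handle_filler;  /* unknown types are ignored, like Filler */
}
```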
X. Reducing the Reverse Link Data Rate

It has been observed by the inventors that certain parameters used for the host link controller can be adjusted or configured in a certain manner in order to achieve a maximum or more optimized reverse link data rate, which is very desirable. For example, during the time used to transfer the Reverse Data Packets field of the Reverse Link Encapsulation Packet, the MDDI_Stb signal pair toggles to create a periodic data clock at half the forward link data rate. This occurs because the host link controller generates the MDDI_Stb signal corresponding to the MDDI_Data0 signal as if it were sending all zeroes. The MDDI_Stb signal is transferred from the host to a client, where it is used to generate a clock signal for transferring reverse link data from the client, with which reverse data is sent back to the host. An illustration of typical amounts of delay encountered for the signal transfer and processing on the forward and reverse paths in a system employing the MDDI is shown in FIG. 50. In FIG. 50, a series of delay values 1.5 nsec, 8.0 nsec, 2.5 nsec, 2.0 nsec, 1.0 nsec, 1.5 nsec, 8.0 nsec, and 2.5 nsec are shown near the processing portions for the Stb+/- generation, cable transfer-to-client, client receiver, clock generation, signal clocking, Data0+/- generation, cable transfer-to-host, and host receiver stages, respectively.

Depending on the forward link data rate and the signal processing delays encountered, it may require more time than one cycle of the MDDI_Stb signal for this "round trip" effect or set of events to be completed, which results in the consumption of undesirable amounts of time or cycles. To circumvent this problem, the Reverse Rate Divisor makes it possible for one bit time on the reverse link to span multiple cycles of the MDDI_Stb signal. This means that the reverse link data rate is less than the forward link rate.

It should be noted that the actual length of signal delays through the interface may differ depending on the specific host-client system or hardware being used. Although not required, each system can generally be made to perform better by using the Round Trip Delay Measurement Packet to measure the actual delay in a system, so that the Reverse Rate Divisor can be set to an optimum value. The host may support either basic data sampling, which is simpler but operates at a slower speed, or advanced data sampling, which is more complex but supports higher reverse data rates. The client capability to support both methods is considered the same.

A round-trip delay is measured by having the host send a Round Trip Delay Measurement Packet to the client. The client responds to this packet by sending a sequence of ones back to the host inside of, or during, a pre-selected measurement window in that packet called the Measurement Period field. The detailed timing of this measurement was described previously. The round-trip delay is used to determine the rate at which the reverse link data can be safely sampled.

The round-trip delay measurement consists of determining, detecting, or counting the number of forward link data clock intervals occurring between the beginning of the Measurement Period field and the beginning of the time period when the 0xff, 0xff, 0x00 response sequence is received back at the host from the client. Note that it is possible for the response from the client to be received a small fraction of a forward link clock period before the measurement count was about to increment.
If this unmodified value is used to calculate the Reverse Rate Divisor, it could cause bit errors on the reverse link due to unreliable data sampling. An example of this situation is illustrated in FIG. 51, where signals representing MDDI_Data at the host, MDDI_Stb at the host, the forward link data clock inside the host, and a Delay Count are illustrated in graphical form. In FIG. 51, the response sequence was received from the client a fraction of a forward link clock period before the Delay Count was about to increment from 6 to 7. If the delay is assumed to be 6, then the host will sample the reverse data just after a bit transition, or possibly in the middle of a bit transition. This could result in erroneous sampling at the host. For this reason, the measured delay should typically be incremented by one before it is used to calculate the Reverse Rate Divisor.

The Reverse Rate Divisor is the number of MDDI_Stb cycles the host should wait before sampling the reverse link data. Since MDDI_Stb is cycled at a rate that is one half of the forward link rate, the corrected round-trip delay measurement needs to be divided by 2 and then rounded up to the next integer. Expressed as a formula, this relationship is:

reverse_rate_divisor = RoundUpToNextInteger((round_trip_delay + 1) / 2)

For the example given, this becomes:

reverse_rate_divisor = RoundUpToNextInteger((6 + 1) / 2) = 4

If the round-trip delay measurement used in this example were 7 as opposed to 6, then the Reverse Rate Divisor would also be equal to 4.

The reverse link data is sampled by the host on the rising edge of the Reverse Link Clock. There is a counter or similar known circuit or device present in both the host and client (display) to generate the Reverse Link Clock. The counters are initialized so that the first rising edge of the Reverse Link Clock occurs at the beginning of the first bit in the Reverse Link Packets field of the Reverse Link Encapsulation Packet. This is illustrated, for the example given below, in FIG. 52A. The counters increment at each rising edge of the MDDI_Stb signal, and the number of counts occurring until they wrap around is set by the Reverse Rate Divisor parameter in the Reverse Link Encapsulation Packet. Since the MDDI_Stb signal toggles at one half of the forward link rate, the reverse link rate is one half of the forward link rate divided by the Reverse Rate Divisor. For example, if the forward link rate is 200 Mbps and the Reverse Rate Divisor is 4, then the reverse link data rate is expressed as:

(1/2 · 200 Mbps) / 4 = 25 Mbps
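The divisor and rate formulas above reduce to a few lines of integer arithmetic. The following C sketch reproduces them, with the function names being illustrative rather than part of the interface definition.

```c
#include <stdio.h>

/* reverse_rate_divisor = ceil((round_trip_delay + 1) / 2); the +1
 * guards against sampling on a bit transition (see FIG. 51). */
static unsigned reverse_rate_divisor(unsigned round_trip_delay)
{
    return (round_trip_delay + 1 + 1) / 2;  /* integer ceiling of (d+1)/2 */
}

/* MDDI_Stb toggles at half the forward rate, so the reverse rate is
 * half the forward rate divided by the Reverse Rate Divisor. */
static double reverse_rate_mbps(double fwd_mbps, unsigned divisor)
{
    return fwd_mbps / 2.0 / divisor;
}

int main(void)
{
    unsigned div = reverse_rate_divisor(6);               /* -> 4 */
    printf("divisor = %u, reverse rate = %.0f Mbps\n",
           div, reverse_rate_mbps(200.0, div));           /* -> 25 Mbps */
    return 0;
}
```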
An example showing the timing of the MDDI_Data0 and MDDI_Stb signal lines in a Reverse Link Encapsulation Packet is shown in FIG. 52, where the packet parameters used for illustration have the values:

Packet Length = 1024 (0x0400)
Packet Type = 65 (0x41)
Turn Around 1 Length = 1
Turn Around 2 Length = 1
Reverse Link Flags = 0
Reverse Rate Divisor = 2
Parameter CRC = 0xdb43
All Zero = 0x00

Packet data between the Packet Length and Parameter CRC fields is:

0x00, 0x04, 0x41, 0x00, 0x02, 0x01, 0x01, 0x43, 0xdb, 0x00, ...

The first reverse link packet returned from the client is the Client Request and Status Packet, having a Packet Length of 7 and a packet type of 70. This packet begins with the byte values 0x07, 0x00, 0x46, and so forth. However, only the first byte (0x07) is visible in FIG. 52. This first reverse link packet is time-shifted by nearly one reverse link clock period in the figure to illustrate an actual reverse link delay. An ideal waveform with zero host-to-client round-trip delay is shown as a dotted-line trace.

The MS byte of the Parameter CRC field is transferred, preceded by the packet type, then the all-zero field. The strobe from the host switches from one to zero and back to one as the data from the host changes level, forming wider pulses. As the data goes to zero, the strobe switches at the higher rate; only the change in data on the data line causes a change near the end of the alignment field. The strobe switches at the higher rate for the remainder of the figure, due to the fixed 0 or 1 levels of the data signal for extended periods of time and the transitions falling on the pulse pattern (edge).

The reverse link clock for the host is at zero until the end of the Turn Around 1 period, when the clock is started to accommodate the reverse link packets. The arrows in the lower portion of the figure indicate when the data is sampled, as would be apparent from the remainder of the disclosure. The first byte of the packet field being transferred (here 11000000) is shown commencing after Turn Around 1, after the line level has stabilized from the host driver being disabled. The delay in the passage of the first bit, as also seen for bit three, can be seen in the dotted lines for the Data signal.

In FIG. 53, one can observe typical values of the Reverse Rate Divisor based on the forward link data rate. The actual Reverse Rate Divisor is determined as a result of a round-trip link measurement to guarantee proper reverse link operation. A first region 5302 corresponds to an area of safe operation, a second region 5304 corresponds to an area of marginal performance, while a third region 5306 indicates settings that are unlikely to function properly.

The round-trip delay measurement and Reverse Rate Divisor setting are the same while operating with any of the Interface Type settings on either the forward or reverse link, because they are expressed and operated on in terms of units of actual clock periods rather than numbers of bits transmitted or received.

Typically, the largest possible Reverse Rate Divisor is half the number of bits that can be sent in the measurement window of the Round Trip Delay Measurement Packet using a Type 1 interface, or for this example:

(512 bytes · 8 bits/byte) / 2 = 2048

An advanced reverse data sampling method can also be employed as an alternative that allows the reverse bit time to be smaller than the round-trip delay. For this technique, a host not only measures the round-trip delay, but can also determine the phase of the response from the client with respect to an 'ideal' bit boundary of a client and link with zero delay. By knowing the phase of the client device response, a host can determine a relatively safe time to sample the reverse data bits from the client. The round-trip delay measurement indicates to a host the location of the first bit of reverse data with respect to the beginning of the Reverse Data Packets field.

One embodiment of an example of advanced reverse data sampling is illustrated in graphical form in FIG. 52B. An ideal reverse data signal with zero round-trip delay is shown as a dotted-line waveform. The actual round-trip delay, between 3.5 and 4 MDDI_Stb cycles, can be observed as the difference in delay between the solid waveform and the ideal one. This is the same delay that would be measured using the Round Trip Delay Measurement Packet, and corresponds to a measured round-trip delay value equal to 7 forward-link bit times.
In this embodiment, reverse data bits are 2 MDDI_Stb pulses long, which is 4 forward-link bit times, corresponding to a reverse rate divisor equal to 2. For advanced reverse data sampling it is convenient to use a pre-selected reverse rate divisor of 2 instead of computing it as described elsewhere. This appears to be a substantially optimum choice for advanced reverse data sampling, because the ideal sampling point can easily be determined using the conventional measurements described above.

The ideal sampling point for reverse data can be easily computed by taking the remainder of the total round-trip delay divided by the number of forward link clocks per reverse bit, that is, the round-trip delay modulo the forward link clocks per reverse bit, and then subtracting either 1 or 2 to get to a safe point away from the data transition. In this example, 7 mod 4 = 3, then 3 - 1 = 2, or 3 - 2 = 1. The safe sampling point is either 1 or 2 forward link bit times from the edge of the "ideal" bit boundary for zero round-trip delay. The figure shows the sampling point at 2 forward link bit times from the ideal bit boundary, as indicated by the series of vertical arrows at the bottom of the timing diagram. The first sampling point is the first ideal bit boundary after the measured round-trip delay, plus the offset for safe sampling. In this example, the round-trip delay measurement is 7, so the next ideal bit boundary is at the 8th bit time; adding either 1 or 2 for the safe sampling point, the first bit shall be sampled at either 9 or 10 forward link bit times after the beginning of the Reverse Data Packets field.

XI. Turn-Around and Guard Times

The Turn-Around 1 field in the Reverse Link Encapsulation Packet allows time for the host drivers to disable and the client drivers to enable simultaneously. The Guard Time 1 field in the Round Trip Delay Measurement Packet allows overlap of the host and client, so the client drivers can enable before the host interface drivers are disabled. The Turn Around 2 field in the Reverse Link Encapsulation Packet allows data in the previous field from the client to be fully transmitted before the host drivers are enabled. The Guard Time 2 field provides a time value or period which allows the client and host drivers to drive simultaneously at a logic-zero level. The Guard Time 1 and Guard Time 2 fields are generally filled with pre-set or pre-selected values for lengths that are not meant to be adjusted. Depending on the interface hardware being used, these values may be developed using empirical data and adjusted in some instances to improve operation.

Turn-Around 1

Several factors contribute to a determination of the length of Turn-Around 1: the forward link data rate, the maximum disable time of the MDDI_Data drivers in the host, and the enable time of the client driver, which is generally the same as the host disable time. The length of the Turn-Around 1 field is selected to be 24·tBIT (see Table XI). The length, in the number of forward link bytes, of the Turn-Around 1 field is determined using the Interface Type Factor, and is computed using the relationship:

Length_TurnAround1 = (24 / (8 bits/byte)) · InterfaceTypeFactor_FWD = 3 · InterfaceTypeFactor_FWD

where the Interface Type Factor is 1 for Type 1, 2 for Type 2, 4 for Type 3, and 8 for Type 4.
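The following minimal C sketch consolidates the two calculations developed above, the advanced-sampling safe sampling point and the Turn-Around 1 length; the function names are illustrative, not part of the interface definition, and the sampling helper assumes the modulo offset is at least as large as the chosen backoff, as in the worked example.

```c
#include <stdio.h>

/* First safe sampling point, in forward-link bit times measured from
 * the beginning of the Reverse Data Packets field: the next ideal bit
 * boundary after the measured round-trip delay, plus the modulo-derived
 * offset backed off 1 or 2 bit times from the data transition. */
static unsigned first_sample_point(unsigned round_trip_delay,
                                   unsigned fwd_clocks_per_rev_bit,
                                   unsigned backoff /* 1 or 2 */)
{
    unsigned boundary = (round_trip_delay / fwd_clocks_per_rev_bit + 1)
                        * fwd_clocks_per_rev_bit;
    unsigned offset = round_trip_delay % fwd_clocks_per_rev_bit - backoff;
    return boundary + offset;
}

/* Turn-Around 1 length in forward-link bytes:
 * 3 x Interface Type Factor (1, 2, 4, or 8 for Types 1 through 4). */
static unsigned turnaround1_bytes(unsigned interface_type_factor)
{
    return (24 / 8) * interface_type_factor;
}

int main(void)
{
    /* Round-trip delay 7, 4 forward clocks per reverse bit: 9 or 10. */
    printf("sample at %u or %u bit times\n",
           first_sample_point(7, 4, 2), first_sample_point(7, 4, 1));
    printf("Turn-Around 1 (Type 3) = %u bytes\n", turnaround1_bytes(4));
    return 0;
}
```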
Turn-Around 2

The factors that determine the length of time generally used for Turn-Around 2 are the round-trip delay of the communication link, the maximum disable time of the MDDI_Data drivers in the client, and the enable time of the host driver, which is specified to be the same as the client driver disable time. The maximum host driver enable time and client driver disable time are specified in Table X above. The round-trip delay is measured in units of tBIT. The minimum length, specified in the number of forward link bytes, of the Turn-Around 2 field is computed according to the relationship:

Length_TurnAround2 ≥ RoundUpToNextInteger(((RoundTripDelay + 24) / (8 bits/byte)) · InterfaceTypeFactor_FWD)

For example, a Type 3 forward link with a round-trip delay of 10 forward link clocks typically uses a Turn-Around 2 delay on the order of:

Length_TurnAround2 ≥ RoundUpToNextInteger(((11 + 24) / 8) · 4) = 18 bytes

XII. Alternative Reverse Link Timing

While the use of the timing and guard bands discussed above works to achieve a high data transfer rate interface, the inventors have discovered a technique to allow reverse bit lengths that are shorter than the round-trip time, by changing the reverse timing discovery.

As presented above, the previous approach to the timing of the reverse link is configured such that the number of clock cycles is counted from the last bit of the Guard Time 1 field of a reverse timing packet until the first bit is sampled on the rising edge of an IO clock, that is, the clock signal(s) used to time the inputs and outputs for the MDDI. The calculation for the reverse rate divisor is then given by:

reverse_rate_divisor = RoundUpToNextInteger((round_trip_delay + 1) / 2)

This provides a bit width equal to the round-trip delay, which results in a very reliable reverse link. However, the reverse link has been shown to be capable of running faster, or at a higher data transfer rate, which the inventors want to take advantage of. A new inventive technique allows utilizing additional capabilities of the interface to reach higher speeds.

This is accomplished by having the host count the number of clock cycles until a one is sampled, but with the host sampling the data line on both the rising and falling edges during the reverse timing packet. This allows the host to pick the most useful, or even optimal, sampling point within the reverse bit to ensure that the bit is stable; that is, to find the most useful or optimal rising edge on which to sample data for reverse traffic in reverse encapsulation packets. The optimal sampling point depends on both the reverse link divisor and whether the first one was detected on a rising edge or a falling edge. The new timing method allows the host to simply look for the first edge of the 0xFF 0xFF 0x00 pattern sent by the client for reverse link timing, in order to determine where to sample in a reverse encapsulation packet.

Examples of the arriving reverse bit, and how that bit would look for various reverse rate divisors, are illustrated in FIG. 64, along with the number of clock cycles that have occurred since the last bit of Guard Time 1. In FIG. 64, one can see that if the first edge occurs between a rising and falling edge (labeled as rise/fall), the optimal sampling point for a reverse rate divisor of one is the clock cycle edge labeled 'b', as that is the only rising edge occurring within the period of the reverse bit.
For a reverse rate divisor of two, the optimal sampling point is probably still clock cycle leading edge 'b', as cycle edge 'c' is closer to a bit edge than 'b'. For a reverse rate divisor of four, the optimal sampling point is probably clock cycle edge 'd', as it is closer to the back edge of the reverse bit, where the value has probably stabilized.

Returning to FIG. 64, if, however, the first edge occurs between a falling and rising edge (labeled as fall/rise), the optimal sampling point for a reverse rate divisor of one is clock cycle edge 'a', as that is the only rising edge within the reverse bit time period. For a reverse rate divisor of two, the optimal sampling point is edge 'b', and for a reverse rate divisor of four the optimal sampling point is edge 'c'.

One can see that as the reverse rate divisors get larger and larger, the optimal sampling point becomes easier to ascertain or select, as it should be the rising edge that is closest to the middle.

The host can use this technique to find the number of rising clock edges before the rising data edge of the timing packet data is observed on the data line. It can then decide, based on whether the edge occurs between a rising and falling edge or between a falling and rising edge, and on what the reverse rate divisor is, how many additional clock cycles to add to a number counter, to reasonably ensure that the bit is always sampled as close to the middle as possible.

Once the host has selected or determined the number of clock cycles, it can "explore" various reverse rate divisors with the client to determine whether a particular reverse rate divisor will work. The host (and client) can start with a divisor of one and check the CRC of the reverse status packet received from the client to determine if this reverse rate functions appropriately to transfer data. If the CRC is corrupt, there is probably a sampling error, and the host can increase the reverse rate divisor and try to request a status packet again. If the second requested packet is corrupt, the divisor can be increased again and the request made again. If this packet is decoded correctly, this reverse rate divisor can be used for all future reverse packets.

This method is effective and useful because the reverse timing should not change from the initial round-trip timing estimate. If the forward link is stable, the client should continue to decode forward link packets even if there are reverse link failures. Of course, it is still the responsibility of the host to set a reverse link divisor for the link, since this method does not guarantee a perfect reverse link. In addition, the divisor will depend primarily on the quality of the clock that is used to generate the IO clock. If that clock has a significant amount of jitter, there is a greater possibility of a sampling error. This error probability increases with the number of clock cycles in the round-trip delay.

This implementation appears to work best for Type 1 reverse data, but may present problems for Type 2 through Type 4 reverse data, due to the skew between data lines potentially being too great to run the link at the rate that works best for just one data pair. However, the data rate probably does not need to be reduced to that of the previous method, even with Type 2 through Type 4 operation. This method may also work best if duplicated on each data line to select the ideal or an optimal clock sample location. If the optimal locations are at the same sample time for each data pair, this method would continue to work.
If they are at different sample periods, two different approaches may be used. The first is to select a desired or more optimized sample location for each data point, even if it is not the same for each data pair. The host can then reconstruct the data stream after sampling all of the bits from the set of data pairs: two bits for Type 2, four bits for Type 3, and eight bits for Type 4. The other option is for the host to increase the reverse rate divisor such that the data bits for every data pair can be sampled at the same clock edge.

XIII. Effects of Link Delay and Skew

Delay skew on the forward link between the MDDI_Data pairs and MDDI_Stb can limit the maximum possible data rate unless delay skew compensation is used. The differences in delay that cause timing skew are due to the controller logic, the line drivers and receivers, and the cable and connectors, as outlined below.

A. Link Timing Analysis Limited by Skew (MDDI Type 1)

1. Delay and Skew Example of a Type 1 Link

A typical interface circuit similar to that shown in FIG. 41 is shown in FIG. 57 for accommodating a Type 1 interface link. In FIG. 57, exemplary or typical values for propagation delay and skew are shown for each of several processing or interface stages of an MDDI Type 1 forward link. Skew in the delay between MDDI_Stb and MDDI_Data0 causes the duty-cycle of the output clock to be distorted. Data at the D input of the receiver flip-flop (RXFF) stage, using flip-flops 5728, 5732, changes slightly after the clock edge so that it can be sampled reliably. The figure shows two cascaded delay lines 5732a and 5732b being used to solve two different problems with creating this timing relationship. In the actual implementation these may be combined into a single delay element.

Data, Stb, and Clock Recovery Timing on a Type 1 Link, for exemplary signal processing through the interface, are illustrated in FIG. 58.

The total delay skew that is significant generally arises from the sum of the skew in the following stages: transmitter flip-flop (TXFF) with flip-flops 5704, 5706; transmitter driver (TXDRVR) with drivers 5708, 5710; the CABLE 5702; receiver line receiver (RXRCVR) with receivers 5722, 5724; and receiver XOR logic (RXXOR). Delay1 5732a should match or exceed the delay of the XOR gate 5736 in the RXXOR stage, which is determined by the relationship:

tPD-min(Delay1) ≥ tPD-max(XOR)

It is desirable to meet this requirement so that the D input of receiver flip-flop 5728, 5732 does not change before its clock input. This is valid if the hold-time of RXFF is zero.

The purpose or function of Delay2 is to compensate for the hold-time of the RXFF flip-flop according to the relationship:

tPD-min(Delay2) = tH(RXFF)

In many systems this will be zero because the hold time is zero, and of course in that case the maximum delay of Delay2 can also be zero.

The worst-case contribution to skew in the receiver XOR stage is in the data-late/strobe-early case, where Delay1 is at a maximum value and the clock output from the XOR gate comes as early as possible, according to the relationship:

tSKEW-max(RXXOR) = tPD-max(Delay1) - tPD-min(XOR)

In this situation, the data may change between two bit periods, n and n+1, very close to the time where bit n+1 is clocked into the receiver flip-flop.

The maximum data rate (minimum bit period) of an MDDI Type 1 link is a function of the maximum skew encountered through all the drivers, cable, and receivers in the MDDI link, plus the total data setup into the RXFF stage.
The total delay skew in the link, up to the output of the RXRCVR stage, can be expressed as:

tSKEW-max(LINK) = tSKEW-max(TXFF) + tSKEW-max(TXDRVR) + tSKEW-max(CABLE) + tSKEW-max(RXRCVR)

with the "cable" representing a variety of conductors, interconnections, or wires and the corresponding delay, and the minimum bit period is given by:

tBIT-min = tSKEW-max(LINK) + 2·tB-TP4 + tAsymmetry + tSKEW-max(RXXOR) + tfilter-host + tPD-max(Delay2) + tSU(RXFF)

In the example shown in FIG. 57 for external mode, tSKEW-max(LINK) = 1000 psec and the minimum bit period can be expressed as:

tBIT-min = 1000 + 2·125 + 625 + 125 + 200 + 0 + 100 = 2300 psec,

or stated as approximately 434 Mbps. In the example shown in FIG. 57 for internal mode, tSKEW-max(LINK) = 500 psec and the minimum bit period can be expressed as:

tBIT-min = 500 + 2·125 + 625 + 125 + 200 + 0 + 100 = 1800 psec,

or stated as approximately 555 Mbps.

B. Link Timing Analysis for MDDI Type 2, 3, and 4

A typical interface circuit similar to that shown in FIGS. 41 and 57 is shown in FIG. 59 for accommodating Type 2, 3, and 4 interface links. Additional elements are used in the TXFF (5904), TXDRVR (5908), RXRCVR (5922), and RXFF (5932, 5928, 5930) stages to accommodate the additional signal processing. In FIG. 59, exemplary or typical values for propagation delay and skew are shown for each of several processing or interface stages of an MDDI Type 2 forward link. In addition to skew in the delay between MDDI_Stb and MDDI_Data0 affecting the duty-cycle of the output clock, there is also skew between both of these two signals and the other MDDI_Data signals. Data at the D input of the receiver flip-flop B (RXFFB) stage, consisting of flip-flops 5928 and 5930, is changed slightly after the clock edge so it can be sampled reliably. If MDDI_Data1 arrives earlier than MDDI_Stb or MDDI_Data0, then MDDI_Data1 should be delayed to be sampled by at least the amount of the delay skew. To accomplish this, data is delayed using the Delay3 delay line. If MDDI_Data1 arrives later than MDDI_Stb and MDDI_Data0 and is also delayed by Delay3, then the point where MDDI_Data1 changes is moved closer to the next clock edge. This process determines an upper limit on the data rate of an MDDI Type 2, 3, or 4 link. Some exemplary possibilities for the timing or skew relationship of two data signals and MDDI_Stb with respect to each other are illustrated in FIGS. 60A, 60B, and 60C.

In order to sample data reliably in RXFFB when MDDI_DataX arrives as early as possible, Delay3 is set according to the relationship:

tPD-min(Delay3) ≥ tSKEW-max(LINK) + tH(RXFFB) + tPD-max(XOR)

The maximum link speed is determined by the minimum allowable bit period. This is most affected when MDDI_DataX arrives as late as possible. In that case, the minimum allowable cycle time is given by:

tBIT-min = tSKEW-max(LINK) + tPD-max(Delay3) + tSU(RXFFB) - tPD-min(XOR)

The upper bound of link speed is then found by assuming:

tPD-max(Delay3) = tPD-min(Delay3)

and given that assumption:

tBIT-min(lower-bound) = 2·tSKEW-max(LINK) + tPD-max(XOR) + tSU(RXFFB) + tH(RXFFB)

In the example given above, the lower bound of the minimum bit period is given by the relationship:

tBIT-min(lower-bound) = 2·(1000 + 2·125 + 625 + 200) + 1500 + 100 + 0 = 5750 psec,

which is approximately 174 Mbps. This is much slower than the maximum data rate that can be used with a Type 1 link.
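The bit-period arithmetic above reduces to simple sums. The following minimal C sketch reproduces the three worked examples; all picosecond values are taken from the FIG. 57 and FIG. 59 examples, and the function name is illustrative.

```c
#include <stdio.h>

/* Minimum Type 1 bit period (ps): link skew + 2*tB-TP4 + asymmetry
 * + RXXOR skew + host filter + Delay2 max + RXFF setup time. */
static double tbit_min_ps(double skew_link, double tb_tp4, double asym,
                          double skew_rxxor, double filter_host,
                          double delay2_max, double tsu_rxff)
{
    return skew_link + 2.0 * tb_tp4 + asym + skew_rxxor
         + filter_host + delay2_max + tsu_rxff;
}

int main(void)
{
    double ext  = tbit_min_ps(1000, 125, 625, 125, 200, 0, 100); /* 2300 ps */
    double intl = tbit_min_ps(500, 125, 625, 125, 200, 0, 100);  /* 1800 ps */
    /* Type 2-4 lower bound from the worked example above. */
    double t234 = 2.0 * (1000 + 2 * 125 + 625 + 200) + 1500 + 100 + 0;

    /* Bit period in ps converts to a data rate in Mbps as 1e6 / t. */
    printf("Type 1 external: %.0f ps (~%.1f Mbps)\n", ext, 1e6 / ext);
    printf("Type 1 internal: %.0f ps (~%.1f Mbps)\n", intl, 1e6 / intl);
    printf("Type 2-4 bound:  %.0f ps (~%.1f Mbps)\n", t234, 1e6 / t234);
    return 0;
}
```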
The automatic delay skew compensation capability of MDDI significantly reduces the effect that delay skew has on the maximum link rate; after calibration, the remaining limiting factor is that data is sampled just on the edge of valid data setup. The calibrated skew between MDDI_Data0 and MDDI_Stb is:

$t_{SKEW\text{-}max}(\mathrm{Calibrated}) = 2 \cdot t_{TAP\text{-}SPACING\text{-}max}$,

and the minimum bit period is:

$t_{BIT\text{-}min\text{-}Calibrated} = t_{SKEW\text{-}max}(\mathrm{Calibrated}) + 2 \cdot t_{B\text{-}TP4} + t_{Asymmetry} + t_{filter\text{-}host} + t_{SKEW\text{-}max}(\mathrm{RXAND{+}RXXOR}) + t_{SU}(\mathrm{RXFF})$

where "TB" or $t_{B}$ represents signal jitter from a bit boundary to minimum output level. Asymmetry simply refers to the asymmetrical nature of internal delay through or of the differential receivers. "TP4" is associated with, or is effectively defined for electrical characterization and testing purposes as, the connection or interface (pins of the MDDI controller device in the client) for the differential line drivers and receivers for the client. It represents a convenient or predetermined point from which signal delay is measured and characterized for the link throughout the rest of a system. In one embodiment, a maximum value of the parameter $t_{B}$ at TP4 is defined by the relationships $t_{Differential\text{-}Skew\text{-}TP4\text{-}DRVR\text{-}EXT} = 0.3 \cdot t_{BIT}$ for the external mode and $t_{Differential\text{-}Skew\text{-}TP4\text{-}DRVR\text{-}INT} = 0.6 \cdot t_{BIT}$ for the internal mode for the client transmitters; and $t_{B\text{-}TP4\text{-}RCVR\text{-}EXT} = 0.05 \cdot t_{BIT} + 175\ \mathrm{ps}$ for the external mode for the client receivers.

The label TP4 is simply useful for numbering the various test points (TP) in the interface and links. In one embodiment, this test point location is defined to be the same for both internal and external modes. There is a corresponding "TP0" test point for, or associated with, the connection or interface pins of the MDDI controller device in the host that contains the differential line drivers and receivers. In this embodiment, a maximum value of the parameter $t_{B}$ at TP0 is defined by the relationships $t_{B\text{-}TP0\text{-}RCVR\text{-}INT} = 0.05 \cdot t_{BIT} + 50\ \mathrm{ps}$ for the internal mode and $t_{B\text{-}TP0\text{-}RCVR\text{-}EXT} = 0.051 \cdot t_{BIT} + 175\ \mathrm{ps}$ for the external mode for the host receivers; and $t_{B\text{-}TP0} = 0.102 \cdot t_{BIT}$ for the host transmitters.

In the example shown in FIG. 59, $t_{SKEW\text{-}max}(\mathrm{Data0\text{-}Stb\text{-}Calibrated}) = 300$ psec and the minimum bit period is:

$t_{BIT\text{-}min\text{-}Calibrated} = 300 + 2 \cdot 125 + 625 + 200 + 175 + 100 = 1650\ \mathrm{psec}$,

or approximately 606 Mbps.

In order to sample data reliably in RXFFB when MDDI_Data1 arrives as early as possible, the associated programmable delay is adjusted to the optimal setting with an accuracy of one tap, and an additional tap delay is added for safety. The maximum link speed is determined by the minimum allowable bit period. This is most affected when MDDI_Data1 arrives as late as possible. In that case, the minimum allowable cycle time is:

$t_{BIT\text{-}min\text{-}Data1\text{-}Calibrated} = 2 \cdot t_{TAP\text{-}Spacing\text{-}max} + 2 \cdot t_{TA\text{-}TP4}$,

where "TA" or $t_{A}$ represents signal jitter from a bit boundary to center crossing.

In the example given in FIG. 59, the lower bound of the minimum bit period based on sampling MDDI_Data1 is:

$t_{BIT\text{-}min\text{-}Data1\text{-}Calibrated} = 2 \cdot 150 + 2 \cdot 125 = 550\ \mathrm{psec}$

In one embodiment, a typical total delay time for delay skew, delay asymmetry, and clock jitter in the host transmitter for internal mode would be defined as:

$t_{Asymmetry\text{-}TXFF} + t_{Asymmetry\text{-}TXDRVR} + t_{Skew\text{-}TXFF} + t_{Skew\text{-}TXDRVR} + t_{filter\text{-}host} = 0.467 \cdot t_{BIT} - 150\ \mathrm{ps}$,

and for the external mode as:
$t_{Asymmetry\text{-}TXFF} + t_{Asymmetry\text{-}TXDRVR} + t_{Skew\text{-}TXFF} + t_{Skew\text{-}TXDRVR} + t_{filter\text{-}host} = 0.\mathrm{TBD} \cdot t_{BIT} - \mathrm{TBD}\ \mathrm{ps}$,

while a typical total delay time for delay skew, delay asymmetry, and setup time in the client device ($t_{B\text{-}TP4}$) for internal mode is:

$t_{Asymmetry\text{-}RXRCVR} + t_{Asymmetry\text{-}RXXOR} + t_{Skew\text{-}RXRCVR} + t_{Skew\text{-}RXXOR} + t_{setup\text{-}RXFF} = 0.307 \cdot t_{BIT} - 150\ \mathrm{ps}$,

and for the external mode:

$t_{Asymmetry\text{-}RXRCVR} + t_{Asymmetry\text{-}RXXOR} + t_{Skew\text{-}RXRCVR} + t_{Skew\text{-}RXXOR} + t_{setup\text{-}RXFF} = 0.\mathrm{TBD} \cdot t_{BIT} - \mathrm{TBD}\ \mathrm{ps}$,

where the term TBD is a flexible placeholder for values to be determined, which will depend on a variety of well-understood characteristics and operational requirements for the external mode connections.

XIV. Physical Layer Interconnection Description

Physical connections useful for implementing an interface according to the present invention can be realized using commercially available parts, such as part number 3200-8S2(01) as manufactured by Hirose Electric Company Ltd. on the host side, and part number 3240-8P-C as manufactured by Hirose Electric Company Ltd. on the client device side. An exemplary interface pin assignment or "pinout" for such connectors used with a Type 1/Type 2 interface is listed in Table XIII and illustrated in FIG. 61.

Table XIII
Signal         Pin     Signal         Pin
MDDI_Pwr       1       MDDI_Gnd       11
MDDI_Stb+      2       MDDI_Stb-      12
MDDI_Data0+    4       MDDI_Data0-    14
MDDI_Data1+    6       MDDI_Data1-    16
MDDI_Data2+    8       MDDI_Data2-    18
MDDI_Data3+    10      MDDI_Data3-    20
MDDI_Data4+    9       MDDI_Data4-    19
MDDI_Data5+    7       MDDI_Data5-    17
MDDI_Data6+    5       MDDI_Data6-    15
MDDI_Data7+    3       MDDI_Data7-    13
Shield

The shield is connected to HOST_Gnd in the host interface, and a shield drain wire in the cable is connected to the shield of the client connector. However, the shield and drain wire are not connected to the circuit ground inside a client.

Interconnection elements or devices are chosen or designed in order to be small enough for use with mobile communication and computing devices, such as PDAs and wireless telephones, or portable game devices, without being obtrusive or unaesthetic in comparison to the relative device size. Any connectors and cabling should be durable enough for use in the typical consumer environment and allow for small size, especially for the cabling, and relatively low cost. The transfer elements should accommodate data and strobe signals that are differential NRZ data having a transfer rate of up to around 450 Mbps for Type 1 and Type 2 and up to 3.6 Gbps for the 8-bit parallel Type 4 version.

For internal mode applications there are either no connectors in the same sense for the conductors being used, or such connection elements tend to be very miniaturized. One example is zero insertion force "sockets" for receiving integrated circuits or elements housing either the host or client device. Another example is where the host and client reside on printed circuit boards with various interconnecting conductors, and have "pins" or contacts extending from housings that are soldered to contacts on the conductors for interconnection of integrated circuits.

XV. Operation

A summary of the general steps undertaken in processing data and packets during operation of an interface using embodiments of the invention is shown in FIGS. 54A and 54B, along with an overview of the interface apparatus processing the packets in FIG. 55. In these figures, the process starts in a step 5402 with a determination as to whether or not the client and host are connected using a communication path, here a cable.
This can occur through the use of periodic polling by the host, using software or hardware that detects the presence of connectors, cables, or signals at the inputs to the host (such as is seen for USB interfaces), or other known techniques. If there is no client connected to the host, then it can simply enter a wait state of some predetermined length, depending upon the application, go into a hibernation mode, or be inactivated to await future use, which might require a user to take action to reactivate the host. For example, when a host resides on a computer type device, a user might have to click on a screen icon or request a program that activates the host processing to look for the client. Again, simple plug-in of a USB type connection could activate host processing, depending on the capabilities and configuration of the host or resident host software.

Once a client is connected to the host, or vice versa, or detected as being present, either the client or the host sends appropriate packets requesting service in steps 5404 and 5406. The client could send either Client Service Request or Status packets in step 5404. It is noted that the link, as discussed above, could have been previously shut down or be in hibernation mode, so this may not be a complete initialization of the communication link that follows. Once the communication link is synchronized and the host is trying to communicate with the client, the client also provides a Client Capabilities packet to the host, as in step 5408. The host can now begin to determine the type of support, including transfer rates, that the client can accommodate.

Generally, the host and client also negotiate the type (rate/speed) of service mode to be used, for example Type 1, Type 2, and so forth, in a step 5410. Once the service type is established, the host can begin to transfer information. In addition, the host may use Round Trip Delay Measurement Packets to optimize the timing of the communication links in parallel with other signal processing, as shown in step 5411.

As stated earlier, all transfers begin with a Sub-Frame Header Packet, shown being transferred in step 5412, followed by the type of data, here video and audio stream packets, and filler packets, shown being transferred in step 5414. The audio and video data will have been previously prepared or mapped into packets, and filler packets are inserted as needed or desired to fill out a required number of bits for the media frames. The host can send packets such as the Forward Audio Channel Enable Packets to activate sound devices. In addition, the host can transfer commands and information using the other packet types discussed above, here shown as the transfer of Color Map, Bit Block Transfer, or other packets in step 5416. Furthermore, the host and client can exchange data relating to a keyboard or pointing devices using the appropriate packets.
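One possible host-side control flow for this sequence is sketched below in C. Every function and type name here is a hypothetical placeholder for implementation-specific logic, not an API defined by the MDDI protocol; the comments map each step to the step numbers of FIGS. 54A and 54B.

```c
/* Hypothetical host-side control flow for steps 5402 through 5416,
 * plus the shutdown decision of step 5422. All declarations below
 * are illustrative placeholders, not MDDI-defined interfaces. */
enum link_type { TYPE_1 = 1, TYPE_2, TYPE_3, TYPE_4 };

extern int  client_detected(void);                  /* step 5402 */
extern void wait_or_hibernate(void);
extern void exchange_service_requests(void);        /* steps 5404-5406 */
extern void receive_client_capabilities(void);      /* step 5408 */
extern enum link_type negotiate_service_mode(void); /* step 5410 */
extern void measure_round_trip_delay(void);         /* step 5411 */
extern void send_subframe_header(void);             /* step 5412 */
extern void send_stream_and_filler_packets(void);   /* step 5414 */
extern void send_command_packets(void);             /* step 5416 */
extern int  more_data_to_transfer(void);            /* step 5422 */
extern void send_link_shutdown(void);

void host_session(void)
{
    while (!client_detected())         /* poll until a client appears */
        wait_or_hibernate();

    exchange_service_requests();       /* Client Service Request/Status */
    receive_client_capabilities();     /* learn supported rates/modes   */
    (void)negotiate_service_mode();    /* e.g. commence in Type 1       */
    measure_round_trip_delay();        /* Round Trip Delay Measurement  */

    do {
        send_subframe_header();             /* Sub-Frame Header Packet */
        send_stream_and_filler_packets();   /* video/audio + filler    */
        send_command_packets();             /* Color Map, Bit Block... */
    } while (more_data_to_transfer());

    send_link_shutdown();              /* hibernate or shut down link */
}
```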
During operation, one of several different events can occur that lead to the host or client desiring a different data rate or type of interface mode. For example, a computer or other device communicating data could encounter loading conditions in processing data that cause a slowdown in the preparation or presentation of packets. A client device receiving the data could change from a dedicated AC power source to a more limited battery power source, and either not be able to transfer data as quickly, process commands as readily, or not be able to use the same degree of resolution or color depth under the more limited power settings. Alternatively, a restrictive condition could be abated or disappear, allowing either device to transfer data at higher rates. Since this is more desirable, a request can be made to change to a higher transfer rate mode.

If these or other types of known conditions occur or change, either the host or client may detect them and try to renegotiate the interface mode. This is shown in step 5420, where the host sends Interface Type Handoff Request Packets to the client requesting a handoff to another mode, the client sends Interface Type Acknowledge Packets confirming that a change is sought, and the host sends Perform Type Handoff Packets to make the change to the specified mode.

Although a particular order of processing is not required, the client and host can also exchange packets relating to data intended for or received from pointing devices, keyboards, or other user type input devices associated primarily with the client, although such elements may also be present on the host side. These packets are typically processed using a general processor type element and not the state machine (5502). In addition, some of the commands discussed above will also be processed by the general processors (5504, 5508).

After data and commands have been exchanged between the host and client, at some point a decision is made as to whether or not additional data is to be transferred or the host or client is going to cease servicing the transfer. This is shown in step 5422. If the link is to enter either a hibernation state or be shut down completely, the host sends a Link Shutdown packet to the client, and both sides terminate the transfer of data.

The packets being transferred in the above operations processing will be transferred using the drivers and receivers previously discussed in relation to the host and client controllers. These line drivers and other logic elements are connected to the state machine and general processors discussed above, as illustrated in the overview of FIG. 55. In FIG. 55, a state machine 5502 and general processors 5504 and 5508 may further be connected to other elements not shown, such as a dedicated USB interface, memory elements, or other components residing outside of the link controller with which they interact, including, but not limited to, the data source and video control chips for the display devices.

The processors and state machine provide control over the enabling and disabling of the drivers, as discussed above in relation to guard times and so forth, to assure efficient establishment or termination of the communication link and transfer of packets.
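The handoff exchange of step 5420 can be summarized in a short sketch. The packet names follow the protocol description above, while the send/await helpers are hypothetical placeholders for implementation-specific transport code.

```c
/* Hypothetical sketch of the interface-mode renegotiation of step
 * 5420. The helpers are illustrative placeholders, not MDDI APIs. */
#include <stdbool.h>

enum link_type { TYPE_1 = 1, TYPE_2, TYPE_3, TYPE_4 };

extern void send_interface_type_handoff_request(enum link_type t);
extern bool await_interface_type_acknowledge(enum link_type t);
extern void send_perform_type_handoff(enum link_type t);

/* Returns true if the link is now operating in the requested mode. */
bool renegotiate_mode(enum link_type requested)
{
    send_interface_type_handoff_request(requested);
    if (!await_interface_type_acknowledge(requested))
        return false;              /* change not confirmed; keep mode */
    send_perform_type_handoff(requested);
    return true;
}
```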
XVI. Display Frame Buffers

Video data buffering requirements are different for moving video images compared to computer graphics. Pixel data is most often stored in a local frame buffer in the client so the image on the client can be refreshed locally.

When full-motion video is being displayed (nearly every pixel in the display changes each Media Frame), it is usually preferred to store the incoming pixel data in one frame buffer while the image on the display is being refreshed from a second frame buffer. More than two display buffers may be used to eliminate visible artifacts, as described below. When an entire image has been received in one frame buffer, the roles of the buffers can be swapped, and the newly received image is then used to refresh the display while the other buffer is filled with the next frame of the image. This concept is illustrated in FIG. 88A, where pixel data is written to the offline image buffer by setting the Display Update Bits to '01'.

In other applications the host needs to update only a small portion of the image without having to repaint the entire image. In this situation it is desired to write the new pixels directly to the buffer being used to refresh the display, as illustrated in FIG. 88B.

In applications that have a fixed image with a small video window, it is easiest to write the fixed image to both buffers (Display Update Bits equal to '11'), as shown in FIG. 88C, and subsequently write the pixels of the moving image to the offline buffer by setting the Display Update Bits to '01'.

The following rules describe the useful manipulation of buffer pointers while simultaneously writing new information to the client and refreshing the display; a sketch implementing them appears after this discussion. Three buffer pointers exist: current_fill points to the buffer currently being filled from data over the MDDI link, just_filled points to the buffer that was most recently filled, and being_displayed points to the buffer currently being used to refresh the display. All three buffer pointers may contain values from 0 to N-1, where N is the number of display buffers and N >= 2. Arithmetic on buffer pointers is mod N; e.g., when N=3 and current_fill=2, incrementing current_fill causes current_fill to be set to 0. In the simple case where N=2, just_filled is always the complement of current_fill. On every MDDI media-frame boundary (Sub-frame Header Packet with the Sub-frame Count field equal to zero), the following operations are performed in the order specified: set just_filled equal to current_fill, and set current_fill equal to current_fill + 1.

MDDI Video Stream Packets update the buffers according to the structure or methodology of: when the Display Update Bits are equal to '01', pixel data is written to the buffer specified by current_fill; when the Display Update Bits are equal to '00', pixel data is written to the buffer specified by just_filled; and when the Display Update Bits are equal to '11', pixel data is written to all buffers. The display is refreshed from the buffer specified by the being_displayed pointer. After the display refreshes the last pixel in one frame refresh epoch, and before it begins to refresh the first pixel in the next frame refresh epoch, the display update process performs the operation of setting being_displayed equal to just_filled.

The Video Stream Packet contains a pair of Display Update Bits that specify the frame buffer where the pixel data is to be written. The Client Capability Packet has three additional bits that indicate which combinations of the Display Update Bits are supported in the client. In many cases, computer-generated images need to be incrementally updated based on user input or derived from information received from a computer network. Display Update Bit combinations '00' and '11' support this mode of operation by causing the pixel data to be written to the frame buffer being displayed or to both frame buffers.
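A minimal C sketch of these pointer rules follows, assuming N buffers and the Display Update Bit encodings given above; the structure and function names are illustrative only.

```c
/* Minimal sketch of the buffer-pointer rules above. Names are
 * illustrative; N_BUFFERS >= 2 as required by the text. */
#define N_BUFFERS 3

struct client_buffers {
    int current_fill;    /* buffer being filled over the MDDI link */
    int just_filled;     /* buffer most recently filled            */
    int being_displayed; /* buffer used to refresh the display     */
};

/* Called on every media-frame boundary (Sub-frame Count == 0). */
void on_media_frame_boundary(struct client_buffers *b)
{
    b->just_filled  = b->current_fill;
    b->current_fill = (b->current_fill + 1) % N_BUFFERS; /* mod N */
}

/* Called between one frame refresh epoch and the next. */
void on_refresh_epoch_boundary(struct client_buffers *b)
{
    b->being_displayed = b->just_filled;
}

/* Select destination buffers for a Video Stream Packet according to
 * its Display Update Bits; returns a bitmask of buffers to write. */
unsigned dest_mask(const struct client_buffers *b, unsigned update_bits)
{
    switch (update_bits) {
    case 0x1: return 1u << b->current_fill;  /* '01': offline buffer  */
    case 0x0: return 1u << b->just_filled;   /* '00': displayed image */
    case 0x3: return (1u << N_BUFFERS) - 1;  /* '11': all buffers     */
    default:  return 0;                      /* '10': invalid, ignore */
    }
}
```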
When accommodating video images, FIG. 89 illustrates how video images are displayed using a pair of frame buffers when video data is transmitted over the MDDI link with the Display Update Bits equal to '01'. After a media-frame boundary is detected on the MDDI link, the display refresh process will begin refreshing from the next frame buffer when the refresh process for the frame currently being refreshed is completed.

An important assumption related to FIG. 89 is that the image is received from the host as a continuous stream of pixels that are transmitted in the same order that the client uses to read the pixels from the frame buffer to refresh the display (usually upper-left, reading row by row, to the bottom-right corner of the screen). This is an important detail in the cases where the Display Refresh and Image Transfer operations reference the same frame buffer.

It is necessary for the display refresh frame rate to be greater than the image transfer frame rate to avoid displaying partial images. FIG. 90 shows how image fragmentation can occur with a slow display refresh rate, that is, when the display refresh is slower than the image transfer.

In an image that contains a combination of computer graphic images and moving video pictures, the video pixel data might occupy a small portion of a media-frame. This could be significant in situations where the display refresh operation and the image transfer reference the same frame buffer. These situations are shown by cross-hatched shading in FIG. 91, where the pixels read from the buffer to refresh the display might be the pixels written to the buffer two frames ago, or they may correspond to the frame immediately being written to the same frame buffer.

The use of three frame buffers in the client will resolve the problem of the small window of contention for access to a frame buffer, as shown in FIG. 92. However, there is still a problem if the display refresh rate is less than the media-frame rate over the MDDI link, as shown in FIG. 93.

The use of a single buffer for moving video images is somewhat problematic, as shown in FIG. 94. With the display refresh faster than the image transfer into the buffer (the preferred mode of operation), the image being refreshed sometimes will show the upper portion of the frame being written while the lower portion of the image will be the frame previously transferred. With the display refresh slower than the image transfer, there will be more frequent instances of frames showing a similar split image.

XVII. Delay Value Table

The Packet Processing Delay Parameters Packet uses a table-lookup function to calculate the predicted delay to process certain commands in the client. Values in the table increase in a logarithmic fashion to provide a very wide dynamic range of delay values.
An exemplary table of delay values useful for implementing embodiments of the invention is found in Table XX below, listing index values versus delay values.

Table XX
0 - no_delay   37 - 1.5ns   74 - 51ns    111 - 1.8us   148 - 62us    185 - 2.2ms   222 - 75ms
1 - 46ps       38 - 1.6ns   75 - 56ns    112 - 2.0us   149 - 68us    186 - 2.4ms   223 - 83ms
2 - 51ps       39 - 1.8ns   76 - 62ns    113 - 2.2us   150 - 75us    187 - 2.6ms   224 - 91ms
3 - 56ps       40 - 2.0ns   77 - 68ns    114 - 2.4us   151 - 83us    188 - 2.9ms   225 - 100ms
4 - 62ps       41 - 2.2ns   78 - 75ns    115 - 2.6us   152 - 91us    189 - 3.2ms   226 - 110ms
5 - 68ps       42 - 2.4ns   79 - 83ns    116 - 2.9us   153 - 100us   190 - 3.5ms   227 - 120ms
6 - 75ps       43 - 2.6ns   80 - 91ns    117 - 3.2us   154 - 110us   191 - 3.8ms   228 - 130ms
7 - 83ps       44 - 2.9ns   81 - 100ns   118 - 3.5us   155 - 120us   192 - 4.2ms   229 - 150ms
8 - 91ps       45 - 3.2ns   82 - 110ns   119 - 3.8us   156 - 130us   193 - 4.6ms   230 - 160ms
9 - 100ps      46 - 3.5ns   83 - 120ns   120 - 4.2us   157 - 150us   194 - 5.1ms   231 - 180ms
10 - 110ps     47 - 3.8ns   84 - 130ns   121 - 4.6us   158 - 160us   195 - 5.6ms   232 - 200ms
11 - 120ps     48 - 4.2ns   85 - 150ns   122 - 5.1us   159 - 180us   196 - 6.2ms   233 - 220ms
12 - 130ps     49 - 4.6ns   86 - 160ns   123 - 5.6us   160 - 200us   197 - 6.8ms   234 - 240ms
13 - 150ps     50 - 5.1ns   87 - 180ns   124 - 6.2us   161 - 220us   198 - 7.5ms   235 - 260ms
14 - 160ps     51 - 5.6ns   88 - 200ns   125 - 6.8us   162 - 240us   199 - 8.3ms   236 - 290ms
15 - 180ps     52 - 6.2ns   89 - 220ns   126 - 7.5us   163 - 260us   200 - 9.1ms   237 - 320ms
16 - 200ps     53 - 6.8ns   90 - 240ns   127 - 8.3us   164 - 290us   201 - 10ms    238 - 350ms
17 - 220ps     54 - 7.5ns   91 - 260ns   128 - 9.1us   165 - 320us   202 - 11ms    239 - 380ms
18 - 240ps     55 - 8.3ns   92 - 290ns   129 - 10us    166 - 350us   203 - 12ms    240 - 420ms
19 - 260ps     56 - 9.1ns   93 - 320ns   130 - 11us    167 - 380us   204 - 13ms    241 - 460ms
20 - 290ps     57 - 10ns    94 - 350ns   131 - 12us    168 - 420us   205 - 15ms    242 - 510ms
21 - 320ps     58 - 11ns    95 - 380ns   132 - 13us    169 - 460us   206 - 16ms    243 - 560ms
22 - 350ps     59 - 12ns    96 - 420ns   133 - 15us    170 - 510us   207 - 18ms    244 - 620ms
23 - 380ps     60 - 13ns    97 - 460ns   134 - 16us    171 - 560us   208 - 20ms    245 - 680ms
24 - 420ps     61 - 15ns    98 - 510ns   135 - 18us    172 - 620us   209 - 22ms    246 - 750ms
25 - 460ps     62 - 16ns    99 - 560ns   136 - 20us    173 - 680us   210 - 24ms    247 - 830ms
26 - 510ps     63 - 18ns    100 - 620ns  137 - 22us    174 - 750us   211 - 26ms    248 - 910ms
27 - 560ps     64 - 20ns    101 - 680ns  138 - 24us    175 - 830us   212 - 29ms    249 - 1.0sec
28 - 620ps     65 - 22ns    102 - 750ns  139 - 26us    176 - 910us   213 - 32ms    250 - 1.1sec
29 - 680ps     66 - 24ns    103 - 830ns  140 - 29us    177 - 1.0ms   214 - 35ms    251 - 1.2sec
30 - 750ps     67 - 26ns    104 - 910ns  141 - 32us    178 - 1.1ms   215 - 38ms    252 - 1.3sec
31 - 830ps     68 - 29ns    105 - 1.0us  142 - 35us    179 - 1.2ms   216 - 42ms    253 - 1.5sec
32 - 910ps     69 - 32ns    106 - 1.1us  143 - 38us    180 - 1.3ms   217 - 46ms    254 - 1.6sec
33 - 1.0ns     70 - 35ns    107 - 1.2us  144 - 42us    181 - 1.5ms   218 - 51ms    255 - indefinite
34 - 1.1ns     71 - 38ns    108 - 1.3us  145 - 46us    182 - 1.6ms   219 - 56ms
35 - 1.2ns     72 - 42ns    109 - 1.5us  146 - 51us    183 - 1.8ms   220 - 62ms
36 - 1.3ns     73 - 46ns    110 - 1.6us  147 - 56us    184 - 2.0ms   221 - 68ms

The delay is computed by performing a table lookup using the specified parameter as an index into the table; that is, the delay is equal to PacketProcessingTable(index). For example, if one of the parameters from the Delay Parameters List Item is an 8-bit value equal to 134, then the delay is equal to PacketProcessingTable(134), which is 16 µsec.
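Such a lookup can be implemented directly, as in the sketch below. The function and table names mirror the PacketProcessingTable notation above but are otherwise illustrative; only the first few table entries are shown, delays are stored in picoseconds, and index 255 is treated as the indefinite marker defined in Table XX. The scaling helper anticipates the dimension-dependent delays discussed next.

```c
/* Sketch of the packet-processing-delay lookup described above.
 * Delays are stored in picoseconds; index 255 means "indefinite"
 * (completion must be checked via the busy/status mechanism). */
#include <stdint.h>

#define DELAY_INDEFINITE UINT64_MAX

/* First entries of the 256-entry logarithmic table; the full
 * contents follow Table XX (0 -> no delay, 1 -> 46 ps, ...). */
static const uint64_t packet_processing_table_ps[256] = {
    0, 46, 51, 56, 62, 68, 75, 83, 91, 100, /* ... through index 254 */
};

uint64_t packet_processing_delay_ps(uint8_t index)
{
    if (index == 255)
        return DELAY_INDEFINITE;
    return packet_processing_table_ps[index];
}

/* Some delay parameters are scaled by the height, width, or pixel
 * count of the destination image before being added to other terms. */
uint64_t scaled_delay_ps(uint8_t index, uint64_t pixel_count)
{
    uint64_t d = packet_processing_delay_ps(index);
    return (d == DELAY_INDEFINITE) ? DELAY_INDEFINITE : d * pixel_count;
}
```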
The value 255 indicates that the command completion time cannot be determined by calculation, and that the host will check the Graphics Busy Flags in the Client Request and Status Packet or the MCCS VCP Control Parameter B7h. In some cases this delay is multiplied by the height, width, or number of pixels in the destination image and added to other delays to compute the overall packet processing delay.

XVIII. Multiple Client Support

The current protocol version does not appear to directly support multiple client devices. However, most packets contain a reserved Client ID field that can be used to address specific client devices in a system with multiple clients. Currently, for many applications, this client ID or these client IDs are set to zero. The sub-frame header packet also contains a field to indicate whether or not the host supports a multiple client system. Therefore, there is a manner in which multiple client devices would likely be connected and addressed in future applications of the MDDI or the protocol, to aid system designers in planning for future compatibility with multiple-client hosts and clients.

In systems having multiple clients, it is useful for clients to be connected to the host using a daisy chain of clients, or using hubs, as shown in FIG. 95, or using a combination of these techniques, as shown in FIG. 96.

XIX. Addendum

In addition to the formats, structures, and contents discussed above for the various packets used to implement the architecture and protocol for embodiments of the invention, more detailed field contents or operations are presented here for some of the packet types. These are presented here to further clarify their respective use or operations, to enable those skilled in the art to more readily understand and make use of the invention for a variety of applications. Only a few of the fields not already discussed are discussed further here. In addition, these fields are presented with exemplary definitions and values in relation to the embodiments presented above. However, such values are not to be taken as limitations of the invention, but represent one or more embodiments useful for implementing the interface and protocol, and not all embodiments need be practiced together or at the same time. Other values can be used in other embodiments to achieve the desired presentation of data or data rate transfer results, as will be understood by those skilled in the art.

A. For Video Stream Packets

In one embodiment, the Pixel Data Attributes field (2 bytes) has a series of bit values that are interpreted as follows. Bits 1 and 0 select how the display pixel data is routed. For bit values of '11', pixel data is displayed to or for both eyes; for bit values of '10', pixel data is routed only to the left eye; for bit values of '01', pixel data is routed only to the right eye; and for bit values of '00', the pixel data is routed to an alternate display as may be specified by bits 8 through 11, discussed below. If the primary display in, or being used or operated by, a client does not support stereo images or imaging in some form, then these commands cannot effectively be implemented to have the impact desired for the display. In this situation or configuration, the client should route pixel data to a primary display regardless of the bit values, for any of the bit combinations '01', '10', or '11', since the resulting commands or control won't be implemented by the display.
It is recommended, but not required by the embodiments, that the value '11' be used to address the primary display in those clients that do not support stereo display capability.

Bit 2 indicates whether or not the Pixel Data is presented in an interlace format, with a value of '0' meaning the pixel data is in the standard progressive format, and that the row number (pixel Y coordinate) is incremented by 1 when advancing from one row to the next. When this bit has a value of '1', the pixel data is in interlace format, and the row number is incremented by 2 when advancing from one row to the next. Bit 3 indicates that the Pixel Data is in alternate pixel format. This is similar to the standard interlace mode enabled by bit 2, but the interlacing is vertical instead of horizontal. When Bit 3 is '0', the Pixel Data is in the standard progressive format, and the column number (pixel X coordinate) is incremented by 1 as each successive pixel is received. When Bit 3 is '1', the Pixel Data is in alternate pixel format, and the column number is incremented by 2 as each pixel is received.

Bit 4 indicates whether the Pixel Data is related to a display or a camera, as where data is being transferred to or from an internal display for a wireless phone or similar device, or even a portable computer or such other devices as discussed above, or the data is being transferred to or from a camera built into or directly coupled to the device. When Bit 4 is '0', the Pixel Data is being transferred to or from a display frame buffer. When Bit 4 is '1', the Pixel Data is being transferred to or from a camera or video device of some type, such devices being well known in the art.

Bit 5 is used to indicate when the pixel data contains the next consecutive row of pixels in the display. This is considered the case when Bit 5 is set equal to '1'. When Bit 5 is set to '1', then the X Left Edge, Y Top Edge, X Right Edge, Y Bottom Edge, X Start, and Y Start parameters are not defined and are ignored by the client. When Bit 15 is set at a logic-one level, this indicates that the pixel data in this packet is the last row of pixels in the image. Bit 8 of the Client Feature Capability Indicators field of the Client Capability Packet indicates whether this feature is supported.

Bits 7 and 6 are Display Update Bits that specify a frame buffer where the pixel data is to be written. The more specific effects are discussed elsewhere. For bit values of '01', pixel data is written to the offline image buffer. For bit values of '00', pixel data is written to the image buffer used to refresh the display. For bit values of '11', pixel data is written to all image buffers. The bit combination '10' is treated as an invalid value or designation; pixel data is then ignored and not written to any of the image buffers. This value may have use for future applications of the interface.

Bits 8 through 11 form a 4-bit unsigned integer that specifies an alternate display or display location where pixel data is to be routed. Bits 0 and 1 are set equal to '00' in order for the display client to interpret bits 8 through 11 as an alternate display number. If bits 0 and 1 are not equal to '00', then bits 8 through 11 are set to logic-zero levels.

Bits 12 through 14 are reserved for future use and are generally set to logic-zero levels. Bit 15, as discussed, is used in conjunction with bit 5, and setting bit 15 to logic-one indicates that the row of pixels in the Pixel Data field is the last row of pixels in a frame of data.
The next Video Stream Packet having bit 5 set to logic-one will correspond to the first row of pixels of the next video frame.

The 2-byte X Start and Y Start fields specify the absolute X and Y coordinates of the point (X Start, Y Start) for the first pixel in the Pixel Data field. The 2-byte X Left Edge and Y Top Edge fields specify the X coordinate of the left edge and the Y coordinate of the top edge of the screen window filled by the Pixel Data field, while the X Right Edge and Y Bottom Edge fields specify the X coordinate of the right edge and the Y coordinate of the bottom edge of the window being updated.

The Pixel Count field (2 bytes) specifies the number of pixels in the Pixel Data field below.

The Parameter CRC field (2 bytes) contains a CRC of all bytes from the Packet Length to the Pixel Count. If this CRC fails to check, then the entire packet is discarded.

The Pixel Data field contains the raw video information that is to be displayed, and which is formatted in the manner described by the Video Data Format Descriptor field. The data is transmitted one "row" at a time as discussed elsewhere. When Bit 5 of the Pixel Data Attributes field is set at logic level one, then the Pixel Data field contains exactly one row of pixels, with the first pixel transmitted corresponding to the left-most pixel and the last pixel transmitted corresponding to the right-most pixel.

The Pixel Data CRC field (2 bytes) contains a 16-bit CRC of only the Pixel Data. If a CRC verification of this value fails, then the Pixel Data can still be used, but the CRC error count is incremented.
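The Pixel Data Attributes bit assignments described above can be summarized in a compact decode sketch; the structure and names below are illustrative and not part of the specification.

```c
/* Illustrative decode of the 2-byte Pixel Data Attributes field,
 * following the bit assignments described above. */
#include <stdint.h>
#include <stdbool.h>

struct pixel_attrs {
    unsigned eye_routing;     /* bits 1:0 - '11' both, '10' left,
                                 '01' right, '00' alternate display */
    bool     interlace;       /* bit 2 - row number advances by 2   */
    bool     alternate_pixel; /* bit 3 - column advances by 2       */
    bool     camera_data;     /* bit 4 - camera rather than display */
    bool     next_row;        /* bit 5 - next consecutive row;
                                 edge/start fields are ignored      */
    unsigned update_bits;     /* bits 7:6 - frame-buffer routing    */
    unsigned alt_display;     /* bits 11:8 - valid when bits 1:0=00 */
    bool     last_row;        /* bit 15 - last row of the frame     */
};

struct pixel_attrs decode_pixel_attrs(uint16_t f)
{
    struct pixel_attrs a = {
        .eye_routing     =  f        & 0x3,
        .interlace       = (f >> 2)  & 0x1,
        .alternate_pixel = (f >> 3)  & 0x1,
        .camera_data     = (f >> 4)  & 0x1,
        .next_row        = (f >> 5)  & 0x1,
        .update_bits     = (f >> 6)  & 0x3,
        .alt_display     = (f >> 8)  & 0xF,
        .last_row        = (f >> 15) & 0x1,
    };
    return a;
}
```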
B. For Audio Stream Packets

In one embodiment, the Audio Channel ID field (1 byte) uses an 8-bit unsigned integer value to identify a particular audio channel to which audio data is sent by the client device. The physical audio channels are specified in or mapped to physical channels by this field as values of 0, 1, 2, 3, 4, 5, 6, or 7, which indicate the left front, right front, left rear, right rear, front center, sub-woofer, surround left, and surround right channels, respectively. An Audio Channel ID value of 254 indicates that the single stream of digital audio samples is sent to both the left front and right front channels. This simplifies communications for applications such as where a stereo headset is used for voice communication, productivity enhancement apps are used on a PDA, or other applications where a simple user interface generates warning tones. Values for the ID field ranging from 8 through 253, and 255, are currently reserved for use where new designs desire additional designations, as anticipated by those skilled in the art.

The Reserved 1 field (1 byte) is generally reserved for future use, and has all bits in this field set to zero. One function of this field is to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address.

The Audio Sample Count field (2 bytes) specifies the number of audio samples in this packet.

The Bits Per Sample and Packing field contains 1 byte that specifies the packing format of the audio data. In one embodiment, the format generally employed is for Bits 4 through 0 to define the number of bits per PCM audio sample. Bit 5 then specifies whether or not the Digital Audio Data samples are packed. As mentioned above, FIG. 12 illustrates the difference between packed and byte-aligned audio samples. A value of '0' for Bit 5 indicates that each PCM audio sample in the Digital Audio Data field is byte-aligned with the interface byte boundary, and a value of '1' indicates that each successive PCM audio sample is packed up against the previous audio sample. This bit is effective only when the value defined in bits 4 through 0 (the number of bits per PCM audio sample) is not a multiple of eight. Bits 7 through 6 are reserved for use where system designs desire additional designations and are generally set at a value of zero.

The Audio Sample Rate field (1 byte) specifies the audio PCM sample rate. The format employed is for a value of 0 to indicate a rate of 8,000 samples per second (sps), a value of 1 to indicate 16,000 sps, 2 for 24,000 sps, 3 for 32,000 sps, 4 for 40,000 sps, 5 for 48,000 sps, 6 for 11,025 sps, 7 for 22,050 sps, and 8 for 44,100 sps, respectively, with values of 9 through 255 being reserved for future use, so they are currently set to zero.

The Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Audio Sample Rate. If this CRC fails to check appropriately, then the entire packet is discarded. The Digital Audio Data field contains the raw audio samples to be played, and is usually in the form of a linear format as unsigned integers. The Audio Data CRC field (2 bytes) contains a 16-bit CRC of only the Audio Data. If this CRC fails to check, then the Audio Data can still be used, but the CRC error count is incremented.
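The sample-rate encoding and the packing rule above map directly to a small sketch; the function names are illustrative only.

```c
/* Illustrative decode of the Audio Sample Rate and the Bits Per
 * Sample and Packing fields described above. */
#include <stdint.h>
#include <stdbool.h>

/* Value 0..8 -> samples per second; 9..255 reserved (returns 0). */
uint32_t audio_sample_rate_sps(uint8_t code)
{
    static const uint32_t rates[9] = {
        8000, 16000, 24000, 32000, 40000,
        48000, 11025, 22050, 44100,
    };
    return (code < 9) ? rates[code] : 0;
}

/* Bits 4..0: bits per PCM sample; bit 5: packed flag. Packing is
 * effective only when bits-per-sample is not a multiple of eight. */
bool samples_are_packed(uint8_t packing_field)
{
    unsigned bits_per_sample = packing_field & 0x1F;
    bool packed_flag = (packing_field >> 5) & 0x1;
    return packed_flag && (bits_per_sample % 8 != 0);
}
```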
C. For User-Defined Stream Packets

In one embodiment, the 2-byte Stream ID Number field is used to identify a particular user-defined stream. The contents of the Stream Parameters and Stream Data fields are typically defined by the MDDI equipment manufacturer. The 2-byte Stream Parameter CRC field contains a 16-bit CRC of all bytes of the stream parameters starting from the Packet Length to the Audio Coding byte. If this CRC fails to check, then the entire packet is discarded. Both the Stream Parameters and Stream Parameter CRC fields may be discarded if not needed by an end application of the MDDI; that is, they are considered optional. The 2-byte Stream Data CRC field contains a CRC of only the Stream Data. If this CRC fails to check appropriately, then use of the Stream Data is optional, depending on the requirements of the application. Use of the stream data contingent on the CRC being good generally requires that the stream data be buffered until the CRC is confirmed as being good. The CRC error count is incremented if the CRC does not check.

D. For Color Map Packets

The 2-byte hClient ID field contains information or values that are reserved for a Client ID, as used previously. Since this field is generally reserved for future use, the current value is set to zero by setting the bits to '0'.

The 2-byte Color Map Item Count field uses values to specify the total number of 3-byte color map items that are contained in the Color Map Data field, or the color map table entries that exist in the Color Map Data in this packet. In this embodiment, the number of bytes in the Color Map Data is 3 times the Color Map Item Count. The Color Map Item Count is set equal to zero to send no color map data. If the Color Map Size is zero, then a Color Map Offset value is generally still sent, but it is ignored by the display.

The Color Map Offset field (4 bytes) specifies the offset of the Color Map Data in this packet from the beginning of the color map table in the client device.

A 2-byte Parameter CRC field contains a CRC of all bytes from the Packet Length to the Audio Coding byte. If this CRC fails to check, then the entire packet is discarded.

For the Color Map Data field, the width of each color map location is specified by the Color Map Item Size field, where in one embodiment the first part specifies the magnitude of blue, the second part specifies the magnitude of green, and the third part specifies the magnitude of red. The Color Map Size field specifies the number of 3-byte color map table items that exist in the Color Map Data field. If a single color map cannot fit into one Video Data Format and Color Map Packet, then the entire color map may be specified by sending multiple packets with different Color Map Data and Color Map Offsets in each packet. The number of bits of blue, green, and red in each color map data item is generally the same as specified in the Color Map RGB Width field of the Display Capability Packet.

A 2-byte Color Map Data CRC field contains a CRC of only the Color Map Data. If this CRC fails to check, then the Color Map Data can still be used, but the CRC error count is incremented.

Each color map data item is to be transmitted in the order blue, green, red, with the least significant bit of each component transmitted first. The individual red, green, and blue components of each color map item are packed, but each color map item (the least significant bit of the blue component) should be byte-aligned. FIG. 97 illustrates an example of color map data items with 6 bits of blue, 8 bits of green, and 7 bits of red. For this example, the Color Map Item Size in the Color Map Packet is equal to 21, and the Color Map RGB Width field of the Client Capability Packet is equal to 0x0786.
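A minimal sketch of this packing, using the FIG. 97 widths (6 bits blue, 8 bits green, 7 bits red, for a 21-bit item occupying three byte-aligned bytes), follows; the function name is illustrative.

```c
/* Illustrative packing of one color map item per the ordering rules
 * above: blue, then green, then red, least significant bit first,
 * with each item byte-aligned. Widths follow the FIG. 97 example
 * (6 bits blue, 8 bits green, 7 bits red; item size 21 bits). */
#include <stdint.h>

void pack_color_map_item(uint8_t out[3],
                         uint8_t blue6, uint8_t green8, uint8_t red7)
{
    uint32_t item = (uint32_t)(blue6 & 0x3F)          /* bits 0-5   */
                  | ((uint32_t)green8        << 6)    /* bits 6-13  */
                  | ((uint32_t)(red7 & 0x7F) << 14);  /* bits 14-20 */

    out[0] = (uint8_t)(item & 0xFF);          /* transmitted first */
    out[1] = (uint8_t)((item >> 8) & 0xFF);
    out[2] = (uint8_t)((item >> 16) & 0x1F);  /* top 3 bits unused */
}
```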
E. For Reverse Link Encapsulation Packets

The Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Turn-Around Length. If this CRC fails to check, then the entire packet is discarded.

In one embodiment, the Reverse Link Flags field (1 byte) contains a set of flags to request information from the client. If a bit (for example, Bit 0) is set to a logic-one level, then the host requests the specified information from the display using the Client Capability Packet. If the bit is set to a logic-zero level, then the host does not need the information from the client. The remaining bits (here Bits 1 through 7) are reserved for future use and are set to zero. However, more bits can be used as desired to set flags for the reverse link.

The Reverse Rate Divisor field (1 byte) specifies the number of MDDI_Stb cycles that occur in relation to the reverse link data clock. The reverse link data clock is equal to the forward link data clock divided by two times the Reverse Rate Divisor. The reverse link data rate is related to the reverse link data clock and the Interface Type on the reverse link. In this embodiment, for a Type 1 interface the reverse data rate equals the reverse link data clock, while for Type 2, Type 3, and Type 4 interfaces the reverse data rates equal two times, four times, and eight times the reverse link data clock, respectively.

The All Zero 1 field contains a group of bytes, here 8, that is set equal to zero in value by setting the bits at a logic-zero level, and is used to ensure that all MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering clock using only MDDI_Stb prior to disabling the host's line drivers during the Turn-Around 1 field. In one embodiment, the length of the All Zero 1 field is greater than or equal to the number of forward link byte transmission times in the round-trip delay of the cable.

The Turn-Around 1 Length field (1 byte) specifies the total number of bytes that are allocated for Turn-Around 1, establishing the first turn-around period. In the Turn-Around 1 field, the number of bytes specified by the Turn-Around 1 Length parameter is allocated to allow the MDDI_Data line drivers in the client to enable before the line drivers in the host are disabled. The client enables its MDDI_Data line drivers during bit 0 of Turn-Around 1, and the host disables its outputs so as to be completely disabled prior to the last bit of Turn-Around 1. The MDDI_Stb signal behaves as though MDDI_Data0 were at a logic-zero level during the entire Turn-Around 1 period. A more complete description of the setting of Turn-Around 1 is given above.

The Reverse Data Packets field contains a series of data packets transferred from the client to the host. The client may send filler packets or drive the MDDI_Data lines to a logic-zero state or level when it has no data to send to the host. In this embodiment, if the MDDI_Data lines are driven to zero, the host will interpret this as a packet with a zero length (not a valid length), and the host will accept no additional packets from the client for the duration of the current Reverse Link Encapsulation Packet.

The Turn-Around 2 Length field (1 byte) specifies the total number of bytes that are allocated for Turn-Around 2, establishing the second turn-around period. The recommended length of Turn-Around 2 is the number of bytes required for the round-trip delay plus the time required for the host to enable its MDDI_Data drivers. Turn-Around 2 Length may also be a value larger than the minimum required or calculated value, to allow sufficient time to process reverse link packets in the host.

The Turn-Around 2 field consists of the number of bytes specified by the Turn-Around 2 Length parameter. The host waits for at least the round-trip delay time before it enables its MDDI_Data line drivers during Turn-Around 2. The host enables its MDDI_Data line drivers so that they are generally completely enabled prior to the last bit of Turn-Around 2, and the client disables its outputs so that they are generally completely disabled prior to the last bit of Turn-Around 2. The purpose of the Turn-Around 2 field is to allow the remaining amount of data from the Reverse Data Packets field to be transmitted or transferred from the client. Owing to variations in different systems implementing the interface and in the amount of safety margin allocated, it is possible that neither the host nor the client will be driving the MDDI_Data signals to a logic-zero level during some parts of the Turn-Around 2 field period, as seen by the line receivers in or at the host.
The MDDI_Stb signal behaves as though MDDI_Data0 were at a logic-zero level during substantially the entire Turn-Around 2 period. A description of the setting of Turn-Around 2 is given above.

The Reverse Data Packets field contains a series of data packets being transferred from the client to a host. As stated earlier, Filler packets are sent to fill the remaining space that is not used by other packet types.

The All Zero 2 field contains a group of bytes (8 in this embodiment) that is set equal to zero in value by setting the bits at a logic-zero level, and is used to ensure that all MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering clock using both MDDI_Data0 and MDDI_Stb after enabling the host's line drivers following the Turn-Around 2 field.

F. For Client Capability Packets

As illustrated for one embodiment, the Protocol Version field uses 2 bytes to specify a protocol version used by the client. The initial version is currently set equal to one and will be changed over time as new versions are generated, as would be known, while the Minimum Protocol Version field uses 2 bytes to specify the minimum protocol version that the client can employ or interpret. In this case, a zero value is also a valid value. The Data Rate Capability field (2 bytes) specifies the maximum data rate the client can receive on each data pair on the forward link of the interface, and is specified in the form of megabits per second (Mbps). The Interface Type Capability field (1 byte) specifies the interface types that are supported on the forward and reverse links. A bit set to '1' indicates that a specified interface type is supported, and a bit set to '0' indicates that the specified type is not supported. Hosts and clients should support at least Type 1 on the forward and reverse links. There is no requirement to support a contiguous range of interface types. For example, it would be perfectly valid to support only Type 1 and Type 3, but not Type 2 and Type 4, in an interface. It is also not necessary for the forward and reverse links to operate with the same interface type. However, when a link comes out of hibernation, both forward and reverse links should commence operating in Type 1 mode, until other modes may be negotiated, selected, or otherwise approved for use by both the host and client.

The supported interfaces are indicated in one embodiment by selecting Bit 0, Bit 1, or Bit 2 to select a Type 2 (2-bit), Type 3 (4-bit), or Type 4 (8-bit) mode on the forward link, respectively; and Bit 3, Bit 4, or Bit 5 to select a Type 2, Type 3, or Type 4 mode on the reverse link, respectively; with Bits 6 and 7 being reserved and generally set to zero at this time. The Bitmap Width and Height fields, here each being 2 bytes, specify the width and height of the bitmap, respectively, in pixels.

The Monochrome Capability field (1 byte) is used to specify the number of bits of resolution that can be displayed in a monochrome format. If a display cannot use a monochrome format, then this value is set at zero. Bits 7 through 4 are reserved for future use and are, thus, set as zero. Bits 3 through 0 define the maximum number of bits of grayscale that can exist for each pixel.
These four bits make it possible to specify values of 1 to 15 for each pixel. If the value is zero, then monochrome format is not supported by the display.

The Bayer Capability field uses 2 bytes to specify the number of bits of resolution, pixel group, and pixel order that can be transferred in Bayer format. If the client cannot use the Bayer format, then this value is zero. The Bayer Capability field is composed of the following values: Bits 3 through 0 define the maximum number of bits of intensity that exist in each pixel, Bits 5 through 4 define the pixel group pattern that is required, and Bits 8 through 6 define the pixel order that is required, with Bits 14 through 9 being reserved for future use and generally set to zero in the meantime. Bit 15, when set to a logic-one level, indicates that the client can accept Bayer pixel data in either packed or unpacked format. If Bit 15 is set to zero, this indicates that the client can accept Bayer pixel data only in unpacked format.

The Color Map Capability field (3 bytes) specifies the maximum number of table items that exist in the color map table in the display. If the display cannot use the color map format, then this value is set at zero.

The RGB Capability field (2 bytes) specifies the number of bits of resolution that can be displayed in RGB format. If the display cannot use the RGB format, then this value is equal to zero. The RGB Capability word is composed of three separate unsigned values, where Bits 3 through 0 define the maximum number of bits of blue, Bits 7 through 4 define the maximum number of bits of green, and Bits 11 through 8 define the maximum number of bits of red in each pixel. Bits 14 through 12 are reserved for future use and are generally set to zero. Bit 15, when set to a logic-one level, indicates that the client can accept RGB pixel data in either packed or unpacked format. If Bit 15 is set to a logic-zero level, this indicates that the client can accept RGB pixel data only in unpacked format.

The Y Cr Cb Capability field (2 bytes) specifies the number of bits of resolution that can be displayed in Y Cr Cb format. If the display cannot use the Y Cr Cb format, then this value is set equal to zero. The Y Cr Cb Capability word is composed of three separate unsigned values, where Bits 3 through 0 define the maximum number of bits in the Cb sample, Bits 7 through 4 define the maximum number of bits in the Cr sample, Bits 11 through 8 define the maximum number of bits in the Y sample, and Bits 15 through 12 are currently reserved for future use and are set to zero.

The Client Feature Capability field uses 4 bytes that contain a set of flags indicating specific features that are supported in the client. A bit set to a logic-one level indicates the capability is supported, while a bit set to a logic-zero level indicates the capability is not supported. In one embodiment, the value for Bit 0 indicates whether or not the Bitmap Block Transfer Packet (packet type 71) is supported. The values for Bits 1, 2, and 3 indicate whether or not the Bitmap Area Fill Packet (packet type 72), Bitmap Pattern Fill Packet (packet type 73), or Communication Link Data Channel Packet (packet type 74), respectively, are supported.
The value for Bit 4 indicates whether or not the client has the capability to make one color transparent using the Transparent Color Enable Packet, the values for Bits 5 and 6 indicate whether the client can accept video data or audio data in packed format, respectively, and the value for Bit 7 indicates whether or not the client can send a reverse-link video stream from a camera. The value for Bit 8 indicates whether or not the client has the ability to receive a full line of pixel data and ignore display addressing as specified by Bit 5 of the Pixel Data Attributes field of the Video Stream Packet, and whether the client can also detect frame sync or end of video frame data using Bit 15 of the Pixel Data Attributes field.

The values for Bits 11 and 12 indicate when the client is communicating either with a pointing device and can send and receive Pointing Device Data Packets, or with a keyboard and can send and receive Keyboard Data Packets, respectively. The value for Bit 13 indicates whether or not the client has the ability to set one or more audio or video parameters by supporting the VCP Feature packets: Request VCP Feature Packet, VCP Feature Reply Packet, Set VCP Feature Packet, Request Valid Parameter Packet, and Valid Parameter Reply Packet. The value for Bit 14 indicates whether or not the client has the ability to write pixel data into the offline display frame buffer. If this bit is set to a logic-one level, then the Display Update Bits (bits 7 and 6 of the Pixel Data Attributes field of the Video Stream Packet) may be set to the values '01'.

The value for Bit 15 indicates when the client has the ability to write pixel data into only the display frame buffer currently being used to refresh the display image. If this bit is set to one, then the Display Update Bits may be set to the values '00'. The value for Bit 16 indicates when the client has the ability to write pixel data from a single Video Stream Packet into all display frame buffers. If this bit is set to one, then the Display Update Bits may be set to the value '11'.

The value for Bit 17 indicates when a client has the ability to respond to the Request Specific Status Packet, the value for Bit 18 indicates when the client has the ability to respond to the Round Trip Delay Measurement Packet, and the value for Bit 19 indicates when the client has the ability to respond to the Forward Link Skew Calibration Packet.

The value for Bit 21 indicates when the client has the ability to interpret the Request Specific Status Packet and respond with the Valid Status Reply List Packet. The client indicates an ability to return additional status in the Valid Parameter Reply List field of the Valid Status Reply List Packet, as described elsewhere.

The value for Bit 22 indicates whether or not the client has the ability to respond to the Register Access Packet. Bits 9 through 10, 20, and 23 through 31 are currently reserved for future use or alternative designations useful for system designers, and are generally set equal to zero.
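A host would typically test these flags before sending the corresponding packet types. The following sketch shows one way to do so; the enum names are illustrative, while the bit positions are those given above.

```c
/* Illustrative test of Client Feature Capability flags from the
 * Client Capability Packet, using the bit numbers given above. */
#include <stdint.h>
#include <stdbool.h>

enum client_feature {
    FEAT_BITMAP_BLOCK_TRANSFER = 0,  /* packet type 71 */
    FEAT_BITMAP_AREA_FILL      = 1,  /* packet type 72 */
    FEAT_BITMAP_PATTERN_FILL   = 2,  /* packet type 73 */
    FEAT_COMM_LINK_DATA_CHAN   = 3,  /* packet type 74 */
    FEAT_TRANSPARENT_COLOR     = 4,
    FEAT_PACKED_VIDEO          = 5,
    FEAT_PACKED_AUDIO          = 6,
    FEAT_REVERSE_VIDEO_STREAM  = 7,
    FEAT_FULL_LINE_PIXEL_DATA  = 8,
};

static bool client_supports(uint32_t feature_caps, enum client_feature f)
{
    return (feature_caps >> f) & 1u;
}
```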
The Display Video Frame Rate Capability field (1 byte) specifies the maximum video frame update capability of the display in frames per second. A host may choose to update the image at a slower rate than the value specified in this field.

The Audio Buffer Depth field (2 bytes) specifies the depth of the elastic buffer in a display that is dedicated to each audio stream.

The Audio Channel Capability field (2 bytes) contains a group of flags that indicate which audio channels are supported by the client or a device connected to the client. A bit set to one indicates the channel is supported, and a bit set to zero indicates that channel is not supported. The bit positions are assigned to the different channels; for example, bit positions 0, 1, 2, 3, 4, 5, 6, and 7 in one embodiment indicate the left front, right front, left rear, right rear, front center, sub-woofer, surround left, and surround right channels, respectively. Bits 8 through 14 are currently reserved for future use and are generally set to zero. In one embodiment, Bit 15 is used to indicate whether the client provides support for the Forward Audio Channel Enable Packet. If this is the case, Bit 15 is set to a logic-one level. If, however, the client is not capable of disabling audio channels as a result of the Forward Audio Channel Enable Packet, or if the client does not support any audio capability, then this bit is set to a logic-zero level or value.

A 2-byte Audio Sample Rate Capability field, for the forward link, contains a set of flags to indicate the audio sample rate capability of the client device. Bit positions are assigned to the different rates accordingly, such as Bits 0, 1, 2, 3, 4, 5, 6, 7, and 8 being assigned to 8,000, 16,000, 24,000, 32,000, 40,000, 48,000, 11,025, 22,050, and 44,100 samples per second (sps), respectively, with Bits 9 through 15 being reserved for future or alternative rate uses, as desired, so they are currently set to '0'. Setting one of these bits to '1' indicates that the particular sample rate is supported, and setting the bit to '0' indicates that the sample rate is not supported.

The Minimum Sub-frame Rate field (2 bytes) specifies the minimum sub-frame rate in frames per second. The minimum sub-frame rate keeps the client status update rate sufficient to read certain sensors or pointing devices in the client.

A 2-byte Mic Sample Rate Capability field, for the reverse link, contains a set of flags that indicate the audio sample rate capability of a microphone in the client device. For purposes of the MDDI, a client device microphone is configured to minimally support at least an 8,000 samples per second rate. Bit positions for this field are assigned to the different rates, with bit positions 0, 1, 2, 3, 4, 5, 6, 7, and 8 being used to represent 8,000, 16,000, 24,000, 32,000, 40,000, 48,000, 11,025, 22,050, and 44,100 samples per second (sps), respectively, with Bits 9 through 15 being reserved for future or alternative rate uses, as desired, so they are currently set to '0'. Setting one of these bits to '1' indicates that the particular sample rate is supported, and setting the bit to '0' indicates that the sample rate is not supported. If no microphone is connected, then each of the Mic Sample Rate Capability bits is set equal to zero.

The Keyboard Data Format field (here 1 byte) specifies whether or not a keyboard is connected to the client system and the type of keyboard that is connected. In one embodiment, the value established by Bits 6 through 0 is used to define the type of keyboard that is connected. If the value is zero (0), then the keyboard type is considered unknown.
For a value of 1, the keyboard data format is considered to be a standard PS-2 style. Currently, values in the range of 2 through 125 are not in use, being reserved for use by system designers and interface incorporators or product developers to define specific keyboards or input devices for use with the MDDI and corresponding clients or hosts. A value of 126 is used to indicate that the keyboard data format is user-defined, while a value of 127 is used to indicate that a keyboard cannot be connected to this client. In addition, Bit 7 can be used to indicate whether or not the keyboard can communicate with the client. The intended use of this bit is to indicate when the keyboard can communicate with the client using a wireless link. Bit 7 would be set to a zero level if Bits 6 through 0 indicate that a keyboard cannot be connected to the client. Therefore, for one embodiment, when the value of Bit 7 is 0, the keyboard and client cannot communicate, while if the value of Bit 7 is 1, the keyboard and client have acknowledged that they can communicate with each other.

The Pointing Device Data Format field (here 1 byte) specifies whether or not a pointing device is connected to the client system and the type of pointing device that is connected. In one embodiment, the value established by Bits 6 through 0 is used to define the type of pointing device that is connected. If the value is zero (0), then the pointing device type is considered unknown. For a value of 1, the pointing device data format is considered to be a standard PS-2 style. Currently, values in the range of 2 through 125 are not in use, being reserved for use by system designers and interface incorporators or product developers to define specific pointing devices or input devices for use with the MDDI and corresponding clients or hosts. A value of 126 is used to indicate that the pointing device data format is user-defined, while a value of 127 is used to indicate that a pointing device cannot be connected to this client. In addition, Bit 7 can be used to indicate whether or not the pointing device can communicate with the client. The intended use of this bit is to indicate when the pointing device can communicate with the client using a wireless link. Bit 7 would be set to a zero level if Bits 6 through 0 indicate that a pointing device cannot be connected to the client. Therefore, for one embodiment, when the value of Bit 7 is 0, the pointing device and client cannot communicate, while if the value of Bit 7 is 1, the pointing device and client have acknowledged that they can communicate with each other.

The Content Protection Type field (2 bytes) contains a set of flags that indicate the type of digital content protection that is supported by the display. Currently, bit position 0 is used to indicate when DTCP is supported and bit position 1 is used to indicate when HDCP is supported, with bit positions 2 through 15 being reserved for use with other protection schemes as desired or available, so they are currently set to zero.

The Mfr Name field (here 2 bytes) contains the EISA 3-character ID of the manufacturer, packed into three 5-bit characters in the same manner as in the VESA EDID specification. The character 'A' is represented as 00001 binary, the character 'Z' is represented as 11010 binary, and all letters between 'A' and 'Z' are represented as sequential binary values that correspond to the alphabetic sequence between 'A' and 'Z'. The most significant bit of the Mfr Name field is unused and is generally set to logic-zero for now, until a use is made of it in future implementations. Example: a manufacturer represented by the string "XYZ" would have a Mfr Name value of 0x633a. If this field is not supported by the client, it is generally set to zero.
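This packing can be expressed in a few lines of C; the function name is illustrative, and the result reproduces the "XYZ" example above.

```c
/* Illustrative packing of the EISA 3-character manufacturer ID as
 * described above: three 5-bit letters ('A' = 00001 ... 'Z' = 11010)
 * packed into 16 bits with the most significant bit left at zero.
 * pack_mfr_name("XYZ") returns 0x633a, matching the example. */
#include <stdint.h>

uint16_t pack_mfr_name(const char id[3])
{
    uint16_t c0 = (uint16_t)(id[0] - 'A' + 1) & 0x1F;
    uint16_t c1 = (uint16_t)(id[1] - 'A' + 1) & 0x1F;
    uint16_t c2 = (uint16_t)(id[2] - 'A' + 1) & 0x1F;
    return (uint16_t)((c0 << 10) | (c1 << 5) | c2); /* MSB stays 0 */
}
```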
The most significant bit of the Mfr Name field is unused and is generally set to logic-zero for now, until a use is defined in future implementations. Example: a manufacturer represented by the string "XYZ" would have a Mfr Name value of 0x633a. If this field is not supported by the client it is generally set to zero. The Product Code field uses 2 bytes to contain a product code assigned by the display manufacturer. If this field is not supported by the client it is generally set to zero.

Reserved 1, Reserved 2, and Reserved 3 fields (here 2 bytes each) are reserved for future use in imparting information. All bits in these fields are generally set to a logic-zero level. The purpose of such fields is currently to cause all subsequent 2-byte fields to align to a 16-bit word address and 4-byte fields to align to a 32-bit word address.

The Serial Number field uses 4 bytes in this embodiment to specify the serial number of the display in numeric form. If this field is not supported by the client it is generally set to zero. The Week of Manufacture field uses 1 byte to define the week of manufacture of the display. This value is typically in the range of 1 to 53 if it is supported by the client. If this field is not supported by the client it is set to zero. The Year of Manufacture field is 1 byte that defines the year of manufacture of the display. This value is an offset from the year 1990. Years in the range of 1991 to 2245 can be expressed by this field. Example: the year 2003 corresponds to a Year of Manufacture value of 13. If this field is not supported by the client it is set to zero.

The CRC field (here 2 bytes) contains a 16-bit CRC of all bytes in the packet including the Packet Length. G. For Client Request and Status Packets The Reverse Link Request field (3 bytes) specifies the number of bytes the client needs in the reverse link in the next sub-frame to send information to the host.

The CRC Error Count field (1 byte) indicates how many CRC errors have occurred since the beginning of the media-frame. The CRC count is reset when a sub-frame header packet with a Sub-frame Count of zero is sent. If the actual number of CRC errors exceeds 255, then this value generally saturates at 255.

The Capability Change field uses 1 byte to indicate a change in the capability of the client. This could occur if a user connects a peripheral device such as a microphone, keyboard, or display, or for some other reason. When Bits[7:0] are equal to 0, the capability has not changed since the last Client Capability Packet was sent. However, when Bits[7:0] are equal to 1 to 255, the capability has changed. The Client Capability Packet is examined to determine the new display characteristics.

The Client Busy Flags field uses 2 bytes to indicate that the client is performing a specific function and is not yet ready to accept another packet related to that function. A bit set to a logic-one level or value indicates that the particular function is currently being performed by the client and that the related function in the client is busy. If the related function in the client is ready, the bit is set to a logic-zero level. The client should return a busy status (bit set to one) for all functions that are not supported in the client.

In one embodiment these bytes are interpreted according to the relationship: if Bit 0 is a '1' then the bitmap block transfer function is busy, while if Bit 1 is a '1', then a bitmap area fill function is busy, and if Bit 2 is a '1', then a bitmap pattern fill function is busy.
Currently, Bits 3 through 15 remain reserved for future use and are generally set to a logic-one level or state to indicate a busy status in case these bits are assigned in the future. H. For Bit Block Transfer Packets The Window Upper Left Coordinate X Value and Y Value fields use 2 bytes each to specify the X and Y values of the coordinates of the upper left corner of the window to be moved. The Window Width and Height fields use 2 bytes each to specify the width and height of the window to be moved. The Window X Movement and Y Movement fields use 2 bytes each to specify the number of pixels that the window is to be moved horizontally and vertically, respectively. Typically, these coordinates are configured such that positive values for X cause the window to be moved to the right, and negative values cause movement to the left, while positive values for Y cause the window to be moved down, and negative values cause upward movement. I. For Bitmap Area Fill Packets Window Upper Left Coordinate X Value and Y Value fields use 2 bytes each to specify the X and Y values of the coordinates of the upper left corner of the window to be filled. The Window Width and Height fields (2 bytes each) specify the width and height of the window to be filled. The Video Data Format Descriptor field (2 bytes) specifies the format of the Pixel Area Fill Value. The format is the same as the same field in the Video Stream Packet. The Pixel Area Fill Value field (4 bytes) contains the pixel value to be filled into the window specified by the fields discussed above. The format of this pixel is specified in the Video Data Format Descriptor field. J. For Bitmap Pattern Fill Packets Window Upper Left Coordinate X Value and Y Value fields use 2 bytes each to specify the X and Y values of the coordinates of the upper left corner of the window to be filled. The Window Width and Height fields (2 bytes each) specify the width and height of the window to be filled. The Pattern Width and Pattern Height fields (2 bytes each) specify the width and height, respectively, of the fill pattern. The Horizontal Pattern Offset field (2 bytes) specifies a horizontal offset of the pixel data pattern from the left edge of the specified window to be filled. The value specified is to be less than the value in the Pattern Width field. The Vertical Pattern Offset field (2 bytes) specifies a vertical offset of the pixel data pattern from the top edge of the specified window to be filled. The value specified is to be less than the value in the Pattern Height field.

The 2-byte Video Data Format Descriptor field specifies the format of the Pixel Area Fill Value. FIG. 11 illustrates how the Video Data Format Descriptor is coded. The format is the same as the same field in the Video Stream Packet.

The Parameter CRC field (2 bytes) contains a CRC of all bytes from the Packet Length to the Video Format Descriptor. If this CRC fails to check, then the entire packet is discarded. The Pattern Pixel Data field contains raw video information that specifies the fill pattern in the format specified by the Video Data Format Descriptor. Data is packed into bytes, and the first pixel of each row is to be byte-aligned. The fill pattern data is transmitted a row at a time. The Pattern Pixel Data CRC field (2 bytes) contains a CRC of only the Pattern Pixel Data. If this CRC fails to check, then the Pattern Pixel Data can still be used, but the CRC error count is incremented.
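One plausible reading of the pattern offset fields above is that the fill pattern tiles across the window, shifted by the horizontal and vertical offsets. The following C sketch illustrates only that assumed interpretation; the type and function names are ours, not part of the packet definition.

    #include <stdint.h>

    /* Hypothetical sketch of one plausible tiling rule for the Bitmap
     * Pattern Fill fields: the pattern repeats across the window, shifted
     * by offsets that are each less than the pattern dimensions. */
    typedef struct {
        uint16_t pattern_width;   /* nonzero, per the Pattern Width field  */
        uint16_t pattern_height;  /* nonzero, per the Pattern Height field */
        uint16_t h_offset;        /* < pattern_width                       */
        uint16_t v_offset;        /* < pattern_height                      */
    } pattern_fill_t;

    /* Map a pixel position inside the window (wx, wy) to an index into the
     * row-major Pattern Pixel Data. */
    static uint32_t pattern_index(const pattern_fill_t *p, uint16_t wx, uint16_t wy)
    {
        uint16_t px = (uint16_t)((wx + p->h_offset) % p->pattern_width);
        uint16_t py = (uint16_t)((wy + p->v_offset) % p->pattern_height);
        return (uint32_t)py * p->pattern_width + px;
    }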
K. For Communication Link Data Channel Packets The Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Packet Type. If this CRC fails to check, then the entire packet is discarded.

The Communication Link Data field contains the raw data from the communication channel. This data is simply passed on to the computing device in the display.

The Communication Link Data CRC field (2 bytes) contains a 16-bit CRC of only the Communication Link Data. If this CRC fails to check, then the Communication Link Data is still used or useful, but the CRC error count is incremented. L. For Interface Type Handoff Request Packets The Interface Type field (1 byte) specifies the new interface type to use. The value in this field specifies the interface type in the following manner. If the value in Bit 7 is equal to '0', the Type handoff request is for the forward link; if it is equal to '1', then the Type handoff request is for the reverse link. Bits 6 through 3 are reserved for future use, and are generally set to zero. Bits 2 through 0 are used to define the interface Type to be used, with a value of 1 meaning a handoff to Type 1 mode, a value of 2 a handoff to Type 2 mode, a value of 3 a handoff to Type 3 mode, and a value of 4 a handoff to Type 4 mode. The values of '0' and 5 through 7 are reserved for future designation of alternative modes or combinations of modes. M. For Interface Type Acknowledge Packets The Interface Type field (1 byte) has a value that confirms the new interface type to use. The value in this field specifies the interface type in the following manner. If Bit 7 is equal to '0', the Type handoff request is for the forward link; alternatively, if it is equal to '1', the Type handoff request is for the reverse link. Bit positions 6 through 3 are currently reserved for use in designating other handoff types, as desired, and are generally set to zero. However, bit positions 2 through 0 are used to define the interface Type to be used, with a value of '0' indicating a negative acknowledge, or that the requested handoff cannot be performed, and values of '1', '2', '3', and '4' indicating handoff to Type 1, Type 2, Type 3, and Type 4 modes, respectively. Values of 5 through 7 are reserved for use with alternative designations of modes, as desired. N. For Perform Type Handoff Packets The 1-byte Interface Type field indicates the new interface type to use. The value present in this field specifies the interface type by first using the value of Bit 7 to determine whether the Type handoff is for the forward or reverse link. A value of '0' indicates the Type handoff request is for the forward link, and a value of '1' the reverse link. Bits 6 through 3 are reserved for future use, and as such are generally set to a value of zero. However, Bits 2 through 0 are used to define the interface Type to be used, with the values 1, 2, 3, and 4 specifying the use of handoff to Type 1, Type 2, Type 3, and Type 4 modes, respectively. The use of values 0 and 5 through 7 for these bits is reserved for future use. O. For Forward Audio Channel Enable Packets The Audio Channel Enable Mask field (1 byte) contains a group of flags that indicate which audio channels are to be enabled in a client. A bit set to one enables the corresponding channel, and a bit set to zero disables the corresponding channel. Bits 0 through 5 designate channels 0 through 5, which address the left front, right front, left rear, right rear, front center, and sub-woofer channels, respectively. Bits 6 and 7 are reserved for future use, and in the meantime are generally set equal to zero.
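As a hedged illustration of the Audio Channel Enable Mask encoding above (channel assignments per the bit positions stated; identifiers are ours):

    #include <stdint.h>

    /* Channel bit positions from the Audio Channel Enable Mask description
     * above (bits 0-5; bits 6-7 reserved). Enum names are illustrative. */
    enum {
        CH_LEFT_FRONT = 0, CH_RIGHT_FRONT, CH_LEFT_REAR,
        CH_RIGHT_REAR, CH_FRONT_CENTER, CH_SUBWOOFER
    };

    static int channel_enabled(uint8_t mask, unsigned channel)
    {
        return (channel <= CH_SUBWOOFER) && ((mask >> channel) & 1u);
    }

    /* Example: a mask enabling only the front pair. */
    static uint8_t front_pair_mask(void)
    {
        return (uint8_t)((1u << CH_LEFT_FRONT) | (1u << CH_RIGHT_FRONT));
    }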
P. For Reverse Audio Sample Rate Packets The Audio Sample Rate field (1 byte) specifies the digital audio sample rate. The values for this field are assigned to the different rates, with values of 0, 1, 2, 3, 4, 5, 6, 7, and 8 being used to designate 8,000, 16,000, 24,000, 32,000, 40,000, 48,000, 11,025, 22,050, and 44,100 samples per second (SPS), respectively, and values of 9 through 254 being reserved for use with alternative rates, as desired. A value of 255 is used to disable the reverse-link audio stream.

The Sample Format field (1 byte) specifies the format of the digital audio samples. When Bits[1:0] are equal to '0', the digital audio samples are in linear format; when they are equal to 1, the digital audio samples are in µ-Law format; and when they are equal to 2, the digital audio samples are in A-Law format. Bits[7:2] are reserved for alternate use in designating audio formats, as desired, and are generally set equal to zero. Q. For The Digital Content Protection Overhead Packets The Content Protection Type field (1 byte) specifies the digital content protection method that is used. A value of '0' indicates Digital Transmission Content Protection (DTCP), while a value of '1' indicates High-bandwidth Digital Content Protection System (HDCP). The value range of 2 through 255 is not currently specified but is reserved for use with alternative protection schemes as desired. The Content Protection Overhead Messages field is a variable length field containing content protection messages sent between the host and client. R. For The Transparent Color Enable Packets The Transparent Color Enable field (1 byte) specifies when transparent color mode is enabled or disabled. If Bit 0 is equal to 0, then transparent color mode is disabled; if it is equal to 1, then transparent color mode is enabled and the transparent color is specified by the following two parameters. Bits 1 through 7 of this byte are reserved for future use and are typically set equal to zero.

The Video Data Format Descriptor field (2 bytes) specifies the format of the Pixel Area Fill Value. FIG. 11 illustrates how the Video Data Format Descriptor is coded. The format is generally the same as the same field in the Video Stream Packet.

The Pixel Area Fill Value field uses 4 bytes allocated for the pixel value to be filled into the window specified above. The format of this pixel is specified in the Video Data Format Descriptor field. S. For The Round Trip Delay Measurement Packets The 2-byte Packet Length field specifies the total number of bytes in the packet, not including the packet length field, and in one embodiment is selected to have a fixed length of 159. The 2-byte Packet Type field identifies this packet type; a value of 82 identifies a packet as a Round Trip Delay Measurement Packet. The client ID field, as before, is reserved for future use as a Client ID, and is generally set to zero.

In one embodiment, the Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Packet Type. If this CRC fails to check, then the entire packet is discarded.

The Guard Time 1 field (here 64 bytes) is used to allow the MDDI_Data line drivers in the client to enable before the line drivers in the host are disabled.
The client enables its MDDI_Data line drivers during bit 0 of Guard Time 1, and the host disables its line drivers so as to be completely disabled prior to the last bit of Guard Time 1. The host and client both drive a logic-zero level during Guard Time 1 when they are not disabled. Another purpose of this field is to ensure that all MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering a clock or clock signal using only MDDI_Stb prior to disabling the host's line drivers.

The Measurement Period field is a 64-byte window used to allow the client to respond with two bytes of 0xff and 30 bytes of 0x00 at half the data rate used on the forward link. This data rate corresponds to a Reverse Link Rate Divisor of 1. The client returns this response immediately at the time it perceives as being the beginning of the Measurement Period. This response from the client will be received at the host at precisely the round trip delay of the link plus the logic delay in the client after the beginning of the first bit of the Measurement Period at the host.

The All Zero 1 field (2 bytes) contains zeroes to allow the MDDI_Data line drivers in the host and client to overlap so that MDDI_Data is always driven. The host enables its MDDI_Data line drivers during bit 0 of the All Zero 1 field, and the client also continues to drive the signal to a logic-zero level as it did at the end of the Measurement Period.

The value in the Guard Time 2 field (64 bytes) allows overlap of the Measurement Period driven by the client when the round trip delay is at the maximum amount that can be measured in the Measurement Period. The client disables its line drivers during bit 0 of Guard Time 2 and the host enables its line drivers immediately after the last bit of Guard Time 2. The host and client both drive a logic-zero level during Guard Time 2 when they are not disabled. Another purpose of this field is to ensure that all MDDI_Data signals are at a logic-zero level for a sufficient time to allow the client to begin recovering a clock signal using both MDDI_Data0 and MDDI_Stb after enabling the line drivers for the host. T. For The Forward Link Skew Calibration Packets In one embodiment, the Parameter CRC field (2 bytes) contains a 16-bit CRC of all bytes from the Packet Length to the Packet Type. If this CRC fails to check, then the entire packet is discarded.

The All Zero 1 field uses 1 byte to ensure that there will be transitions on MDDI_Stb at the end of the Parameter CRC field.

The Calibration Data Sequence field contains a data sequence that causes the MDDI_Data signals to toggle at every data period. The length of the Calibration Data Sequence field is determined by the interface being used on the forward link. During the processing of the Calibration Data Sequence, the MDDI host controller sets all MDDI_Data signals equal to the strobe signal. The client clock recovery circuit should use only MDDI_Stb rather than MDDI_Stb Xor MDDI_Data0 to recover the data clock while the Calibration Data Sequence field is being received by the client display. Depending on the exact phase of the MDDI_Stb signal at the beginning of the Calibration Data Sequence field, the Calibration Data Sequence will generally be one of the following based on the interface Type being used when this packet is sent:

Type 1 - (64 byte data sequence) 0xaa, 0xaa ... or 0x55, 0x55 ...
Type 2 - (128 byte data sequence) 0xcc, 0xcc ... or 0x33, 0x33 ...
Type 3 - (256 byte data sequence) 0xf0, 0xf0 ... or 0x0f, 0x0f ...
Type 4 - (512 byte data sequence) 0xff, 0x00, 0xff, 0x00 ... or 0x00, 0xff, 0x00, 0xff ...
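The sequence selection above can be illustrated with a short C sketch; the function name and buffer convention are assumptions, and the alt_phase flag merely selects between the two phase variants listed for each Type.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hedged sketch of a Calibration Data Sequence generator for the four
     * sequences listed above. Returns the sequence length written, or 0 on
     * an unknown Type or an undersized buffer. */
    static size_t calibration_sequence(int type, int alt_phase,
                                       uint8_t *buf, size_t buf_len)
    {
        size_t len;
        uint8_t fill = 0;
        switch (type) {
        case 1: len = 64;  fill = alt_phase ? 0x55 : 0xaa; break;
        case 2: len = 128; fill = alt_phase ? 0x33 : 0xcc; break;
        case 3: len = 256; fill = alt_phase ? 0x0f : 0xf0; break;
        case 4: len = 512; break; /* alternating 0xff, 0x00 pattern */
        default: return 0;        /* unknown interface Type */
        }
        if (len > buf_len)
            return 0;
        if (type == 4) {
            for (size_t i = 0; i < len; i++) /* 0xff,0x00,... or 0x00,0xff,... */
                buf[i] = ((i ^ (size_t)(alt_phase ? 1 : 0)) & 1u) ? 0x00 : 0xff;
        } else {
            memset(buf, fill, len);
        }
        return len;
    }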
Examples of the possible MDDI_Data and MDDI_Stb waveforms for the Type 1 and Type 2 interfaces are shown in FIGS. 62A and 62B, respectively. XX. CONCLUSION While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. XXI. Further aspects: According to one aspect, a digital data interface for transferring digital presentation data at a high rate between a host device and a client device over a communication path comprises a plurality of packet structures linked together to form a communication protocol for communicating a pre-selected set of digital control and presentation data between a host and a client over said communication path; and at least one link controller residing in said host device coupled to said client through said communication path, being configured to generate, transmit, and receive packets forming said communication protocol, and to form digital presentation data into one or more types of data packets. The interface may further comprise said packets grouped together within media frames that are communicated between said host and client, the media frames having a pre-defined fixed length with a pre-determined number of said packets having differing and variable lengths. The interface may further comprise a Sub-frame Header Packet positioned at the beginning of transfers of packets from said host. In the interface, said link controller may be a host link controller, and the interface may further comprise at least one client link controller residing in said client device coupled to said host through said communication path, being configured to generate, transmit, and receive packets forming said communication protocol, and to form digital presentation data into one or more types of data packets. The interface may further comprise a plurality of transfer modes, each allowing the transfer of different maximum numbers of bits of data in parallel over a given time period, with each mode selectable by negotiation between said host and client link drivers; and wherein said transfer modes are dynamically adjustable between said modes during transfer of data. The interface may further comprise a Link Shutdown type packet for transmission by said host to said client to terminate the transfer of data in either direction over said communication path.
The interface may further comprise means for said client to wake up said host from a hibernation state.

According to one aspect, a method of transferring digital data at a high rate between a host device and a client device over a communication path for presentation to a user comprises generating one or more of a plurality of pre-defined packet structures and linking them together to form a pre-defined communication protocol; communicating a set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; coupling at least one host link controller residing in said host device to said client device through said communication path, the host link controller being configured to generate, transmit, and receive packets forming said communication protocol and to form digital presentation data into one or more types of data packets; and transferring data in the form of packets over said communication path using said link controllers. The method may further comprise grouping said packets together within media frames for communication between said host and client, the media frames having a pre-defined fixed length with a pre-determined number of said packets having differing and variable lengths. The method may further comprise commencing transfer of packets from said host with a Sub-frame Header type packet. The method may further comprise generating, transmitting, and receiving packets forming said communication protocol through at least one client link controller residing in said client device coupled to said host device through said communication path. The method may further comprise negotiating between host and client link drivers the use of one of a plurality of transfer modes in each direction, each allowing the transfer of different maximum numbers of bits of data in parallel over a given time period; and dynamically adjusting between said transfer modes during transfer of data. The method may further comprise waking up a communication link by driving a data line to a high state for at least 10 clock cycles and starting to transmit a strobe signal as if the data line was zero, by said host. The method may further comprise driving the data line low for a predetermined number of clock cycles by said host while continuing to transmit a strobe signal after the host has driven the data line high for about 150 clock cycles. The method may further comprise beginning to transmit the first sub-frame header packet by said host. The method may further comprise counting at least 150 continuous clock cycles of the data line being high, followed by at least 50 continuous clock cycles of the data line being low, by said client. The method may further comprise stopping driving the data line high by said client after the client counts 70 continuous clock cycles of the data being high. The method may further comprise counting another 80 continuous clock cycles of the data line being high to reach the 150 clock cycles of the data line being high by said client, and looking for about 50 clock cycles of the data line being low, and looking for the unique word.
The method may further comprise counting the number of clock cycles occurring until a one is sampled by said host, by sampling the data line on both the rising and falling edges during the reverse timing packet. The method may further comprise terminating the transfer of data in either direction over said communication path using a Link Shutdown type packet for transmission by said host to said client. The method may further comprise waking up said host from a hibernation state by communication with said client.

According to one aspect, an apparatus for transferring digital data at a high rate between a host device and a client device over a communication path for presentation to a user comprises at least one host link controller disposed in said host device for generating one or more of a plurality of pre-defined packet structures and linking them together to form a pre-defined communication protocol, and for communicating a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; at least one client controller disposed in said client device and coupled to said host link controller through said communication path; and each link controller being configured to generate, transmit, and receive packets forming said communication protocol, and to form digital presentation data into one or more types of data packets. In the apparatus, said host controller comprises a state machine. In the apparatus, said host controller may comprise a general purpose signal processor. The apparatus may further comprise a Sub-frame Header type packet at the commencement of transfer of packets from said host. In the apparatus, said host controller may comprise one or more differential line drivers, and said client receiver may comprise one or more differential line receivers coupled to said communication path. In the apparatus, said host and client link controllers may be configured to use one of a plurality of transfer modes in each direction, each allowing the transfer of different maximum numbers of bits of data in parallel over a given time period, and may be capable of dynamically adjusting between said transfer modes during transfer of data.
In the apparatus, said host controller may be configured to transmit a Link Shutdown type packet to said client for terminating the transfer of data in either direction over said communication path.

According to one aspect, for use in an electronic system for transferring digital data at a high rate between a host device and a client device over a communication path for presentation to a user, a computer program product comprises a computer usable medium having computer readable program code means embodied in said medium for causing an application program to execute on the computer system, said computer readable program code means comprising: computer readable first program code means for causing the computer system to generate one or more of a plurality of pre-defined packet structures and link them together to form a pre-defined communication protocol; computer readable second program code means for causing the computer system to communicate a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; computer readable third program code means for causing the computer system to couple at least one host link controller disposed in said host device to at least one client controller disposed in said client device through said communication path, the link controllers being configured to generate, transmit, and receive packets forming said communication protocol, and to form digital presentation data into one or more types of data packets; and computer readable fourth program code means for causing the computer system to transfer data in the form of packets over said communication path using said link controllers.

According to one aspect, an apparatus for transferring digital data at a high rate between a host device and a client device over a communication path for presentation to a user comprises means for generating one or more of a plurality of pre-defined packet structures and linking them together to form a pre-defined communication protocol; means for communicating a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; means for coupling at least two link controllers together through said communication path, one in each of said host and client, each being configured to generate, transmit, and receive packets forming said communication protocol, and to form digital presentation data into one or more types of data packets; and means for transferring data in the form of packets over said communication path using said link controllers. The apparatus may further comprise means for commencing transfer of packets from said host with a Sub-frame Header type packet.
The apparatus may further comprise means for requesting display capabilities information from the client by a host link controller so as to determine what type of data and what data rates said client is capable of accommodating through said interface.

According to one aspect, a processor for use in an electronic system for transferring digital data at a high rate between a host device and a client device over a communication path is configured to generate one or more of a plurality of pre-defined packet structures and link them together to form a pre-defined communication protocol; to form digital presentation data into one or more types of data packets; to communicate a pre-selected set of digital control and presentation data between said host and said client devices over said communication path using said communication protocol; and to transfer data in the form of packets over said communication path.

According to one aspect, a state machine for use in obtaining synchronization in an electronic system transferring digital data at a high rate between a host device and a client device over a communication path is configured to have at least one Async Frames State synchronization state, at least two Acquiring Sync States synchronization states, and at least three In-Sync States synchronization states.

According to one aspect, a state machine for use in obtaining synchronization in an electronic system transferring digital data at a high rate between a host device and a client device over a communication path is configured to have at least one Acquiring Sync State synchronization state and at least two In-Sync States synchronization states. In the state machine, one condition for shifting between an Acquiring Sync State and a first In-Sync State may be detecting the presence of a synchronization pattern in the communication link. In the state machine, a second condition for shifting between an Acquiring Sync State and a first In-Sync State may be detecting the presence of a sub-frame header packet and a good CRC value at a frame boundary. In the state machine, one condition for shifting between a first In-Sync State and an Acquiring Sync State may be detecting the presence of no synchronization pattern or a bad CRC value at a sub-frame boundary. In the state machine, one condition for shifting between a first In-Sync State and a second In-Sync State may be detecting the presence of no synchronization pattern or a bad CRC value at a sub-frame boundary.
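The stated transition conditions can be summarized in a small state machine. The following C sketch is illustrative only: the state and event names are invented, and the promotion rule from the first to the second In-Sync state on a clean sub-frame is our assumption, since the text lists only error-driven transitions.

    #include <stdbool.h>

    /* Illustrative encoding of the synchronization states and transition
     * rules described above; all identifiers are ours, not the spec's. */
    typedef enum { ACQUIRING_SYNC, IN_SYNC_1, IN_SYNC_2 } sync_state_t;

    typedef struct {
        bool sync_pattern_found; /* synchronization pattern seen on the link */
        bool header_crc_good;    /* sub-frame header packet with good CRC    */
    } frame_events_t;

    static sync_state_t next_state(sync_state_t s, const frame_events_t *ev)
    {
        bool frame_ok = ev->sync_pattern_found && ev->header_crc_good;
        switch (s) {
        case ACQUIRING_SYNC:
            /* Stated rules: a sync pattern, or a header packet with a good
             * CRC at a frame boundary, moves the link toward in-sync. */
            return (ev->sync_pattern_found || ev->header_crc_good)
                       ? IN_SYNC_1 : ACQUIRING_SYNC;
        case IN_SYNC_1:
            /* Stated rule: no pattern or a bad CRC at a sub-frame boundary
             * drops the link out of sync; promotion on a clean sub-frame
             * is our assumption. */
            return frame_ok ? IN_SYNC_2 : ACQUIRING_SYNC;
        case IN_SYNC_2:
            return frame_ok ? IN_SYNC_2 : IN_SYNC_1;
        }
        return ACQUIRING_SYNC;
    }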
A multiprocessor system (10) includes a plurality of processing modules, such as MPUs (12), DSPs (14), and coprocessors/DMA channels (16). Power management software (38), in conjunction with profiles (36) for the various processing modules and the tasks to be executed, is used to build scenarios which meet predetermined power objectives, such as providing maximum operation within package thermal constraints or using minimum energy. Actual activities associated with the tasks are monitored during operation to ensure compatibility with the objectives. The allocation of tasks may be changed dynamically to accommodate changes in environmental conditions and changes in the task list. Temperatures may be computed at various points in the multiprocessor system by monitoring activity information associated with various subsystems. The activity measurements may be used to compute a current power dissipation distribution over the die. If necessary, the tasks in a scenario may be adjusted to reduce power dissipation. Further, activity counters may be selectively enabled for specific tasks in order to obtain more accurate profile information.
1. A method for controlling the execution of multiple tasks in a processing circuit including several modules, comprising the steps of: determining temperature-associated information at various areas of the processing circuit; and executing the tasks on said plurality of processing modules responsive to said temperature-associated information to prevent problems associated with one or more areas exceeding a temperature threshold.
2. The method of claim 1 wherein said determining step comprises the step of monitoring operations executed by said modules.
3. The method of claim 1 wherein said determining step comprises the step of calculating power dissipation information at various locations in said processing circuit.
4. The method of claim 1 wherein said determining step comprises the step of calculating a current temperature at various locations in said processing circuit.
5. The method of claim 1 wherein said determining step comprises the steps of: generating a task allocation scenario; estimating temperature-associated information for various locations in the processing circuit; and computing the temperature associated with said activities.
6. The method of claim 5 wherein said step of generating a task allocation scenario comprises the step of receiving a task list describing the tasks to be executed and a task model describing the tasks.
7. The method of claim 6 wherein the task model includes initial area-specific power dissipation estimates for each task.
8. A processing circuit including a plurality of processing modules for executing multiple tasks, comprising: circuitry for determining temperature-associated information at various areas of the processing circuit; and circuitry for executing the tasks on said plurality of processing modules responsive to said temperature-associated information to prevent problems associated with one or more areas exceeding a temperature threshold.
9. The processing circuit of claim 8 wherein said determining circuitry comprises circuitry for monitoring operations executed by said processing modules.
10. The processing circuit of claim 8 wherein said determining circuitry comprises circuitry for calculating power dissipation information at various locations in said processing circuit.
11. The processing circuit of claim 8 wherein said determining circuitry comprises circuitry for calculating a current temperature at various locations in said processing circuit.
12. The processing circuit of claim 8 wherein said determining circuitry comprises circuitry for generating a task allocation scenario, estimating temperature-associated information for various locations in the processing circuit, and computing the temperature associated with said activities.
13. The processing circuit of claim 12 wherein said circuitry for generating a task allocation scenario comprises circuitry for receiving a task list describing the tasks to be executed and a task model describing the tasks.
14. The processing circuit of claim 13 wherein the task model includes initial area-specific power dissipation estimates for each task.
15. A mobile communications device comprising: an antenna for receiving and transmitting signals; and receiver/transmitter circuitry coupled to said antenna for sending and receiving audio and data signals, said receiver/transmitter circuitry including a processing circuit comprising: circuitry for determining temperature-associated information at various areas of the processing circuit; and circuitry for executing the tasks on said plurality of processing modules responsive to said temperature-associated information to prevent problems associated with one or more areas exceeding a temperature threshold.
BACKGROUND OF THE INVENTION
1. TECHNICAL FIELD
This invention relates in general to integrated circuits and, more particularly, to managing energy in a processor.
2. DESCRIPTION OF THE RELATED ART
For many years, the focus of processor design, including designs for microprocessor units (MPUs), co-processors and digital signal processors (DSPs), has been to increase the speed and functionality of the processor. More recently, energy consumption has become a serious issue, and maintaining low energy consumption, without seriously impairing speed and functionality, has moved to the forefront in many designs. Energy consumption has become important in many applications because many systems, such as smart phones, cellular phones, PDAs (personal digital assistants), and handheld computers, operate from a relatively small battery. It is desirable to maximize the battery life in these systems, since it is inconvenient to recharge the batteries after short intervals.
Currently, approaches to minimizing energy consumption involve static energy management; i.e., designing circuits which use less energy. In some cases, dynamic actions have been taken, such as reducing clock speeds or disabling circuitry during idle periods.
While these changes have been important, it is necessary to continuously improve energy management, especially in systems where size and, hence, battery size, is important to the convenience of using a device.
In addition to overall energy savings, in a complex processing environment, the ability to dissipate heat from the integrated circuit becomes a factor. An integrated circuit will be designed to dissipate a certain amount of heat. If tasks (application processes) require multiple systems on the integrated circuit to draw high levels of current, it is possible that the circuit will overheat, causing system failure or errant behavior.
In the future, applications executed by integrated circuits will be more complex and will likely involve multiprocessing by multiple processors, including MPUs, DSPs, coprocessors and DMA channels in a single integrated circuit (hereinafter, a "multiprocessor system"). DSPs will evolve to support multiple, concurrent applications, some of which will not be dedicated to a specific DSP platform, but will be loaded from a global network such as the Internet. This is especially true in the wireless multimedia appliance domain, where severe cost constraints require the use of a small, low pin count, low cost packaging technology. Accordingly, the tasks that a multiprocessor system will be able to handle without overheating will become uncertain.
Accordingly, a need has arisen for a method and apparatus for managing energy in a circuit without seriously impacting performance.
BRIEF SUMMARY OF THE INVENTION
The present invention provides a method and apparatus for controlling the execution of multiple tasks in a processing circuit including several modules, where temperature-associated information is determined at various areas of the processing circuit. Tasks are executed on the plurality of modules of the processing circuit responsive to the temperature-associated information in order to prevent problems associated with one or more areas exceeding a temperature threshold.
The present invention provides significant advantages over the prior art by providing fully dynamic energy management based on the temperature, or estimated temperature, of various areas of a device.
As the tasks executed in the device change, the energy management can build new scenarios to ensure that temperature thresholds are not exceeded in any area of the device.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Figure 1 illustrates a block diagram of a multiprocessor system;
Figure 2 illustrates a software layer diagram for the multiprocessor system;
Figure 3 illustrates an example showing the advantages of energy management for a multiprocessor system;
Figures 4a and 4b illustrate flow diagrams showing preferred embodiments for the operation of the energy management software of Figure 2;
Figure 5 illustrates the building system scenario block of Figure 4;
Figure 6 illustrates the activities estimate block of Figure 4;
Figure 7 illustrates the power compute block of Figure 4;
Figure 8 illustrates the activity measure and monitor block of Figure 4;
Figure 9 illustrates a block diagram showing the multiprocessor system with activity counters;
Figure 10 illustrates an exploded view of a processing device of the type shown in the block diagram of Figure 9 along with a graph of dissipated power (mW/mm2) during operation;
Figure 11 illustrates a flow chart describing the use of area-specific temperature data for task scheduling;
Figure 12 illustrates a block diagram of an embodiment for providing accurate measurements of activity associated with a task; and
Figure 13 illustrates a mobile communications device using the invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention is best understood in relation to Figures 1-13 of the drawings, like numerals being used for like elements of the various drawings.
Figure 1 illustrates a general block diagram of a general multiprocessor system 10, including an MPU 12, one or more DSPs 14 and one or more DMA channels or coprocessors (shown collectively as DMA/Coprocessor 16). In this embodiment, MPU 12 includes a core 18 and a cache 20. The DSP 14 includes a processing core 22 and a local memory 24 (an actual embodiment could use separate instruction and data memories, or could use a unified instruction and data memory). A memory interface 26 couples a shared memory 28 to one or more of the MPU 12, DSP 14 or DMA/Coprocessor 16. Each processor (MPU 12, DSPs 14) can operate in full autonomy under its own operating system (OS) or real-time operating system (RTOS) in a real multiprocessor system, or the MPU 12 can operate a global OS that supervises shared resources and the memory environment.
Figure 2 illustrates a software layer diagram for the multiprocessor system 10. As shown in Figure 2, the MPU 12 executes the OS, while the DSP 14 executes an RTOS. The OS and RTOSs comprise the OS layer 30 of the software. A distributed application layer 32 includes JAVA, C++ and other applications 34, power management tasks 38 which use profiling data 36, and a global tasks scheduler 40. A middleware software layer 42 communicates between the OS layer 30 and the applications in the distributed application layer 32.
Referring to Figures 1 and 2, the operation of the multiprocessor system 10 is discussed. The multiprocessor system 10 can execute a variety of tasks.
A typical application for the multiprocessor system 10 would be in a smartphone application where the multiprocessor system 10 handles wireless communication, video and audio decompression, and the user interface (i.e., LCD update, keyboard decode). In this application, the different embedded systems in the multiprocessor system 10 would be executing multiple tasks of different priorities. Typically, the OS would perform the task scheduling of different tasks to the various embedded systems.
The present invention integrates energy consumption as a criterion in scheduling tasks. In the preferred embodiment, the power management application 38 and profiles 36 from the distributed applications layer 32 are used to build a system scenario, based on probabilistic values, for executing a list of tasks. If the scenario does not meet predetermined criteria, for example if the power consumption is too high, a new scenario is generated. After an acceptable scenario is established, the OS layer monitors the hardware activity to verify that the activity predicted in the scenario was accurate.
The criteria for an acceptable task scheduling scenario could vary depending upon the nature of the device. One important criterion for mobile devices is minimum energy consumption. As stated above, as electronic communication devices are further miniaturized, the smaller battery allocation places a premium on energy consumption. In many cases during the operation of a device, a degraded operating mode for a task may be acceptable in order to reduce power, particularly as the batteries reach low levels. For example, reducing the LCD refresh rate will decrease power, albeit at the expense of picture quality. Another option is to reduce the MIPs (millions of instructions per second) of the multiprocessor system 10 to reduce power, but at the cost of slower performance. The power management software 38 can analyze different scenarios using different combinations of degraded performance to reach acceptable operation of the device. Another objective in managing power may be to find the highest MIPs, or the lowest energy, for a given power limit setup.
Figures 3a and 3b illustrate an example of using the power management application 38 to prevent the multiprocessor system 10 from exceeding an average power dissipation limit. In Figure 3a, the DSP 14, DMA 16 and MPU 12 are concurrently running a number of tasks. At time t1, the average power dissipation of the three embedded systems exceeds the average limit imposed on the multiprocessor system 10. Figure 3b illustrates a scenario where the same tasks are executed; however, an MPU task is delayed until after the DMA and DSP tasks are completed in order to maintain an acceptable average power dissipation profile.
Figure 4a illustrates a flow chart describing operation of a first embodiment of the power management tasks 38. In block 50, the power management tasks are invoked by the global scheduler 40, which could be executed on the MPU 12 or one of the DSPs 14; the scheduler evaluates the upcoming application and splits it into tasks with associated precedence and exclusion rules. The task list 52 could include, for example, audio/video decoding, display control, keyboard control, character recognition, and so on. In step 54, the task list 52 is evaluated in view of the task model file 56 and the accepted degradations file 58. The task model file 56 is part of the profiles 36 of the distributed applications layer 32. The task model file 56 is a previously generated file that assigns different models to each task in the task list. Each model is a collection of data, which could be derived experimentally or by computer aided software design techniques, that defines characteristics of the associated task, such as latency constraints, priority, data flows, an initial energy estimate at a reference processor speed, the impacts of degradations, and an execution profile on a given processor as a function of MIPs and time. The degradation list 58 sets forth the variety of degradations that can be used in generating the scenario.
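As an illustrative sketch only, a task model entry such as those in file 56 might be represented in C by a structure like the following; the field names and units are assumptions rather than part of the described embodiment.

    #include <stdint.h>

    /* Hypothetical shape of one task model entry (cf. task model file 56). */
    typedef struct {
        uint32_t task_id;
        uint32_t latency_max_us;       /* latency constraint                  */
        uint8_t  priority;             /* scheduling priority                 */
        uint32_t energy_est_uj;        /* initial energy estimate at the
                                          reference processor speed           */
        uint32_t mips_profile[8];      /* execution profile vs. time slices   */
        uint8_t  allowed_degradations; /* bitmask into the degradation list 58 */
    } task_model_t;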
Each time the task list is modified (i.e., a new task is created or a task is deleted), or when a real time event occurs, a scenario is built in step 54 based on the task list 52 and the task model 56. The scenario allocates the various tasks to the modules and provides priority information setting the priority with which tasks are executed. A scenario energy estimate 59 at a reference speed can be computed from the tasks' energy estimates. If necessary or desirable, tasks may be degraded; i.e., a mode of the task that uses fewer resources may be substituted for the full version of a task. From this scenario, an activities estimate is generated in block 60. The activities estimate uses task activity profiles 62 (from the profiling data 36 of the distributed application layer 32) and a hardware architectural model 64 (also from the profiling data 36 of the distributed application layer 32) to generate probabilistic values for the hardware activities that will result from the scenario. The probabilistic values include each module's wait/run time share (effective MHz), accesses to caches and memories, I/O toggling rates, and DMA flow requests and data volume. Using a period T that matches the thermal time constant, from the energy estimate 59 at a reference processor speed and the average activities derived in step 60 (particularly, effective processor speeds), it is possible to compute an average power dissipation that will be compared to the package thermal model. If the power value exceeds any thresholds set forth in the package thermal model 72, the scenario is rejected in decision block 74. In this case, a new scenario is built in block 54 and steps 60, 66 and 70 are repeated. Otherwise, the scenario is used to execute the task list.
During operation of the tasks as defined by the scenario, the OS and RTOSs track activities by their respective modules in block 76 using counters 78 incorporated in the hardware. The actual activity in the modules of the multiprocessor system 10 may vary from the activities estimated in block 60. The data from the hardware counters are monitored on a T-periodic basis to produce measured activity values. These measured activity values are used in block 66 to compute an energy value for the period and, hence, an average power value, as described above, and are compared to the package thermal model in block 72. If the measured values exceed thresholds, then a new scenario is built in block 54. By continuously monitoring the measured activity values, the scenarios can be modified dynamically to stay within predefined limits or to adjust to changing environmental conditions.
Total energy consumption over T for the chip is calculated as:
E = Σmodules [ ΣT(α) · Cpd · f · Vdd² ]
where f is the frequency, Vdd is the supply voltage, Cpd is the equivalent dissipation capacitance of a module, and α is the probabilistic (or measured, see discussion in connection with block 76 of this figure) activity. In other words, ΣT(α) · Cpd · f · Vdd² is the energy corresponding to a particular hardware module characterized by equivalent dissipation capacitance Cpd; the counter values give ΣT(α), and E is the sum of the energies dissipated within T by all modules in the multiprocessor system 10. Average system power dissipation is W = E/T. In the preferred embodiment, measured and probabilistic energy consumption is calculated and the average power dissipation is derived from the energy consumption over period T. In most cases, energy consumption information will be more readily available. However, it would also be possible to calculate the power dissipation from measured and probabilistic power consumption.
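A minimal C sketch of this computation, assuming per-module activity totals read from counters such as counters 78 (structure and names illustrative):

    #include <stddef.h>

    /* One entry per hardware module of the multiprocessor system. */
    typedef struct {
        double activity_sum;  /* sum of alpha over period T (from counters) */
        double cpd_farads;    /* equivalent dissipation capacitance Cpd     */
        double freq_hz;       /* module clock frequency f                   */
        double vdd_volts;     /* supply voltage Vdd                         */
    } module_activity_t;

    /* E = sum over modules of activity_sum * Cpd * f * Vdd^2; W = E / T. */
    static double average_power_watts(const module_activity_t *m, size_t n,
                                      double period_t_seconds)
    {
        double energy_joules = 0.0;
        for (size_t i = 0; i < n; i++)
            energy_joules += m[i].activity_sum * m[i].cpd_farads
                           * m[i].freq_hz * m[i].vdd_volts * m[i].vdd_volts;
        return energy_joules / period_t_seconds;
    }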
Figure 4b is a flow chart describing operation of a second embodiment of the power management tasks 38. The flow of Figure 4b is the same as that of Figure 4a, except that when the scenario construction algorithm is invoked (new task, task delete, real time event) in step 50, instead of choosing one new scenario, n different scenarios that match the performance constraints can be pre-computed in advance and stored in steps 54 and 59, in order to reduce the number of operations within the dynamic loop and provide faster adaptation if the power computed in the tracking loop leads to rejection of the current scenario in block 74. In Figure 4b, if the scenario is rejected, another pre-computed scenario is selected in block 65. Otherwise, the operation is the same as shown in Figure 4a.
Figures 5 - 8 illustrate the operation of various blocks of Figures 4a and 4b in greater detail. The build system scenario block 54 is shown in Figure 5. In this block, a task list 52, a task model 56, and a list of possible task degradations 58 are used to generate a scenario. The task list is dependent upon which tasks are to be executed on the multiprocessor system 10. In the example of Figure 5, three tasks are shown: MPEG4 decode, wireless modem data receive, and keyboard event monitor. In an actual implementation, the tasks could come from any number of sources. The task model sets forth conditions which must be taken into consideration in defining the scenario, such as latency and priority constraints, data flow, initial energy estimates, and the impact of degradations. Other conditions could also be used in this block. The output of the build system scenario block is a scenario 80, which associates the various tasks with the modules and assigns priorities to each of the tasks. In the example shown in Figure 5, the MPEG4 decode task has a priority of 16 and the wireless modem task has a priority of 4.
The scenarios built in block 54 could be based on a number of different considerations. For example, the scenarios could be built based on providing the maximum performance within the package's thermal constraints. Alternatively, the scenarios could be based on using the lowest possible energy. The optimum scenario could change during operation of a device; for example, with fully charged batteries a device may operate at a maximum performance level. As the power in the batteries diminishes below a preset level, the device could operate at the lowest possible power level to sustain operation.
The scenario 80 from block 54 is used by the activities estimate block 60, shown in Figure 6. This block performs a probabilities computation for various parameters that affect power usage in the multiprocessor system 10. The probabilistic activities estimate is generated in conjunction with task activity profiles 62 and hardware architectural models 64.
The task activity profiles include information on the data access types (load/store) and occurrences for the different memories, code profiles, such as the branches and loops used in the task, and the cycles per instruction for instructions in the task. The hardware architectural model 64 describes the impact of the task activity profiles 62 on the system latencies, which permits computation of estimated hardware activities (such as processor run/wait time share). This model takes into account the characteristics of the hardware on which the task will be implemented, for example, the sizes of the caches, the width of various buses, the number of I/O pins, whether the cache is write-through or write-back, the types of memories used (dynamic, static, flash, and so on) and the clock speeds used in the module. Typically, the model can consist of a family of curves that represent MPU and DSP effective frequency variations with different parameters, such as data cacheable/non-cacheable, read/write access shares, number of cycles per instruction, and so on. In the illustrated embodiment of Figure 6, values for the effective frequency of each module, the number of memory accesses, the I/O toggling rates and the DMA flow are calculated. Other factors that affect power could also be calculated.
The power compute block 66 is shown in Figure 7. In this block, the probabilistic activities from block 60 or the measured activities from block 76 are used to compute various energy values and, hence, power values over a period T. The power values are computed in association with hardware power profiles, which are specific to the hardware design of the multiprocessor system 10. The hardware profiles could include a Cpd for each module, logic design style (D-type flip-flop, latches, gated clocks and so on), supply voltages and capacitive loads on the outputs. Power computations can be made for integrated modules, and also for external memory or other external devices.
Activity measure and monitor block 76 is shown in Figure 8. Counters are implemented throughout the multiprocessor system 10 to measure activities on the various modules, such as cache misses, TLB (translation lookaside buffer) misses, non-cacheable memory accesses, wait time, read/write requests for different resources, memory overhead and temperature. The activity measure and monitor block 76 outputs values for the effective frequency of each module, the number of memory accesses, the I/O toggling rates and the DMA flow. In a particular implementation, other values may also be measured. The output of this block is sent to the power compute block 66.
Figure 9 illustrates an example of a multiprocessor system 10 using power/energy management software. In this example, the multiprocessor system 10 includes an MPU 12, executing an OS, and two DSPs 14 (individually referenced as DSP1 14a and DSP2 14b), each executing a respective RTOS. Each module executes a monitor task 82, which monitors the values in various activity counters 78 throughout the multiprocessor system 10. The power compute task is executed on DSP 14a. The various monitor tasks retrieve data from associated activity counters 78 and pass the information to DSP 14a to calculate a power value based on measured activities.
The power management tasks, such as the power compute task 84 and monitor task 82, can be executed along with other application tasks.
In the preferred embodiment, the power management tasks 38 and profiles 36 are implemented as JAVA class packages in a JAVA real-time environment.
Figure 10 illustrates an exploded view of a processing device 100, of the type shown in Figure 9, with the layout of various components displayed on a semiconductor die 102. For example, the boundaries of components MPU 12, DSP1 14a and DSP2 14b are shown in Figure 10 on die 102. Die 102 would be mounted within packaging 110. Above die 102, an example of a power dissipation profile 112 that could occur during operation of the processing device 100 is shown. The power dissipation profile 112 shows peaks 114, 116 and 118, which are associated with the operation of respective components. As can be seen from this Figure, power dissipation peak 114 exceeds a predetermined safe range.
The power dissipation profile 112 can be computed from the events detected by the various counters 78 associated with the components as shown in Figure 9. A temperature field for the die may be computed from the dissipated power profile. When a critical power surge, such as the one shown at peak 114, is detected, a rescheduling of tasks may be computed by the power computing task 84. In this case, several solutions may be available to bring peak 114 down to an acceptable level. First, if the task running on MPU 12 was a high priority task, it might be possible to reschedule lower priority tasks on DSP1 14a or DSP2 14b. Since the power dissipation in the areas designated by DSP1 14a and DSP2 14b contributes to the power dissipation in the area designated by MPU 12, rescheduling one or more of the tasks using DSP1 14a or DSP2 14b may reduce the peak. Alternatively, it may be possible to reduce the power dissipation shown at peak 114 by reducing the frequency of the MPU 12, DSP1 14a and DSP2 14b.
Counters 78 can measure activity in many areas of the die 102. For example, for MPU 12, a first counter could measure activity of the instruction cache, a second counter could measure activity of the data cache and a third counter could measure activity of the MAC (multiplier accumulator). The counters 78 need not be physically located in the area of the circuit whose activity is being measured. It would also be possible for a single counter to measure activity that affects multiple areas of the die 102.
Because the effect of an activity can be translated directly to an estimate of power dissipation in one or more areas of the die 102, an ongoing measurement of activities can identify potentially dangerous power surges that could affect device performance. Thresholds can be set to identify dangerous situations.
Figure 11 illustrates a flow chart describing the scheduling of tasks to avoid critical temperature effects in a specific area of a die. In step 120, the power management software receives activity information. This information is used to compute a power dissipation distribution over the semiconductor die 102 in step 122. The power dissipation distribution is analyzed in step 124. If a threshold is exceeded in any area of the semiconductor die 102, the tasks are adjusted in step 126 to reduce the power dissipation in that area.
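A simplified C sketch of the analysis in steps 122-126, assuming the die is modeled as a grid of areas with a per-area power density estimate (grid size, units, and names are illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    #define DIE_ROWS 8
    #define DIE_COLS 8

    /* Estimated power dissipation per die area, e.g. in mW/mm^2. */
    typedef double power_map_t[DIE_ROWS][DIE_COLS];

    /* Scan the estimated dissipation map; report the first area over the
     * limit so the scheduler can adjust tasks in that area (step 126). */
    static bool area_over_threshold(const power_map_t map, double limit,
                                    size_t *hot_row, size_t *hot_col)
    {
        for (size_t r = 0; r < DIE_ROWS; r++)
            for (size_t c = 0; c < DIE_COLS; c++)
                if (map[r][c] > limit) {
                    *hot_row = r;
                    *hot_col = c;
                    return true;
                }
        return false;
    }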
If a threshold is exceeded in any area of the semiconductor die 102, the tasks are adjusted in step 126 to reduce the power dissipation in that area.
While the power dissipation distribution is estimated using activity measurements, it would be possible to measure temperatures directly at various points on the semiconductor and schedule tasks based on actual temperature measurements. For example, the temperature could be estimated by a measured change of an I-V characteristic of a PN junction.
In addition to monitoring activity on the various components during operation of the device 100, the counters 78 may be used to derive information necessary to profile tasks for area-specific temperatures, in order to create schedules that avoid critical temperatures in any area of the die 102 during the execution of the tasks. This could be performed as shown in Figures 4a and 4b, with the thresholds being applied to various areas of the device 100.
This aspect of the present invention provides significant advantages over the prior art. First, it provides for a fully dynamic power management based on the temperature, or estimated temperature, of various areas of a device. As the tasks executed in the device 100 change, the power management can build new scenarios to ensure that temperature thresholds are not exceeded in any area of the device.
The power management software is transparent to the various tasks that it controls. Thus, even if a particular task does not provide for any power management, the power management software assumes responsibility for executing the task in a manner that is consistent with the power capabilities of the device 100.
Figure 12 illustrates an embodiment of the invention that accurately measures energy information regarding the operation of a specific task. By increasing the accuracy of the energy information associated with a task, the probability of success for a proposed global scenario is similarly increased.
Figure 12 provides a more detailed block diagram of MPU 12. An MPU core 130 includes a TaskID register 132 and a compare circuit 134. Core 130 is coupled to instruction cache 20a and data cache 20b. Counters 78 monitor activity within the core. Counters 78 have enable ports (En) coupled to the output of compare circuit 134. Each processor that may independently execute a task (i.e., "autonomous" processors such as an MPU or DSP, but generally not a co-processor or a DMA physical channel) may have a TaskID register 132 and a compare circuit 134. Thus, the device shown in Figure 9 might have three TaskID registers 132 corresponding to the MPU 12, DSP1 14a and DSP2 14b.
In operation, each task being executed by the processing system 10 has a unique identification tag, the Task ID. When a task is being executed, its Task ID is stored in the TaskID register 132. When an accurate measurement of energy consumption is desired for a specific task, the Task ID of the specific task is loaded into comparator 134 (the Task ID of the task being monitored may be stored within comparator 134 or in a register or other memory). Comparator 134 outputs a first logical signal (for example, a logical "1") when the identifier in the TaskID register matches the Task ID loaded into comparator 134. Similarly, comparator 134 outputs a second logical signal (for example, a logical "0") when the identifier in the TaskID register is different from the Task ID loaded into comparator 134. The output of the comparator 134 is coupled to enable ports of the various counters on the device 10.
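As an illustration of this gating, consider the following minimal C sketch; the register and counter names here are hypothetical stand-ins for the hardware TaskID register 132, comparator 134 and counters 78, not names from the disclosure.

#include <stdint.h>

static volatile uint32_t taskid_reg;    /* TaskID register 132: written on each context switch */
static uint32_t monitored_task_id;      /* Task ID loaded into comparator 134 */
static uint64_t activity_count;         /* a counter 78, e.g., cache misses */

/* Comparator 134 in software form: the counter increments only while
 * the active task matches the task being monitored. */
static void on_activity_event(void)
{
    if (taskid_reg == monitored_task_id)   /* comparator outputs "1" */
        activity_count++;                  /* counter enabled */
    /* on a mismatch the event is ignored (counter disabled) */
}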
When there is a match in the comparator 134, the counters are enabled to measure activity associated with the task. When there is a mismatch, the counters are disabled, such that activities associated with other tasks are not measured. Some hardware systems are shared between multiple processors. Therefore, in order to accurately measure activity on these shared systems attributable to a distinct task, multiple counters can be coupled to respective compare circuits 134. Alternatively, the counter on the shared hardware system could have a Task ID register and comparator to allow counting only when a certain task was active.
This embodiment can be used for generating energy information for task profiles. The energy information can be gathered in an "off-line" mode, where the device 10 is being operated for the purpose of gathering the energy information, or in an "on-line" mode, where the information is gathered during actual operation of the device 10, and the task profiles 36 are updated dynamically during the operation of the device to improve scheduling as tasks are created and terminated.
In addition to energy profiling, the task-specific events monitoring capabilities described above can be used for other purposes. One such purpose would be to provide enhanced debugging techniques. For example, a breakpoint could be set when an activity counter reaches a certain value for a given task.
During operation of the device 10, for each autonomous processor, the Task ID of the current task is stored in the TaskID register 132. In a multitasking system, a processor switches between each current task, giving the appearance that all of the current tasks are being executed simultaneously. As each task is loaded by the processor (the "active" task), various state information will be restored to the processor. In the embodiment shown in Figure 12, the Task ID associated with the active task is stored in the TaskID register 132 as the state information for the task is restored. During times when the contents of the TaskID register 132 for the autonomous processor match the Task ID of the task being monitored, the counters 78 will be enabled to accumulate activity information. When the processor switches to a different task, the counters will ignore activity. Thus, accurate information regarding activity associated with a task during multitasking operations can be obtained.
The embodiment shown in Figure 12 can be used in conjunction with the embodiment shown in Figures 10 and 11 to obtain area-specific temperature data, if desired.
This aspect of the invention provides for more accurate profile data that may be used for scheduling tasks. By providing better energy information, the success rate of computing global scenarios, as discussed in connection with Figures 4a and 4b, is improved.
Figure 13 illustrates an implementation of a mobile communications device 150 with microphone 152, speaker 154, keypad 156, display 158 and antenna 140. Internal processing circuitry 162 includes one or more processing devices with the energy saving features described herein. It is contemplated, of course, that many other types of communications systems and computer systems may also benefit from the present invention, particularly those relying on battery power. Examples of such other computer systems include personal digital assistants (PDAs), portable computers, smart phones, web phones, and the like.
As power dissipation is also of concern in desktop and line-powered computer systems and micro-controller applications, particularly from a reliability standpoint, it is contemplated that the present invention may also provide benefits to such line-powered systems.
Telecommunications device 150 includes microphone 152 for receiving audio input, and speaker 154 for outputting audible output, in the conventional manner. Microphone 152 and speaker 154 are connected to processing circuitry 162 which receives and transmits audio and data signals.
Although the Detailed Description of the invention has been directed to certain exemplary embodiments, various modifications of these embodiments, as well as alternative embodiments, will be suggested to those skilled in the art. The invention encompasses any modifications or alternative embodiments that fall within the scope of the Claims.
A semiconductor device comprising a first transistor device (130) on or in a semiconductor substrate (115) and a second transistor device (132) on or in the substrate. The device further comprises an insulating trench (200) located between the first transistor device and the second transistor device. At least one upper corner (610) of the insulating trench is a rounded corner in a lateral plane of the substrate.
CLAIMS What is claimed is: 1. A semiconductor device, comprising: a bipolar complementary metal-oxide-semiconductor device including: a semiconductor layer on a semiconductor substrate, wherein said semiconductor layer and said substrate include a same first dopant type, and a concentration of said first dopant is greater in said substrate than in said semiconductor layer; a metal-oxide-semiconductor transistor on or in said semiconductor layer; a bipolar transistor on or in said semiconductor layer; and an insulating trench located between said MOS transistor and said bipolar transistor, wherein said insulating trench is formed through said semiconductor layer and a bottom side of said insulating trench ends in said substrate and at least one top corner of said insulating trench is a rounded corner in a lateral plane of said substrate; insulating layers on said substrate and covering said bipolar complementary metal-oxide-semiconductor device; and interconnects through one or more of said insulating layers to electrically connect said metal-oxide-semiconductor transistor and bipolar transistor to each other, or to other active or passive components of said semiconductor device. 2. The device of Claim 1, wherein said insulating trench forms a ring that surrounds one of said MOS transistor or said bipolar transistor. 3. The device of Claim 1, further including an insulating material that conformally coats said insulating trench, wherein a thickness of said insulating material at said rounded corner is substantially equal to a thickness of said insulating material in linear portions of said insulating trench. 4. A semiconductor device, comprising: a first transistor device on or in a semiconductor substrate; a second transistor device on or in said substrate; and an insulating trench located between said first transistor device and said second transistor device, wherein at least one upper corner of said insulating trench is a rounded corner in a lateral plane of said substrate. 5. The device of Claim 4, wherein said first transistor device includes one or more MOS transistors or bipolar transistors and said second transistor device includes one or more MOS transistors or bipolar transistors. 6. The device of Claim 4, wherein said insulating trench has a single linear segment, wherein said rounded upper corner is located at one or both ends of said single linear segment. 7. The device of Claim 4, wherein said insulating trench has two perpendicular linear segments that are coupled together by a connecting corner having said rounded upper corner. 8. The device of Claim 4, wherein said insulating trench includes four linear segments that are coupled together by connecting corners to form a ring that surrounds at least one of said first transistor device or said second transistor device and wherein at least one of said connecting corners has said rounded upper corner. 9. The device of Claim 4, wherein said rounded upper corner has an interior surface that defines a circular arc having a radius of curvature of about 1 micron or greater. 10. The device of Claim 4, wherein said rounded upper corner has a shape that corresponds to a one-quarter sector of a 24-sided regular polygon having six adjacent insulating trench edges that each form interior angles of about 165 degrees with adjacent edges. 11.
A method of manufacturing a semiconductor device, comprising: depositing a semiconductor layer on a semiconductor substrate; forming a bipolar complementary metal-oxide-semiconductor device, including: forming a first transistor device and second transistor device in or on said semiconductor layer; and forming an insulating trench through said semiconductor layer such that a bottom side of said insulating trench ends in said substrate, wherein said insulating trench is located between said first and second transistor devices and at least one top corner of said insulating trench is a rounded corner in a lateral plane of said substrate; depositing insulating layers on said substrate and covering said bipolar complementary metal-oxide-semiconductor device; and forming interconnects through one or more of said insulating layers to electrically connect said first and second transistors to each other, or to other transistors of said semiconductor device. 12. The method of Claim 11, wherein forming said insulating trench includes developing a portion of a photoresist layer for said at least one top corner to have at least three edges, where adjacent edges of said portion form an interior angle of between about 90 and 180 degrees.
ISOLATION TRENCH WITH ROUNDED CORNERS FOR BiCMOS PROCESS This is directed generally to semiconductor devices and their manufacture; and, in particular, to such devices having a trench. BACKGROUND Semiconductor devices (e.g., integrated circuits) can integrate a number of different components, including different transistor types. For instance, bipolar transistors and metal-oxide-semiconductors (including complementary MOS transistors, CMOS) can be integrated on a single substrate to form BiCMOS devices. Such integration designs allow one to take advantage of selected properties of the components. E.g., bipolar transistors can be used in some analog operations requiring high speed or greater drive currents, while CMOS transistors can be used in some digital operations requiring low power dissipation. The push to increase transistor density in devices results in a smaller size and closer proximity of transistors. SUMMARY The disclosure provides, in one embodiment, a semiconductor device comprising a first transistor device on or in a semiconductor substrate and a second transistor device on or in the substrate. The device further comprises an insulating trench located between the first transistor device and the second transistor device. At least one upper corner of the insulating trench is a rounded corner in a lateral plane of the substrate. In another embodiment the semiconductor device comprises a bipolar complementary metal-oxide-semiconductor device that includes a metal-oxide-semiconductor transistor, a bipolar transistor and the above-described insulating trench. The transistors are formed in or on a semiconductor layer on a semiconductor substrate. The semiconductor layer and the substrate include a same first dopant type, and a concentration of the first dopant is greater in the substrate than in the semiconductor layer. The insulating trench is located between the metal-oxide-semiconductor transistor and the bipolar transistor, and the insulating trench is formed through the semiconductor layer and a bottom side of the insulating trench ends in the substrate. The device further comprises insulating layers on the substrate and covering the bipolar complementary metal-oxide-semiconductor device. The device also comprises interconnects through one or more of the insulating layers to electrically connect the metal-oxide-semiconductor transistor and bipolar transistor to each other, or to active or passive components of the semiconductor device. Another embodiment comprises a method of manufacturing the semiconductor device. The method comprises depositing the above-described semiconductor layer on the semiconductor substrate and forming the bipolar complementary metal-oxide-semiconductor device. Forming the bipolar complementary metal-oxide-semiconductor device includes forming the first and second transistor devices in or on the semiconductor layer and forming the above-described insulating trench between the transistor devices. The trench is formed through the semiconductor layer such that a bottom side of the insulating trench ends in the substrate. Forming the device also comprises forming the above-described insulating layers and interconnects. BRIEF DESCRIPTION OF DRAWINGS The disclosure is described with reference to example embodiments and to accompanying drawings, wherein: FIGS. 1 to 10 are cross-sectional views of steps in an example method of manufacturing a semiconductor device according to the principles of the invention; and FIG.
11 is a plan view of an example device that depicts various embodiments of an insulating trench. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS To electrically isolate transistor components on a substrate, an insulating trench is formed between the components lying adjacent to each other on the substrate. The insulating trench is configured to prevent the lateral transfer of electrical carriers (e.g., holes or electrons) between adjacent transistors. The transfer of carriers can result in a latchup phenomenon, which can cause the device to not operate properly and in some cases permanently damage the device. To deter the formation of cracks in an insulating layer in the trench, top corners of the trench are rounded. Fabricating a trench with at least one rounded top corner deters the formation of cracks or voids in materials deposited in the trench. The formation of cracks or voids in such materials undesirably reduces the trench's ability to prevent the lateral transfer of electrical carriers.
FIG. 1 shows an example device 100 after depositing a semiconductor layer 110 on a semiconductor substrate 115 (e.g., silicon wafer). The semiconductor layer 110 can include one or more epitaxially-grown silicon layers 120 on a silicon wafer substrate 115. In some cases, the semiconductor layer 110 has a thickness 125 of about 15 to 30 microns. The semiconductor layer 110 can be implanted with a first dopant type (e.g., one of an n-type or p-type dopant) followed by a first thermal diffusion process (e.g., at least about 900°C for at least about 60 minutes). Prior to depositing the semiconductor layer 110, the substrate 115 can be implanted with the first dopant type followed by a second thermal diffusion process. The semiconductor layer 110 and the substrate 115 are configured such that the concentration of first dopant type in the substrate 115 is higher than the concentration of first dopant type in the semiconductor layer 110. To minimize the number of times the device 100 is exposed to thermal cycles, the first and second thermal processes can be applied as a single thermal process after implanting all of the desired dopants into the semiconductor layer 110 or substrate 115, and in some cases, after forming the insulating trench. After diffusing the dopants, there can be a gradient zone 127 within the substrate 115 that has a gradually increasing concentration gradient of the first dopant type as one moves from the semiconductor layer 110 towards the substrate 115. FIG. 1 also shows selected steps in the formation of first and second transistor devices 130, 132 in or on the substrate 115 or, in some cases, in or on the semiconductor layer 110. The transistor devices 130, 132 can be one or more MOS transistors (e.g., pMOS, nMOS or CMOS transistors), bipolar transistors, or other conventional transistors, or two-terminal devices such as diodes, capacitors, and resistors. The transistor devices 130, 132 can be formed as part of forming a BiCMOS device 135 (e.g., a linear BiCMOS device) having such transistors 130, 132. U.S. Patent No. 4,994,887 to Flutter et al. ("Flutter"), incorporated by reference in its entirety, provides example processes for forming such BiCMOS devices. One skilled in the art would be aware of alternative processes for forming BiCMOS devices and the various configurations BiCMOS devices can have. FIG. 1 shows selected steps in the formation of example first and second transistors 130, 132 configured as a MOS transistor and bipolar transistor, respectively. FIG.
1 shows the device after forming a doped buried layer 140 by implanting a second dopant type into the substrate 115 (e.g., the second dopant type is an n-type dopant when the first dopant type is a p-type dopant), followed by a thermal diffusion process. In some embodiments the doped buried layer 140 is an n-type dopant buried layer 140 (NBL). FIG. 1 further shows the device 100 after forming well regions 150, 152 on the substrate 115 by implanting second dopant types (e.g., n-type dopants) into the semiconductor layer 110. As shown in FIG. 1, some well regions 150 can serve as the well for a MOS transistor 130 (e.g., a pMOS transistor), while other well regions 152 can serve as the deep-n well (DEEPN) for the bipolar transistor 132. FIG. 1 also shows the device 100 after forming a second well region 160 in the semiconductor layer 110 such that the second well region 160 (e.g., a deep n-well, DNWELL) is formed adjacent to at least one of the well regions 152. In some cases, the amount of second dopant type implanted into the well region 152 is greater than the amount of second dopant type implanted into the second well region 160 (e.g., the DEEPN well region 152 is n++, while the DNWELL second well region 160 is n-). In such embodiments, the second well region 160 (DNWELL) can be configured as a collector of a bipolar transistor device 132. Forming the doped buried layer 140, the well regions 150, 152 and second well region 160 can include separate thermal processes or be combined with a single thermal process. FIG. 1 further shows the device 100 after forming a doped surface layer 165 of the bipolar transistor 132 within the second well region 160, where the doped surface layer 165 is doped with the first dopant type. E.g., the doped surface layer 165 can be a p-type doped layer of a bipolar transistor 132. The doped surface layer 165 can further include an emitter layer 170 (e.g., a second dopant type such as an n-dopant) and a base layer 175 (e.g., a first dopant type such as a p-dopant). Conventional dopant implantation processes can be performed to form source and drain regions 180 of the MOS transistor 130. FIGS. 2-9 show selected steps in forming an insulating trench 200 of the device 100. FIG. 2 shows a cross-sectional view of the example device 100 after depositing a photoresist layer 210 on the substrate 115 and after developing the photoresist layer 210 (e.g., by exposure to ultraviolet or visible light) to form openings 215 as part of forming the trench 200. FIGS. 3 and 4 are views (taken along the line 3,4-3,4 in FIG. 2) of different embodiments of the developed photoresist layer 210 that defines the trench's corner.
Forming the insulating trench 200 includes forming a corner template portion 305 in the photoresist layer 210 to define a rounded trench corner. The photolithographic mask used to develop the layer 210 is shaped to approximate rounded trench corners as a plurality of adjacent and intersecting planar edges, which in turn get transferred into the corner template portion 305 when the layer 210 is developed. The portion 305 has at least three edges 310, where adjacent edges 310 form an interior angle 320 of between about 90 and 180 degrees. In some cases, as shown in FIG. 3, the portion 305 has three edges 310 and the interior angle 320 between adjacent edges 310 equals about 135 degrees.
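As a quick check, the quoted angles match the interior angle of a regular n-gon; this is standard geometry, not language from the disclosure:

\theta_n = \frac{(n-2)\cdot 180^\circ}{n},\qquad
\theta_8 = \frac{6\cdot 180^\circ}{8} = 135^\circ,\qquad
\theta_{24} = \frac{22\cdot 180^\circ}{24} = 165^\circ .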
E.g., the portion 305 of the developed photoresist layer 210 corresponds to a quarter sector of a regular octagon having three patterned edges 310, the adjacent edges 310 forming interior angles of about 135 degrees. In other cases, as shown in FIG. 4, the portion 305 has six edges 310 and the interior angle 320 between adjacent edges 310 equals about 165 degrees. E.g., the portion 305 of the developed photoresist layer 210 corresponds to a quarter sector of a 24-sided polygon having six patterned edges 310, the adjacent edges 310 forming interior angles 320 of about 165 degrees. Using a corner template portion 305 with six edges 310 (FIG. 4) is more desirable than three edges 310 (FIG. 3) because the former facilitates forming a closer approximation to a smoothly rounded trench corner than the latter. One skilled in the art would understand that other numbers of edges 310 and interior angles 320 could be used. FIG. 5 shows cross-sectional views of the device 100 after patterning the substrate 115 to form the insulating trench 200 and after removing the photoresist layer 210. Patterning can include a dry etch process such as reactive ion etching. As illustrated in FIG. 5, the trench 200 is located between the first and second transistor devices 130, 132. Although a plurality of such trenches 200 can be formed between the transistor devices 130, 132, in some cases, it is preferable to form a single trench 200. Forming a single trench minimizes the amount of area occupied by the trench, thereby facilitating the formation of a higher density of devices 130, 132 on the substrate 115. As further illustrated in FIG. 5, to improve isolation against lateral carrier transfer, some embodiments of the trench 200 are formed deeply in the substrate 115. For instance, some preferred embodiments of the trench 200 pass through the semiconductor layer 110 and into the substrate 115. E.g., the trench 200 is formed through the semiconductor layer 110 such that a bottom side 510 of the insulating trench 200 ends in the substrate 115. E.g., when the semiconductor layer 110 has a thickness 125 (FIG. 1) of about 20 microns, the trench has a depth 520 of greater than about 20 microns. Some embodiments of the trench 200 have a width of about 1 to 3 microns. FIGS. 6 and 7 show plan views of the example devices 100 analogous to those shown in FIGS. 3 and 4, respectively, but after patterning the substrate 115 and removing the photoresist layer 210. As illustrated in FIG. 6, at least one top corner 610 of the insulating trench 200 is a rounded corner in a lateral plane 620 of the substrate 115 (or semiconductor layer 110). The term top corner 610 as used herein refers to that portion of the insulating trench 200 located between two perpendicular linear portions 630, 632 of the insulating trench 200 and located at the substrate's (or semiconductor layer 110) surface 635. The term rounded corner, as used herein, refers to an interior surface 640 of the top corner 610 that forms a circular arc 645. In some embodiments, the circular arc 645 is defined by a radius of curvature 650 of about 1 micron or greater. In some cases, the radius of curvature 650 is adjusted depending upon the width 530 (FIG. 5) of the insulating trench 200. E.g., when the insulating trench's width 530 ranges from about 0.5 to 3 microns, the radius of curvature 650 ranges from about 0.5 to 3 microns. In some cases, such as shown in FIG. 6, the developed photoresist's edges 310 (FIG.
3) are patterned into the top corner 610 of the insulating trench 200. E.g., the top corner 610 can have three or more trench edges 660 with interior angles 665 between about 90 and 180 degrees. In some cases, e.g., the rounded top corner 610 has a shape that corresponds to a one-quarter sector of an octagon or a 24-sided regular polygon having three or six adjacent insulating trench edges 660 that each form interior angles 665 of about 135 and 165 degrees, respectively. In other cases, however, the developed photoresist's edges 310 are not patterned into the top corner 610 of the insulating trench 200 because the dry etch process used for patterning tends to smooth out planarities in the trench edges 660. In such cases the corner 610 forms a smooth arc with no discernable transitions between edges 660. As the number of edges 310 in the photoresist layer 210 (FIG. 3) is increased, the patterning process is more effective at smoothing the top corner 610 to form a rounded corner. E.g., for the device 100 shown in FIG. 7, patterning results in the top corner 610 of the insulating trench 200 having an interior surface 640 with a smooth arc that is free of discernable trench edges. FIG. 8 shows a cross-sectional view of the device 100 shown in FIG. 5 after filling the trench 200. In some cases, the trench 200 is entirely filled with an insulating material 810 (e.g., silicon oxide and silicon nitride) using chemical vapor deposition (CVD) or other conventional deposition process. In other cases, such as shown in FIG. 8, a conformal coating of the insulating material 810 (e.g., thermal silicon oxide) is formed in the trench 200. The interior of the trench 200 is then filled with a material 820 whose thermal expansion coefficient closely matches (e.g., within about 10 percent) that of the semiconductor layer 110 and substrate 115. E.g., the material 820 can include a conductive material such as polysilicon when the semiconductor layer 110 and substrate 115 comprise epitaxially-grown silicon and silicon wafer, respectively. Filling the trench 200 with a material 820 having such thermal properties is desirable because this makes the trench less prone to cracking when the device is subjected to thermal processes. It can be difficult, however, to deposit a void-free coating of insulating material 810 and a void-free interior of conductive material 820 in the trench 200 when the trench's width 530 is about 1 micron or less and its depth 520 is about 20 microns or greater (FIG. 5). In such cases, it can be advantageous for the trench 200 to be entirely filled with the insulating material 810. Insulating material 810 and conducting material 820 outside of trench 200 can be removed using conventional planarization processes.
FIG. 9 shows a plan view of an example device 100 at the same stage of manufacture as shown in FIG. 8 and analogous to the device shown in FIG. 7. The trench 200 is shown after depositing a conformal coating of the insulating material 810 in the trench, and filling the trench's interior with the thermally matching material 820. Having at least one, and preferably all, of the top corners 610 configured as rounded corners helps to prevent the formation of voids during the deposition of materials 810, 820 in the trench 200. Rounded top corners 610 also help to prevent the formation of cracks in the insulating material 810 when the device 100 is subjected to a thermal process.
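Related to this crack-resistance point, the thermal-matching criterion for the fill material can be stated compactly; the symbols \alpha_{820} and \alpha_{sub} are introduced here only for illustration and denote the thermal expansion coefficients of fill material 820 and of the semiconductor layer 110/substrate 115:

\frac{\lvert \alpha_{820} - \alpha_{sub} \rvert}{\alpha_{sub}} \lesssim 0.10 .

Polysilicon filling an epitaxial-silicon/silicon-wafer trench satisfies this comfortably, since the expansion coefficients of polycrystalline and single-crystal silicon are nearly identical, which is consistent with the material choice described above.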
Because the top corners 610 are rounded, a thickness 910 of the conformal coating of insulating material 810 deposited at the rounded corner 610 is substantially equal to (e.g., within 10 percent) a thickness 920 of the insulating material 810 deposited at linear portions 630 of the trench 200. Having a substantially same thickness 910 deters crack formation at the corners 610 because the insulating material 810 in the corners 610 is not subjected to greater thermally-induced stresses than the insulating material 810 at linear portions 630 of the trench 200. FIG. 10 shows a cross-sectional view of the device 100 shown in FIG. 1 after forming a gate structure 1005 of the MOS transistor 130, and transistor contacts 1010. FIG. 10 also shows the device 100, after depositing insulating layers 1020 (e.g., pre-metal and interlayer dielectric layers) on the substrate 115 and covering the transistors 130, 132 (e.g., the BiCMOS device 135). FIG. 10 also shows the device after forming interconnects 1030 (e.g., lines, vias, trenches including single or dual damascene structures, or other conventional interconnect structures) through one or more of the insulating layers 1020 to electrically connect the first and second transistors 130, 132 to each other, or to active (e.g., other transistors) or passive devices (resistors, inductors, capacitors, diodes) of the device 100. FIG. 10 also illustrates another embodiment, the semiconductor device 100 itself. Embodiments of the device 100 can be configured as an integrated circuit semiconductor device. Any of the above-described methods of manufacture can be used to fabricate the device 100 and the device 100 can comprise any of the above-discussed embodiments of its component structures. For instance, the semiconductor device 100 can comprise first and second transistor devices 130, 132 (each device including one or more MOS or bipolar transistors) on or in the semiconductor layer 110. The device 100 also includes an insulating trench 200 located between the devices 130, 132. At least one upper corner 610 of the insulating trench 200 is a rounded corner in a lateral plane 620 of the substrate 115. Some embodiments of the device 100 can include a BiCMOS device 135 that includes semiconductor layer 110, transistors 130, 132, and trench 200. Some embodiments of the device 100 can include the above-described insulating layers 1020 and interconnects 1030. Some embodiments of the trench 200 are formed through the semiconductor layer 110 and a bottom side 510 of the trench 200 ends in the substrate 115. The depth 520 to which the trench is formed is carefully selected to optimize the ability of the trench 200 to deter the transfer of carriers from one device 130 to another device 132. By forming a sufficiently deep trench 200, the potential to encounter a latchup phenomenon during the operation of the device 100 can be reduced.
For instance, in some embodiments, the bottom side 510 of the trench 200 stops at a depth 520 in the substrate 115 (e.g., the gradient zone 127) where the dopant concentration in the substrate 115 is greater than an injected carrier concentration in the first transistor device 130. The specific value of such a depth 520 depends on a number of factors including the concentrations of first and second dopants in the semiconductor layer 110, the substrate 115, well region 150, and the voltage applied to the transistor device 130.
Consider the case where the first transistor device 130 includes a semiconductor layer 110 with a thickness 125 of about 20 microns having a first (e.g., p-type) dopant type concentration of about 1E15 to 5E15 atoms/cm3, the substrate has a first dopant type concentration of about 1E19 to 5E19 atoms/cm3, the well region 150 has a second (e.g., n-type) dopant type concentration of about 1E16 to 1E17 atoms/cm3, and about 50 to 100 Volts is applied to the device 100 (e.g., to the source region 180 of a MOS transistor 130). In such embodiments, the trench 200 is formed to a depth 520 of about 20 microns, which corresponds to a first dopant type concentration of about 5E17 atoms/cm3 in the substrate 115 (e.g., in the gradient zone 127). The insulating trench 200 can have a variety of configurations to facilitate deterring the transfer of carriers between adjacent transistor devices 130, 132. E.g., as shown for the example device 100 depicted in FIG. 9, the insulating trench 200 has a single linear segment 930, wherein the rounded upper corner 610 is located at one or both ends 940 of the single linear segment 930. This is illustrated in FIG. 11, which presents a plan view of the device 100 analogous to FIG. 9 but at a lower magnification. For clarity, details of the transistor devices 130, 132 are not shown. However, the devices 130, 132 could each include one or more of the above-described MOS and bipolar transistor devices, or other types of transistor devices. As illustrated, one embodiment of the trench 200 comprises a single linear segment 930, and the rounded upper corner 610 is located at one or both ends 940 of the trench 200. As also illustrated in FIG. 11, other embodiments of the trench 200 can entirely or partially surround the transistor devices 130, 132. E.g., some embodiments of the trench 200 include four linear segments 930 that are coupled together by connecting corners 1105 to form a ring 1110 surrounding at least one of the first or second transistor devices 130, 132. At least one, and in some cases all, of the corners 1105 has the rounded top corner 610. The ring 1110 can have a rectangular, or other closed-shaped, structure.
As further illustrated in FIG. 11, there can be a plurality of insulating trenches 200 located between the first and second transistor devices 130, 132. In some cases, e.g., there can be a plurality of insulating trenches 200 that form concentric rings 1110, 1115 of insulating trenches 200 around one or both of said first and second transistor devices 130, 132. As also illustrated in FIG. 11, some embodiments of the trench 200 have two perpendicular linear segments 930 that are coupled together by a connecting corner 1105 having the rounded top corner 610 to form an L-shaped structure 1130 that partially surrounds at least one of the first or second transistor devices 130, 132. Such a configuration can be advantageous when one of the transistor devices 130 is located at a corner of the substrate 115 (e.g., the corner of an integrated circuit die). Other open structures having two or more linear segments 930 and connecting corners 1105 having at least one rounded top corner 610 are also within the scope of the disclosure. Those skilled in the art to which the disclosure relates will appreciate that other and further additions, deletions, substitutions, and modifications may be made to the described example embodiments, without departing from the claimed invention.
A channel strained multi-gate transistor with low parasitic resistance and a method of manufacturing the same. A gate stack may be formed over a semiconductor fin having a gate-coupled sidewall height (Hsi); an etch rate controlling dopant may be implanted into a source/drain region of the semiconductor fin adjacent to the gate stack and into a source/drain extension region of the semiconductor fin. The doped fin region may be etched to remove a thickness of the semiconductor fin equal to at least Hsi proximate a channel region and form a source/drain extension undercut. A material may be grown on the exposed semiconductor substrate to form a regrown source/drain fin region filling the source/drain extension undercut region.
CLAIMS What is claimed is: 1. A method of forming a multi-gate transistor, comprising: forming a gate stack over a channel region of a semiconductor fin having a gate-coupled channel sidewall height (Hsi); implanting an etch rate controlling dopant into a source/drain region of the semiconductor fin adjacent to the gate stack; etching the doped fin region to remove a thickness of the semiconductor fin equal to approximately Hsi and form a source/drain extension cavity exposing a semiconductor substrate portion subjacent to a portion of the gate stack; and growing a material on the exposed semiconductor substrate to form a regrown source/drain fin region filling the source/drain extension cavity and extending a length away from the gate stack in a direction substantially parallel to a length of the channel. 2. The method of claim 1, wherein the regrown source/drain fin region is grown to a greatest width, along a dimension parallel to a transistor channel width (Wsi), that is greater than Wsi. 3. The method of claim 1, wherein an etch rate of the doped fin region is higher than an etch rate of the subjacent semiconductor substrate, and wherein the material regrown on the exposed semiconductor substrate contains silicon. 4. The method of claim 1, wherein the etching of the doped fin regions forms the source/drain extension cavity with an undercut length (Xuc) along a dimension perpendicular to a transistor channel width (Wsi) that is substantially constant across the height Hsi. 5. The method of claim 4, wherein the undercut length (Xuc) extends subjacent to a portion of the gate stack and decreases with etch depths greater than the height Hsi. 6. The method of claim 4, wherein the undercut length Xuc is constant across the transistor channel width Wsi. 7. The method of claim 1, further comprising: forming a first pair of spacers on laterally opposite sidewalls of the gate stack and a second pair of spacers on laterally opposite sidewalls of the semiconductor fin subsequent to implanting the dopant, wherein the first pair of spacers are disposed over the implanted regions of the semiconductor fin and the second pair of spacers are disposed adjacent to the implanted regions of the semiconductor fin; and removing the second pair of spacers prior to growing the silicon-containing material, the second pair of spacers removed without etching the first pair of spacers enough to expose a gate electrode layer of the gate stack. 8. The method of claim 7, wherein removing the second pair of spacers further comprises an etching performed subsequent to the etching of the doped fin regions. 9. The method of claim 7, wherein removing the second pair of spacers further comprises an etching performed during the etching of the doped fin regions. 10. The method of claim 1, wherein etching the doped fin region to remove a thickness of the semiconductor fin equal to at least Hsi further comprises etching a thickness approximately equal to Hsi in a region of the semiconductor fin proximate to a channel region and etching a thickness greater than Hsi in a region of the semiconductor fin distal from the channel region. 11. The method of claim 10, wherein the etching of a thickness greater than Hsi in the region of the semiconductor fin distal from the channel region comprises recessing the semiconductor substrate below a top surface of an isolation region disposed below a portion of the gate stack adjacent to the semiconductor fin. 12.
The method of claim 1, wherein the implanting of the dopant comprises implanting at least one of carbon, phosphorous, or arsenic; and wherein the etching of the doped fin region comprises a dry etch including a mixture of Cl2 and another compound selected from the group consisting of NF3, HBr, SF6, and Ar. 13. A multi-gate transistor comprising: a gate stack including a gate dielectric and a gate electrode disposed over a channel region of a semiconductor fin extending from a semiconductor substrate, the channel region having a gate-coupled channel sidewall height of Hsi; a regrown source/drain semiconductor fin disposed on the substrate, the regrown source/drain semiconductor fin including a source/drain extension region adjacent to the channel region, wherein the source/drain extension region and the channel region form an interface along a height that is approximately equal to Hsi. 14. The multi-gate transistor of claim 13, wherein the source/drain extension region is subjacent to the gate stack by an amount, along a dimension perpendicular to a transistor channel width (Wsi), that is constant across the height Hsi. 15. The multi-gate transistor of claim 14, wherein the amount the source/drain extension region overlaps the gate stack decreases for heights greater than Hsi, as measured from the gate dielectric interface. 16. The multi-gate transistor of claim 13, wherein the regrown source/drain fin height distal from the channel region is equal to at least Hsi with a regrown source/drain fin width, along a dimension parallel to a transistor channel width (Wsi), greater than Wsi. 17. The multi-gate transistor of claim 16, wherein the regrown source/drain fin width is greater than Wsi along at least half the height Hsi. 18. The multi-gate transistor of claim 13, wherein laterally opposite sides of the gate stack are adjacent to a dielectric spacer, and wherein an interlayer dielectric (ILD) is in contact both with an outer sidewall of the dielectric spacer and with a sidewall portion of the regrown source/drain fin located within the height Hsi. 19. The multi-gate transistor of claim 13, wherein the gate stack comprises a high-k gate dielectric layer and a metal gate electrode, wherein the regrown source/drain fin region comprises carbon and phosphorus doped silicon or boron doped silicon germanium to strain the channel region. 20. The transistor of claim 19, wherein the source/drain extension region is subjacent to the high-k gate dielectric layer by a distance greater than zero.
MULTI-GATE SEMICONDUCTOR DEVICE WITH SELF-ALIGNED EPITAXIAL SOURCE AND DRAIN BACKGROUND [0001] For increased performance, it is often desired to reduce the transit time of electrons in N-type metal oxide semiconductor (NMOS) device channel regions and of positively charged holes in P-type MOS device (PMOS) channel regions used in complementary metal oxide semiconductor (CMOS) devices on a substrate (e.g., integrated circuit (IC) transistor, etc. on a semiconductor substrate). Reduction in channel lengths is a favored way of reducing transit times; however, because such reductions induce short channel effects, multi-gate devices have been developed in which the channel regions are a portion of a non-planar semiconductor body, or "fin", covered by a gate stack. For such multi-gate devices, the transistor can be gated by the gate stack through a sidewall, as well as a top surface of the fin, for better gate control. [0002] With the improved gate control possible with multi-gate designs, the dimension of the fin may be scaled to such a point that the contact to the fin results in a parasitic resistance, Rexternal, which can severely limit the operational performance of the multi-gate device. One method of reducing the overall resistance is to dope the fin source/drain regions. For instance, a dopant may be implanted in the source/drain regions and an anneal may be carried out to activate and diffuse the dopant towards the channel region. [0003] Where an implant and diffusion method is used, the ability to control the dopant concentration and location within the fin is limited. Furthermore, the size of other parts of a MOS device, such as the presence of a spacer around the fin, can also greatly hinder reductions in Rexternal. [0004] Furthermore, because fin structures are free of a surrounding substrate, strain-inducing mobility enhancement techniques which have proved advantageous in the past for planar devices may not be readily adaptable to multi-gate devices. Without the ability to enhance channel mobility via strain (e.g., uniaxial or biaxial), the performance improvement in multi-gate devices resulting from the smaller channel lengths possible would be at least partially offset by a comparatively lower channel mobility.
Accordingly, improved methods and structures are needed to overcome these limitations in the source/drain fin regions.
BRIEF DESCRIPTION OF THE DRAWINGS [0005] Embodiments of the invention, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which: [0006] Figure 1 is a flow diagram depicting a method of forming source and drain epitaxial extensions in a multi-gate device, in accordance with an embodiment of the invention; [0007] Figure 2A is an isometric view of a stage in the fabrication of a multi-gate device corresponding to operation 106 in Figure 1, in accordance with an embodiment of the invention; [0008] Figure 2B is a cross-sectional view of the device depicted in Figure 2A; [0009] Figure 3A is an isometric view of a stage in the fabrication of a multi-gate device corresponding to operation 108 in Figure 1, in accordance with an embodiment of the invention; [0010] Figure 3B is a cross-sectional view of the device depicted in Figure 3A; [0011] Figure 4A is an isometric view of a stage in the fabrication of a multi-gate device corresponding to operation 110 in Figure 1, in accordance with an embodiment of the invention; [0012] Figure 4B is a view of the cross-section along the B-B' plane of the device depicted in Figure 4A, in accordance with an embodiment of the invention; [0013] Figure 4C is a view of the cross-section along the B-B' plane of the device depicted in Figure 4A, in accordance with an embodiment of the invention; [0014] Figure 5A is a first cross-sectional view of a stage in the fabrication of a multi-gate device corresponding to operation 112 in Figure 1, in accordance with an embodiment of the invention; [0015] Figure 5B is a second cross-sectional view, orthogonal to the view in Figure 5A, of a stage in the fabrication of a multi-gate device corresponding to operation 112 in Figure 1, in accordance with an embodiment of the invention; [0016] Figure 6A is a first cross-sectional view of a stage in the fabrication of a multi-gate device corresponding to operation 114 in Figure 1, in accordance with an embodiment of the invention; [0017] Figure 6B is a second cross-sectional view, orthogonal to the view in Figure 6A, of a stage in the fabrication of a multi-gate device corresponding to operation 114 in Figure 1, in accordance with an embodiment of the invention; [0018] Figure 7 is a cross-sectional view of a stage in the fabrication of a multi-gate device corresponding to operation 116 in Figure 1, in accordance with an embodiment of the invention; [0019] Figure 8 is a cross-sectional view of a stage in the fabrication of a multi-gate device corresponding to operation 118 in Figure 1, in accordance with an embodiment of the invention; and [0020] Figure 9 is a cross-sectional view of a stage in the fabrication of a multi-gate device corresponding to operation 120 in Figure 1, in accordance with an embodiment of the invention. [0021] It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION [0022] Described herein are systems and methods of forming epitaxial source and drain extensions in a multi-gate MOS device (e.g., "finfet"). In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present invention may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to one skilled in the art that the present invention may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative embodiments. [0023] Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the illustrative embodiments of the present invention; however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. [0024] Disclosed herein is a multi-gate device including epitaxial source and drain fin regions having a vertical thickness equal to approximately Hsi proximate the channel and which may further include a portion of the epitaxial source and drain fin regions regrown to be disposed under the gate dielectric layer of the transistor. Figure 1 is a flow diagram depicting a method 100 of forming such regrown source/drain regions, in accordance with an embodiment of the invention including source and drain epitaxial extensions. Figures 2-9 depict a multi-gate device after particular operations of the method 100 as it is performed. [0025] Method 100 begins with an ion implantation operation 106 carried out to form doped regions of a semiconductor fin adjacent to a gate stack disposed on the semiconductor fin. The doped regions are to be removed in preparation for regrowing source and drain regions for the multi-gate MOS transistor being formed. When exposed to an appropriate etchant, the doped regions will have an etch rate that is higher than the etch rate of the surrounding substrate and channel semiconductor material, which enables excellent control of the etch profile, allowing shaping of the regrown source and drain regions for optimal sub-fin leakage characteristics and channel strain. [0026] Figure 2A is an isometric view of a gate stack formed over a semiconductor fin as provided at operation 106 in Figure 1, in accordance with an illustrative embodiment of the invention. Figure 2B represents a cross-sectional view of the multi-gate transistor of Figure 2A taken along the A-A' reference line illustrated in Figure 2A. As shown in Figures 2A and 2B, a non-planar semiconductor body over a substrate 202 forms a fin with a parallelepiped shape having a sidewall 207 with a sidewall height, Hsi, and a top surface 211 extending beyond an adjacent isolation region 210. The top surface 211 and sidewall 207 are apportioned into a non-planar source region 215 and non-planar drain region 216 with a channel region therebetween covered by a gate stack 217.
For the multi-gate transistor, the channel is to be capacitively controllable at least through sidewall 207 such that Hsi represents the gate-coupled channel sidewall height. The top surface 211 may also be capacitively controllable by an overlying gate stack for greater sub-threshold control. In the exemplary embodiment, the gate stack 217 is sacrificial and subsequently removed for a replacement metal gate process. However, the methods described herein may also be adapted to embodiments in which the gate stack 217 is not sacrificial but rather retained in the final multi-gate device. [0027] In the exemplary embodiment, the substrate 202 is a bulk silicon or a silicon-on-insulator substructure. The semiconductor substrate 202 may, however, also be formed using alternate materials, which may or may not be combined with silicon, that include, but are not limited to, germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide. Although a few examples of materials from which the substrate may be formed are described, any material known in the art upon which a semiconductor device may be built falls within the spirit and scope of the present invention. [0028] As shown, the gate stack 217 includes a gate dielectric 212, a gate electrode 213, and a gate cap 214. The gate dielectric 212 may be a silicon dioxide, silicon nitride, silicon oxynitride, or a dielectric material having a dielectric constant greater than 10 (i.e., "high-k"). Examples of high-k gate dielectric materials that may be used include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. The gate electrode 213 may be poly-silicon, poly-germanium, a metal, or a combination thereof. The gate cap 214 may be any conventional hard mask dielectric material such as silicon oxide, silicon nitride, and the like. [0029] Figure 2B illustrates how a first non-planar body 250 and a second non-planar body 225 may be formed on opposite sides of the doped fin region 208. The second non-planar body 225 may either be the basis for another functional transistor or merely a dummy structure providing a means to control one or more aspects of the fabrication of the first non-planar body 250. As such, Figure 2B illustrates two different exemplary interfaces of the doped fin region 208: an interface with the isolation region 210 and an interface with a second non-planar semiconductor body. It should be appreciated that the doped fin region 208 may have an end distal from the first non-planar body 250 that abuts either one of these two interfaces. [0030] The dopant used in the ion implantation operation 106 is chosen based on its ability to increase the etch rate of the semiconductor fin material in which it is implanted. The specific dopant may therefore vary based on the substrate material and the etchant used in a subsequent etching of the doped fin. Exemplary dopants increase the etch rate of silicon, germanium, or indium antimonide. In particular embodiments, specific dopants include, but are not limited to, carbon, phosphorous, and arsenic. For instance, carbon may be used at a dosage that ranges from 1 x 10^14 to 1 x 10^16 atoms/cm3.
Phosphorous may be used at a dosage that ranges from 1 x 10^14 to 5 x 10^15 atoms/cm3. Arsenic may be used at a dosage that ranges from 1 x 10^14 to 5 x 10^15 atoms/cm3. The ion implantation may be performed in a substantially vertical direction (i.e., a direction perpendicular to the substrate). However, in some embodiments, at least a portion of the ion implantation process may occur in an angled direction to implant ions below the gate stack 217. For non-replacement gate embodiments, the gate cap 214 may be formed of a sufficient thickness to prevent doping of the gate electrode 213. With the etch profile control dopants present in the semiconductor fin, an anneal may be carried out to complete operation 106. The anneal drives the dopants further into the semiconductor fin and reduces any damage sustained by the substrate during the ion implantation. The exemplary anneal is between 700°C and 1100°C for a duration of up to one minute, for instance, a duration of five seconds. [0031] The size of the doped fin region 208, including the depth, may vary based on the requirements of the multi-gate MOS transistor being formed. As shown in Figures 2A and 2B, following the implant operation 106, the doped fin region 208 proximate to the channel region 205 extends to a depth of the semiconductor fin no greater than the height Hsi. In the exemplary embodiment illustrated in Figure 2B, the doped fin region 208 forms a substantially perpendicular sidewall interface 209A with the channel region 205. The substantially perpendicular sidewall interface 209A extends through approximately the entire thickness of the semiconductor fin of height Hsi. In one embodiment, the doped fin region 208 further forms a bottom interface 209B with the subjacent semiconductor substrate 202 that is substantially planar with a top surface of the isolation region 210. In another embodiment, the doped fin region 208 forms a bottom interface 209C with the subjacent semiconductor substrate 202 that is an amount DR below a top surface of the isolation region 210. In either case, there may be a transitional interface 245 which slopes laterally away from the gate stack 217, with the transitional interface 245 preferably beginning a distance which is not more than Hsi below the gate dielectric 212. As further shown in Figures 2A and 2B, portions of the doped fin region 208 are sited below, or subjacent to, the gate stack 217 by an amount, XIM. In the exemplary embodiment, the amount by which the doped fin region 208 overlaps the gate stack 217 is substantially constant for the height Hsi (along the interface 209A), with the amount of overlap decreasing at depths greater than Hsi (forming the transitional interface 245). [0032] Returning to Figure 1, at operation 108, spacers are formed on either side of the gate stack and the semiconductor fin. The spacers may be formed using conventional dielectric materials, including but not limited to silicon oxide or silicon nitride. The width of the spacers may be chosen based on design requirements for the multi-gate transistor being formed. Figures 3A and 3B illustrate a gate stack spacer 319 formed on sidewalls of the gate stack 217. Formation of the gate stack spacer 319 also forms a fin spacer 318 on the sidewalls of the semiconductor fin, in particular adjacent to the doped fin region 208, and disposed on the isolation region 210. [0033] Returning to Figure 1, an etch process is performed at operation 110 to etch the doped fin region.
In particular embodiments, this etch process may also form a cavity below the gate stack in which regrown source/drain regions may be subsequently formed. The etch operation 110 uses etchants that complement the dopant used in the ion implantation process to increase the etch rate of the doped regions. This enables the etching process to remove the doped fin region at a faster rate than the remainder of the undoped (or more lightly doped) substrate. As such, with an appropriate increase in etch rate, the etching process can selectively remove substantially the entire semiconductor fin (i.e., the entire height Hsi over the entire channel width Wsi, as shown in Figure 3A) and retain only the channel region with good profile and depth control. This includes portions of the doped regions that undercut the gate stack spacer and the gate dielectric, thereby defining self-aligned source/drain fin extensions of the multi-gate transistor.

[0034] In accordance with an exemplary embodiment of the invention, the etch operation 110 includes a dry etch using a chlorinated chemistry in combination with at least one of NF3, HBr, and SF6, with Ar or He used as a carrier gas. The flow rate for the active etchant species may vary between 50 and 200 standard cubic centimeters per minute (SCCM), while the flow rate of the carrier gas may vary between 150 and 400 SCCM. A high energy plasma may be employed at a power that ranges from 700 W to 1100 W with zero RF bias or an RF bias of less than 100 W. The reactor pressure may range from around 1 pascal (Pa) to around 2 Pa. In further embodiments, the etch operation 110 further includes a wet etch to clean and further etch the semiconductor substrate 202 where the doped fin region 208 was removed. Conventional wet etch chemistries known in the art for cleaning silicon and oxide material may be used. For instance, wet etch chemistries capable of removing silicon along its crystallographic planes may be used.

[0035] Figures 4A, 4B and 4C depict the multi-gate device after the etch operation 110 is performed. In the exemplary embodiment, the source/drain extension cavity 421 is etched with an undercut amount Xuc that is controlled on the basis of the implant profile XIM to be substantially constant over an etched depth that is approximately equal to Hsi. In particular embodiments, Xuc can range from greater than 0 to 12 nm for a gate length range of 15 to 40 nm, while the gate stack length (the dimension parallel to Xuc) over the entire gate-coupled channel height Hsi is, as an example, approximately 25 nm. In an alternative embodiment where traditional tip implants are to be employed and the regrown source/drain is not to interface with the channel directly, Xuc is 0. Etching the source/drain fin regions and forming a source/drain extension undercut amount Xuc which is approximately constant over Hsi enables a greater amount of stress to be applied to the channel region 205 than if the source/drain extension cavity 421 had a depth less than Hsi or if the undercut amount Xuc were reduced (e.g., 0 for the tip implant embodiments). Application of greater stress has the effect of increasing Id,sat for the multi-gate transistor.
Etching the source/drain fin regions down to Hsi also maximizes the area of the channel region 205 available to be contacted by a subsequently regrown source/drain region for reduced external resistance (Rexternal).

[0036] However, it has also been found that sub-fin leakage (source-to-drain leakage below the channel region 205) is a function of the fin etch depth proximate to the channel region 205, with such leakage increasing significantly where the undercut amount Xuc is not reduced for depths greater than Hsi (as measured from the interface of the gate dielectric 212). Thus, the depth and profile of the fin etch should be optimized between stress and channel leakage. As such, in one embodiment providing a substantially flat-bottomed etch profile, the thickness of the doped fin region 208 removed during the etch operation 110 should not be greater than Hsi so that both the source/drain cavity 420 and the source/drain extension cavity 421 are substantially planar to, or flush with, an adjacent isolation region 210 disposed under the gate stack 217 (Figure 2A). In certain embodiments, the surface of the isolation region 210 not covered by the gate stack 217 is recessed as a result of the fabrication processes.

[0037] For embodiments where the implant and/or etch is engineered to provide for a tapered or sloped profile away from the channel region 205, the thickness of the doped fin region 208 removed during the etch operation 110 may be greater than Hsi at a point distal from the channel region 205. For such an embodiment, the source/drain cavity 420 is recessed (dashed line 422) an amount below the gate stack protected areas of the isolation region 210, while a portion of the source/drain extension cavity 421 proximate to the channel region 205 is substantially planar to, or flush with, areas of the isolation region 210 disposed under the gate stack (corresponding to a source/drain recess depth approximately equal to Hsi). For this embodiment, the undercut amount Xuc of the source/drain extension cavity 421 decreases as a function of etch depth greater than the threshold etch depth of Hsi (as illustrated with the slope in 422).

[0038] At operation 112, the fin spacer 318 is removed. Depending on the embodiment, the spacer removal operation 112 is performed prior to the doped fin etch operation 110, during the doped fin etch operation 110, or after the doped fin etch operation 110. In the embodiment shown in Figures 4A, 4B and 4C, the source/drain etch operation 110 is selective to dielectric materials (e.g., to maintain dielectric encapsulation of the gate electrode 213) and both the gate stack spacer 319 and the fin spacer 318 are retained after the etch operation 110. In such an embodiment, the fin spacer 318 remains as a dielectric veil surrounding the source/drain cavity 420. For an embodiment where the source/drain etch operation 110 is less selective to dielectric materials, the fin spacer 318 may be partially or completely removed during the doped fin etch operation 110 (in which case operations 110 and 112 of Figure 1 are performed simultaneously).

[0039] For embodiments where at least some portion of the fin spacer 318 remains after operation 110, the fin spacer 318 is removed preferentially to the semiconductor substrate 202 in a manner which retains the gate stack spacer 319 and the gate cap 214, as further depicted in Figures 5A and 5B. In one embodiment, an isotropic etch process (dry or wet) is utilized to etch the fin spacer 318.
For such embodiments, the fin spacer 318 may be etched away from the surface of the isolation region 210 while the gate stack spacer 319 and the gate cap 214 are only partially thinned. With the gate electrode 213 remaining encapsulated following removal of the fin spacer 318, the gate electrode does not provide a seed surface during a subsequent source/drain regrowth.

[0040] Returning to Figure 1, at operation 114 the source/drain cavity 420, including the source/drain extension cavity 421, is filled with a material using a selective epitaxial deposition process to form a regrown source/drain fin. In an embodiment, as depicted in Figures 6A and 6B, the material forming the source/drain fin 618 induces a strain on the channel region 205. In accordance with particular embodiments, the material forming the regrown source/drain fin 618 contains silicon and follows the crystallinity of the substrate 202 but has a lattice spacing that is different than the lattice spacing of the substrate 202. The difference in lattice spacing induces a tensile or compressive stress in the channel region of the MOS transistor that is accentuated by depositing the silicon alloy in the source and drain extension cavity 421. As is known to those of skill in the art, deciding whether to induce a tensile stress or a compressive stress will depend on whether an NMOS or a PMOS transistor is being formed.

[0041] The epitaxial deposition operation 114 therefore regrows source/drain regions and source/drain extensions in one process. For embodiments in which the regrown source/drain regions fill an undercut having an Xuc that is greater than 0, the epitaxially regrown source/drain fin 618 may have a more abrupt interface 609A than embodiments which employ tip implants to place dopant at the interface to the channel (e.g., Xuc is 0). In other words, the interface 609A between the epitaxially regrown source/drain fin 618 and the channel region 205 is well-defined by the regrowth process. On one side of the interface 609A is the epitaxially deposited doped silicon material and on the other side of the interface 609A is the substrate material that makes up the channel region 205. The dopants in the regrown source/drain fin 618 may diffuse into the channel region 205, but such diffusion is engineered by controlling the location of the Xuc dimension (i.e., the location of the interface 209A with the channel region 205) and by optimizing the temperature of the EPI deposition and subsequent thermal treatments. This enables the regrown source/drain regions to bring the heavily doped source/drain material in very close proximity to the channel region 205 relative to conventional techniques (i.e., the undercut amount Xuc may overlap the gate stack extensively). As will be appreciated by those of skill in the art, this in turn enables the channel length to be scaled down without having to reduce the dimension of the gate stack.

[0042] In an embodiment, the source/drain regions are regrown to a thickness of at least Hsi. In a further embodiment, the source/drain regions are regrown to a width of at least Wsi, and preferably to a width greater than Wsi, as depicted in Figure 6B. Forming the regrown source/drain fin 618 at the height Hsi and in relatively close proximity to the channel region 205 imparts a large hydrostatic stress on the channel. As previously described, this stress increases the strain within the channel region 205, thereby increasing mobility in the channel and increasing drive current.
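To make the channel-length benefit of the undercut concrete, consider the following worked example. It is supplied here purely for illustration; the symbols Lg (the patterned gate stack length) and Leff (the resulting effective channel length) are introduced for this sketch and do not appear in the disclosure:

    % Illustrative only: effective channel length under a symmetric undercut Xuc
    % on both the source and drain sides of the gate stack.
    \[ L_{\mathrm{eff}} \approx L_{g} - 2\,X_{uc} \]
    % Using the example values cited above:
    \[ L_{g} = 25\ \mathrm{nm},\quad X_{uc} = 5\ \mathrm{nm}
       \;\Rightarrow\; L_{\mathrm{eff}} \approx 25 - 2(5) = 15\ \mathrm{nm}. \]

Under this reading, the undercut, rather than the lithographically defined gate dimension, determines how close the heavily doped source/drain material sits to the channel.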
In the exemplary embodiment with the fin spacer 318 removed, the source/drain regions are regrown defect free, or with significantly lower defects than possible with sidewall growth constraints. In the absence of the fin spacer 318, the lateral epitaxial growth of the regrown source/drain fin 618 is unobstructed, thereby allowing the formation of {111} facets and continued growth on {111} planes to extend over a portion of the isolation region 210, as further shown in Figure 6A. Of course, the epitaxial growth facets are dependent on the crystal orientation of the underlying substrate 202, such that different substrate orientations will result in different epitaxial facets. The width of the regrown source/drain fin 618 is therefore greater than the width of the doped fin region 208 which was removed. Thus, the channel region 205 has a width Wsi that is smaller than the width of the regrown source/drain fin 618. For instance, the width of the regrown source/drain fin 618 may be between 10% and 100% wider than Wsi to optimize performance. In one embodiment, the width of the regrown source/drain fin 618 is greater than Wsi along at least half the height Hsi. In other words, as the regrown source/drain fin 618 is formed, it reaches a width greater than Wsi by the time the regrown source/drain thickness is approximately ½Hsi. The relatively wider regrown source/drain fin 618 provides for a larger surface area upon which a metallized contact may be made, thereby reducing Rexternal relative to a source/drain region having a width equal to Wsi. The greater width of the regrown source/drain fin 618 also increases the amount of strain placed on the channel region 205.

[0043] In certain embodiments, a silicon alloy is employed for the regrown source/drain fin 618. The alloy may impart strain on the channel region 205. Depending on the embodiment, the alloy may be in-situ boron doped silicon germanium (e.g., for a PMOS multi-gate transistor with a compressively strained channel), in-situ carbon and phosphorus doped silicon (e.g., for an NMOS multi-gate transistor with a tensilely strained channel), or in-situ phosphorus doped silicon. In alternate implementations, other silicon alloys may be used. For instance, alternate silicon alloy materials that may be used include, but are not limited to, nickel silicide, titanium silicide, and cobalt silicide, which may be doped with one or more of boron and/or aluminum. In still other embodiments, non-silicon materials are employed (e.g., pure germanium, germanate, etc.).

[0044] For one NMOS transistor embodiment, the regrown source/drain fin 618 may be filled with carbon doped silicon. The carbon doped silicon may be epitaxially and selectively deposited. In further implementations, the carbon doped silicon may be further doped in situ with phosphorus. The carbon concentration may range from 0.5 atomic % to 5.0 atomic %. The phosphorus concentration may range from 5×10¹⁹/cm³ to 3×10²¹/cm³. The thickness of the carbon doped silicon may range from 400 Å to 1200 Å. The carbon and phosphorus doped silicon may be denoted as (C,P)ySi(1-y). The deposition of the doped (C,P)ySi(1-y) source and drain regions may be carried out in a chemical vapor deposition reactor using a co-flown or cyclical deposition-and-etch sequenced process. In one example, the film is formed by cyclical deposition and etch based on silane (SiH4), dichlorosilane, disilane, PH3, CH3SiH3, and chlorine (Cl2) or HCl chemistry.
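As a rough numerical illustration of the lattice-mismatch mechanism described above, applied to the PMOS silicon germanium fill discussed next, a linear (Vegard-type) interpolation of the Si and Ge lattice constants can be used. This example and its numbers are supplied here for exposition only and are not part of the disclosure:

    % Illustrative Vegard-type estimate of the SiGe/Si lattice mismatch.
    \[ a_{\mathrm{Si_{1-x}Ge_{x}}} \approx (1-x)\,a_{\mathrm{Si}} + x\,a_{\mathrm{Ge}},
       \qquad a_{\mathrm{Si}} \approx 5.431\ \text{Å},\quad a_{\mathrm{Ge}} \approx 5.658\ \text{Å} \]
    % For x = 0.30 (within the 10-80 atomic % germanium range cited below):
    \[ a \approx 5.499\ \text{Å}, \qquad
       \varepsilon = \frac{a - a_{\mathrm{Si}}}{a_{\mathrm{Si}}} \approx 1.3\% \]

The roughly 1.3% larger lattice of the regrown alloy, confined within the source/drain cavity, is what compresses the adjacent channel region 205.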
[0045] For one PMOS transistor embodiment, the regrown source/drain fin 618 may be filled with silicon germanium. The silicon germanium may be epitaxially deposited. The germanium concentration may range from 10 atomic % to 80 atomic %. In further implementations, the silicon germanium may be further doped in situ with boron. The boron concentration may range from 2×10¹⁹/cm³ to 2×10²¹/cm³. The thickness of the silicon germanium may range from 40 Å to 1500 Å. Deposition of the doped silicon germanium may be carried out in a CVD reactor, an LPCVD reactor, or an ultra high vacuum CVD (UHVCVD) reactor. The reactor temperature may fall between 600°C and 800°C and the reactor pressure may fall between 1 and 760 Torr. The carrier gas may consist of hydrogen or helium at a flow rate that ranges between 10 and 50 SLM.

[0046] As will be appreciated by those of skill in the art, the multi-gate MOS transistor may undergo further processing, such as replacement gate oxide processes, replacement metal gate processes, annealing, or salicidation processes, that may further modify the transistor and/or provide the necessary electrical interconnections. For instance, after the epitaxial deposition of the regrown source/drain fin 618, an interlayer dielectric (ILD) may be deposited and planarized over the multi-gate device at operation 116 (Figure 1), and as further shown in Figure 7. Because the fin spacer 318 was removed, the ILD 723 is deposited directly on sidewalls of the regrown source/drain fin 618, and as such, is in contact both with a sidewall of the gate stack spacer 319 and with a sidewall portion of the regrown source/drain fin 618 located within the height Hsi. The ILD 723 may be formed using materials known for their applicability in dielectric layers for integrated circuit structures, such as low-k dielectric materials. Such dielectric materials include, but are not limited to, oxides such as silicon dioxide (SiO2) and carbon doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. The dielectric layer 723 may include pores or other voids to further reduce its dielectric constant.

[0047] Next, for embodiments of the invention in which a replacement metal gate process is used, the gate stack 217 is removed using an etching process at operation 118 to expose the regrown drain/source extension 618A which filled the extension cavity 421. Methods for removing layers of the gate stack 217 are well known in the art. In alternate implementations, only the gate electrode 213 and the gate cap 214 are removed to expose the gate dielectric 212. Figure 8 illustrates the trench opening that is formed when the gate stack is etched away.

[0048] Returning to Figure 1, if the gate dielectric layer is removed, a new gate dielectric layer may be deposited into the trench opening over the channel region 205 at operation 120. The high-k dielectric materials described above may be used here, such as hafnium oxide. The same deposition processes may also be used. Replacement of the gate dielectric layer may be used to address any damage that may have occurred to the original gate dielectric layer during application of the dry and wet etch processes. A metal gate electrode layer may then be deposited over the gate dielectric layer.
Conventional metal deposition processes may be used to form the metal gate electrode layer, such as CVD, ALD, PVD, electroless plating, or electroplating. Figure 9 illustrates a high-k gate dielectric layer 924 and a gate electrode layer 926 that have been deposited into the trench opening such that the regrown drain/source extension 618A is disposed below the gate dielectric layer 924 (either subjacent to a portion of the gate dielectric layer 924 in contact with a sidewall of the gate electrode layer 926 and the gate stack spacer 319, or below a portion of the gate dielectric layer 924 disposed subjacent to the gate electrode 926).

[0049] The gate electrode layer 926 may consist of a P-type workfunction metal or an N-type workfunction metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some implementations, a PMOS transistor is being formed, and materials that may be used to form a P-type workfunction metal layer include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a workfunction that is between about 4.9 eV and about 5.2 eV. Alternately, in some implementations an NMOS transistor is being formed, and materials that may be used to form an N-type workfunction metal layer include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, and their alloys, e.g., metal carbides that include these elements, i.e., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a workfunction that is between about 3.9 eV and about 4.2 eV. In some implementations, two or more metal gate electrode layers may be deposited. For instance, a workfunction metal may be deposited, followed by a metal gate electrode fill metal such as aluminum. Of course, doped polysilicon, silicided silicon, etc., may also be employed in conformance with convention in the art.

[0050] Accordingly, a multi-gate transistor with self-aligned epitaxially regrown source/drain regions has been disclosed that reduces the overall resistance of the multi-gate transistor and increases channel strain due to increased doped silicon volume (e.g., boron doped silicon germanium volume) combined with reduced channel silicon volume. The epitaxial source and drain extensions extend approximately the entire fin height Hsi, form an abrupt boundary between the channel region and the source/drain regions, and have a doping concentration that is more easily controlled, yielding a more optimized source-drain profile.

[0051] The above description of illustrative embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. The scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
A method and system for overriding access locks on secure assets in a computer system. The system includes a processor and a device coupled to the processor. The device includes one or more sub-devices, one or more access locks, and an access lock override register that stores one or more access lock override bits, including a lock override bit. The one or more access locks are configured to prevent access to the one or more sub-devices when the one or more access locks are engaged. Access to the one or more sub-devices is not allowed when the lock override bit is set. The method includes requesting a memory transaction for one or more memory addresses and determining a lock status for the one or more memory addresses. The method also includes returning the lock status for the one or more memory addresses. The method may determine if the lock status for the one or more memory addresses can be changed. The method may change the lock status of the one or more memory addresses to allow the memory transaction.
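The control flow summarized in this abstract can be sketched in a few lines of C. The sketch below is illustrative only; every type, field, and function name in it is invented for exposition and none of them appears in the disclosure:

    #include <stdbool.h>

    /* Hypothetical lock state for one range of memory addresses. */
    typedef enum { LOCK_OPEN, LOCK_ENGAGED } lock_status_t;

    struct access_lock {
        lock_status_t status;     /* current lock status for the range      */
        bool          changeable; /* may software change this lock's state? */
    };

    /* When set, the lock override bit denies access outright. */
    struct override_reg {
        bool lock_override_bit;
    };

    /* Determine and return the lock status for a requested transaction,
     * changing the status to allow the transaction only when permitted. */
    static lock_status_t request_transaction(struct access_lock *lock,
                                             const struct override_reg *ovr)
    {
        if (ovr->lock_override_bit)
            return LOCK_ENGAGED;       /* override set: access never allowed */
        if (lock->status == LOCK_ENGAGED && lock->changeable)
            lock->status = LOCK_OPEN;  /* unlock so the transaction proceeds */
        return lock->status;           /* report the (possibly new) status   */
    }

Read as a predicate, the transaction is allowed exactly when the returned status is LOCK_OPEN.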
What is claimed is:
1. A system, comprising: a processor configured to operate in an operating mode, wherein the operating mode is one of a plurality of operating modes including a secure operating mode; one or more secured assets coupled to the processor; and security hardware configured to control access to the secured assets dependent upon the operating mode of the processor, wherein the security hardware is configured to allow access to the secure assets in the secure operating mode, and wherein the security hardware includes a lock override register configured to deny access to the secure assets when a lock override bit is set.
2. The system of claim 1, wherein the secured assets comprise one or more of the group consisting of: a random number generator, a secure management register, a monotonic counter, and a secure memory.
3. The system of claim 1, wherein the security hardware comprises: an initiation register, wherein an entry in the initiation register is an indication to change the operating mode of the processor to the secure mode.
4. The system of claim 1, wherein the secure operating mode comprises system management mode.
5. The system of claim 1, wherein the security hardware comprises: a kick-out timer configured to provide an indication to the processor to exit the secure mode.
6. The system of claim 5, wherein the security hardware further comprises: a re-initiation timer configured to provide an indication to the processor to enter the secure mode.
7. The system of claim 1, wherein the security hardware comprises: a duration timer configured to operate while the processor is operating in the secure mode, wherein the duration timer is configured to provide an indication of how long the processor is in the secure mode.
8. The system of claim 7, wherein the security hardware comprises: a kick-out timer configured to provide an indication to the processor to exit the secure mode.
9. The system of claim 8, wherein the kick-out timer and the duration timer comprise a single timer.
10. The system of claim 8, wherein the security hardware further comprises: a re-initiation timer configured to provide an indication to the processor to re-enter the secure mode.
11. The system of claim 1, wherein the security hardware comprises: mailbox RAM configured to store input and output data, wherein the mailbox RAM includes an inbox for storing input data for the one or more secured assets and an outbox for storing output data from the one or more secured assets.
12. The system of claim 11, wherein the input data for the one or more secured assets is addressed to the inbox of the mailbox RAM.
13. The system of claim 11, wherein the output data from the one or more secured assets is retrieved from an address at the outbox of the mailbox RAM.
14. The system of claim 11, wherein the security hardware further comprises: access filters configured to provide input data or access requests to the inbox of the mailbox RAM if the processor is operating in the secure operating mode, wherein the access filters are further configured not to provide input data to the inbox of the mailbox RAM if the processor is not operating in the secure operating mode, and wherein the access filters are further configured to provide a predetermined response upon receipt of said access requests if the processor is not operating in the secure operating mode.
15. The system of claim 1, wherein the security hardware further comprises: scratchpad RAM, wherein each of the one or more secured assets is configured to access the scratchpad RAM for the storage of data.
16. The system of claim 1, further comprising: a memory for storing data, wherein the memory is coupled to the processor and the processor is configured to store and retrieve data from the memory in all of the plurality of operating modes.
17. The system of claim 1, wherein the security hardware comprises: access filters configured to provide access requests to one or more of the one or more secured assets when the processor is operating in the secure operating mode, wherein the access filters are further configured to provide a predetermined response if the processor is not operating in the secure operating mode.
18. The system of claim 17, wherein the security hardware further comprises: access locks coupled to the access filters, wherein the access locks are configured to disable the access filters in an unlocked mode.
19. The system of claim 1, further comprising: a battery, wherein the battery provides reserve power to the one or more secured assets.
20. The system of claim 1, further comprising: a battery, wherein the battery provides reserve power to the security hardware.
21. A method for providing access to secured assets in a computer system, the method comprising: operating the computer system in a first operating mode different from a secure operating mode; restricting access to the secured assets in response to the computer system being in the first operating mode; determining if the secured assets would be accessible if the computer system were in the secure operating mode; requesting access to the secured assets while in the first operating mode; receiving access to the secured assets while in the first operating mode; and permitting access to the secured assets in response to receiving access to the secured assets while in the first operating mode.
22. The method as set forth in claim 21, wherein the secured assets comprise a secure memory, and wherein permitting access to the secured assets comprises reading data from or writing data to the secure memory.
23. The method as set forth in claim 21, wherein the secured assets comprise a random number generator, and wherein permitting access to the secured assets comprises requesting a random number from the random number generator and receiving the random number from the random number generator.
24. The method as set forth in claim 21, wherein the secured assets comprise a monotonic counter, and wherein permitting access to the secured assets comprises requesting a value stored in the monotonic counter and receiving the value stored in the monotonic counter.
25. The method as set forth in claim 21, wherein permitting access to the secured assets comprises reading output data from or writing input data to a mailbox RAM from which the secure assets write the output data and read the input data.
26. The method as set forth in claim 21, further comprising: receiving an access request for one of the secured assets; receiving the access request to the secured assets while in the first operating mode; and wherein restricting access to the secured assets further comprises responding with a predetermined response in response to receiving the access request for one of the secured assets, and in response to the access request to the secured assets while in the first operating mode being denied.
27. The method as set forth in claim 21, further comprising: setting an access lock to an unlocked state; and wherein permitting access to the secured assets further comprises overriding restricting access to the secured assets and providing the access request to a selected one of the secured assets in response to receiving the access request for the selected one of the secured assets and in response to setting the access lock to the unlocked state.
28. The method of claim 21, wherein determining if the secured assets would be accessible if the computer system were in the secure operating mode comprises determining if a lock is set to indicate that secured assets are accessible when in the secure operating mode.
29. A computer readable program storage device encoded with instructions that, when executed by a computer system, performs a method for providing access to secured assets in the computer system, the method comprising: operating the computer system in a first operating mode different from a secure operating mode; restricting access to the secured assets in response to the computer system being in the first operating mode; determining if the secured assets would be accessible if the computer system were in the secure operating mode; requesting access to the secured assets while in the first operating mode; receiving access to the secured assets while in the first operating mode; and permitting access to the secured assets in response to receiving access to the secured assets while in the first operating mode.
30. The computer readable program storage device as set forth in claim 29, wherein the secured assets comprise a secure memory, and wherein permitting access to the secured assets comprises reading data from or writing data to the secure memory.
31. The computer readable program storage device as set forth in claim 29, wherein the secured assets comprise a random number generator, and wherein permitting access to the secured assets comprises requesting a random number from the random number generator and receiving the random number from the random number generator.
32. The computer readable program storage device as set forth in claim 29, wherein the secured assets comprise a monotonic counter, and wherein permitting access to the secured assets comprises requesting a value stored in the monotonic counter and receiving the value stored in the monotonic counter.
33. The computer readable program storage device as set forth in claim 29, wherein permitting access to the secured assets comprises reading output data from or writing input data to a mailbox RAM from which the secure assets write the output data and read the input data.
34. The computer readable program storage device as set forth in claim 29, the method further comprising: receiving an access request for one of the secured assets; receiving the access request to the secured assets while in the first operating mode; and wherein restricting access to the secured assets further comprises responding with a predetermined response in response to receiving the access request for one of the secured assets, and in response to the access request to the secured assets while in the first operating mode being denied.
35. The computer readable program storage device as set forth in claim 29, the method further comprising: setting an access lock to an unlocked state; and wherein permitting access to the secured assets further comprises overriding restricting access to the secured assets and providing the access request to a selected one of the secured assets in response to receiving the access request for the selected one of the secured assets and in response to setting the access lock to the unlocked state.
36. The computer readable program storage device of claim 29, wherein determining if the secured assets would be accessible if the computer system were in the secure operating mode comprises determining if a lock is set to indicate that secured assets are accessible when in the secure operating mode.
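The gating logic recited in claim 1 can be summarized in executable form. This is a hedged sketch; the function and parameter names are invented for illustration and do not appear in the claims:

    #include <stdbool.h>

    /* Hypothetical predicate combining the two conditions of claim 1:
     * access is allowed only in the secure operating mode, and the lock
     * override bit unconditionally denies access when set. */
    static bool secured_asset_access_allowed(bool in_secure_mode,
                                             bool lock_override_bit)
    {
        if (lock_override_bit)
            return false;          /* override register: deny regardless of mode */
        return in_secure_mode;     /* otherwise, secure operating mode required  */
    }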
This application is a continuation-in-part of co-pending U.S. patent application Ser. No. 09/853,395, entitled, "Enhanced Security and Manageability using Secure Storage in a Personal Computer System," filed on May 11, 2001, whose inventors are Geoffrey S. Strongin and Dale E. Gulick. This Application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/853,446, entitled, "Resource Sequester Mechanism," filed on May 11, 2001, whose inventor is Dale E. Gulick. This Application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/853,447, entitled, "Integrated Circuit for Security and Manageability," filed on May 11, 2001, whose inventors are Dale E. Gulick and Geoffrey S. Strongin. This Application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/853,225, entitled, "System Management Mode Duration and Management," filed on May 11, 2001, whose inventors are Geoffrey S. Strongin and Dale E. Gulick. This Application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/854,040, entitled, "Cryptographic Randomness Register for Computer System Security," filed on May 11, 2001, whose inventor is Dale E. Gulick. This Application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/853,443, entitled, "Protection Mechanism for Biometric Input Data," filed on May 11, 2001, whose inventors are Dale E. Gulick and Geoffrey S. Strongin. This Application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/853,437, entitled, "Personal Computer Security Mechanism," filed on May 11, 2001, whose inventors are Geoffrey S. Strongin and Dale E. Gulick. This Application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/853,335, entitled, "Asset Sharing between Host Processor and Security Hardware," filed on May 11, 2001, whose inventors are Geoffrey S. Strongin and Dale E. Gulick. This Application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/853,234, entitled, "Interruptable and Re-enterable System Management Mode Programming Code," filed on May 11, 2001, whose inventors are Geoffrey S. Strongin and Dale E. Gulick. This Application is also a continuation-in-part, as are each of the above filed on May 11, 2001, of co-pending U.S. patent application Ser. No. 09/852,372, entitled, "Secure Execution Box and Method," filed on May 10, 2001, whose inventors are Dale E. Gulick and Geoffrey S. Strongin. This Application is also a continuation-in-part, as are each of the above filed on May 11, 2001, of co-pending U.S. patent application Ser. No. 09/852,942, entitled, "Computer System Architecture for Enhanced Security and Manageability," filed on May 10, 2001, whose inventors are Geoffrey S. Strongin and Dale E. Gulick.

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates generally to computing systems, and, more particularly, to a locking mechanism override and disable for controlling access to secure hardware, such as a secured ROM, in a personal computer system.

2. Description of the Related Art

FIG. 1A illustrates an exemplary computer system 100.
The computer system 100 includes a processor 102, a north bridge 104, memory 106, Advanced Graphics Port (AGP) memory 108, a Peripheral Component Interconnect (PCI) bus 110, a south bridge 112, a battery, an AT Attachment (ATA) interface 114 (more commonly known as an Integrated Drive Electronics (IDE) interface), a universal serial bus (USB) interface 116, a Low Pin Count (LPC) bus 118, an input/output controller chip (SuperI/O(TM)) 120, and BIOS memory 122. It is noted that the north bridge 104 and the south bridge 112 may include only a single chip or a plurality of chips, leading to the collective term "chipset." It is also noted that other buses, devices, and/or subsystems may be included in the computer system 100 as desired, e.g. caches, modems, parallel or serial interfaces, SCSI interfaces, network interface cards, etc. ["SuperI/O" is a trademark of National Semiconductor Corporation of Santa Clara, Calif.]

The processor 102 is coupled to the north bridge 104. The north bridge 104 provides an interface between the processor 102, the memory 106, the AGP memory 108, and the PCI bus 110. The south bridge 112 provides an interface between the PCI bus 110 and the peripherals, devices, and subsystems coupled to the IDE interface 114, the USB interface 116, and the LPC bus 118. The battery 113 is shown coupled to the south bridge 112. The SuperI/O(TM) chip 120 is coupled to the LPC bus 118.

The north bridge 104 provides communications access between and/or among the processor 102, memory 106, the AGP memory 108, devices coupled to the PCI bus 110, and devices and subsystems coupled to the south bridge 112. Typically, removable peripheral devices are inserted into PCI "slots" (not shown) that connect to the PCI bus 110 to couple to the computer system 100. Alternatively, devices located on a motherboard may be directly connected to the PCI bus 110.

The south bridge 112 provides an interface between the PCI bus 110 and various devices and subsystems, such as a modem, a printer, keyboard, mouse, etc., which are generally coupled to the computer system 100 through the LPC bus 118 (or its predecessors, such as an X-bus or an ISA bus). The south bridge 112 includes the logic used to interface the devices to the rest of computer system 100 through the IDE interface 114, the USB interface 116, and the LPC bus 118.

FIG. 1B illustrates certain aspects of the prior art south bridge 112, including those components provided reserve power by the battery 113, i.e. those "inside the RTC battery well" 125. The south bridge 112 includes south bridge (SB) RAM 126 and a clock circuit 128, both inside the RTC battery well 125. The SB RAM 126 includes CMOS RAM 126A and RTC RAM 126B. The RTC RAM 126B includes clock data 129 and checksum data 127. The south bridge 112 also includes, outside the RTC battery well 125, a CPU interface 132, power and system management units 133, PCI bus interface logic 134A, USB interface logic 134C, IDE interface logic 134B, and LPC bus interface logic 134D.

Time and date data from the clock circuit 128 are stored as the clock data 129 in the RTC RAM 126B. The checksum data 127 in the RTC RAM 126B may be calculated based on the CMOS RAM 126A data and stored by BIOS during the boot process, such as is described below, e.g. block 148, with respect to FIG. 2A. The CPU interface 132 may include interrupt signal controllers and processor signal controllers.
The power and system management units 133 may include an ACPI (Advanced Configuration and Power Interface) controller.

From a hardware point of view, an x86 operating environment provides little for protecting user privacy, providing security for corporate secrets and assets, or protecting the ownership rights of content providers. All of these goals, privacy, security, and ownership (collectively, PSO), are becoming critical in an age of Internet-connected computers. The original personal computers were not designed in anticipation of PSO needs.

From a software point of view, the x86 operating environment is equally poor for PSO. The ease of direct access to the hardware through software, or simply by opening the cover of the personal computer, allows an intruder or thief to compromise most security software and devices. The personal computer's exemplary ease of use only adds to the problems for PSO.

SUMMARY OF THE INVENTION

In one aspect of the present invention, a system is provided. The system includes a bus, a memory coupled to the bus, and a device coupled to access the memory over the bus. The memory includes a plurality of storage locations divided into a plurality of memory units. The device includes one or more locks configured to control access to one or more of the plurality of memory units, and an access lock override register that stores one or more access lock override bits, including a lock override bit. Access to the one or more of the plurality of memory units is not allowed when the lock override bit is set. The locks may include a plurality of registers. One or more entries in one or more of the plurality of registers may indicate an access control setting for one or more of the memory units.

In another aspect of the present invention, a method for operating a computer system is provided. The method includes requesting a memory transaction for one or more memory addresses and determining a lock status for the one or more memory addresses. The method also includes returning the lock status for the one or more memory addresses. If the lock status indicates that the memory transaction for the one or more memory addresses is not allowed, then the method includes determining if the lock status for the one or more memory addresses can be changed. If the lock status of the one or more memory addresses can be changed, then the method includes changing the lock status of the one or more memory addresses to allow the memory transaction.

In still another aspect of the present invention, another system is provided. This system includes a processor and a device coupled to the processor. The device includes one or more sub-devices, one or more access locks, and an access lock override register that stores one or more access lock override bits, including a lock override bit. The one or more access locks are configured to prevent access to the one or more sub-devices when the one or more access locks are engaged. Access to the one or more sub-devices is not allowed when the lock override bit is set.

In yet another aspect of the present invention, a method of operating a computer system in system management mode (SMM) is provided, where the computer system includes a processor coupled to security hardware and to a first device. The method includes processing SMM code instructions and unlocking security hardware. The method also includes checking a lock status of the security hardware and accessing a first device.
The method also includes locking the security hardware, setting a bit preventing changes to the locks of the security hardware, and calling an SMM exit routine.

In another aspect of the present invention, yet another system is provided. This system includes a processor configured to operate in an operating mode, one or more secured assets coupled to the processor, and security hardware configured to control access to the secured assets dependent upon the operating mode of the processor. The operating mode is one of a plurality of operating modes including a secure operating mode. The security hardware is configured to allow access to the secure assets in the secure operating mode. The security hardware includes a lock override register configured to deny access to the secure assets when a lock override bit is set.

In still another aspect of the present invention, a method for providing access to secured assets in a computer system is provided. The method includes operating the computer system in a first operating mode different from a secure operating mode and restricting access to the secured assets in response to the computer system being in the first operating mode. The method also includes determining if the secured assets would be accessible if the computer system were in the secure operating mode and requesting access to the secured assets while in the first operating mode. The method also includes receiving access to the secured assets while in the first operating mode and permitting access to the secured assets in response to receiving access to the secured assets while in the first operating mode.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify similar elements, and in which:

FIG. 1A illustrates a block diagram of a prior art computer system, while FIG. 1B illustrates a block diagram of a prior art south bridge;

FIGS. 2A and 2B illustrate flowcharts of prior art methods for operating a computer system using code stored in ROM;

FIG. 3 illustrates a flowchart of an embodiment of data and command flow in a computer system having a secure execution box, according to one aspect of the present invention;

FIG. 4 illustrates a block diagram of an embodiment of a computer system including security hardware in the south bridge as well as a crypto-processor, according to one aspect of the present invention;

FIGS. 5A and 5B illustrate block diagrams of embodiments of a south bridge including security hardware for controlling SMM, according to various aspects of the present invention;

FIG. 6 illustrates a block diagram of an embodiment of a south bridge including security hardware for secure SMM operations, according to one aspect of the present invention;

FIGS. 7A, 7B, 7C, and 7D illustrate embodiments of secure storage, according to various aspects of the present invention;

FIGS. 8A and 8B illustrate block diagrams of embodiments of a BIOS ROM and an SMM ROM for secure SMM operations, respectively, according to various aspects of the present invention;

FIGS. 9A and 9B illustrate block diagrams of embodiments of a computer system operable to control the timing and duration of SMM operations, according to one aspect of the present invention;
FIG. 10A illustrates a flowchart of an embodiment of a method for forcing a processor out of SMM, according to one aspect of the present invention, while FIG. 10B illustrates a flowchart of an embodiment of a method for reinitiating SMM upon the early termination of SMM, according to one aspect of the present invention;

FIGS. 11A and 11B illustrate flowcharts of embodiments of methods for updating a monotonic counter stored in the SMM ROM, according to various aspects of the present invention;

FIGS. 12A and 12B illustrate flowcharts of embodiments of methods for updating a monotonic counter in the south bridge, according to various aspects of the present invention;

FIGS. 13A and 13B illustrate flowcharts of embodiments of a method for providing a monotonic value in a computer system, according to one aspect of the present invention;

FIGS. 14A and 14B illustrate block diagrams of embodiments of processors including random number generators using entropy registers, according to one aspect of the present invention;

FIG. 15 illustrates a block diagram of another embodiment of a random number generator, according to one aspect of the present invention;

FIGS. 16A, 16B, 16C, 16D, 16E, 16F, and 16G illustrate flowcharts of embodiments of methods for accessing the security hardware, which may be locked, according to various aspects of the present invention;

FIGS. 17A, 17B, and 17C illustrate block diagrams of embodiments of the access locks 460 shown in FIG. 6, while FIG. 17D illustrates a block diagram of an embodiment of the override register, all according to various aspects of the present invention;

FIG. 18A illustrates a prior art flowchart of an SMM program, while FIG. 18B illustrates a flowchart of an embodiment of operation of an interruptible and re-enterable SMM program, and FIG. 18C illustrates a flowchart of an embodiment of operation of a computer system running the interruptible and re-enterable SMM program, according to various aspects of the present invention;

FIGS. 19A, 19B, and 19C illustrate block diagrams of embodiments of computer systems with the BIOS ROM accessible to the processor at boot time and to the south bridge at other times, according to various aspects of the present invention;

FIGS. 20A-20D illustrate block diagrams of embodiments of processors including lock registers and logic, according to various aspects of the present invention;

FIG. 21 illustrates a flowchart of an embodiment of a method for initiating HDT mode, according to one aspect of the present invention;

FIG. 22 illustrates a flowchart of an embodiment of a method for changing the HDT enable status, according to one aspect of the present invention;

FIG. 23 illustrates a flowchart of an embodiment of a method for initiating the microcode loader, according to one aspect of the present invention;

FIG. 24 illustrates a flowchart of an embodiment of a method for changing the microcode loader enable status, according to one aspect of the present invention;

FIGS. 25A, 25B, 26, and 27 illustrate flowcharts of embodiments of methods for secure access to storage, according to various aspects of the present invention;

FIG. 28 illustrates a prior art challenge-response method for authentication;

FIGS. 29A, 29B, 29C, 29D, and 29E illustrate embodiments of computer devices or subsystems including GUIDs and/or a stored secret and/or a system GUID, according to various aspects of the present invention;

FIGS. 30A and 30B illustrate flowcharts of embodiments of methods for operating a computer system including a biometric device, such as the biometric device shown in FIG. 29A, according to various aspects of the present invention;
FIGS. 31A, 31B, 32A, 32B, 32C, and 33 illustrate flowcharts of embodiments of methods for authenticating a device in a computer system, such as computer systems including the computer subsystems of FIGS. 29A, 29D, and 29E, according to various aspects of the present invention;

FIGS. 34 and 35 illustrate flowcharts of embodiments of methods for removing a device from a computer system once the device has been united with the computer system using an introduced bit, according to various aspects of the present invention;

FIG. 36 illustrates a block diagram of an embodiment of a computer subsystem including bus interface logics with master mode capabilities, according to one aspect of the present invention;

FIG. 37 illustrates a flowchart of an embodiment of a method for operating in a master mode outside the operating system, according to one aspect of the present invention;

FIG. 38A illustrates a flowchart of an embodiment of a method for booting a computer system including authentication via the crypto-processor using master mode logic, while FIG. 38B illustrates a flowchart of an embodiment of a method for booting a computer system including authentication via the security hardware using the master mode logic, according to various aspects of the present invention;

FIGS. 39A, 39B, and 39C illustrate block diagrams of embodiments of computer systems 5000 for securing a device, a computer subsystem, or a computer system using timers to enforce periodic authentication, according to various aspects of the present invention;

FIGS. 40A and 40B illustrate flowcharts of embodiments of a method for securing a device, a computer subsystem, or a computer system, such as a portable computer, by limiting use to finite periods of time between successive authorizations, according to various aspects of the present invention;

FIG. 41 illustrates a flowchart of an embodiment of a method for booting a computer system including initializing a timer to enforce periodic authentication and authorization, according to one aspect of the present invention; and

FIGS. 42A and 42B illustrate block diagrams of embodiments of the system management registers, according to various aspects of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will, of course, be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
The use of a letter in association with a reference number is intended to show alternative embodiments or examples of the item to which the reference number is connected.

System Management Mode (SMM) is a mode of operation in the computer system that was implemented to conserve power. SMM was created for the fourth-generation x86 processors. As newer x86 generation processors have appeared, SMM has become relatively transparent to the operating system. That is, computer systems enter and leave SMM with little or no impact on the operating system.

Referring now to the drawings, and in particular to FIG. 2A, a flowchart of a prior art method of initializing a computer system using code stored in the BIOS 122 is shown. During initialization of the power supply, the power supply generates a power good signal to the north bridge, in block 136. Upon receiving the power good signal from the power supply, the south bridge (or north bridge) stops asserting the reset signal for the processor, in block 138.

During initialization, the processor reads the default jump location, in block 140. The default jump location in memory is usually at a location such as FFFF0h. The processor performs a jump to the appropriate BIOS code location (e.g. FFFF0h) in the ROM BIOS, copies the BIOS code to the RAM memory, and begins processing the BIOS code instructions from the RAM memory, in block 142. The BIOS code, processed by the processor, performs a power-on self test (POST), in block 144.

The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc., and displays a start-up information screen, in block 146. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code may perform additional system tests, such as a RAM memory count-up test, and a system inventory, including identifying COM (serial) and LPT (parallel) ports, in block 148. The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of devices identified, in block 150.

The BIOS code identifies the boot location, and the corresponding boot sector, in block 152. The boot location may be on a floppy drive, a hard drive, a CDROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 154.

It is noted that for a cold boot or a hard (re)boot, all or most of the descriptions given in blocks 136-154 may occur. During a warm boot or a soft (re)boot, the BIOS code usually jumps from block 142 into block 148, skipping the POST, memory tests, etc.

In FIG. 2B, a flowchart of a prior art method of operating a computer system in SMM using code stored in the BIOS 122 is shown. An interrupt controller receives a request for SMM, in block 172. The interrupt controller signals the request for SMM to the processor by asserting a system management interrupt (SMI#) signal, in block 174.

The processor recognizes the request for SMM and asserts an SMI ACTive (SMIACT#) signal, in block 176. The system recognizes the SMIACT# signal, disables access to the system RAM, and enables access to system management RAM (SMRAM) space, in block 178.

The current processor state is saved to SMRAM, in block 180. The processor resets to the SMM default state and enters SMM, in block 182. The processor next reads the default pointer and jumps to the appropriate place in SMRAM space, in block 184.
In block 186, the source and/or nature of the SMI request is identified. An SMI handler services the SMI request, in block 188. After servicing the SMI request, the SMI handler issues a return from SMM (RSM) instruction to the processor, in block 190. Upon executing the RSM instruction, the processor restores the saved state information and continues normal operation, in block 192.

FIG. 3 illustrates a flowchart of an embodiment of data and command flow in a computer system having a secure execution box 260, according to one aspect of the present invention. User input and output (I/O) data and/or commands 205 are provided to and received from one or more applications 210. The applications 210 exchange data and commands with cryptography service providers 215 within the computer system, such as the computer system 100 or any other computer system. The cryptography service providers 215 may use API (Application Programming Interface) calls 220 to interact with drivers 225 that provide access to hardware 230.

According to one aspect of the present invention, the drivers 225 and the hardware 230 are part of a secure execution box configured to operate in a secure execution mode (SEM) 260. Trusted privacy, security, and ownership (PSO) operations, also referred to simply as security operations, may take place while the computer system is in SEM 260. Software calls propagated from the user I/O 205 and/or the applications 210 may be placed into the secure execution box in SEM 260 via an SMM initiation register 425B (or SMM initiator 425A), discussed below with respect to FIG. 5B (or FIG. 5A). Parameters may be passed into and out of the secure execution box in SEM 260 via an access-protected mailbox RAM 415, also discussed below with respect to FIGS. 5A and 5B. The software calls have access, through the secure execution box in SEM 260, to various security hardware resources, such as described in detail below.

In various embodiments of the present invention, power management functions may be performed inside SEM 260. One current standard for power management and configuration is the Advanced Configuration and Power Interface (ACPI) Specification. The most recent version is Revision 2.0, dated Jul. 27, 2000, and available from the ACPI website currently run by Teleport Internet Services, hereby incorporated herein by reference in its entirety. According to the ACPI specification, control methods, a type of instruction, tell the system to go do something. The ACPI specification does not specify how to carry out any of the instructions. The ACPI specification only defines the calls, and the software must be written to carry out the calls in a prescribed manner. The prescribed manner of the ACPI specification is very restrictive; for example, some hardware registers cannot be accessed directly. To access those registers, various aspects of the present invention generate an SMI# to enter SMM and read these registers. As power management has the potential to be abused (e.g., the processor voltage and frequency may be raised above operating limits to destroy the processor, or lowered below operating limits, leading to a denial of service), ACPI calls should be carried out in a secure manner, such as inside SEM 260.

Inside SEM 260, each ACPI request can be checked against some internal rules for safe behavior.
FIG. 4 illustrates a block diagram of an embodiment of a portion of an improved version of the computer system 100 including security hardware 370 in a south bridge 330, as well as a crypto-processor 305, according to one aspect of the present invention. The south bridge 330 includes the security hardware 370, an interrupt controller (IC) 365, USB interface logic 134C, and the LPC bus interface logic (LPC BIL) 134D. The IC 365 is coupled to the processor 102. The USB interface logic 134C is coupled through an optional USB hub 315 to a biometric device 320 and a smart card reader 325. The LPC bus 118 is coupled to the south bridge 330 through the LPC BIL 134D. The crypto-processor 305 is also coupled to the LPC bus 118. A memory permission table 310 within the crypto-processor 305 provides address mappings and/or memory range permission information. The memory permission table 310 may be comprised in a non-volatile memory. A BIOS 355, i.e. some memory, preferably read-only memory or flash memory, is coupled to the crypto-processor 305. The security hardware 370 may include both security hardware and secure assets protected by the security hardware.

The security hardware 370 in the south bridge 330 may be operable to provide an SMI interrupt request to the IC 365 for the processor 102. The security hardware 370 may also interact with the crypto-processor 305. Access to the BIOS 355 is routed through the crypto-processor 305. The crypto-processor 305 is configured to accept and transfer access requests to the BIOS 355. The crypto-processor 305 therefore may understand the address mappings of the BIOS 355. According to one aspect of the present invention, the security hardware 370 allows the computer system 100 to become an embodiment of the secure execution box 260 shown in FIG. 3.

In one embodiment, the crypto-processor 305 is configured to accept an input from the biometric device 320 and/or the smart card reader 325 over the USB interface, i.e. through the optional USB hub 315 and the USB interface logic 134C, and over the LPC bus 118. Other interfaces, such as IDE or PCI, may be substituted. The crypto-processor 305 may request one or more inputs from the biometric device 320 and/or the smart card reader 325 to authenticate accesses to the BIOS 355, other storage devices, and/or another device or subsystem in the computer system 100.

It is noted that the IC 365 may be included in the processor instead of the south bridge 330. The IC 365 is also contemplated as a separate unit or associated with another component of the computer system 100. It is also noted that the operations of the LPC bus 118 may correspond to the prior art Low Pin Count Interface Specification Revision 1.0 of Sep. 29, 1997. The operations of the LPC bus 118 may also correspond to the extended LPC bus disclosed in co-pending U.S. patent application Ser. No. 09/544,858, filed Apr. 7, 2000, entitled "Method and Apparatus For Extending Legacy Computer Systems", whose inventor is Dale E. Gulick, which is hereby incorporated by reference in its entirety.
It is further noted that the USB interface logic 134C may couple to the LPC BIL 134D in any of a variety of ways, as is well known in the art for coupling different bus interface logics in a bridge.

FIGS. 5A and 5B illustrate block diagrams of embodiments of the south bridge 330, including the security hardware 370, according to various aspects of the present invention. In FIG. 5A, the south bridge 330A includes the security hardware 370A and the IC 365. The security hardware 370A includes sub-devices such as an SMM timing controller 401A, an SMM access controller 402A, and control logic 420A. The sub-devices may be referred to as security hardware or secure assets of the computer system 100. The SMM timing controller 401A includes an SMM indicator 405, a duration timer 406A, a kick-out timer 407A, and a restart timer 408. The SMM access controller 402A includes SMM access filters 410, mailbox RAM 415, and an SMM initiator 425A.

As shown in FIG. 5A, the control logic 420A is coupled to control operation of the SMM timing controller 401A, the SMM access controller 402A, and the SMM initiator 425A. Input and output (I/O) signals to the security hardware 370A pass through the SMM access filters 410 and are routed through the control logic 420A.

The SMM timing controller 401A includes the duration timer 406A, which measures how long the computer system 100 is in SMM. The kick-out timer 407A, also included in the SMM timing controller 401A, counts down from a predetermined value while the computer system 100 is in SMM. The control logic 420A is configured to assert a control signal (EXIT SMM 404) for the processor to exit SMM, such as in response to the expiration of the kick-out timer 407A. The restart timer 408, included in the SMM timing controller 401A, starts counting down from a predetermined value after the kick-out timer 407A reaches zero. The SMM indicator 405, also included in the SMM timing controller 401A, is operable to monitor the status of one or more signals in the computer system, such as the SMI# (System Management Interrupt) signal and/or the SMIACT# (SMI ACTive) signal, to determine if the computer system is in SMM.
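The cooperation of the duration timer 406A, the kick-out timer 407A, and the restart timer 408 can be summarized in a short C model. This is a minimal sketch under stated assumptions: it models one clock tick per call, and the structure, field names, and reload constant are illustrative rather than taken from the embodiments herein.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical software model of the SMM timing controller 401A. */
    struct smm_timing_controller {
        bool     in_smm;       /* SMM indicator 405: follows SMI#/SMIACT# */
        uint32_t duration;     /* duration timer 406A: counts up in SMM   */
        uint32_t kick_out;     /* kick-out timer 407A: counts down in SMM */
        uint32_t restart;      /* restart timer 408: counts down after    */
        bool     exit_smm;     /* models the EXIT SMM 404 control signal  */
        bool     smi_request;  /* models a request to re-enter SMM        */
    };

    #define RESTART_RELOAD 200000u   /* illustrative predetermined value */

    /* One clock tick of the timing controller. */
    static void smm_timing_tick(struct smm_timing_controller *tc)
    {
        if (tc->in_smm) {
            tc->duration++;                    /* measure time spent in SMM */
            if (tc->kick_out > 0 && --tc->kick_out == 0) {
                tc->exit_smm = true;           /* force the processor out   */
                tc->restart  = RESTART_RELOAD; /* arm the restart timer     */
            }
        } else if (tc->restart > 0 && --tc->restart == 0) {
            tc->smi_request = true;            /* re-enter SMM to finish up */
        }
    }

In hardware the same behavior amounts to three counters and comparators; the model only makes the sequencing explicit: kick-out expiry asserts EXIT SMM 404 and arms the restart timer, whose own expiry raises a new SMI request.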
The SMM access controller 402A includes the SMM access filters 410, which are configured to accept input requests for the sub-devices within the security hardware 370A. When the computer system 100 is in SMM, the SMM access filters are configured to pass access requests (e.g. reads and writes) to the control logic 420A and/or the target sub-device. When the computer system 100 is not in SMM, the SMM access filters are configured to respond to all access requests with a predetermined value, such as all '1's. The SMM access controller 402A also includes the mailbox RAM 415. In one embodiment, the mailbox RAM 415 includes two banks of RAM, such as 512 bytes each, for passing parameters into and out of the secure execution box 260. Parameters passed to or from the sub-devices included within the security hardware 370 are exchanged at the mailbox RAM 415. One bank of RAM 415, an inbox, is write-only to most or all of the computer system in most operating modes. Thus, parameters to be passed to the sub-devices included within the security hardware 370 may be written into the inbox. During selected operating modes, such as SMM, both read and write accesses are allowed to the inbox. Another bank of RAM 415, an outbox, is read-only to most or all of the computer system in most operating modes. Thus, parameters to be received from the sub-devices included within the security hardware 370 may be read from the outbox. During selected operating modes, preferably secure modes, such as SMM, both read and write accesses are allowed to the outbox.

The SMM initiator 425A may advantageously provide a convenient way to request that the computer system 100 enter SMM. A signal may be provided to the SMM initiator 425A over the request (REQ) line. The signal should provide an indication of the jump location in SMM memory. The SMM initiator 425A is configured to make a request for SMM over the SMM request (SMM REQ) line, for example, by submitting an SMI# to the interrupt controller 365. The SMM initiator 425A is also configured to notify the control logic 420A that the request for SMM has been received and passed to the interrupt controller 365.

In FIG. 5B, the south bridge 330B includes the security hardware 370B. The IC 365 is shown external to the south bridge 330B. The security hardware 370B includes an SMM timing controller 401B, an SMM access controller 402B, and control logic 420B. The SMM timing controller 401B includes an SMM indicator 405, a duration/kick-out timer 407B, and a restart timer 408. The SMM access controller 402B includes SMM access filters 410 and mailbox RAM 415. An SMM initiation register 425B is shown external to the south bridge 330B.

As shown in FIG. 5B, the control logic 420B is coupled to control operation of the SMM timing controller 401B and the SMM access controller 402B. Input and output (I/O) signals to the security hardware 370B pass through the SMM access filters 410 and are routed through the control logic 420B. The control logic 420B is also coupled to receive an indication of a request for SMM from the SMM initiation register 425B.

The SMM timing controller 401B includes the duration/kick-out timer 407B, which measures how long the computer system 100 is in SMM, counting up to a predetermined value while the computer system 100 is in SMM. The control logic 420B is configured to assert a control signal for the processor to exit SMM in response to the duration/kick-out timer 407B reaching the predetermined value. The restart timer 408 starts counting down from a predetermined value after the duration/kick-out timer 407B reaches the predetermined value. The SMM indicator 405 is operable to monitor the status of one or more signals in the computer system, such as the SMI# (System Management Interrupt) signal and/or the SMIACT# (SMI ACTive) signal, to determine if the computer system is in SMM.

The SMM access controller 402B includes the SMM access filters 410, which are configured to accept input requests for the sub-devices within the security hardware 370B. When the computer system 100 is in SMM, the SMM access filters are configured to pass access requests (e.g. reads and writes) to the control logic 420B and/or the target sub-device. When the computer system 100 is not in SMM, the SMM access filters may be configured to respond to all access requests with a predetermined value, such as all '1's. The SMM access controller 402B also includes the mailbox RAM 415, described above with respect to FIG. 5A.
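The filtering and mailbox behavior described above lends itself to a small C model. The sketch below assumes a single flag standing in for the SMM indicator 405 and two byte arrays standing in for the banks of the mailbox RAM 415; the names and sizes are illustrative only.

    #include <stdint.h>
    #include <stdbool.h>

    static bool system_in_smm;     /* driven by the SMM indicator 405 */

    static uint8_t inbox[512];     /* one bank of the mailbox RAM 415 */
    static uint8_t outbox[512];    /* the other bank                  */

    /* Inbox: write-only to most or all of the computer system in most
     * operating modes, so writes always pass while reads are filtered. */
    static void inbox_write(uint16_t off, uint8_t v)
    {
        inbox[off % 512] = v;      /* parameters passed into the box  */
    }

    static uint8_t inbox_read(uint16_t off)
    {
        if (!system_in_smm)
            return 0xFF;           /* predetermined value, all '1's   */
        return inbox[off % 512];
    }

    /* Outbox: read-only to most or all of the computer system in most
     * operating modes, so reads always pass while writes are dropped. */
    static uint8_t outbox_read(uint16_t off)
    {
        return outbox[off % 512];  /* results passed out of the box   */
    }

    static void outbox_write(uint16_t off, uint8_t v)
    {
        if (system_in_smm)         /* only the secure side may post   */
            outbox[off % 512] = v;
    }

A caller outside SMM therefore sees only all-'1's data where the protected contents live, matching the predetermined response of the SMM access filters 410.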
The SMM initiation register 425B may advantageously provide a convenient way to request that the computer system 100 enter SMM. A signal may be provided to the SMM initiation register 425B over the request (REQ) line. The signal should provide an indication of the jump location in SMM memory. The SMM initiation register 425B is configured to provide the indication to the control logic 420B. The control logic 420B is configured to make a request for SMM over the SMM request (SMM REQ) line, for example, by submitting an SMI# to the interrupt controller 365.

It is noted that in the embodiment illustrated in FIG. 5A, the SMM initiator 425A includes internal logic for handling the SMM request. In the embodiment illustrated in FIG. 5B, the SMM initiation register 425B relies on the control logic 420B to handle the SMM request. It is also noted that the SMM initiator 425A is part of the security hardware 370A, while the SMM initiation register 425B is not part of the security hardware 370B.

FIG. 6 illustrates a block diagram of an embodiment of the south bridge 330C including security hardware 370C, according to one aspect of the present invention. As shown, the security hardware 370C includes sub-devices, such as the SMM timing controller 401, the SMM access controller 402, the control logic 420, a TCO counter 430, a monotonic counter 435A, the scratchpad RAM 440, a random number generator 455, secure system (or SMM) management registers 470, OAR (Open At Reset) locks 450, and an OAR override register 445. The SMM access controller 402 includes one or more access locks 460 within the SMM access filters 410. Some aspects of embodiments of the SMM timing controller 401, the SMM access controller 402, and the control logic 420 are described herein with respect to FIGS. 5A and 5B, above.

The embodiment of the SMM access controller 402 illustrated in FIG. 6 includes the one or more access locks 460 within the SMM access filters 410. The access locks 460 provide a means of preventing (or locking) and allowing (or unlocking) access to one or more of the devices within the security hardware 370C. Various embodiments for the one or more access locks 460 are shown in FIGS. 17A-17C and described with reference thereto.

In one embodiment, the access locks 460 are open at reset (OAR), allowing the BIOS software access to the security hardware 370. The BIOS software then closes the access locks 460 prior to calling the boot sector code, shown in block 154 in FIG. 2A. In various embodiments, the access locks 460 may be opened by software or hardware to allow for access to the security hardware 370. For example, the access locks 460 may be opened by a signal from the IC 365 or the processor 102 (or 805A or 805B from FIGS. 9A and 9B) or the control logic 420. The access locks 460 may be opened in response to an SMI# or in response to the processor 102 or 805 entering SMM. Additional information on the access locks 460 may be obtained from one or more of the methods 1600A-1600C described below with respect to FIGS. 16A-16C.

The TCO counter (or timer) 430 may include a programmable timer, such as a count-down timer, that is used to detect a lock-up of the computer system 100. Lock-up may be defined as a condition of the computer system 100 where one or more subsystems or components do not respond to input signals for more than a predetermined period of time. The input signals may include internal signals from inside the computer system 100 or signals from outside the computer system 100, such as from a user input device (e.g. keyboard, mouse, trackball, biometric device, etc.). It is also noted that the lock-ups may be software or hardware in nature. According to various aspects of the present invention, the TCO counter 430 may be programmed and read from inside SMM.
The TCO counter 430 is preferably programmed with a value less than a default duration for the kick-out timer 407. In one embodiment, the TCO timer 430 generates an SMI# upon a first expiration of the TCO timer 430, and generates a reset signal for the computer system upon a second, subsequent expiration of the TCO timer 430.

In one embodiment, the TCO timer 430 may be accessed by the computer system 100, or software running in the computer system 100, for the computer system 100 to recover from lock-ups when the computer system is not in SMM. In another embodiment, the TCO timer 430 may be accessed by the computer system 100 both in and out of SMM.

The monotonic counter 435A comprises a counter, preferably at least 32 bits and inside the RTC battery well 125, which updates when the value stored in the monotonic counter 435A is read. The monotonic counter 435A is configured to update the value stored to a new value that is larger than the value previously stored. Preferably, the new value is only larger by the smallest incremental amount possible, although other amounts are also contemplated. Thus, the monotonic counter 435A may advantageously provide a value that is always increasing up to a maximum or rollover value. Additional details may be found below with respect to FIGS. 8, 12, and 13.

The scratchpad RAM 440 includes one or more blocks of memory that are available only while the computer system 100 is in certain operating modes, such as SMM. It is also contemplated that other sub-devices of the security hardware 370 may use the scratchpad RAM 440 as a private memory. One embodiment of the scratchpad RAM 440 includes 1 kB of memory, although other amounts of memory are also contemplated. In one embodiment, the scratchpad RAM is open at reset to all or most of the computer system 100, while in another embodiment, the scratchpad RAM is inaccessible while the computer system is booting.

The random number generator (RNG) 455 is configured to provide a random number with a number of bits within a predetermined range. In one embodiment, a new random number from 1 to 32 bits in length is provided in response to a request for a random number. It is noted that restricting access to the RNG, such as only in SMM, may advantageously force software to access the RNG through a standard API (application programming interface), allowing for increased security and easing hardware design constraints. Additional details may be found below with respect to FIGS. 14 and 15.

The OAR locks 450 may include a plurality of memory units (e.g. registers), which include associated programming bits (or lock bits) that lock the memory (or memories) used to store BIOS information or other data, for example, the BIOS ROM 355 and SMM ROM 550 in FIGS. 7A and 7B below. Each memory unit may have, by way of example, three lock bits associated with it. In one embodiment, four 8-bit registers may store the lock bits for each 512-kB ROM page, one register for every two 64-kB segments. With sixteen blocks of four registers, a maximum of 8 MB of ROM may be locked. Addressing may be as follows:

  64-kB segments   Register     Address
  0, 1             Register 0   FFBx,E000h
  2, 3             Register 1   FFBx,E001h
  4, 5             Register 2   FFBx,E002h
  6, 7             Register 3   FFBx,E003h

Each physical ROM chip may include four identification pins (ID[3:0]), known as strapping pins. The strapping pins may be used to construct sixteen spaces of 64 kB each.
The 'x' in the address may represent the decode of the strapping pins, or the inverse.

The lock registers from the OAR locks 450 may include:

  Register     Bit 7      Bits 6:4 (OAR lock)   Bit 3      Bits 2:0 (OAR lock)
  Register 0   Reserved   Segment 1             Reserved   Segment 0
  Register 1   Reserved   Segment 3             Reserved   Segment 2
  Register 2   Reserved   Segment 5             Reserved   Segment 4
  Register 3   Reserved   Segment 7             Reserved   Segment 6

In one embodiment, one bit controls write access, one bit controls read access, and one bit prevents the other two bits from being changed. In one embodiment, once the locking bit is set (also described as the state being locked down), the write access bit and read access bit cannot be reprogrammed until the memory receives a reset signal. The layout of each register may include:

  Bit     7       6        5        4        3       2        1        0
  Value   Rsvrd   Lock 2   Lock 1   Lock 0   Rsvrd   Lock 2   Lock 1   Lock 0

With a decode of the three lock bits including:

  Decode   Read Lock (Data 2)   Lock-Down (Data 1)   Write Lock (Data 0)   Resulting block state
  0x00     0                    0                    0                     Full access
  0x01     0                    0                    1                     Write locked (default state)
  0x02     0                    1                    0                     Lock open (full access locked down)
  0x03     0                    1                    1                     Write locked down
  0x04     1                    0                    0                     Read locked
  0x05     1                    0                    1                     Read and write locked
  0x06     1                    1                    0                     Read locked down
  0x07     1                    1                    1                     Read and write locked down

The embodiment of the security hardware 370C illustrated in FIG. 6 also includes the OAR override register 445. The OAR override register 445 provides a mechanism for allowing (or unlocking) and preventing (or locking) access to one or more of the devices within the security hardware 370C. The OAR override register 445 also provides a mechanism to override the access locks 460. In one embodiment, the OAR override register 445 includes a first indicator that the access locks 460 are to be ignored, with access to the security hardware locked by the access locks 460 either always available or never available, as implemented. The OAR override register 445 may also include a second indicator that the status of the first indicator may be changed, or not. If the second indicator shows that the first indicator may not be changed, then the device including the OAR override register 445 preferably needs a reset for the second indicator to be changed. In other words, the second indicator is preferably OAR, similar to one embodiment of the access locks 460.

Methods that include using the access locks 460 and/or the OAR override indicators are described below with respect to FIGS. 16A-16F. Various embodiments for the one or more access locks 460 are shown in FIGS. 17A-17C and described with reference thereto, and an embodiment of the OAR override register 445 is shown in FIG. 17D and described with reference thereto.
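The decode of the three lock bits for a segment can be expressed compactly in C. This sketch assumes the register layout and decode table above; the enumerator and helper names are hypothetical.

    #include <stdint.h>

    /* Per the decode table: Data 2 = read lock, Data 1 = lock-down,
     * Data 0 = write lock. */
    enum block_state {
        FULL_ACCESS            = 0x00,
        WRITE_LOCKED           = 0x01,  /* default state */
        LOCK_OPEN_LOCKED_DOWN  = 0x02,
        WRITE_LOCKED_DOWN      = 0x03,
        READ_LOCKED            = 0x04,
        READ_WRITE_LOCKED      = 0x05,
        READ_LOCKED_DOWN       = 0x06,
        READ_WRITE_LOCKED_DOWN = 0x07
    };

    /* Extract the three lock bits for a segment from its lock register:
     * even segments occupy bits 2:0, odd segments bits 6:4. */
    static enum block_state segment_state(uint8_t lock_reg, int segment)
    {
        int shift = (segment & 1) ? 4 : 0;
        return (enum block_state)((lock_reg >> shift) & 0x7);
    }

    /* Once the lock-down bit (Data 1) is set, the read and write access
     * bits cannot be reprogrammed until the memory receives a reset. */
    static int is_locked_down(enum block_state s)
    {
        return (s & 0x2) != 0;
    }

For example, segment_state(0x31, 5) isolates bits 6:4 of the register holding segments 4 and 5, yielding 0x3 (write locked down) for segment 5.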
Example embodiments of the secure system management registers 470 are shown below in FIGS. 98A and 98B and described therewith. Briefly, in one embodiment, the secure system management registers 470 include one or more ACPI lock bits 9810 to secure various ACPI or related functions against unauthorized changes. The ACPI lock bits 9810, once set, prevent changes to the ACPI or related functions. A request to change one of the ACPI or related functions requires that a respective ACPI lock bit 9810N be released before the respective one of the ACPI or related functions is changed. In another embodiment, the secure system management registers 470 include one or more ACPI range registers 9820 and/or one or more ACPI rule registers 9830. Each ACPI range register 9820 may be configured to store a value or values that define allowable or preferred values for a specific ACPI or related function. Each ACPI rule register 9830 may be configured to store part or all of a rule for determining if a change to one of the ACPI or related functions should be allowed. Examples of ACPI or related functions include changing a voltage, changing a frequency, turning on or off a cooling fan, and a remote reset of the computer system.

It is noted that in one embodiment, all of the security hardware 370 (and the SMM initiation register 425B) are inside the RTC battery well 125. In other embodiments, selected sub-devices of the security hardware 370 are excluded from the RTC battery well 125. In one embodiment, only a portion of the scratchpad RAM 440 is inside the RTC battery well 125, with the remaining portion outside the RTC battery well 125. For example, in one embodiment, the mailbox RAM 415 is outside the RTC battery well 125.

FIGS. 7A and 7B illustrate embodiments of extended BIOS security, according to various aspects of the present invention. In FIG. 7A, the BIOS ROM 355 and the SMM ROM 550 are coupled to the LPC bus 118. As shown, a crypto processor 305, including a secret 610A, is coupled between the BIOS ROM 355 and the LPC bus 118. In FIG. 7B, an extended BIOS ROM 555 is shown coupled to the LPC bus 118. The extended BIOS ROM 555 includes the BIOS ROM 355 and the SMM ROM 550.

BIOS ROM 355 memory space in the computer system 100 may include anywhere from 128 kB to 4 MB, divided into 64-kB segments. An additional 4 MB or more of SMM ROM 550 memory space may be addressed via a paging mechanism, for example, where the second page of ROM memory space is within separate chips and selected by an additional set of identification select (IDSEL) pins. Each segment of the BIOS ROM 355 memory space and the SMM ROM 550 memory space may be lockable, and open at reset. In one embodiment, the access protection mechanism (i.e. the lock) is not implemented in the BIOS ROM 355 or SMM ROM 550, but, for example, in the south bridge 330C in the security hardware 370C, as previously described with respect to FIG. 6.

In one embodiment, the BIOS ROM 355 includes 4 MB of memory space. Read access to the BIOS ROM 355 memory space may be unrestricted at any time.
Write locks on the BIOS ROM 355 memory space may be OAR and cover the memory space from FFFF,FFFFh to FFC0,0000h, in 32-bit address space on the LPC bus 118.

In one embodiment, the crypto processor 305 is a specialized processor that includes specialized cryptographic hardware. In another embodiment, the crypto processor 305 includes a general-purpose processor programmed with cryptographic firmware or software. In still another embodiment, the crypto processor 305 includes a general-purpose processor modified with specialized cryptographic hardware. Selected methods that may use or include the crypto processor 305 are described with respect to FIGS. 25A-26, with an example of a prior art challenge-response authentication (or verification) method shown in FIG. 28.

Other embodiments are also contemplated. For example, the BIOS ROM 355 may be coupled to the LPC bus 118, and the crypto processor 305 may be coupled between the SMM ROM 550 and the LPC bus 118. Also, the crypto processor 305 may be coupled between the extended BIOS ROM 555 and the LPC bus 118.

FIG. 7C illustrates an embodiment of protected storage 605, according to one aspect of the present invention. As shown, the protected storage 605 is coupled to the LPC bus 118 and includes logic 609 and a secret 610B, in addition to its storage locations. The protected storage 605 may include memory, such as RAM, ROM, flash memory, etc., or other storage media, such as hard drives, CDROM storage, etc. Although shown as a single unit, the protected storage is also contemplated as a sub-system that includes separate components for storage and logic, such as shown in FIG. 7D. According to FIG. 7D, a crypto-processor 305, including a secret 610A, is coupled in front of a protected storage 605B. Access to the protected storage 605B is through the crypto-processor 305. The protected storage 605B includes data storage 608A, access logic 609B, a lock register 606, and code storage 607, including a secret 610B.

FIGS. 8A and 8B illustrate block diagrams of embodiments of a BIOS ROM 355 and an SMM ROM 550 for secure SMM operations, respectively, according to various aspects of the present invention. As shown in FIG. 8A, the BIOS ROM 355 may include data storage 608B, a secret 610C, and private memory 606.

As shown in FIG. 8B, the SMM ROM 550 may be divided into a plurality of SMM ROM blocks 605-615, a stored secret 620, a plurality of public ROM blocks 625-630, one or more reserved ROM blocks 635, one or more registers 640, and a monotonic counter 435B.

The plurality of SMM ROM blocks 605-615 may include an SMM ROM 0 block 605, an SMM ROM 1 block 610, and an SMM ROM 2 block 615. The plurality of public ROM blocks 625-630 may include a public ROM 0 block 625 and a public ROM 1 block 630.
One embodiment of access rights, lock status, and 32-bit address ranges in the LPC bus 118 space is given here in table form:

  ROM block         Read access    Write lock                  Address range
  SMM ROM 0 605     SMM only       Write once                  FFBx,1FFFh:FFBx,0000h
  SMM ROM 1 610     SMM only       Never erase                 FFBx,3FFFh:FFBx,2000h
  SMM ROM 2 615     SMM only       None                        FFBx,5FFFh:FFBx,4000h
  SMM counter 620   SMM only       None                        FFBx,7FFFh:FFBx,6000h
  Public 0 625      Unrestricted   Write once in SMM           FFBx,9FFFh:FFBx,8000h
  Public 1 630      Unrestricted   Never erase, write in SMM   FFBx,BFFFh:FFBx,A000h
  Reserved 635      N/A            N/A                         FFBx,DFFFh:FFBx,C000h
  Registers 640     N/A            N/A                         FFBx,FFFFh:FFBx,E000h

The 'x' in the address ranges given in the table may denote the strapping pin decode or its inverse. In one embodiment, the ROM blocks 605-615 and 625-630 in the table are each 64 kB in size. In one embodiment, the computer system may support up to 8 MB of extended BIOS ROM 555 storage, divided into sixteen pages of 512 kB each. In another embodiment, the memory address range from FFBx,FFFFh down to FFBx,0000h includes the plurality of SMM ROM blocks 605-615, the SMM counter 620, the plurality of public ROM blocks 625-630, the one or more registers 640, and the monotonic counter 435B.

The one or more reserved ROM blocks 635 may be used for future expansion. The one or more registers 640 may store additional data, as needed.

In one embodiment, the monotonic counter 435B is stored flat, such as a chain of 8-bit values in an 8K-byte ROM. This embodiment provides 8K bits that are counted by noting the number of changed bits (or the most significant bit that is different). It is noted that 8K bits stored flat translates into 13 bits binary (i.e. 8*1024 = 8192 = 2^13). The monotonic counter 435B is initially in the erased state, such as with all bits set to one. Any time the computer system is reset as a result of a power failure and there is an invalid RTC checksum, such as when the RTC battery 113 is not present, boot software inspects the monotonic counter 435B and updates it. The boot software may look for the most significant byte including at least one changed bit, such as a zero. Initially, byte 0 (zero) is chosen when the monotonic counter 435B is in the erased state. The RTC checksum 127 is typically calculated by boot code from the BIOS whenever it updates the CMOS RAM 126A in the RTC battery well 125. The RTC checksum 127 is then stored in the RTC RAM 126B, also in the RTC battery well 125, which also holds date and time data. Typically, the RTC RAM 126B may be 256 bytes in size.

Flat encoding of the monotonic counter 435B is preferred to other methods of encoding primarily when the monotonic counter 435B is stored in flash memory. Other methods of encoding may be preferred when other memory types are used to store the values of the monotonic counter 435B. One consideration in choosing the method of encoding is which method of encoding provides for maximum use of the underlying memory.

Continuing with the above embodiment for updating the monotonic counter 435B, the next most significant bit, in the most significant byte including at least one zero, is set to zero.
For example, if byte five of the monotonic counter 435B returns 0000,0000b and byte six of the monotonic counter 435B returns 1111,1000b, then the boot software will write byte six of the monotonic counter 435B as 1111,0000b. If byte five of the monotonic counter 435B returns 0000,0000b and byte six of the monotonic counter 435B returns 1111,1111b, then the boot software would write byte six of the monotonic counter 435B as 1111,1110b.

Reading the monotonic counter 435B as the most significant bits and the monotonic counter 435A shown in FIG. 6 as the least significant bits, a 45-bit monotonic counter 435 may be read to obtain an always-increasing 45-bit value, when the monotonic counter 435B provides 13 bits and the monotonic counter 435A provides 32 bits. In this embodiment, the monotonic counter 435A provides bytes zero, one, two, and three, while the monotonic counter 435B provides bytes four and five of the six-byte value. Numbers of bits other than 45 are likewise contemplated.

Two special conditions are contemplated. If the monotonic counter 435A is read when storing the default or erased value, such as all ones, then the monotonic counter 435B in the SMM ROM 550 is updated. If the monotonic counter 435B in the SMM ROM 550 is updated a predetermined number of times, such as 65,536 times, then the boot software must erase the monotonic counter 435B in the SMM ROM 550 and start over with the default value, e.g. all ones.

By way of example and not limitation, consider the monotonic counter 435A and the monotonic counter 435B each storing one byte of eight bits. For this example, the monotonic counter 435A, in the south bridge 330, returns '00001111', while the monotonic counter 435B, in the SMM ROM 550, returns '11110000'. The value from the flat-encoded monotonic counter 435B is converted to standard binary as '00000100b'. The 16-bit monotonic value becomes '0000010000001111b' when the binary value from the monotonic counter 435B is combined with the binary value from the monotonic counter 435A.

A flat encoding may advantageously allow for increased reliability if the monotonic counter 435B is stored in flash memory. Updating the monotonic counter 435B has no cost, while erasing the flash memory does have a cost in long-term reliability. The monotonic counter 435B should be stored in non-volatile memory. Other memory types contemplated include encapsulated RAM with an included power supply.

One use of the monotonic counters 435A and 435B is as a source for a nonce. Each nonce must be different. Differences may be predictable or unpredictable. Nonces may be used to help prevent replay attacks. When a message is encrypted, changing even one bit changes the encrypted message. Any strong encryption method distributes even a one-bit change extensively. A nonce may be used in a challenge-response method, such as described below.

Providing the monotonic counters 435A and 435B as two counters, instead of one, may advantageously allow for larger values while minimizing the number of bits stored in the non-volatile memory. Access to the monotonic counter 435A is typically faster than access to the monotonic counter 435B, so the monotonic counter 435A may be used independently when a fast access time is important, so long as the length of the monotonic value stored in the monotonic counter 435A is adequate for the desired purpose.
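A C sketch of reading and updating a flat-encoded counter follows. It is a minimal model under stated assumptions: an 8K-byte array stands in for the ROM, and zeros are taken to fill from the least significant bit of the lowest-numbered byte upward, consistent with the byte-six examples above; the function names are hypothetical.

    #include <stdint.h>
    #include <stddef.h>

    #define FLAT_BYTES 8192u    /* 8K-byte ROM holding the flat counter */

    /* Read: the count is the number of bits changed from the erased
     * state (all ones), so 8K bits encode a 13-bit binary value. */
    static uint16_t flat_counter_read(const uint8_t rom[])
    {
        uint32_t zeros = 0;
        for (size_t i = 0; i < FLAT_BYTES; i++)
            for (int bit = 0; bit < 8; bit++)
                if (!(rom[i] & (1u << bit)))
                    zeros++;       /* e.g. 1111,0000b contributes four */
        return (uint16_t)zeros;
    }

    /* Update: clear the next one bit, least significant bit first,
     * within the lowest-numbered byte that still holds a one; flash
     * memory can clear individual bits without an erase cycle. */
    static void flat_counter_update(uint8_t rom[])
    {
        for (size_t i = 0; i < FLAT_BYTES; i++)
            for (int bit = 0; bit < 8; bit++)
                if (rom[i] & (1u << bit)) {
                    rom[i] &= (uint8_t)~(1u << bit);
                    return;
                }
        /* All bits consumed: the counter must be erased back to all
         * ones and restarted, as described above. */
    }

The update never requires an erase until the counter is exhausted, which is exactly the long-term reliability advantage of the flat encoding noted above.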
FIGS. 9A and 9B illustrate block diagrams of embodiments of computer systems 800A and 800B that control the timing and duration of SMM, according to various aspects of the present invention. FIGS. 9A and 9B include a processor 805, a north bridge 810, memory 106, and the south bridge 330. The processor includes an SMM exit controller 806 and one or more SMM MSRs (machine specific registers) 807. The north bridge 810 includes a memory controller 815. The south bridge 330 includes the SMM timing controller 401 and the scratchpad RAM 440. The north bridge 810 is coupled between the processor 805 and the south bridge 330, to the processor 805 through a local bus 808 and to the south bridge 330 through the PCI bus 110. The north bridge 810 is coupled to receive the SMIACT# signal from the processor 805.

In the embodiment of FIG. 9A, the computer system 800A signals that the processor 805 is in SMM using standard processor signals (e.g. SMIACT# to the north bridge 810) and/or bus cycles on the local bus 808 and PCI bus 110. In the embodiment of FIG. 9B, the computer system 800B signals that the processor 805 is in SMM using standard processor signals (e.g. SMIACT#) to both the north bridge 810 and the south bridge 330. An exit SMM signal 404 is also shown between the SMM timing controller 401 and the SMM exit controller 806.

While the processor 805 is in SMM, the processor 805 knows that it is in SMM and asserts SMIACT# to the north bridge 810 and/or the south bridge 330. The processor 805 may, for example, set and read one or more hardware flags or signals associated with SMM. These hardware flags or signals may be in the processor 805, or in the north bridge 810. In one embodiment, the north bridge 810 receives the SMIACT# signal and, in response to receiving the SMIACT# signal, signals the south bridge 330 that the processor is in SMM by sending a special bus cycle or an encoded bus cycle over the PCI bus 110. In another embodiment, the SMIACT# signal is received directly by the south bridge 330.

In one embodiment, an SMM-specific hardware flag at an internal memory interface in the processor 805 is set when the processor 805 enters SMM. Any address call by the processor 805 is routed through the internal memory interface. The internal memory interface determines where the address call should be routed. If the SMM-specific hardware flag is set, then memory calls to SMM memory addresses are recognized as valid SMM memory calls. If the SMM-specific hardware flag is not set, then memory calls to SMM memory addresses are not recognized as valid SMM memory calls.

It is noted that other buses using other bus protocols may couple the processor 805, the north bridge 810, and the south bridge 330. These buses may use bus protocols that include a bus cycle that indicates that the computer system 800 is in SMM. It is also noted that a dedicated signal, such as the SMI# signal or another dedicated signal, may be used to indicate SMM.

The SMM exit controller 806 in the processor 805 is configured to receive a request to the processor 805 to exit SMM. In one embodiment, the SMM exit controller 806 is operable to exit SMM prior to completing the task for which the SMI# was originally asserted that led to the processor 805 being in SMM. Upon receiving the request to exit SMM, the SMM exit controller 806 is configured to read the contents of the one or more SMM MSRs 807 to obtain a jump location for a clean-up routine, preferably stored in ROM, in SMM memory space. The SMM MSRs 807 may also store one or more bits to indicate that an SMM routine has been interrupted and/or a re-entry point (e.g. an address in SMM memory space) in the interrupted SMM routine. The SMM exit controller 806 may be configured to store the one or more bits indicating that the SMM routine has been interrupted and the re-entry point.
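The state save and re-entry decision can be sketched in C. This is a minimal model assuming a hypothetical layout for the one or more SMM MSRs 807; the field names are illustrative, and the actual jump and RSM are abstracted away (see FIGS. 10A and 10B below).

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical contents of the one or more SMM MSRs 807. */
    struct smm_msrs {
        uint64_t cleanup_entry;  /* jump location of the clean-up routine */
        uint64_t reentry_point;  /* where to resume the interrupted work  */
        bool     interrupted;    /* set when SMM was exited early         */
    };

    /* Invoked when the exit request arrives before the SMI request that
     * invoked SMM has been fully serviced: record the unfinished state,
     * then jump to msrs->cleanup_entry and issue RSM. */
    static void smm_early_exit(struct smm_msrs *msrs, uint64_t resume_at)
    {
        msrs->interrupted   = true;
        msrs->reentry_point = resume_at;
    }

    /* Invoked at the next SMM entry to decide between starting a new
     * session and continuing a previously interrupted one. */
    static uint64_t smm_entry_target(struct smm_msrs *msrs,
                                     uint64_t new_session_entry)
    {
        if (msrs->interrupted) {
            msrs->interrupted = false;
            return msrs->reentry_point;  /* continue the old session */
        }
        return new_session_entry;        /* start a new SMM session  */
    }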
FIG. 10A illustrates a block diagram of one embodiment of a flowchart of a method for forcing the processor 805 out of SMM early, according to one aspect of the present invention. The method includes checking if the computer system is in SMM in decision block 905. If the computer system is not in SMM in decision block 905, then the method continues checking to determine if the computer system is in SMM in decision block 905. If the computer system is in SMM in decision block 905, then the method initiates the kick-out timer 407 in block 910.

The method next checks to determine if the kick-out timer 407 has expired in decision block 915. If the kick-out timer 407 has not expired, then the method continues checking to determine if the kick-out timer 407 has expired in decision block 915. If the kick-out timer 407 has expired in decision block 915, then the method transmits a request to the processor to exit SMM without completing the SMI request that invoked SMM, in block 920. The processor saves the state of the SMM session without finishing the SMM session and exits SMM, in block 925.

The request to the processor to exit SMM, in block 920, may include submitting an RSM (Resume from System Management mode) instruction, or another control signal delivered over the system bus, to the processor. Upon executing the RSM instruction, or receiving the control signal through the interface logic to the system bus, the processor exits SMM and the processor's previous state is restored from system management memory. The processor then resumes any application that was interrupted by SMM. In another embodiment, the request to the processor to exit SMM includes another device in the computer system, such as the south bridge, asserting a control signal, such as the exit SMM signal, to the processor to exit SMM.

The processor saving the SMM state, in block 925, may include setting a bit to indicate that the SMM session was not finished. If the SMM code has multiple entry points, then the processor may also save an indication of which entry point should be used upon re-entering SMM, to finish the unfinished SMM session. These indications may be saved in any of a number of places, such as the one or more SMM MSRs 807 or the scratchpad RAM 440. It is also contemplated that another specific storage location could be designed into or associated with the processor 805, the north bridge 810, the interrupt controller 365, and/or the south bridge 330.

FIG. 10B illustrates a block diagram of an embodiment of a flowchart of a method for reinitiating SMM a preselected period of time after the early termination of SMM, according to one aspect of the present invention. It is noted that FIG. 10B may be a continuation of the method shown in FIG. 10A, or a stand-alone method. The method of FIG. 10B includes initiating the restart timer 408, in block 1010. The method checks if the restart timer 408 has expired, in decision block 1015.
If the restart timer 408 has not expired, then the method continues checking to determine if the restart timer 408 has expired, in decision block 1015.

If the restart timer 408 has expired in decision block 1015, then the method asserts an SMI request to the processor, in block 1020. The processor enters SMM and looks for an entry indicating that a previous SMM session has been ended prior to fulfilling the previous SMM request, in block 1025. The entry may be, as examples, a flag bit that has been set, or a stored jump location in a default location. The method checks for an unfinished SMM session in decision block 1030. If there is no unfinished SMM session in decision block 1030, then the method starts a new SMM session, in block 1035. If there is an unfinished SMM session in decision block 1030, then the method reads the saved status information about the previous SMM session, in block 1040, and continues the previous SMM session, in block 1045. It is noted that the method may make use of the saved status information, from block 1040, when continuing the previous SMM session, in block 1045.

FIGS. 11A and 11B illustrate flowcharts of embodiments of methods 1100A and 1100B for updating the monotonic counter 435B, which may be stored in the SMM ROM 550, according to various aspects of the present invention. The method 1100A, shown in FIG. 11A, includes checking the RTC checksum, in block 1105. In decision block 1110, if the RTC checksum is valid, then the method 1100A exits. In decision block 1110, if the RTC checksum is not valid, then the method 1100A inspects the monotonic counter 435B in the SMM ROM 550, in block 1115. In decision block 1120A, the method checks if the value stored in the monotonic counter 435B in the SMM ROM 550 is the default (e.g. reset or rollover) value.

In decision block 1120A, if the value stored in the monotonic counter 435B in the SMM ROM 550 is the default value, then the method 1100A updates the value stored in the monotonic counter 435B to an incremental value, in block 1130A, preferably the smallest possible incremental value. In decision block 1120A, if the value stored in the monotonic counter 435B in the SMM ROM 550 is not equal to the default value, then the method 1100A identifies the value stored in the monotonic counter 435B in the SMM ROM 550, in block 1125A. After identifying the value stored, in block 1125A, the method 1100A updates the value stored in the monotonic counter 435B in the SMM ROM 550 by the incremental value, in block 1135A.

The method 1100B, shown in FIG. 11B, includes checking the RTC checksum, in block 1105. In decision block 1110, if the RTC checksum is valid, then the method 1100B exits. In decision block 1110, if the RTC checksum is not valid, then the method 1100B inspects the monotonic counter 435B in the SMM ROM 550, in block 1115. In decision block 1120B, the method checks if the values stored in the monotonic counter 435B in the SMM ROM 550 are all ones.

In decision block 1120B, if all values in the monotonic counter 435B in the SMM ROM 550 are equal to one (i.e. the reset value), then the method 1100B updates the first byte so that a zero is stored as the least significant bit, in block 1130B. In decision block 1120B, if all values in the monotonic counter 435B in the SMM ROM 550 are not equal to one, then the method 1100B identifies the highest numbered byte with a zero in its most significant bit location, in block 1125B, or the first byte if no byte has a zero in the most significant bit position.
After identifying the highest numbered byte with a zero in its most significant bit location, or the first byte, in block 1125B, the method 1100B updates the next highest numbered byte, or the first byte, by setting a zero in its next most significant bit location that does not already hold a zero, in block 1135B.

FIGS. 12A and 12B illustrate flowcharts of embodiments of methods 1200A and 1200B for updating the monotonic counter 435A in the south bridge 330, according to various aspects of the present invention. The method 1200A checks to see if the value stored in the monotonic counter 435A in the south bridge 330 is the maximum value that can be stored, in decision block 1205A. If the value stored in the monotonic counter 435A in the south bridge 330 is not the maximum value, in decision block 1205A, then the method 1200A exits. If the value stored in the monotonic counter 435A in the south bridge 330 is the maximum value that can be stored, in decision block 1205A, then the method 1200A inspects the monotonic counter 435B in the SMM ROM 550, in block 1210. The method 1200A checks to see if the value stored in the monotonic counter 435B in the SMM ROM 550 is the default (or reset) value, in decision block 1215A.

If, in decision block 1215A, the value stored in the monotonic counter 435B in the SMM ROM 550 is the default value, then the method 1200A updates the value stored in the monotonic counter 435B in the SMM ROM 550 with an incremental value, in block 1225A, preferably the smallest possible incremental value. If, in decision block 1215A, the value stored in the monotonic counter 435B in the SMM ROM 550 is not the default value, then the method 1200A identifies the value stored in the monotonic counter 435B in the SMM ROM 550, in block 1220A. After the method 1200A identifies the value stored, in block 1220A, the method 1200A updates the value stored in the monotonic counter 435B in the SMM ROM 550 by the incremental value, in block 1230A.

The method 1200B, shown in FIG. 12B, checks to see if all values in the monotonic counter 435A in the south bridge 330 are equal to one (i.e. the reset value), in decision block 1205B. If all values in the monotonic counter 435A in the south bridge 330 are not equal to one, in decision block 1205B, then the method 1200B exits. If all values in the monotonic counter 435A in the south bridge 330 are equal to one, in decision block 1205B, then the method 1200B inspects the monotonic counter 435B in the SMM ROM 550, in block 1210. The method 1200B checks to see if all values in the monotonic counter 435B in the SMM ROM 550 are equal to one, in decision block 1215B.

If, in decision block 1215B, all values in the monotonic counter 435B in the SMM ROM 550 are equal to one, then the method 1200B updates the first byte with a zero in its least significant bit, in block 1225B. If, in decision block 1215B, all values in the monotonic counter 435B in the SMM ROM 550 are not equal to one, then the method 1200B identifies the highest numbered byte with a zero in its most significant bit location, in block 1220B, or the first byte if no byte has a zero in the most significant bit location. After the method 1200B identifies the highest numbered byte with a zero in its most significant bit location, or the first byte, in block 1220B, the method 1200B updates the next highest numbered byte, or the first byte, with a zero in the next most significant bit location, in block 1230B.
FIG. 13A and FIG. 13B illustrate block diagrams of flowcharts of embodiments of methods 1300A and 1300B for providing a value from a monotonic counter 435 in the computer system, according to various aspects of the present invention. The method 1300A receives a request for a value from the monotonic counter 435, in block 1305. The method 1300A requests the value from the monotonic counter 435A in the south bridge 330, in block 1310. The method 1300A updates the value in the monotonic counter 435A in the south bridge 330, in block 1315. The method 1300A checks the updated value from the monotonic counter 435A in the south bridge 330 for a rollover value, in block 1320.

In decision block 1325, if the rollover value has been reached, then the method 1300A updates the value in the monotonic counter 435B in the SMM ROM 550, in block 1330. If the rollover value has not been reached in decision block 1325, or if the method 1300A has updated the value in the monotonic counter 435B in the SMM ROM 550 in block 1330, then the method 1300A provides the updated value from the monotonic counter 435A in the south bridge 330, in block 1335.

The method 1300B requests the value from the monotonic counter 435B in the SMM ROM 550, in block 1340. The method 1300B receives the value from the monotonic counter 435B in the SMM ROM 550, in block 1345. The value from the monotonic counter 435A in the south bridge 330 is combined with the value from the monotonic counter 435B in the SMM ROM 550, in block 1350. The method 1300B provides the combined value in response to the request for the value from the monotonic counter, in block 1355.

As noted above, the monotonic counter 435A in the south bridge 330 may include a 32-bit value, while the monotonic counter 435B in the SMM ROM 550 may include a 13-bit value. The returned value from the monotonic counter 435, provided in response to the request for the value of the monotonic counter, would then include a 45-bit value.

It is noted that the 32-bit value from the monotonic counter 435A in the south bridge 330 may be provided by software instead of being read from the south bridge 330. In the software embodiment, the software itself provides a 32-bit, always increasing, i.e. monotonic, value, which is combined with the value from the monotonic counter 435B in the SMM ROM 550 to provide a unique 45-bit value. It is also noted that the sizes of the monotonic counters 435A and 435B, in the south bridge 330 and in the SMM ROM 550, respectively, may be designed with other bit sizes, as desired.

Although the methods 1100A, 1100B, 1200A, and 1200B show updates to the monotonic counters 435A and 435B as being in-line with monotonic value requests, it is also contemplated that software or hardware may be used to update the monotonic counters 435A and 435B separately from the monotonic value requests. Such updates could occur, for example, after the monotonic value request that leads to the monotonic value reaching the rollover value.
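A C sketch of the combination step follows, under the assumption that the flat value of the monotonic counter 435B has already been converted to standard binary as described above; the function name is hypothetical.

    #include <stdint.h>

    /* Combine the 32-bit value from the monotonic counter 435A (least
     * significant bits) with the 13-bit value from the monotonic
     * counter 435B (most significant bits) into one always-increasing
     * 45-bit value. */
    static uint64_t monotonic_combined(uint32_t mc435a, uint16_t mc435b)
    {
        return ((uint64_t)(mc435b & 0x1FFFu) << 32) | mc435a;
    }

In the one-byte example given earlier, the same construction combines the converted '00000100b' from the monotonic counter 435B with '00001111b' from the monotonic counter 435A to yield the 16-bit analogue '0000010000001111b'.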
FIGS. 14A and 14B illustrate block diagrams of embodiments of processors 805A and 805B, including random number generators 455A and 455B using entropy registers 1410, according to one aspect of the present invention. The RNG 455 in FIG. 6 may also use an entropy register 1410, similar to what is shown here. FIG. 14A shows an embodiment of a processor 805A, which includes a plurality of performance registers 1405A-1405N coupled through a plurality of bit lines 1406 to a random number generator 455A. FIG. 14B shows another embodiment of a processor 805B, which includes the plurality of performance registers 1405A-1405N coupled through a plurality of bit lines 1406 to a random number generator 455B.

Common to both FIGS. 14A and 14B, the performance registers 1405A through 1405N each store a value indicative of a different performance metric. Exemplary performance metrics may include first-level-cache hit rate, second-level-cache hit rate, third-level-cache hit rate, branch target cache hit rate, and/or other model specific registers (MSRs), such as those used for measuring performance. In one embodiment, the performance registers include any register that updates its least significant bit at a rate asynchronous to the local and/or system clock.

In one embodiment, each of the plurality of bit lines 1406 couples between the least significant bit entry in one of the performance registers 1405 and an entry in an entropy register 1410 in the RNG 455. Each entry of the entropy register 1410 may couple to a different one of the performance registers 1405. In another embodiment, each entry of the entropy register 1410 may couple to one or more entries in one or more of the performance registers 1405 or other sources of single bits within the processor 805.

FIG. 14A includes the RNG 455A, which also includes an entropy control unit 1415 coupled to receive, over a request line (REQ), a request from the processor 805A for a random number to be provided over output lines (RN). The entropy control unit 1415 is configured to assert a control signal (C) to the entropy register 1410 and read out the value in the entropy register 1410 over the data lines (D). The entropy control unit 1415 is further configured to provide the random number from the entropy register 1410 over the output lines (RN) in response to the request line (REQ) being asserted by the processor 805A.

FIG. 14B includes the RNG 455B, which includes the entropy register 1410. The entropy register 1410 of FIG. 14B may be read by the processor 805B. The entropy register 1410 latches the values received over the plurality of bit lines 1406 upon receiving a clocking signal (CLK). The random number from the entropy register 1410 may then be read out over the output lines (RN) by the processor 805B.

It is noted that the RNG 455A and the RNG 455B may be included in devices in the computer system other than the processor 805. Contemplated locations for the RNG 455A and the RNG 455B include the north bridge 810 and the south bridge 330. It is also noted that the performance registers 1405 are not normally accessible to a user of the processor 805 once the processor 805 is in a computer system, as the performance registers 1405 are primarily used for testing during the design and engineering stages of the manufacturing process. This may advantageously allow for better randomness with less likelihood of tampering with the random number obtained from the entropy register 1410.
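In software terms, latching the entropy register amounts to collecting one least significant bit per bit line. The sketch below is a minimal model assuming a 32-entry entropy register; the array standing in for the performance registers 1405 and the function name are hypothetical.

    #include <stdint.h>

    #define ENTROPY_BITS 32

    /* Snapshot of the performance registers 1405A-1405N; in hardware only
     * the least significant bit of each register feeds a bit line 1406. */
    static uint32_t perf_regs[ENTROPY_BITS];

    /* Latch the LSB of each performance register into the entropy
     * register 1410 on the clocking signal, one register per bit line. */
    static uint32_t entropy_register_latch(void)
    {
        uint32_t entropy = 0;
        for (int i = 0; i < ENTROPY_BITS; i++)
            entropy |= (perf_regs[i] & 1u) << i;
        return entropy;   /* read out over the RN output lines */
    }

Because each source register updates asynchronously to the sampling clock, the latched bits are difficult to predict or reproduce, which is the property the entropy register relies on.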
FIG. 15 illustrates a block diagram of another embodiment of a random number generator 455C, according to one aspect of the present invention. The RNG 455C includes a plurality of ring oscillators (RO0-RO7) 1514A-1514H, a linear feedback shift register (LFSR) 1515, a digital-to-analog converter (D/A) 1520, a voltage controlled oscillator (VCO) 1525, a sample and hold circuit 1530, a cyclic redundancy code (CRC) generator 1535, a self test circuit 1511, a multiplexer (MUX) 1545, and a counter 1540.

The CLK signal 1505 is received within the RNG 455C by the LFSR 1515, the sample and hold circuit 1530, the CRC 1535, and the counter 1540. Either a system reset signal (SYSTEM_RESET) 1507 or a read strobe (READ_STROBE) is received by the counter 1540 at the reset (RST) input port. The LFSR 1515 receives the output signals of each of the ring oscillators (RO0-RO7) 1514A-1514H at one input port (RO[7:0]) and the output signals of the sample and hold circuit 1530 at another input (IN) terminal. A plurality of values are provided by the LFSR 1515 at the output (OUT) terminal. As shown, one of the plurality of values delivered by the LFSR 1515 is XORed with the CLK signal 1505 before all of the plurality of values provided by the LFSR 1515 are delivered to the D/A 1520. The analog output signal of the D/A 1520 is provided as a control signal to the VCO 1525.

The output of the VCO 1525 is provided to the input (IN) terminal of the sample and hold circuit 1530 and clocked on the CLK signal 1505. The output (OUT) signal of the sample and hold circuit 1530 is provided to the input terminal of the CRC 1535 and clocked on the CLK signal 1505, as well as to the IN terminal of the LFSR 1515, as described above. A plurality of output values is provided to the MUX 1545 through the CRC output port (OUT). The MUX 1545 selects between the output values of the CRC 1535 and ground (GND). The MUX 1545 provides the random number over output lines (RN[31:0]).

A request for a random number over the read strobe line (READ_STROBE) results in the counter 1540 counting a prerequisite number of clock cycles prior to asserting a signal (FULL) to the selection input (SEL) of the MUX 1545. The FULL signal may also be read by the requestor of the random number as a signal (DONE) that the requested random number is available over the RN[31:0] lines. The system reset signal 1507 also asserts a signal on the reset input terminal of the counter 1540. The self test circuit 1511 may be present to provide a known value to the MUX 1545 to be read on the RN[31:0] lines in place of a random number generated by the RNG 455C.

The RNG 455C is preferably configured to meet all appropriate requirements for an RNG in Federal Information Processing Standards Publication FIPS-140-1, entitled SECURITY REQUIREMENTS FOR CRYPTOGRAPHIC MODULES, issued on Jan. 11, 1994, by the U.S. National Institute of Standards and Technology (NIST), which is hereby incorporated by reference. The Federal Information Processing Standards Publication Series of the NIST is the official series of publications relating to standards and guidelines adopted and promulgated under the provisions of Section 111(d) of the Federal Property and Administrative Services Act of 1949 as amended by the Computer Security Act of 1987, Public Law 100-235.

It is noted that for increased randomness, the ring oscillators 1514A-1514H may be operated at frequencies and phases that do not correlate between or among the plurality of ring oscillators 1514. It is also noted that the RNG 455C may be included in locations other than the south bridge 330. Contemplated locations include the processor 805 and the north bridge 810.
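The feedback polynomial of the LFSR 1515 is not specified above, but a single step of a generic 32-bit Galois LFSR illustrates the structure; the tap constant and the XOR injection of the ring oscillator bits below are illustrative assumptions, not details of the embodiment.

    #include <stdint.h>

    /* One step of a 32-bit Galois LFSR with external bits mixed in. */
    static uint32_t lfsr_step(uint32_t state, uint32_t injected_bits)
    {
        state ^= injected_bits;      /* mix in RO[7:0] / sampled input */
        uint32_t lsb = state & 1u;
        state >>= 1;
        if (lsb)
            state ^= 0xA3000000u;    /* illustrative feedback taps     */
        return state;
    }

Seeding the shift register from free-running ring oscillators, and then passing its output through the D/A 1520, VCO 1525, sample and hold circuit 1530, and CRC 1535, layers several decorrelating stages between the raw oscillator bits and the value read over RN[31:0].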
FIGS. 16A-16G illustrate flowcharts of embodiments of methods 1600A-1600G for accessing the security hardware 370, which may be locked, according to various aspects of the present invention. FIG. 16A shows a method 1600A of locking the security hardware 370 as a part of the boot (or cold reboot) process. FIG. 16B shows a method 1600B of unlocking and later locking the security hardware 370 as a part of a reboot (or warm boot) process. FIG. 16C shows a method 1600C of checking for rights to lock or unlock the security hardware 370 and checking a bit to disable changing the rights. FIG. 16D shows a method 1600D of attempting to use the security hardware 370 while the computer system 100 is not in SMM. FIG. 16E shows a method 1600E of checking and/or setting the lock on the OAR access locks 460 and checking the bit to disable changing the lock. FIG. 16F shows a method 1600F of unlocking and later locking the security hardware 370 while the computer system 100 is in SMM. FIG. 16G shows a method 1600G of checking for rights to unlock and later lock the security hardware 370 while the computer system 100 is in SMM.

Referring now to FIG. 16A, the method 1600A includes the processor executing the BIOS code instructions from SMM space in the RAM memory, in block 1620. The BIOS code, executed by the processor, performs a power-on self test (POST), in block 1625. The method 1600A includes accessing the security hardware 370, in block 1630. The accesses to the security hardware 370 may initiate an unlocking of the security hardware 370, if the security hardware 370 is not open-at-reset. The accesses to the security hardware 370 may be by the BIOS code or by another device or subsystem in the computer system 100, or from outside the computer system 100, if allowed. The method 1600A may optionally include entering a BIOS management mode, in block 1632. The BIOS management mode could allow, for example, remote boot instructions, remote or secure permission to continue the boot sequence, other remote operations or remote hardware accesses or set-ups, or choosing between or among boot choices, such as hardware configurations and/or operating systems or other software choices.

The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc., and displays a start-up information screen, in block 1635. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code may perform additional system tests, such as a RAM memory count-up test, and a system inventory, including identifying COM (serial) and LPT (parallel) ports, in block 1640. The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of devices identified, in block 1645.

The method includes closing the access locks to the security hardware, in block 1650. The BIOS code or another device or agent in the computer system 100 may close the access locks. The BIOS code identifies the boot location, and the corresponding boot sector, in block 1655. The boot location may be on a floppy drive, a hard drive, a CDROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 1660.
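A minimal sketch, assuming a simple lock flag, of the ordering that method 1600A enforces: the security hardware is usable during POST and BIOS set-up, and the access locks are closed before control passes to the boot sector. The function and class names are hypothetical.

```python
# Sketch of the boot-time ordering in method 1600A (names hypothetical):
# security hardware is accessible during BIOS set-up, then the access
# locks are closed before the boot sector ever runs.

class AccessLocks:
    def __init__(self):
        self.open = True    # open-at-reset (OAR) behavior assumed

    def close(self):
        self.open = False

def access_security_hardware(locks):
    if not locks.open:
        raise PermissionError("access locks are closed")
    return "security hardware access granted"

def boot(locks):
    print(access_security_hardware(locks))  # block 1630: BIOS may use it
    locks.close()                            # block 1650: close before boot
    call_boot_sector(locks)                  # block 1660

def call_boot_sector(locks):
    # The operating system loaded here can no longer reach the
    # security hardware directly.
    try:
        access_security_hardware(locks)
    except PermissionError as e:
        print(f"OS access denied: {e}")

boot(AccessLocks())
```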
Referring now to FIG. 16B, the method 1600B includes opening the access locks to the security hardware, in block 1615. The processor executes the BIOS code instructions from SMM space in the RAM memory, in block 1620. The computer system accesses the security hardware 370 while in SMM, while booting, in block 1630. The method 1600B may optionally include entering a BIOS management mode, in block 1632.

The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc., and displays a start-up information screen, in block 1635. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of devices identified, in block 1645.

The BIOS code closes the access locks to the security hardware, in block 1650. The BIOS code identifies the boot location, and the corresponding boot sector, in block 1655. The boot location may be on a floppy drive, a hard drive, a CDROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 1660.

Turning now to FIG. 16C, the method 1600C includes deciding whether to set the OAR-lock, in decision block 1646. The OAR-lock in decision block 1646 may correspond to the first indicator described above with respect to FIG. 6. The OAR-lock in decision block 1646 may also correspond to setting the OAR-lock override bit 1750 described below with respect to FIG. 17D. If the decision is made to set the OAR-lock, then, according to one embodiment, all access to the security hardware 370 is blocked, in block 1647. If the decision is made not to set the OAR-lock, then the method 1600C moves to decision block 1648. In decision block 1648, the method 1600C decides whether to set the OAR-lock change bit. The OAR-lock change bit in decision block 1648 may correspond to the second indicator described above with respect to FIG. 6. The OAR-lock change bit in decision block 1648 may also correspond to setting the change OAR-lock override bit 1755 described below with respect to FIG. 17D. If the decision is made to set the OAR-lock change bit, in decision block 1648, then, according to one embodiment, the OAR-lock cannot be changed thereafter, as changes to the OAR-lock are themselves locked out, in block 1649.

Turning now to FIG. 16D, the method 1600D includes a processor, such as the processors 102, 805, etc., operating in a mode that is not SMM, in block 1604. In block 1606, code being processed by the processor attempts to access any part of the security hardware 370, or other hardware whose access may require a check of an access lock similar to the access locks 460. The method checks, at decision block 1607, to see if the security hardware 370 is available. If the security hardware 370 is not available, at decision block 1607, then the method 1600D exits or returns. If the security hardware 370 is available, at decision block 1607, then the method 1600D accesses the security hardware 370, at block 1630. The method optionally closes the access locks to the security hardware, if necessary, at block 1650.
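The decision structure of method 1600E (described next) lends itself to a short sketch. The following Python fragment, with hypothetical helper names, shows the two-level check: a global lockout first, then a per-device lock that may be changeable only with authorization.

```python
# Sketch of the decision flow of method 1600E (helper names hypothetical):
# a global lockout is consulted first, then the per-device access lock,
# then whether that lock may be changed, and finally user authorization.

def try_access(hw, all_locked, lock_changeable, authorize):
    """Return True if access to `hw` may proceed."""
    if all_locked or hw.locked:                 # blocks 1690 and 1691
        if not lock_changeable(hw):             # block 1692
            return False                        # abort the access
        if not authorize(hw):                   # block 1693
            return False                        # abort the access
        hw.locked = False                       # block 1694: unlock
    return True

class SecurityDevice:
    def __init__(self, name, locked):
        self.name, self.locked = name, locked

rng = SecurityDevice("RNG", locked=True)
ok = try_access(
    rng,
    all_locked=False,
    lock_changeable=lambda hw: True,
    authorize=lambda hw: True,   # e.g. a challenge-response exchange
)
print("access granted" if ok else "access aborted")
```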
Turning now to FIG. 16E, the method 1600E includes an embodiment of decision block 1607 from FIG. 16D. The method 1600E includes checking if access to all security hardware is locked out, i.e., forbidden, at decision block 1690. If access to all security hardware is locked out, then at decision block 1690 the method 1600E moves to decision block 1692. If access to all security hardware is not locked out, then the method 1600E moves to decision block 1691. In decision block 1691, the method 1600E checks if the requested security hardware is locked out (e.g., separately, using one or more access locks). If the requested security hardware is locked out, then the method 1600E moves to decision block 1692. If the requested security hardware is not locked out, then the method 1600E moves directly to block 1693. In decision block 1692, the method 1600E checks if the access lock for the requested security hardware can be changed, e.g., unlocked. If the access lock for the requested security hardware cannot be changed, then in decision block 1692 the method 1600E aborts the access to the security hardware. If the access lock for the requested security hardware can be changed, then the method 1600E requests authorization, such as from a user, to change the access lock for the requested security hardware, in decision block 1693. If the authorization to change the access lock for the requested security hardware is not given, then the method 1600E aborts the access to the security hardware. If the authorization to change the access lock for the requested security hardware is given, then the method 1600E moves to block 1694 and changes the lock to allow access to the requested security hardware.

It is noted that any authorization method described herein may be used in decision block 1693. Any other authorization methods known in the art that have equivalent or better security properties in the presence of an observer may also be used.

Turning now to FIG. 16F, the method 1600F includes the processor loading code instructions into SMM space in the RAM memory, in block 1605. For example, loading code instructions into SMM space may occur in response to an SMI#. The access locks to the security hardware are opened in block 1615. The opening of the access locks may be through the SMM code instructions or through a hardware mechanism, or both.

The processor processes the code instructions from SMM space in the RAM memory, in block 1620. It is noted that the SMM timing controller 401, described above, may interrupt the processing of the code instructions. The method 1600F includes accessing the security hardware 370, in block 1630. As the computer system is in SMM and the access locks have been opened, in block 1615, the security hardware is available to most or all of the subsystems of the computer system 100 (or 800), as desired.

The method 1600F includes closing the access locks to the security hardware 370, in block 1650. The processor reloads the previous state and continues operating, in block 1665. It is noted that the processing of the SMM code instructions, in block 1620, may continue while the actions described in block 1630 occur. Preferably, the actions described in block 1650 occur after the processing of the SMM code instructions, in block 1620, has ceased. The processing may have finished or have been interrupted.
Turning now to FIG. 16G, the method 1600G includes the processor loading code instructions into SMM space in the RAM memory, in block 1605. For example, the loading of code instructions into SMM space may occur in response to an SMI#. The method 1600G next checks if the security hardware is available, in decision block 1607. If the security hardware is not available, then in decision block 1607 the method 1600G aborts the access to the security hardware. If the security hardware is available, then the method 1600G continues with block 1620.

The processor executes the code instructions from SMM space in the RAM memory, in block 1620. It is noted that the SMM timing controller 401, described above, may interrupt the processing of the code instructions. The method 1600G includes accessing the security hardware 370, in block 1630. As the computer system is in SMM and the access locks are open, as determined in decision block 1607, the security hardware is available to most or all of the subsystems of the computer system 100 (or 800), as desired.

The method 1600G includes closing the access locks to the security hardware 370, in block 1650. The processor reloads the previous state and continues operating, in block 1665. It is noted that the execution of the SMM code instructions, in block 1620, may continue while the actions described in block 1630 occur. Preferably, the actions described in block 1650 occur after the processing of the SMM code instructions, in block 1620, has ceased. The processing may have finished or have been interrupted.

It is noted that other processes of locking and unlocking the security hardware 370, other than the access locks, may be used. The methods 1600A-1600G are intended to extend to those other processes.

For the purposes of this disclosure, the computer system is considered to have two operating modes, normal and SMM. There are boot phases that are not in SMM, but they are, by definition, as trusted as SMM, and therefore considered equivalent to SMM herein. The boot code configures and arranges how SMM will work. SMM derives its trustworthiness from the trustworthiness of the boot code. It is contemplated that the standard boot sequence could be varied. Variations include a transition to a setup environment where the user may have the opportunity to input parameters. The input parameters may, for example, modify the BIOS code. Most setup environments return to reset before loading the operating system and operating in normal mode. This is a form of maintenance mode that is an alternative to loading the operating system and is not part of the normal mode. As contemplated, the access locks would not be set in this mode. It would be part of the boot process and as trusted as SMM, although security measures could be used if remote accesses are possible inside the setup environment.

FIGS. 17A, 17B, and 17C illustrate block diagrams of embodiments 460A, 460B, and 460C of the access locks 460 shown in FIG. 6. In FIG. 17D, a block diagram of an embodiment of the OAR override register 445, from FIG. 6, is shown. In the embodiment 460A shown in FIG. 17A, the one or more access locks 460 include a sequester bit register 1705. The bit stored in the sequester bit register 1705 may be set or cleared as a flag. In the embodiment 460B shown in FIG. 17B, the one or more access locks 460 include two or more sequester registers configured to store two or more sequestering bits to lock or unlock all of the devices within the security hardware 370. The additional bits beyond the sequester bit stored in the sequester register 1705 allow for flag bits that lock and unlock privileges separately. For example, a write privilege could be locked while a read privilege is unlocked. In the embodiment of FIG. 17C, the one or more access locks 460 include one or more sequester registers 1715A-1715N for each device within the security hardware 370C.
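A minimal sketch, assuming a two-bit read/write encoding, of the per-device sequester registers of FIG. 17C together with the override behavior described next for FIG. 17D. The bit layout and names are illustrative assumptions, not the layout of the registers 1705-1755.

```python
# Illustrative model of per-device access locks (FIG. 17C style) with an
# OAR-lock override (FIG. 17D style). Bit layout is an assumption.

READ_LOCK, WRITE_LOCK = 0b01, 0b10

class AccessLocks:
    def __init__(self, devices):
        # One sequester register per security device.
        self.regs = {d: READ_LOCK | WRITE_LOCK for d in devices}
        self.oar_lock_override = False    # if set, locks are ignored and
                                          # the hardware is never available
        self.change_override_locked = False

    def unlock(self, device, privilege):
        self.regs[device] &= ~privilege

    def allowed(self, device, privilege):
        if self.oar_lock_override:
            return False                  # preferred never-available policy
        return not (self.regs[device] & privilege)

locks = AccessLocks(["RNG", "crypto-processor"])
locks.unlock("RNG", READ_LOCK)            # read privilege opened separately
print(locks.allowed("RNG", READ_LOCK))    # True: read unlocked
print(locks.allowed("RNG", WRITE_LOCK))   # False: write still sequestered
```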
In FIG. 17D, the OAR override 445 includes an OAR-lock override register 1750 that stores at least one OAR-lock override bit, and a change OAR-lock override register 1755 that stores at least one change OAR-lock override bit. According to one embodiment of the present invention, if the OAR-lock override bit is not set, then access to the security hardware 370 is determined by the settings of the access locks 460. If the OAR-lock override bit is set, then the access locks 460 are ignored in favor of the security hardware 370 being either always available or never available, depending on the implementation. Preferably, the security hardware is never available when the OAR-lock override bit is set. The setting of the OAR-lock override bit may be changed in SMM (or with authorization) unless the change OAR-lock override bit is set. Preferably, the change OAR-lock override bit is OAR, similar to one embodiment of the access locks 460, and may be set, in various embodiments, with the access locks 460 at boot time, such as in block 1650.

FIG. 18A illustrates a prior art flowchart of an SMM program 1800A. The prior art SMM program 1800A starts at 1805, includes one or more instructions for execution in SMM, in block 1810A, and ends at 1895 without interruption. In other words, the prior art SMM program 1800A is uninterruptible and has no entry points other than the start at 1805. There are also no reasonable exit points, barring processor failure, other than the stop at 1895.

FIG. 18B illustrates a flowchart of an embodiment of operations of an interruptible and re-enterable SMM program 1800B, according to one aspect of the present invention. In contrast to the prior art SMM program 1800A, the interruptible and re-enterable SMM program 1800B includes a start at 1805, one or more instructions for execution in SMM, in block 1810B, an entry/exit point 1815, one or more instructions for execution in SMM, in block 1820, and the stop at 1895.

Also in contrast to the prior art SMM program 1800A, FIG. 18C illustrates an embodiment of operation of a computer system running the interruptible and re-enterable SMM program 1800B, according to one aspect of the present invention. The operations 1800C of the computer system include a start 1805. The operations also include receiving a request to enter SMM, at 1810, and saving the system state, at 1815. The method checks, at 1820, for a saved SMM state, as could be found from exiting the SMM program 1800B at 1875. If no saved SMM state is found at 1820, then the method loads the requested default SMM state, at 1825. If a saved SMM state is found at 1820, then the method loads the saved SMM state, at 1830.

The method 1800C executes the loaded SMM state, at 1835, either the default state from 1825 or the saved state from 1830. If the SMM processing is finished, at 1840, then the method moves to 1855 and exits SMM. If the SMM processing is not finished, then the method continues execution of the SMM state, so long as no exit request is received at 1845. If an exit request is received at 1845, then the method saves the current SMM state, at 1850, and exits SMM, at 1855. The saved system state is reloaded at 1860, and the method ends at the stop 1895.
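The save-and-resume behavior of the operations 1800C can be sketched compactly. The following Python fragment, a sketch only, models SMM work as a resumable sequence of steps whose saved state survives an exit request; modeling the saved state as a step index is an illustrative simplification.

```python
# Sketch of interruptible, re-enterable SMM processing (operations 1800C).
# Work is modeled as numbered steps; the saved state is just the index of
# the next step to run, which is an illustrative simplification.

saved_smm_state = None  # persists across SMM entries (blocks 1850/1830)

def enter_smm(steps, exit_requested):
    global saved_smm_state
    # Blocks 1820/1825/1830: resume from a saved state if one exists.
    index = saved_smm_state if saved_smm_state is not None else 0
    saved_smm_state = None
    while index < len(steps):
        if exit_requested():              # decision 1845
            saved_smm_state = index       # block 1850: save and exit
            return "exited early"
        steps[index]()                    # block 1835: execute
        index += 1
    return "finished"                     # decision 1840

work = [lambda i=i: print(f"SMM step {i}") for i in range(4)]
calls = iter([False, False, True])        # exit request on the third check
print(enter_smm(work, lambda: next(calls)))   # runs steps 0-1, then exits
print(enter_smm(work, lambda: False))         # re-enters and finishes 2-3
```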
While only one entry/exit point 1815 is shown in the embodiment of FIG. 18B, other embodiments may include two or more entry/exit points 1815 in an SMM program 1800B or in the operations of the method 1800C shown in FIG. 18C. In these embodiments, each entry/exit point 1815 would have one or more instructions for execution in SMM, similar to blocks 1810B and 1820, both before and after the entry/exit point 1815.

For example, in one embodiment, block 1810B includes one instruction for execution in SMM, followed by an entry/exit point 1815A. Entry/exit point 1815A is followed by another single instruction for execution in SMM, in block 1820A. Block 1820A is followed by another entry/exit point 1815B. Entry/exit point 1815B is followed by another single instruction for execution in SMM, in block 1820B. Block 1820B is followed by the stop 1895. While a single instruction in blocks 1810B, 1820A, and 1820B may be small, the concept of regularly spaced entry/exit points 1815 is illustrated. In other embodiments, two, three, or more instructions for execution in SMM may be substituted for the single instructions. In still other embodiments, there may be a variable number of instructions for execution in SMM in blocks 1810B and 1820. The number of instructions may depend on the execution times for each set of instructions, so that SMM may be interruptible at regular intervals during execution.

It is noted that forced exits from SMM, as taught herein in one aspect of the present invention, for example, with respect to FIG. 10A, and re-entry into SMM, as also taught herein in another aspect of the present invention, for example, with respect to FIG. 10B, are but two examples of how interruptible, re-enterable SMM code could be implemented or used. Those of skill in the art of computer programming, with full appreciation of this disclosure, will appreciate that many programming techniques used with interruptible, re-enterable non-SMM code flow will now be available in SMM code.

FIGS. 19A, 19B, and 19C illustrate block diagrams of embodiments 3000A, 3000B, and 3000C of computer systems with the BIOS ROM 355 accessible to the processor 805 at boot time and through the south bridge 330 at other times. Common to all three figures are a processor 805, a south bridge 330, control logic 3010, a boot switch 3005, a crypto-processor 305, and the BIOS ROM 355. The processor 805 is coupled to the south bridge 330 in the usual fashion at times other than boot time. At boot time, the control logic 3010 is operable to change the boot switch 3005 such that the processor 805 has access to the BIOS ROM 355 without going through the south bridge 330 in the usual fashion.

In FIG. 19A, embodiment 3000A shows the processor 805 coupled to one part (A) of the boot switch 3005. Part A of the boot switch 3005 is open, as would occur after booting. The control logic 3010 is coupled to the boot switch 3005 to control the operations of the boot switch 3005. The south bridge 330 is coupled to Part B of the boot switch 3005. Part B of the boot switch 3005 is closed, again as would occur after booting. The south bridge 330 is shown coupled to the bus to which the BIOS ROM 355 is coupled, shown as being through the crypto-processor 305. Other hardware 3015A and 3015B is also shown coupled to the bus, which may be an LPC bus 118 or another bus.

In FIG. 19B, embodiment 3000B shows the processor 805 coupled to one part (A) of the boot switch 3005 through an instance of LPC bus interface logic (BIL) 134D. Part A of the boot switch 3005 is closed, as would occur during booting. The processor 805 is coupled to a north bridge 810 through a local bus 808.
The north bridge 810 includes the control logic 3010, coupled to the boot switch 3005 to control the operations of the boot switch 3005. The north bridge 810 is further coupled to the south bridge 330 through a PCI bus 110. The south bridge 330 is coupled to Part B of the boot switch 3005 through another instance of the LPC bus interface logic. Translation logic may be provided for the local bus 808 and the LPC bus 118, or the crypto-processor 305 may be configured to translate the bus protocols as necessary to pass bus cycles to the BIOS ROM 355. Other hardware 3015A and 3015B is not shown in this embodiment, but may be present.

As illustrated, during the booting process, the processor 805 is operable to use the local bus protocol to access the BIOS ROM 355 directly, without going through the north bridge 810 or the south bridge 330. By providing a more direct connection between the processor 805 and the BIOS ROM 355, the computer system 3000C may advantageously boot or reboot faster than when using more usual methods of accessing the BIOS ROM 355. After booting, accesses to the BIOS ROM 355 are through the south bridge 330 using the LPC bus 118.

It is noted that the control logic 3010 may be signaled or configured to operate the boot switch 3005 at times other than booting to allow for faster access to the BIOS ROM 355, the crypto-processor 305 (when present), or, for example, other hardware 3015 on the LPC bus.

In various embodiments of the present invention, the security of SMM is assumed. It is noted that one or more so-called "backdoors" may exist that could be exploited to compromise the security of SMM. The issues contemplated include misuse of the hardware debug test (HDT) mode of the processor 805, as well as the ability of the processor 805 to load and replace microcode. Illustrated in FIGS. 20A-20D are various embodiments 805A, 805B, 805C, and 805D of the processor 805, each of which includes various security protections against one or more backdoor attacks.

In FIG. 20A, the processor 805A includes HDT control logic 3110A, HDT reset logic 3120A, and one or more registers, including an HDT enable register 3115 and a non-volatile random access memory (NVRAM) 3130. As shown, the HDT control logic 3110A is coupled to receive a plurality of input signals through a plurality of HDT pins 3105. The HDT control logic 3110A is further coupled to the HDT enable register 3115. The HDT reset logic 3120A is coupled to receive a RESET signal over a line 3125 and to access (i.e., read and write) the HDT enable register 3115 and the NVRAM 3130.

In FIG. 20B, the processor 805B includes HDT control logic 3110B, HDT reset logic 3120B, and two registers, including the HDT enable register 3115 and an HDT enable lock register 3135. As shown, the HDT control logic 3110B is coupled to receive a plurality of input signals through the plurality of HDT pins 3105. The HDT control logic 3110B is further coupled to the HDT enable register 3115 and the HDT enable lock register 3135. The HDT reset logic 3120B is coupled to receive the RESET signal over the line 3125 and a signal, such as over a line 3140, through a pull-up (or pull-down) resistor 3145.

In FIG. 20C, the processor 805C includes microcode control logic 3155, microcode loader enable reset logic 3165, and one or more registers, including a microcode loader enable register 3160. As shown, the microcode control logic 3155 is coupled to receive a plurality of input signals through a plurality of microcode input pins 3150. The microcode control logic 3155 is further coupled to the microcode loader enable register 3160.
The microcode loader enable reset logic 3165 is coupled to receive the RESET signal and to access the microcode loader enable register 3160.

In FIG. 20D, the processor 805D includes the HDT control logic 3110 integrated with the microcode control logic 3155, and the HDT reset logic 3120 integrated with the microcode loader enable (MLE) reset logic 3165, to form control/reset logic 3175. The HDT enable register 3115 and the microcode loader enable register 3160 are integrated into a multibit lock register 3180. A plurality of inputs 3170 are shown to the control/reset logic 3175. The plurality of inputs 3170 may include the HDT inputs 3105, the microcode inputs 3150, and/or the reset signaling means. Other embodiments (not shown) integrate only the HDT control logic 3110 and the microcode control logic 3155, or only the HDT reset logic 3120 and the MLE reset logic 3165.

According to various embodiments of the present invention, the registers 3115, 3135, and 3160, as well as the NVRAM 3130, include storage space for one or more bits. In one embodiment, each register is configured to store a single bit. It is noted that the enable registers 3115 and 3160 may also be integrated into a single lock register, and the HDT enable lock register 3135 may be used as a microcode enable lock register. It is contemplated that the registers 3115, 3135, 3160, and/or 3180 could be included in the SMM MSRs 807.

In various embodiments, the HDT enable register 3115 is configured to store one or more HDT enable bits signifying whether the HDT mode is enabled or disabled. The HDT reset logic 3120 is configured to set the one or more HDT enable bits to a default state upon a reset of the processor 805.

Multiple embodiments for controlling the HDT mode are contemplated, such as those illustrated in FIGS. 20A and 20B. In one embodiment, the HDT mode is enabled as the default on non-production processors 805 used for engineering and testing. The HDT mode may be disabled as the default in standard production processors 805. In another embodiment, illustrated in FIG. 20A, the default state may be stored in and read from the NVRAM 3130. In this embodiment, the default state may be changeable, but in the illustrated embodiment the default state is set to disabled. In still another embodiment, illustrated in FIG. 20B, the default state is set using a strapping option. The default value is provided to the HDT reset logic 3120B through the pull-up (or pull-down) resistor 3145.

Multiple embodiments for controlling the microcode loader mode are also contemplated, such as those illustrated in FIGS. 20C and 20D. In one embodiment, not illustrated, the microcode update mode is enabled as the default on non-production processors 805 used for engineering and testing. The microcode update mode may be disabled as the default in standard production processors 805. In another embodiment, similar to that illustrated in FIG. 20A, the default state may be stored in and read from the NVRAM 3130. In this embodiment, the default state may be changeable, but in the illustrated embodiment the default state is set to disabled. In still another embodiment, similar to that illustrated in FIG. 20B, the default state is set using a strapping option. The default value is provided to the MLE reset logic 3165 through the pull-up (or pull-down) resistor 3145.

Turning now to FIG. 21, a method 3200 for initiating the HDT mode is shown.
In response to receiving a request to enter the HDT mode (step 3205), the HDT control logic 3110 checks the status of the one or more HDT enable bits to see if the HDT mode is enabled or disabled (step 3210). If the HDT mode is enabled (step 3215), then the HDT control logic 3110 initiates the HDT mode (step 3220). If the HDT mode is disabled (step 3215), then the HDT control logic 3110 will not initiate the HDT mode.

Turning now to FIG. 22, a method 3300 for changing the HDT mode enable status, which includes an HDT mode lock, is shown. In response to receiving a request to enter the HDT mode (step 3305), the HDT control logic 3110 checks the status of the one or more HDT enable lock bits to determine if the HDT lock mode is locked or unlocked (step 3310). If the HDT lock mode is unlocked (step 3315), then the HDT control logic 3110 initiates the HDT mode (step 3335). If the HDT lock mode is locked (step 3315), then the HDT control logic 3110 requests authorization to change the HDT mode lock status (step 3320). If the change is authorized (step 3325), then the HDT control logic 3110 changes the HDT mode lock bit to unlocked (step 3330). If the change is not authorized (step 3325), then the HDT control logic 3110 does not change the HDT mode lock bit.

In various embodiments, the HDT enable status may be changed by setting or resetting the one or more HDT enable status bits. For example, the HDT mode may be disabled, but inside SMM, a predetermined input to the HDT control logic 3110 may signal the HDT control logic 3110 to change the HDT mode status to enabled. In the embodiment of FIG. 20A, for example, once signaled, the HDT control logic 3110 would change the status of the HDT enable bit from disabled to enabled.

Referring back to the embodiment of FIG. 20B, for example, in response to receiving a request to change the HDT mode status, the HDT control logic 3110 checks the status of the one or more HDT enable lock bits to see if the HDT lock mode is enabled or disabled. If the HDT lock mode is disabled, then the HDT control logic 3110 may change the HDT mode status. If the HDT lock mode is enabled, then the HDT control logic 3110 will not change the HDT mode status.

It is noted that the method 3300 may alternatively terminate if the HDT mode lock status is locked (step 3315), instead of requesting authorization to change the HDT mode lock status (step 3320). The method 3300 may also include receiving a request to change the HDT mode lock status (not shown) prior to the method 3300 requesting authorization (step 3320).

Turning now to FIG. 23, a method 3400 for initiating the microcode loader is shown. In response to receiving a request to initiate the microcode update mode (step 3405), the microcode control logic 3155 checks the status of the one or more microcode enable bits to see if the microcode update mode is enabled or disabled (step 3410). If the microcode update mode is enabled (step 3415), then the microcode control logic 3155 initiates the microcode update mode (step 3420). If the microcode update mode is disabled (step 3415), then the microcode control logic 3155 will not initiate the microcode update mode.
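A minimal sketch, with hypothetical names, of the common pattern behind methods 3200-3500: a debug or microcode-load facility is gated by an enable bit, and the enable bit itself is guarded by a lock bit that may only be cleared with authorization.

```python
# Sketch of the enable/lock pattern shared by the HDT and microcode
# loader methods (3200-3500). Register and function names are hypothetical.

class GatedFacility:
    def __init__(self, name, enabled, locked):
        self.name = name
        self.enabled = enabled   # e.g. HDT enable register 3115
        self.locked = locked     # e.g. HDT enable lock register 3135

    def initiate(self):
        # Methods 3200/3400: only proceed if the enable bit is set.
        if not self.enabled:
            return f"{self.name}: request refused (disabled)"
        return f"{self.name}: mode initiated"

    def change_enable(self, value, authorize):
        # Methods 3300/3500: the lock bit guards the enable bit.
        if self.locked:
            if not authorize():
                return f"{self.name}: change refused (locked)"
            self.locked = False  # authorized unlock
        self.enabled = value
        return f"{self.name}: enable set to {value}"

hdt = GatedFacility("HDT", enabled=False, locked=True)
print(hdt.initiate())                                   # refused
print(hdt.change_enable(True, authorize=lambda: True))  # e.g. from SMM
print(hdt.initiate())                                   # initiated
```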
Turning now to FIG. 24, a method 3500 for changing the microcode update mode enable status, which includes a microcode mode lock, is shown. In response to receiving a request to enter the microcode update mode (step 3505), the microcode control logic 3155 checks the status of the one or more microcode enable lock bits to see if the microcode lock mode is locked or unlocked (step 3510). If the microcode lock mode is unlocked (step 3515), then the microcode control logic 3155 initiates the microcode update mode (step 3535). If the microcode lock mode is locked (step 3515), then the microcode control logic 3155 requests authorization to change the microcode mode lock status (step 3520). If the change is authorized (step 3525), then the microcode control logic 3155 changes the microcode mode lock bit to unlocked (step 3530). If the change is not authorized (step 3525), then the microcode control logic 3155 does not change the microcode mode lock bit.

In various embodiments, the microcode enable status may be changed by setting or resetting the one or more microcode enable status bits. For example, the microcode update mode may be disabled, but inside SMM, a predetermined input to the microcode control logic 3155 may signal the microcode control logic 3155 to change the microcode update mode status to enabled. In the embodiment of FIG. 20C, for example, once signaled, the microcode control logic 3155 will change the status of the one or more microcode enable bits from disabled to enabled.

In response to receiving a request to change the microcode update mode status, the microcode control logic 3155 may check the status of the one or more microcode enable lock bits to determine if the microcode lock mode is enabled or disabled. If the microcode lock mode is disabled, then the microcode control logic 3155 may change the microcode update mode status. If the microcode lock mode is enabled, then the microcode control logic 3155 will not change the microcode update mode status.

It is noted that the method 3500 may alternatively terminate if the microcode update lock status is locked (step 3515), instead of requesting authorization to change the microcode update lock status (step 3520). The method 3500 may also include receiving a request to change the microcode update lock status (not shown) prior to the method 3500 requesting authorization (step 3520).

FIGS. 25A, 25B, 26, and 27 illustrate flowcharts of embodiments of methods 3600A, 3600B, 3610A, and 3620 for secure access to storage, according to various aspects of the present invention. FIG. 25A shows a flowchart of the method 3600A, where a security device maintains secure access to a storage device, according to one aspect of the present invention. FIG. 25B shows a flowchart of the method 3600B, where a crypto-processor maintains secure access to a memory, according to one aspect of the present invention. FIG. 26 shows a flowchart of the method 3610A, where a security device provides secure access control to a storage device using a challenge-response authentication protocol, according to one aspect of the present invention. FIG. 27 shows a flowchart of the method 3620, where a secret is used to unlock data access to a secure storage device.

Turning to FIG. 25A, the method 3600A includes the security device receiving a transaction request for a storage location associated with the storage device connected to the security device (block 3605A). The security device provides access control for the storage device (block 3610A). One embodiment of the access control shown in block 3610A is illustrated by the method 3610A shown in FIG. 26.
According to the method 3600A, the security device maps the storage location in the transaction request according to the address mapping of the storage device (block 3615A). The security device provides the transaction request to the storage device (block 3620A). Under normal circumstances, the storage device will perform the requested transaction (block 3625A).

In various embodiments, the security device associated with the method 3600A may include a crypto-processor or a block of logic configured to provide security for the storage device. The storage device may include an electronic storage medium, like a memory, or a magnetic or optical storage medium, like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable. The transaction request may include, for example, a read request, a write request, or a combination of read and write requests.

It is noted that, in various embodiments, the memory (or the storage device) may include further security hardware of its own. The further security hardware may include access logic, a random number generator, and a secret, such as is illustrated above in FIG. 7C or 7D.

Turning to FIG. 25B, the method 3600B includes the crypto-processor receiving a transaction request for a memory location associated with the memory connected to the crypto-processor (block 3605B). The crypto-processor provides access control for the memory (block 3610B). One embodiment of the access control shown in block 3610B is illustrated in FIG. 26.

According to the method 3600B, the crypto-processor maps the memory location in the transaction request according to the address mapping of the memory (block 3615B). The crypto-processor provides the transaction request to the memory (block 3620B). Under normal circumstances, the memory will perform the requested transaction (block 3625B).

Turning to FIG. 26, the method 3610A includes the security device determining if a lock is in place for the storage location (block 3705). A transaction request may have been received for the storage location. If the lock is not in place (block 3710), then the method 3610A moves past the authentication portion. If the lock is in place (block 3710), then the security device provides a challenge for the storage location (block 3715). The challenge may be associated with the storage location or with the storage device that includes the storage location. The challenge may be in response to the transaction request. Next, the security device receives a response to the challenge (block 3720). The security device evaluates the response by comparing the response to an expected response (block 3725). If the evaluation is not correct (block 3730), then the method ends. If the evaluation is correct (block 3730), then the method proceeds with the security device providing the transaction request to the storage device (block 3735).

In various embodiments, the security device associated with the method 3610A may include a crypto-processor or a block of logic configured to provide security for the storage device. The storage device may include an electronic storage medium, like a memory, or a magnetic or optical storage medium, like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable. The transaction request may include, for example, a read request, a write request, or a combination of read and write requests.
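A minimal sketch of the gating in method 3610A, assuming an HMAC-based challenge-response; the key handling and function names are illustrative assumptions rather than the patented protocol.

```python
# Sketch of method 3610A: a security device only forwards a transaction
# to the storage device after a correct challenge response. HMAC-SHA-256
# is an illustrative choice of response function.

import hmac
import hashlib
import os

class SecurityDevice:
    def __init__(self, shared_secret, locked=True):
        self.secret = shared_secret
        self.locked = locked

    def challenge(self):
        # Block 3715: a fresh random challenge per transaction.
        self._challenge = os.urandom(16)
        return self._challenge

    def forward(self, transaction, response):
        # Blocks 3720-3735: compare against the expected response.
        expected = hmac.new(self.secret, self._challenge, hashlib.sha256)
        if not hmac.compare_digest(response, expected.digest()):
            return None                      # evaluation failed: method ends
        return f"storage performs: {transaction}"

secret = b"provisioned-shared-secret"
dev = SecurityDevice(secret)
c = dev.challenge()
resp = hmac.new(secret, c, hashlib.sha256).digest()  # the requestor's side
print(dev.forward("read block 42", resp))
```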
Turning to FIG. 27, the method 3620 includes storing a secret in a storage device (block 3805). The storage device may include only a portion of a physical device. The storage device itself may be embodied as any storage device known in the art. The method 3620 may also include storing data in the storage device (block 3810) and storing code in the storage device (block 3815). The method 3620 may also include providing a lock (e.g., a lock bit or bits) to secure the data stored in the storage device, or the storage device itself (block 3820). Note that the above steps of the method 3620 (blocks 3805-3820) may be performed relatively proximate in time, such as when the storage device is manufactured, installed, or initialized.

The method 3620 also includes reading the secret from the storage device (block 3825), such as, for example, when the computer system including the storage device, or coupled to communicate with the storage device, is booted. For the secret to remain secure, the reading of the secret preferably occurs when the storage device is in a secure or trusted configuration. The method 3620 may also read the code from the storage device (block 3830). The method 3620 stores the secret in a secure location (block 3825) and also may store the code in the secure location (block 3830). The secure location may be in the SMM memory space previously described, or in a secure memory, register, or other storage location in the computer system 100, such as in the processor 805 or in the south bridge 330.

In various embodiments, the storage device associated with the method 3620 may include an electronic storage medium, like a memory, or a magnetic or optical storage medium, like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable. A read in the method 3620 may describe any transaction request, such as, for example, a read request, a write request, or a combination of read and write requests.

FIG. 28 illustrates a prior art challenge-response method 3900 for authentication. The method has a requestor making an access request, in block 3905. In block 3910, a gatekeeper receives the access request and provides a challenge to the requestor to authenticate the requestor's authority to make the access request. In block 3915, the requestor receives the challenge and provides a response to the challenge to authenticate the requestor's authority to make the access request. In block 3920, the gatekeeper receives the response to the challenge and compares the response to an expected response.

In decision block 3925, the gatekeeper determines if the response is equal to the expected response. If the response is not equal to the expected response, in decision block 3925, then the method ends, preventing the requestor from completing the access request. If the response is equal to the expected response, in decision block 3925, then the method continues with block 3930. In block 3930, the gatekeeper approves the access request. Typically, a SHA-1 hash, well known in the art, of the secret and a number known to both the gatekeeper and the requestor is used to demonstrate knowledge of the secret.
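The closing sentence of the prior art method 3900 can be made concrete in a few lines of Python; the concatenation order of secret and number is an assumption for illustration (SHA-1 is shown because the text names it, although it is no longer considered collision-resistant).

```python
# Demonstrating knowledge of a secret per the prior art method 3900:
# both sides hash the secret with a mutually known number and compare.
# Concatenation order is an illustrative assumption.

import hashlib

def response(secret: bytes, number: bytes) -> bytes:
    return hashlib.sha1(secret + number).digest()

secret = b"shared-secret"
number = b"\x00\x01\x02\x03"          # known to gatekeeper and requestor

requestor_response = response(secret, number)   # block 3915
expected_response = response(secret, number)    # gatekeeper, block 3920

# Decision block 3925: approve only on an exact match.
print("approved" if requestor_response == expected_response else "denied")
```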
Turning to FIGS. 29A, 29B, 29C, 29D, and 29E, an embodiment of a computer subsystem 4000A, including a south bridge 330D and I/O devices; an embodiment of a processor 805E; an embodiment of a processor 805F; an embodiment of a computer subsystem 4000B, including a processor 805 and other system devices; and an embodiment of a computer system 4000C, including an embodiment of the processor 805 and various devices, are shown, including Globally Unique IDentifiers (GUIDs) 4099 and/or a stored secret 4095 and/or a system GUID 4085.

In FIG. 29A, the south bridge 330D includes an embodiment of the security hardware 370 coupled to the LPC BIL 134D and the USB interface logic 134C. The embodiment of the security hardware 370 includes the random number generator (RNG) 455, a storage location storing a secret 4095, and storage locations for storing a GUID table 4098. The GUID table 4098 may include a GUID for the south bridge 330D itself. The south bridge 330D is coupled through the USB interface logic 134C to a USB hub 4015 including a GUID 4099B. Coupled to the USB hub 4015 are a biometric device 4020 and a smart card reader 4025. The biometric device 4020 includes the secret 4095 and a storage location for storing a GUID 4099A. The smart card reader 4025 includes the secret 4095 and a storage location for storing a GUID 4099D. Coupled through the LPC bus 118 to the LPC BIL 134D are the Super I/O chip 120 and a keyboard 4019, including a GUID 4099C.

In FIG. 29B, the processor 805E includes a GUID 4099E. In FIG. 29C, the processor 805F includes the GUID table 4098, either in place of or in addition to the GUID table 4098 shown in the south bridge 330D of FIG. 29A. The GUID table 4098 of the processor 805F may include a GUID for the processor 805F itself.

In FIG. 29D, the computer subsystem 4000B includes the processor 805, which may represent any of the embodiments of the processor 805, such as the processor 805E shown in FIG. 29B or the processor 805F shown in FIG. 29C, coupled through the local bus 808 to a north bridge 810 including a GUID 4099F. The north bridge 810 is shown coupled to an AGP device 4008 including a secret 4095 (and which could also include a GUID 4099G) and a memory 4006 including a plurality of memory modules, shown as DIMMs (Dual In-line Memory Modules) 4060A-4060C. Each of the DIMMs 4060A-4060C includes a GUID 4099H-4099K, respectively. In alternative embodiments, the GUIDs 4099 may be replaced by a storage location to store the secret 4095 (such as shown for the AGP device 4008, and as in FIG. 29A), or augmented by the storage location to store the secret 4095 in addition to the GUID 4099. Note that the computer subsystems 4000A and 4000B may be joined by a connection between the north bridge 810 and the south bridge 330D.

According to one embodiment of the present invention, at boot time or during some other trusted set-up, the south bridge 330D and/or the processor 805F or other master device transmits the secret 4095 to each of the devices coupled to the master device that are capable of storing the secret 4095. Thus, in the illustrated embodiment of FIG. 29A, the USB hub 4015, the biometric device 4020, and the smart card reader 4025 would each store the secret 4095. In other words, during the trusted set-up, the device or devices become known to the master device through an authentication routine, and the master device communicates the secret 4095 to those devices that authenticate properly as a trusted component of the computer subsystem 4000 or some part of the computer system.
During data requests or transfers, the master device transmits a random number (or at least a nonce, a number that is used only once) to the device along with the data request. The device may encrypt the data using the random number (or the nonce) and the secret before transmitting the data to the master device. Whether or not the data is encrypted, the device returns the random number (or the nonce) with the data as an authenticator of the data.

As an example of this embodiment, consider the biometric device 4020 of FIG. 29A as a fingerprint scanner 4020. Placing a finger on the fingerprint scanner 4020 may cause the fingerprint scanner 4020 to send an interrupt to the system. The fingerprint scanner 4020 scans the fingerprint of the finger on the fingerprint scanner 4020 to create fingerprint data. The system notifies the south bridge 330D, which sends the nonce to the fingerprint scanner 4020. The fingerprint scanner 4020 receives the nonce and returns the fingerprint data and the nonce to the south bridge 330D in response to receiving the nonce. The fingerprint scanner 4020 may also encrypt the fingerprint data using the nonce, in lieu of sending the fingerprint data in the clear (i.e., not encrypted).

According to another embodiment of the present invention, at boot time or during some other trusted set-up, the south bridge 330D and/or the processor 805F or other master device reads the GUIDs from each device coupled to the master device that is capable of storing, or actually stores, a GUID 4099. Thus, in the illustrated embodiment of FIG. 29A, the USB hub 4015, the biometric device 4020, the smart card reader 4025, and the keyboard 4019 have GUIDs 4099B, 4099A, 4099D, and 4099C, respectively. The south bridge 330D stores the GUIDs for each device in the GUID table 4098. In other words, during the trusted set-up, the device or devices become known to the south bridge 330D through an authentication routine, and the devices communicate their respective GUIDs 4099 to the south bridge 330D, which authenticates them as trusted components of the computer subsystem 4000 or some part of the computer system.

During data requests or transfers, the south bridge 330D or other master device (e.g., the processor 805E or 805F) transmits a random number (or at least a nonce) to the device along with the data request. The device may encrypt the data using the random number (or the nonce) and the GUID before transmitting the data to the south bridge 330D. Whether or not the data is encrypted, the device returns the random number (or the nonce) with the data as an authenticator of the data.

As an example of this embodiment, consider a request from the system (e.g., the master device) to the keyboard 4019 for data. The system may request that the keyboard 4019 submit the GUID 4099C with the data. The GUID 4099C in this case may be combined with the data using a hash function (i.e., a one-way function well known in the art). The data are transmitted from the keyboard 4019 to the system along with the GUID 4099C. The master device, such as the security hardware 370 (or alternatively the crypto-processor 305, such as shown in FIG. 4), authenticates the data.
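A minimal sketch of the nonce exchange just described, using the fingerprint scanner example; hashing the GUID, nonce, and data together with SHA-256 is an illustrative assumption about how the authenticator might be formed.

```python
# Sketch of the nonce exchange between a master device (south bridge) and
# a peripheral (fingerprint scanner). The authenticator construction
# (SHA-256 over GUID || nonce || data) is an illustrative assumption.

import hashlib
import os

GUID_TABLE = {}  # the master's GUID table 4098, filled at trusted set-up

def introduce(name, guid):
    GUID_TABLE[name] = guid          # device becomes known to the master

def device_reply(guid, nonce, data):
    # The device returns the data, the nonce, and an authenticator.
    tag = hashlib.sha256(guid + nonce + data).digest()
    return data, nonce, tag

def master_accepts(name, nonce, reply):
    data, echoed_nonce, tag = reply
    expected = hashlib.sha256(GUID_TABLE[name] + nonce + data).digest()
    # A stale (replayed) nonce or a wrong GUID fails the check.
    return echoed_nonce == nonce and tag == expected

scanner_guid = os.urandom(16)
introduce("fingerprint scanner 4020", scanner_guid)

nonce = os.urandom(16)               # sent with the data request
reply = device_reply(scanner_guid, nonce, b"fingerprint template")
print(master_accepts("fingerprint scanner 4020", nonce, reply))  # True
```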
In another embodiment of the present invention, one or more devices (such as the device 4035 shown in FIG. 29E) include both the GUID 4099 and the storage location for the secret 4095. In this embodiment, the system master, e.g., the south bridge 330D, and the devices 4035 use the GUID 4099, the secret 4095, or both to authenticate data transmissions.

It is noted that other I/O buses besides the USB 116 and the LPC bus 118 may be used in various embodiments of the present invention. For example, a hard drive (not shown) including a GUID 4099 and/or storage locations for the secret 4095 may be coupled to the IDE interface 114 (shown in FIG. 1A). In another example, the biometric device 4020 may couple to the computer subsystem 4000 through the PCI bus 110, or through a serial port or a parallel port, such as through the Super I/O chip 120. Other I/O buses and connections are contemplated.

As currently implemented by some manufacturers, using 128 bits for the GUID 4099, up to 10^36 possible values are available for any GUID 4099. The sheer number of possible values allows for a device without a GUID 4099 to be assigned a random GUID 4099 with a very low possibility of duplication. The use of the random number or the nonce may prevent a replay attack using a device, such as the biometric device 4020. Note that devices without GUIDs 4099 established during manufacturing may create a random GUID 4099, either for each boot or reset, or for each data transmission.

It is contemplated that, for example, a part of the memory, such as a memory controller (e.g., see the memory 4006 in FIG. 29D), could include a GUID table 4098 and be the master device for the memory modules, such as the DIMMs 4060A-4060C. The memory controller could register the GUIDs 4099 for the DIMMs 4060. The memory controller could then give its own GUID 4099 to another master device (e.g., the north bridge 810 or the processor 805). In this way, transmissions between and among system devices could be registered as being from known devices. Other subsystem master device arrangements are also contemplated, such as the north bridge 810 and the south bridge 330D acting as local masters, with the processor 805 being the system master. Additional master devices could include the USB hub 4015 for the other USB devices, and a drive controller for its attached storage drives (e.g., hard drives or optical drives).

Turning now to FIG. 29E, an embodiment of the computer system 4000C is illustrated with a further embodiment of system components that are recognized by the computer system. As shown, an embodiment of the processor 805 is coupled to an embodiment of the north bridge 810. A memory subsystem 4006 and an embodiment of a south bridge 330E are also coupled to the north bridge 810. A generic device 4035 and an embodiment of the crypto-processor 305 are coupled to the south bridge 330E. The south bridge 330E includes the security hardware 370, including a storage location for a system GUID 4085 and the GUID table 4098 described above. In the illustrated embodiment of the computer system 4000C, each of the processor 805, the memory 4006, the north bridge 810, the device 4035, and the crypto-processor 305 includes logic 4080, a storage location for the system GUID 4085, a storage location for an introduced bit 4090, and a respective GUID 4099, such as the GUIDs 4099P, 4099F, 4099M, or 4099L. Note that the logic 4080 of FIG. 29E may be implied in FIGS. 29A-29D.

In one embodiment, upon first being placed in the computer system 4000C, a system master introduces each device 4035 to the computer system 4000C. For the purposes of this aspect of the present invention, a "device" may be any component or subsystem or master device that may be a part of the computer system 4000C.
Examples include the processor 805, the north bridge 810, the memory controller 4006 or memory modules (not shown), the south bridge 330, USB devices (shown elsewhere), other I/O devices, and the crypto-processor 305. For the purposes of this discussion, reference will be made to the device 4035, but the device 4035 is intended to be generic. In particular, the device 4035 may be removable from the computer system 4000C and normally usable in another computer system (not shown) other than the computer system 4000C, including data drives and I/O devices. The system master shown in FIG. 29E is the south bridge 330E. The processor 805 may alternatively be the system master. A logic circuit (not shown) on or a part of a motherboard (not shown) for the computer system 4000C, or on a daughter card (not shown), may also be the system master.

As each device 4035, 805, 4006, 330E, 305, etc., is introduced to the computer system 4000C, the system master provides the system GUID 4085 to the device 4035. The device 4035 stores the system GUID 4085. The device 4035 provides the system master with its GUID 4099M, and the system master stores the GUID 4099M of the device in the GUID table 4098. Upon exchanging GUIDs, the device 4035 sets the introduced bit 4090. While the introduced bit 4090 is set, the device 4035 is "married" to the computer system 4000C and will only exchange data with the computer system 4000C. The device 4035 and the computer system 4000C may also "divorce by mutual consent" by authenticating their respective GUIDs and having the device 4035 reset the introduced bit.

Each data transfer in the computer system 4000C may involve the exchange of the GUID 4099 and/or the system GUID 4085. A failure to authenticate the system GUID 4085 results in the device 4035 not responding with the requested data, or simply not responding to the data request. Should the device 4035 request data from another device in the computer system 4000C without providing or authenticating its own GUID 4099M, the computer system 4000C will not respond with the requested data, or simply will not respond to the data request from the device 4035.

To prevent complete loss of data or of the use of the device 4035 and the computer system 4000C, a maintenance mode or "divorce court" may be available to force the introduced bit 4090 to be reset. For example, a manufacturer may place a master ID value in each of a batch of components to allow for a repair facility to reset the introduced bit 4090.

In various embodiments, the logic 4080 may be configured to provide requested data using a hash function on the GUID 4099M and either a nonce, a random number, or the requested data. For example, the processor 805 may request data from the memory 4006. The processor 805 may provide a random number and the result of a hash of the random number and either the GUID 4099N for the memory 4006 or the system GUID 4085. The memory 4006 compares the result of the hash from the processor 805 with its own calculation of the hash value before responding to the data request from the processor 805.

In another embodiment, the device 4035 (as well as other system devices) does not store the system GUID 4085. In this embodiment, the device 4035 only responds to a data transaction when its GUID 4099M is provided with the data transaction. To initiate a data transaction, the device 4035 demonstrates its own GUID 4099M to the system master 330E, which authenticates the device 4035 as being introduced to the computer system 4000C and thus trusted.
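A minimal sketch of the introduction ("marriage") protocol of FIG. 29E, with hypothetical method names; the mutual-consent divorce is simplified to a single authenticated call.

```python
# Sketch of the FIG. 29E introduction protocol: a device and a system
# master exchange GUIDs, after which the device's "introduced bit" binds
# it to this system. Method names are hypothetical.

import os

class Device:
    def __init__(self):
        self.guid = os.urandom(16)     # e.g. GUID 4099M
        self.system_guid = None        # storage for system GUID 4085
        self.introduced = False        # introduced bit 4090 (non-volatile)

    def introduce(self, system_guid):
        if self.introduced:
            raise RuntimeError("already married to a system")
        self.system_guid = system_guid
        self.introduced = True
        return self.guid               # given to the master's GUID table

    def respond(self, system_guid, request):
        # A married device only answers its own system.
        if self.introduced and system_guid != self.system_guid:
            return None                # no response at all
        return f"data for: {request}"

    def divorce(self, system_guid):
        # "Mutual consent": the system proves its GUID, the bit is reset.
        if system_guid == self.system_guid:
            self.introduced = False
            self.system_guid = None

system_guid = os.urandom(16)           # system GUID 4085
guid_table = {}                        # GUID table 4098

dev = Device()
guid_table["device 4035"] = dev.introduce(system_guid)
print(dev.respond(system_guid, "sector 7"))     # answered
print(dev.respond(os.urandom(16), "sector 7"))  # foreign system: None
```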
Note that the secret 4095 may be substituted for the system GUID 4085 and used in place of the respective GUIDs 4099. Note also that the device 4035 may be used in computer systems other than the computer system 4000C, so long as the device 4035 has not been introduced to the computer system 4000C. After the device 4035 has been introduced to the computer system 4000C and the introduced bit 4090 has been set, the device is only usable in the computer system 4000C until the introduced bit 4090 has been reset. Note that the introduced bit 4090 is preferably stored in non-volatile memory.

Turning now to FIGS. 30A and 30B, flowcharts of embodiments of methods 4100A and 4100B for operating a computer system including a biometric device, such as the biometric device 4020 shown in FIG. 29A, are illustrated. In FIG. 30A, the method 4100A includes the biometric data being sent in the clear, along with the result of a hash using a secret and a nonce or random number. In FIG. 30B, the method 4100B includes the biometric data being sent in encrypted form, with an indication of the nonce or random number being sent as the result of the hash using the secret and the nonce or random number. The nonce or random number may be sent in the clear in all or only some of the transmissions in the data transaction. Note that the secret may be an individual secret, such as a GUID of a device; a group secret, such as a system GUID or a sub-system GUID; or both the individual secret and the group secret. The secret may be programmed at manufacture, established at boot time, or a random number picked during a trusted set-up, or a combination thereof.

In FIG. 30A, the method 4100A includes a biometric data transaction being requested involving a biometric device, in step 4110. A nonce or random number is provided to the biometric device, in step 4115. The biometric device responds to the biometric data transaction request with the requested biometric data and the result of the hash function using the secret and the nonce or random number, in step 4120A. The result of the hash function is compared to an expected value for the hash function, in step 4125A. If the result of the hash function is not the same as the expected value, in the decision block 4130, then the transmitted biometric data are rejected, in step 4135. If the result of the hash function is the same as the expected value, in the decision block 4130, then the transmitted biometric data are accepted as the requested biometric data, in step 4140.

In FIG. 30B, the method 4100B includes a biometric data transaction being requested involving a biometric device, in step 4110. A nonce or random number is provided to the biometric device, in step 4115. The biometric device responds to the biometric data transaction request with the requested biometric data in encrypted form and the result of the hash using a secret and the nonce or random number, in step 4120B. The result of the hash is compared to an expected value for the hash of the secret and the nonce or random number, in step 4125B. If the result of the hash is not the same as the expected value for the result of the hash, in the decision block 4130, then the transmitted biometric data are rejected, in step 4135. If the result of the hash is the same as the expected value for the result of the hash, in the decision block 4130, then the transmitted biometric data in encrypted form are accepted as the requested biometric data, in step 4140.
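A minimal sketch of method 4100B, assuming the data are encrypted with a keystream derived from the secret and the nonce; deriving an XOR keystream from SHA-256(secret || nonce) is purely an illustrative choice, not the patented encryption.

```python
# Sketch of method 4100B: biometric data travels encrypted, and a hash of
# the secret and the nonce authenticates the transfer. Deriving an XOR
# keystream from SHA-256(secret || nonce) is purely an illustrative choice.

import hashlib
import os

def keystream(secret, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + nonce + bytes([counter])).digest()
        counter += 1
    return out[:length]

def device_respond(secret, nonce, biometric_data):          # step 4120B
    cipher = bytes(a ^ b for a, b in
                   zip(biometric_data, keystream(secret, nonce,
                                                 len(biometric_data))))
    tag = hashlib.sha256(secret + nonce).digest()            # hash result
    return cipher, tag

def system_accept(secret, nonce, cipher, tag):               # steps 4125B+
    if tag != hashlib.sha256(secret + nonce).digest():
        return None                                          # step 4135
    return bytes(a ^ b for a, b in
                 zip(cipher, keystream(secret, nonce, len(cipher))))

secret, nonce = b"group-or-device-secret", os.urandom(16)    # step 4115
cipher, tag = device_respond(secret, nonce, b"fingerprint template")
print(system_accept(secret, nonce, cipher, tag))             # step 4140
```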
Another embodiment of the method 4100 includes providing a nonce or random number, receiving biometric data, transmitting the biometric data and the nonce or random number, and authenticating the biometric data using the nonce or random number. In still another embodiment, the method 4100 may further include encrypting the biometric data, receiving the encrypted biometric data and the nonce or random number, and decrypting the encrypted biometric data. This embodiment may only transmit the encrypted biometric data and the nonce or random number. In still another embodiment, the method 4100 may include encrypting the biometric data using the nonce or random number and decrypting the encrypted biometric data using the nonce or random number.

The method 4100 may also include receiving a secret, storing the secret, transmitting at least an indication of the secret with the biometric data, receiving at least the indication of the secret, and authenticating the biometric data using at least the indication of the secret. In a further embodiment, the method 4100 may include encrypting the biometric data using the secret, and decrypting the encrypted biometric data using the secret. In still another embodiment, the method 4100 may include encrypting the biometric data using the secret and the nonce or random number, and decrypting the encrypted biometric data using the secret and the nonce or random number. In one embodiment, the secret may include a system GUID. The method 4100 may also include providing a GUID, encrypting the biometric data using the GUID, the secret, and the nonce or random number, and decrypting the encrypted biometric data using the GUID, the secret, and the nonce or random number.

It is noted that in various embodiments, receiving the biometric data may occur in response to providing the nonce or random number. In other embodiments, receiving the biometric data may occur only in response to providing the nonce or random number. Various steps of various embodiments of the method may be performed by different entities, including, but not limited to, the biometric device, the master device, and the system master.

Turning now to FIGS. 31A, 31B, 32A, 32B, 32C, and 33, flowcharts of embodiments of methods 4200A, 4200B, 4300A, 4300B, 4300C, and 4400 for authenticating a device in a computer system, such as computer systems including computer subsystems 4000A, 4000B, and 4000C of FIGS. 29A, 29D, and 29E, are illustrated. In the method of FIG. 31A, a secret is passed in encrypted form for authentication, but the data are transmitted in the clear. In the method of FIG. 31B, the secret and data are both passed in encrypted form. In the method of FIG. 32A, a device GUID is passed in encrypted form for authentication, but the data are transmitted in the clear. In the method of FIG. 32B, the device GUID and data are both passed in encrypted form. In the method of FIG. 32C, the secret, the device GUID, and the data are passed in encrypted form. In the method of FIG. 33, the device and the computer system are authenticated to each other as the device is united with the computer system using the introduced bit 4090 shown in FIG. 29E.
In the method 4200A of FIG. 31A, a master device in the computer system transmits a secret to a device in the computer system during a trusted set-up, in block 4205. As noted elsewhere, the trusted set-up may occur, as examples, when the device is first introduced to the computer system or during a boot sequence of the computer system. A data transaction is requested involving the device in the computer system that knows the secret, in block 4210. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4200A and know the secret. A nonce or random number is provided to the device in the computer system that knows the secret, in block 4215.

If the data transaction request is a read of data from the device, in block 4220A, the device responds to the data transaction request with the requested data and a result of a hash using the secret and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4220A, the device responds to the data transaction request with the result of the hash using the secret and the nonce or random number. Thus, in block 4220A, the device responds to the data transaction request and verifies its authorization to complete the data transaction request.

The method 4200A continues with the result of the hash using the secret and the nonce or random number being compared to an expected value for the result of the hash using the secret and the nonce or random number, in block 4225. If the comparison results are not the same, in decision block 4230, then the method continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4235. If the comparison results are the same, in decision block 4230, then the method continues by accepting the transmitted data from the read or by sending the data for the write, in block 4240A.

In the method 4200B of FIG. 31B, a master device in the computer system transmits a secret to a device in the computer system during a trusted set-up, in block 4205. A data transaction is requested involving the device in the computer system that knows the secret, in block 4210. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4200B and know the secret. A nonce or random number is provided to the device in the computer system that knows the secret, in block 4215.

If the data transaction request is a read of data from the device, in block 4220B, the device responds to the data transaction request with the requested data encrypted using the secret and the nonce or random number, along with a result of a hash using the secret and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4220B, the device responds to the data transaction request with the result of the hash using the secret and the nonce or random number. Thus, in block 4220B, the device responds to the data transaction request and verifies its authorization to complete the data transaction request.

The method 4200B continues with the result of the hash using the secret and the nonce or random number being compared to an expected value for the result of the hash using the secret and the nonce or random number, in block 4225. If the comparison results are not the same, in decision block 4230, then the method continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4235. If the comparison results are the same, in decision block 4230, then the method continues by accepting the transmitted data from the read or by encrypting the data using the secret and the nonce or random number and sending the encrypted data for the write, in block 4240B.
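The disclosure does not specify how the secret and nonce are combined into an encryption key for method 4200B. One plausible construction, sketched below purely as an assumption, derives a keystream from a hash of the secret and the nonce and XORs it with the data; the function names are illustrative:

```python
import hashlib
import os

def keystream(secret: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream by hashing the secret, the nonce, and a counter.
    # This is an assumed construction; the disclosure only says the data
    # are "encrypted using the secret and the nonce or random number".
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(secret: bytes, nonce: bytes, data: bytes) -> bytes:
    ks = keystream(secret, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

decrypt = encrypt  # XOR with the same keystream inverts the operation

secret, nonce = os.urandom(16), os.urandom(16)
ciphertext = encrypt(secret, nonce, b"sector 42 contents")   # block 4220B read response
assert decrypt(secret, nonce, ciphertext) == b"sector 42 contents"
```

Because the nonce changes on every transaction, the same secret never produces the same keystream twice under this construction.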
In the method 4300A of FIG. 32A, a master device in the computer system reads the GUID for a device in the computer system during a trusted set-up, in block 4305. A data transaction is requested involving the device in the computer system with the known GUID, in block 4310. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4300A and have their GUIDs known to the computer system. A nonce or random number is provided to the device in the computer system with the known GUID, in block 4315.

If the data transaction request is a read of data from the device, in block 4320A, the device responds to the data transaction request with the requested data and a result of a hash using the GUID and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4320A, the device responds to the data transaction request with the result of the hash using the GUID and the nonce or random number. Thus, in block 4320A, the device responds to the data transaction request and verifies its identity and authorization to complete the data transaction request.

The method 4300A continues with the result of the hash using the GUID and the nonce or random number being compared to an expected value for the result of the hash using the GUID and the nonce or random number, in block 4325. If the comparison results are not the same, in decision block 4330, then the method continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4335. If the comparison results are the same, in decision block 4330, then the method continues by accepting the transmitted data from the read or by sending the data for the write, in block 4340A.

In the method 4300B of FIG. 32B, a master device in the computer system reads the GUID for a device in the computer system during a trusted set-up, in block 4305. A data transaction is requested involving the device in the computer system with the known GUID, in block 4310. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4300B and have their GUIDs known to the computer system. A nonce or random number is provided to the device in the computer system with the known GUID, in block 4315.

If the data transaction request is a read of data from the device, in block 4320B, the device responds to the data transaction request with the requested data encrypted using the GUID and the nonce or random number, along with a result of a hash using the GUID and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4320B, the device responds to the data transaction request with the result of the hash using the GUID and the nonce or random number. Thus, in block 4320B, the device responds to the data transaction request and verifies its identity and authorization to complete the data transaction request.

The method 4300B continues with the result of the hash using the GUID and the nonce or random number being compared to an expected value for the result of the hash using the GUID and the nonce or random number, in block 4325. If the comparison results are not the same, in decision block 4330, then the method 4300B continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4335. If the comparison results are the same, in decision block 4330, then the method 4300B continues by accepting the transmitted data from the read or by encrypting the data using the GUID and the nonce or random number and sending the encrypted data for the write, in block 4340B.
In the method 4300C of FIG. 32C, a master device in the computer system reads the GUID for a device in the computer system and transmits a secret to the device during a trusted set-up, in block 4306. A data transaction is requested involving the device in the computer system with the known GUID that knows the secret, in block 4311. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4300C, have their GUIDs known to the computer system, and know the secret. A nonce or random number is provided to the device in the computer system with the known GUID that knows the secret, in block 4316.

If the data transaction request is a read of data from the device, in block 4320C, the device responds to the data transaction request with the requested data encrypted using the secret, the GUID, and the nonce or random number, along with a result of a hash using the secret, the GUID, and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4320C, the device responds to the data transaction request with the result of the hash using the secret, the GUID, and the nonce or random number. Thus, in block 4320C, the device responds to the data transaction request and verifies its identity and authorization to complete the data transaction request.

The method 4300C continues with the result of the hash using the secret, the GUID, and the nonce or random number being compared to an expected value for the result of the hash using the secret, the GUID, and the nonce or random number, in block 4326. If the comparison results are not the same, in decision block 4330, then the method 4300C continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4335. If the comparison results are the same, in decision block 4330, then the method 4300C continues by accepting the transmitted data from the read or by encrypting the data using the secret, the GUID, and the nonce or random number and sending the encrypted data for the write, in block 4340C.

In the method 4400 of FIG. 33, a master device in the computer system reads the GUID for a device in the computer system and records the GUID in a GUID table during a trusted set-up where the device joins the computer system, in block 4405. The device may receive a system GUID from the master device and store the system GUID, in block 4410. The device sets an introduced bit in response to joining the computer system, in block 4415. The device is now considered to be "married" to the computer system. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4400 and be "married" to the computer system.
The device receives a transaction request from the computer system, and the device checks if the introduced bit is set, in block 4420. If the introduced bit is not set, in decision block 4425, then the method 4400 continues by not fulfilling the transaction request or by not responding to the transaction request, in block 4430. If the introduced bit is set, in decision block 4425, then the method 4400 may continue with the device requesting authentication from the computer system using the GUID before responding to the transaction request, in block 4435.

If the device requests authentication, or if the computer system authenticates the device directly, a nonce or random number may be provided to the device. If the transaction request is a read of data from the device, the device may respond to the transaction request with the requested data encrypted using the GUID and the nonce or random number, along with a result of a hash using the GUID and the nonce or random number. If the data transaction request is a write of data to or through the device, the device may respond to the data transaction request with the result of the hash using the GUID and the nonce or random number.

The method 4400 continues with the result of the authentication, in decision block 4440. If the authentication is not successful, in decision block 4440, then the method 4400 continues by not fulfilling the transaction request, in block 4430. If the authentication is successful, in decision block 4440, or if authentication is not used for the transaction request, then the method 4400 continues by fulfilling the transaction request, in block 4445.

In alternative embodiments, the authentication may be performed by different methods. As an example, the master device may authenticate itself to the device by providing at least an indication of the system GUID to the device. Authentication methods known in the art other than challenge-response may also be used.

Turning now to FIGS. 34 and 35, flowcharts of embodiments of methods 4500 and 4600 for removing the device from the computer system once the device has been united with ("married to") the computer system using the introduced bit 4090 shown in FIG. 29E are illustrated. In the method 4500 of FIG. 34, the removal of the device from the computer system is by joint consent, a "no-fault divorce." In the method 4600 of FIG. 35, the removal of the device from the computer system is forced in a maintenance mode using a maintenance (backdoor) key, a "court-ordered divorce."

The method 4500 of FIG. 34 includes the device or the master device initiating a request for the device to leave the computer system, in block 4505. The device and the master device authenticate themselves to each other using the GUID and/or the system GUID, in response to the request for the device to leave the computer system, in block 4510. The device resets the introduced bit in response to the device and the master device successfully authenticating each other, in block 4515.

The method 4500 of FIG. 34 may advantageously allow for easy removal of a device married to the computer system while maintaining system security. Authentication between the device and the master device may include any combination of the device providing at least an indication of the GUID to the master device, the device providing at least an indication of the system GUID to the master device, the master device providing at least an indication of the GUID to the device, and the master device providing at least an indication of the system GUID to the device. Any appropriate mechanism may be used for providing at least the indication, including the challenge-response method or other authentication methods known in the art.
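Continuing the in-memory model from the earlier sketch, the following illustrates the introduced-bit gate of method 4400 and the mutual-consent release of method 4500. The helper names and the mutual-authentication shortcut (a direct GUID comparison standing in for a full challenge-response) are simplifying assumptions:

```python
import os

class MarriedDevice:
    def __init__(self, guid: bytes):
        self.guid = guid
        self.system_guid = None
        self.introduced = False   # introduced bit 4090

    def handle_transaction(self, request: str, system_guid: bytes):
        # Method 4400: refuse all transactions unless the introduced bit is
        # set and the requester proves knowledge of the system GUID.
        if not self.introduced:                    # decision block 4425
            return None                            # block 4430: no response
        if system_guid != self.system_guid:        # authentication (block 4440)
            return None                            # block 4430
        return f"fulfilled: {request}"             # block 4445

    def divorce(self, system_guid: bytes) -> bool:
        # Method 4500: reset the introduced bit only after mutual
        # authentication (modeled here as a direct GUID check).
        if system_guid == self.system_guid:        # block 4510
            self.introduced = False                # block 4515
            return True
        return False

sys_guid = os.urandom(16)
dev = MarriedDevice(os.urandom(16))
dev.system_guid, dev.introduced = sys_guid, True
assert dev.handle_transaction("read sector 7", sys_guid) is not None
assert dev.handle_transaction("read sector 7", os.urandom(16)) is None
assert dev.divorce(sys_guid) and not dev.introduced
```

The same gate is what makes a stolen device useless elsewhere: with the introduced bit set, no other system can present the matching system GUID.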
The method 4600 of FIG. 35 includes the device receiving a command for the device to leave the computer system, in block 4605. The device also receives at least an indication of a maintenance key that the device can successfully authenticate, in block 4610. The device resets the introduced bit in response to the device receiving at least the indication of the maintenance key that the device can successfully authenticate, in block 4615.

The method 4600 of FIG. 35 may advantageously allow for easy removal of a device married to the computer system when the computer system is unresponsive or the device must be removed from the computer system for repair, while maintaining system security. The maintenance key may be programmed by the manufacturer of the device for each device, or for a class of devices. Authorized, trusted repair facilities are preferably the only ones with access to the maintenance key. A purchaser of a large number of similar devices could request a single maintenance key for all devices purchased.

Turning now to FIG. 36, a block diagram of an embodiment of a computer subsystem 4700 including bus interface logics 134B, 134C, 134D, and 134E with master mode capabilities in an embodiment of the south bridge 330F, according to one aspect of the present invention, is illustrated. In the embodiment shown, the south bridge 330F is coupled through the LPC bus 118 to an embodiment of a crypto-processor 305, including master mode logic 4790. The crypto-processor 305 is coupled to a protected storage 605. The bus interface logics 134B, 134C, 134D, and 134E of the south bridge 330F include IDE interface logic 134B, USB interface logic 134C, LPC bus interface logic 134D, and SMBus bus interface logic 134E. Each bus interface logic 134B, 134C, 134D, and 134E includes a master mode register 4799 including a master mode bit. Coupled to the USB interface logic 134C are the USB hub 315, the biometric device 320, and the smart card reader 325.

Master mode operations of the computer subsystem 4700 may advantageously allow for secure input of data, such as biometric data or smart card data, without the unencrypted data being accessible to the operating system. Master mode creates a secure communications channel between the master mode logic 4790 and the data input device.

Although the illustrated embodiment of FIG. 36 shows the master mode logic 4790 in the crypto-processor 305, it is contemplated that the master mode logic 4790 may also be incorporated into other devices in the computer system, such as in the security hardware 370 shown above. It is also contemplated that other devices, such as the USB hub 315, that pass data through may also include the master mode register 4799. In various embodiments, secure data input devices, such as the biometric device 320, the smart card reader 325, or a keyboard, also include the master mode register 4799.

Note that the storage location or locations for storing the master mode bit may also include space for storing one or more addresses in an appropriate format for the bus interface logic. The one or more addresses may be used by the bus interface logics to provide data to and from only those addresses, only within the address range defined by those addresses, or to exclude data from or to those addresses or the address range the addresses define. The crypto-processor or security hardware may store the one or more addresses, or the crypto-processor or security hardware may indicate to the bus interface logic or logics to store the addresses themselves.
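The address filtering described in the preceding note can be modeled as a simple predicate applied by a bus interface logic. The sketch below is an assumed illustration; the register layout and field names are invented for clarity and are not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MasterModeRegister:
    # Master mode register 4799, modeled with optional address windows.
    master_mode: bool = False
    allow_ranges: List[Tuple[int, int]] = field(default_factory=list)
    exclude_ranges: List[Tuple[int, int]] = field(default_factory=list)

    def permits(self, address: int) -> bool:
        """Decide whether a transfer to/from 'address' may pass."""
        for lo, hi in self.exclude_ranges:
            if lo <= address <= hi:
                return False
        if self.allow_ranges:
            return any(lo <= address <= hi for lo, hi in self.allow_ranges)
        return True

reg = MasterModeRegister(master_mode=True,
                         allow_ranges=[(0x1000, 0x1FFF)],
                         exclude_ranges=[(0x1800, 0x18FF)])
assert reg.permits(0x1234)
assert not reg.permits(0x1850)   # excluded sub-range
assert not reg.permits(0x2000)   # outside the allowed window
```

Whether the windows are held by the crypto-processor or by the bus interface logic itself is, as the text notes, an implementation choice.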
Turning now to FIG. 37, a flowchart of an embodiment of a method 4800 for operating in a master mode outside the operating system is illustrated. The master mode operation may advantageously allow for user authentication, such as via a biometric device or a smart card reader, without the operating system or a program running under the operating system snooping on the authentication data stream.

The method 4800 shown in FIG. 37 includes transmitting a master mode signal to one or more bus interface logics or other devices that include a master mode register, in block 4805. The method 4800 also includes setting a master mode bit in the master mode register of each of the one or more bus interface logics or other devices that include the master mode register to establish a secure transmission channel between the master mode logic and the data input device, in block 4810. The master mode logic and the data input device exchange data outside the operating system of the computer system through the bus interface logics or other devices that include the master mode register, in block 4815.

The master mode logic flushes, or signals the bus interface logics or other devices that include the master mode register to flush, the buffers of the bus interface logics or other devices that include the master mode register after concluding the data transmissions, in block 4820. The master mode logic finally signals the bus interface logics or other devices that include the master mode register to reset the master mode bits after flushing the buffers, so that the operating system can again access the bus interface logics or other devices that include the master mode register, in block 4825.

As used herein, operating outside the operating system means that programs running under the operating system are unable to access the bus interface logics or other devices including a master mode register while the master mode bit is set. This may advantageously allow for a program running under the operating system to request the crypto-processor or other master device including the master mode logic to perform a secure data read. The master mode logic is configured to read secure data from an input device such as a biometric device, a smart card reader, a signature verification reader, or a keyboard. As described herein, the biometric device may measure any one or more of any number of physiological and/or behavioral features, including but not limited to fingerprints, hand geometry, voice prints, retinal scans, facial scans, body odor, ear shape, DNA profile, keystroke dynamics, and vein checking.
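The sequence of method 4800 lends itself to a compact sketch. The following is a minimal simulation under assumed class names; real bus interface logics are hardware registers, not Python objects:

```python
class BusInterfaceLogic:
    def __init__(self, name: str):
        self.name = name
        self.master_mode = False   # master mode bit in register 4799
        self.buffer: list = []

    def flush(self):
        self.buffer.clear()        # block 4820: discard buffered plaintext

def secure_read(logics, input_device):
    """Method 4800: read input outside the operating system."""
    for logic in logics:
        logic.master_mode = True   # blocks 4805/4810: claim the channel
    try:
        data = input_device()      # block 4815: exchange data outside the OS
    finally:
        for logic in logics:
            logic.flush()               # block 4820: flush buffers
            logic.master_mode = False   # block 4825: release to the OS
    return data

usb_logic = BusInterfaceLogic("USB 134C")
sample = secure_read([usb_logic], lambda: b"smart-card PIN block")
assert sample and not usb_logic.master_mode and not usb_logic.buffer
```

Flushing before releasing the master mode bit is the step that keeps plaintext authentication data from ever being visible to software running under the operating system.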
Turning now to FIGS. 38A and 38B, flowcharts of embodiments of methods 4900A and 4900B for booting a computer system including authentication via the master mode logic are shown. In FIG. 38A, the crypto-processor is used to control the master mode logic, while in FIG. 38B, the security hardware is used to control the master mode logic.

In FIG. 38A, the processor executes BIOS code instructions from SMM space, in 4920. After optionally accessing the security hardware, in 4930, the method 4900A requests authentication from the crypto-processor, preferably using the master mode logic, in 4935A. The method 4900A places the bus interface logics in master mode, in 4938. The bus interface logics would typically be between the crypto-processor and the authentication device. The method 4900A receives the authentication data while the bus interface logics are in master mode, in 4940. The method 4900A exits master mode and flushes the buffers of the bus interface logics, in 4942. The method 4900A next verifies the authentication data, in 4944. Verifying the authentication data may include the crypto-processor providing an indication of the authentication data to a remote security device. If the authentication data are verified in 4948, then the method 4900A continues the boot process, in 4990. If the authentication data are not verified in 4948, then the method 4900A returns to 4935A and again requests authentication.

In FIG. 38B, the processor executes BIOS code instructions from SMM space, in 4920. After optionally accessing the security hardware, in 4930, and optionally entering a BIOS management mode, in 4932, the method 4900B requests authentication from the security hardware, using the master mode logic, in 4935B. The method 4900B places the bus interface logics in master mode, in 4938. The bus interface logics would typically be between the security hardware, e.g. the south bridge, and the authentication device. The method 4900B receives the authentication data while the bus interface logics are in master mode, in 4940. The method 4900B exits master mode and flushes the buffers of the bus interface logics, in 4942. The method 4900B next verifies the authentication data, in 4944. Verifying the authentication data may include the security hardware providing an indication of the authentication data to a remote security device. If the authentication data are verified in 4948, then the method 4900B continues the boot process, in 4990. If the authentication data are not verified in 4948, then the method 4900B returns to 4935B and again requests authentication.

Note that the relative position of steps of the methods 4900A and 4900B in the boot process (or sequence), such as shown in FIG. 1A, would typically be prior to step 152. The relative position of various steps of the methods 4900A and 4900B in the boot process may also be between steps 1632 and 1650 of FIGS. 16A and 16B. Various BIOS code segments may be necessary for correct response of various devices in the computer system, such as the south bridge and authentication devices coupled thereto.

Turning now to FIGS. 39A, 39B, and 39C, block diagrams of embodiments of systems 5000A, 5000B, and 5000C for securing a device, a computer subsystem, and/or a computer system using timers to enforce periodic authentication are illustrated. In FIG. 39A, the system 5000A includes each of a computer system 5005, a computer subsystem 5020, and a device 5040, as well as a network security authenticator 5070. In FIG. 39B, the system 5000B includes a portable computer 5003 coupled to a server 5004 for authentication. In FIG. 39C, the system 5000C includes two computer systems 5003A and 5003B coupled to the server 5004 including the network security authenticator 5070.

In FIG. 39A, the system 5000A, as shown, includes the computer system 5005 coupled to the network security authenticator 5070 through a network 5065. The computer system 5005 includes logic 5007, a timer 5009, a security authenticator 5010, and the computer subsystem 5020. The computer subsystem 5020 includes logic 5027, a timer 5029, a security authenticator 5030, and the device 5040.
The device 5040 includes logic 5047 and a timer 5049.

In one embodiment, the device 5040 authenticates to the computer subsystem 5020, using the security authenticator 5030, and the logic 5047 sets and monitors the timer 5049. In another embodiment, the device 5040 authenticates to the computer system 5005, using the security authenticator 5010, and the logic 5047 sets and monitors the timer 5049. In still another embodiment, the device 5040 authenticates to the network security authenticator 5070 over the network 5065, and the logic 5047 sets and monitors the timer 5049.

In one embodiment, the computer subsystem 5020 authenticates to the computer system 5005, using the security authenticator 5010, and the logic 5027 sets and monitors the timer 5029. In another embodiment, the computer subsystem 5020 authenticates to the network security authenticator 5070 over the network 5065, and the logic 5027 sets and monitors the timer 5029. In another embodiment, the computer system 5005 authenticates to the network security authenticator 5070 over the network 5065, and the logic 5007 sets and monitors the timer 5009. Note that not all of these embodiments are mutually exclusive.

In FIG. 39B, the system 5000B includes the portable computer 5003 coupled over a remote connection to the server 5004. The operations of the system 5000B are described with respect to FIG. 40B below. The portable computer 5003 may include the logic 5007 and the timer 5009 shown in FIG. 39A. The server 5004 may include the network security authenticator 5070.

In FIG. 39C, the system 5000C includes two computer systems 5003A and 5003B coupled over the network 5065 to the server 5004 including the network security authenticator 5070. The computer system 5003A includes a south bridge 330G that includes security hardware 370. The security hardware 370, as shown, includes the logic 5047 and the timer 5049. The computer system 5003B includes a crypto-processor 305, in place of the logic 5047, coupled to the timer 5049. FIG. 39C illustrates that the security hardware 370 or the crypto-processor 305 may control the timer 5049 and the interactions with the network security authenticator 5070.

Turning now to FIGS. 40A and 40B, flowcharts of embodiments of methods 5100A and 5100B for securing a device, a computer subsystem, or a computer system, such as a portable computer, by limiting use to finite periods of time between successive authorizations are illustrated. The methods 5100A and 5100B may advantageously discourage theft of the device, the computer subsystem, or the computer system, as its usefulness is limited outside of or without its authorizing computer subsystem, computer system, or network security connections. While the method 5100A of FIG. 40A is a general method applicable to any of a device, a computer subsystem, or a computer system, the method 5100B of FIG. 40B is an example of a specific method applicable to a portable computer adapted to communicate over a computer network.

In FIG. 40A, the method 5100A authenticates the device, the computer subsystem, or the computer system to the computer subsystem, the computer system, or the network security device, in 5105. Typically, the device will authenticate to the computer subsystem or the computer system, while the computer subsystem will authenticate to the computer system or the network security device, and the computer system will authenticate to the network security device. Deviations from this typical behavior may include a device authenticating to the network security device, or the computer system authenticating to another computer system.
The method 5100A sets a starting value on a timer in response to successfully authenticating the device, the computer subsystem, or the computer system, in 5110. The timer is updated in a periodic fashion, in 5115. The method 5100A checks, in 5120, if the timer has expired. If the timer has not expired, in 5120, then the method 5100A continues the normal operation of the device, the computer subsystem, or the computer system, in 5125, and returns to 5115. If the timer has expired, in 5120, then the method 5100A attempts to re-authenticate the device, the computer subsystem, or the computer system to the appropriate master, in 5130. If the re-authentication in 5130 is successful, in 5135, then the method 5100A returns to 5110 and resets the starting value on the timer. If the re-authentication in 5130 is not successful, in 5135, then the method 5100A shuts down the device, the computer subsystem, or the computer system until the device, the computer subsystem, or the computer system can be re-authenticated, such as during the boot process.

Note that the timer may be implemented as a count-down timer running from a set value down to the expired value of zero, or as a counting timer running from zero up to a predetermined value as the expired value. The set value or the predetermined value may be a constant or may be randomly selected. The set value or the predetermined value may also vary according to a predetermined algorithm, if desired. Updating the timer may occur with each increment of the system clock or a local clock, or only while the device, the computer subsystem, or the computer system is operating.

The method 5100B establishes a network connection to the network security device (or system), in 5104. The method 5100B authenticates a portable computer to the network security system, in 5106. The authentication may occur during the boot process. The method 5100B sets a starting value on a timer in response to successfully authenticating the portable computer, in 5110. The timer is updated in a periodic fashion, in 5115. The method 5100B checks, in 5120, if the timer has expired. If the timer has not expired, in 5120, then the method 5100B continues the normal operation of the portable computer, in 5126, and returns to 5115. If the timer has expired, in 5120, then the method 5100B attempts to establish a network connection to the network security system, in 5129, and to re-authenticate the portable computer to the network security system, in 5131. If the re-authentication, in 5131, is successful, in 5135, then the method 5100B returns to 5110 and resets the starting value on the timer. If the re-authentication, in 5131, is not successful, in 5135, then the method 5100B shuts down the portable computer and requires authentication during the boot process, in 5141, before normal operations of the portable computer are allowed to resume.

Note that the device 5040 may represent any device in the computer system 5003 or 5005, and the computer subsystem 5020 may represent any computer subsystem in the computer system 5003 or 5005. Also note that code for the authentication and timer settings may be stored in the security hardware 370 or the secure storage shown elsewhere in this disclosure, such as the BIOS ROM 365, the SMM ROM 520, the extended BIOS 555, or the protected storage 605.
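The timer loop of method 5100A reduces to a small state machine. The sketch below models it with an abstract `authenticate` callback and a tick counter standing in for the hardware timer; both names are assumptions made for illustration:

```python
def run_with_periodic_auth(authenticate, work, timer_start: int):
    """Method 5100A as a loop: operate normally until the timer expires,
    then re-authenticate; a failed re-authentication shuts operation down
    (modeled here as returning False)."""
    if not authenticate():               # 5105: initial authentication
        return False
    remaining = timer_start              # 5110: set starting value on the timer
    while True:
        if remaining > 0:                # 5120: timer not yet expired
            if not work():               # 5125: continue normal operation
                return True              # workload finished normally
            remaining -= 1               # 5115: periodic timer update
        else:                            # 5120: timer expired
            if authenticate():           # 5130/5135: re-authentication
                remaining = timer_start  # back to 5110: reset the timer
            else:
                return False             # shut down until re-authenticated

# Toy usage: three units of work, re-authentication required every two ticks.
jobs = iter(range(3))
ok = run_with_periodic_auth(
    authenticate=lambda: True,                 # always succeeds in this toy
    work=lambda: next(jobs, None) is not None, # False once the jobs run out
    timer_start=2,
)
assert ok
```

A randomly selected or algorithmically varying starting value, as the note above suggests, would make the re-authentication interval harder for an attacker to predict.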
Turning now to FIG. 41, a flowchart of an embodiment of a method 5200 for booting a computer system including initializing a timer to enforce periodic authentication and authorization is shown. The method includes the processor executing BIOS code instructions from SMM space, in 5220. The method 5200 may also access the security hardware, in 5230. The method 5200 may also optionally enter BIOS management mode, in 5232. The method 5200 authenticates the computer system through the security hardware, in 5235. Authentication data are provided to the security hardware, in 5240. If the authentication is not successful, in 5248, then the method 5200 shuts down the computer system until successful authentication is provided, in 5295. If the authentication is successful, in 5248, then the method 5200 sets a starting value on the timer, in response to successfully authenticating, in 5280. The method 5200 then continues the boot process, in 5290.

Turning now to FIGS. 42A and 42B, block diagrams of embodiments of the secure system management registers 470A and 470B are illustrated. In the embodiment shown in FIG. 42A, the secure system management registers 470A include one or more ACPI lock bits 5310A through 5310N to secure various ACPI or related functions against unauthorized changes. The ACPI lock bits 5310, once set, prevent changes to the ACPI or related functions. A request to change one of the ACPI or related functions requires that a respective ACPI lock bit 5310N be released before the respective one of the ACPI or related functions is changed.

In the embodiment shown in FIG. 42B, the secure system management registers 470B include one or more ACPI range registers 5320 and/or one or more ACPI rule registers 5330. Each of the one or more ACPI range registers 5320 may be configured to store a value or values that define allowable or preferred values for a specific ACPI or related function. Each of the one or more ACPI rule registers 5330 may be configured to store part or all of a rule for determining if a change to one of the ACPI or related functions should be allowed. Each of the one or more ACPI rule registers 5330 may also be configured to store code for evaluating the rules for determining if a change to one of the ACPI or related functions should be allowed, or for comparing a requested value or change to the value or values that define allowable or preferred values for a specific ACPI or related function stored in one of the ACPI range registers 5320.

Examples of ACPI or related functions include changing a voltage, changing a frequency, turning on or off a cooling fan, and a remote reset of the computer system. It is contemplated that other ACPI or related functions may also be used. It is noted that the voltage may be a processor voltage, the frequency may be a processor operating frequency or a bus or interface frequency, and the cooling fan may be operable or intended to cool any component in the computer system, including devices or subsystems not described herein, such as a power supply. It is noted that in various embodiments, the SMM access filters 410, such as shown in FIG. 5A, may include address range traps for directing access requests to evaluate the contents of the secure system management registers 470A or 470B.
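A change-request check against the lock bits of FIG. 42A and the range registers of FIG. 42B could look like the following sketch. The register model and function names are assumptions for illustration; actual hardware would implement this in the south bridge logic:

```python
class SecureSystemManagementRegisters:
    def __init__(self):
        self.acpi_lock = {}    # ACPI lock bits 5310A..5310N, keyed by function
        self.acpi_range = {}   # ACPI range registers 5320: (low, high) per function

    def request_change(self, function: str, value: float) -> bool:
        """Allow an ACPI change only if the lock bit is released and the
        requested value falls within the allowable range, if one is set."""
        if self.acpi_lock.get(function, False):
            return False                      # lock bit set: change refused
        lo, hi = self.acpi_range.get(function, (float("-inf"), float("inf")))
        return lo <= value <= hi              # range register check

regs = SecureSystemManagementRegisters()
regs.acpi_lock["remote_reset"] = True         # lock remote reset outright
regs.acpi_range["cpu_voltage"] = (0.9, 1.35)  # allowable voltage window

assert not regs.request_change("remote_reset", 1)
assert regs.request_change("cpu_voltage", 1.2)
assert not regs.request_change("cpu_voltage", 1.6)   # out of range
```

The rule registers 5330 generalize the range check to arbitrary predicates; the sketch shows only the simpler range case.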
For the purposes of this disclosure, references to ROM are to be construed as also applying to flash memory and other substantially non-volatile memory types.

Note that while the methods of the present invention disclosed herein have been illustrated as flowcharts, various elements of the flowcharts may be omitted or performed in a different order in various embodiments. Note also that the methods of the present invention disclosed herein admit of variations in implementation.

Some aspects of the invention as disclosed above may be implemented in hardware or software. Thus, some portions of the detailed descriptions herein are consequently presented in terms of a hardware-implemented process and some portions of the detailed descriptions herein are consequently presented in terms of a software-implemented process involving symbolic representations of operations on data bits within a memory of a computing system or computing device. These descriptions and representations are the means used by those in the art to convey most effectively the substance of their work to others skilled in the art using both hardware and software. The process and operation of both require physical manipulations of physical quantities. In software, usually, though not necessarily, these quantities take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as may be apparent, throughout the present disclosure, these descriptions refer to the action and processes of an electronic device that manipulates and transforms data represented as physical (electronic, magnetic, or optical) quantities within some electronic device's storage into other data similarly represented as physical quantities within the storage, or in transmission or display devices. Exemplary of the terms denoting such a description are, without limitation, the terms "processing," "computing," "calculating," "determining," "displaying," and the like.

Note also that the software-implemented aspects of the invention are typically encoded on some form of program storage medium or implemented over some type of transmission medium. The program storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read only memory, or "CD ROM"), and may be read only or random access. Similarly, the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art. The invention is not limited by these aspects of any given implementation.

The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified, and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
Systems, methods, and apparatus are described that facilitate transmission of data, particularly between two or more devices within an electronic apparatus. Embodiments disclosed herein relate to scanning for slave identifiers (SIDs) on a CCIe bus. A disclosed method includes transmitting a first inquiry on a control data bus, where the first inquiry includes a first configuration of bits, determining presence of a slave device that has a slave identifier that includes a second configuration of bits that matches the first configuration of bits, and repetitively transmitting additional inquiries on the control data bus with different configurations of bits until all bits of the slave identifier are determined. The slave device may assert a response to each inquiry that includes a configuration of bits that matches a corresponding configuration of bits in the slave identifier.
1. A method comprising: transmitting a first query on a control data bus, wherein the first query comprises a first bit configuration; determining the presence of a slave device having a slave identifier that includes a second bit configuration that matches the first bit configuration; and repeatedly transmitting additional queries having different bit configurations on the control data bus until all bits of the slave identifier are determined, wherein the slave device asserts a response for each query that includes a bit configuration that matches a corresponding bit configuration in the slave identifier.

2. The method of claim 1, wherein the slave device identifies a match between the first bit configuration and the second bit configuration by comparing a word transmitted in the first query with a copy of the slave identifier that has been masked by application of a mask transmitted in the first query.

3. The method of claim 2, wherein the additional queries comprise a second query, the method further comprising: modifying the mask to obtain a modified mask that exposes additional bits of the slave identifier for comparison; and transmitting the second query on the control data bus, wherein the second query includes the first bit configuration and the modified mask.

4. The method of claim 2, wherein the additional queries comprise a third query transmitted when no response to a previous query is received, the method further comprising: modifying the first bit configuration to obtain a third bit configuration by flipping a value of an active most significant bit (MSB) of the first bit configuration, wherein the active MSB is defined as the bit of the slave identifier corresponding to the highest value bit that is not suppressed by application of the mask transmitted in the previous query; modifying the mask to obtain a modified mask that exposes additional bits of the slave identifier for comparison; and transmitting the third query on the control data bus, wherein the third query includes the third bit configuration and the mask transmitted in the previous query.

5. The method of claim 2, wherein the additional queries comprise a fourth query transmitted after all bits of the slave identifier have been determined, the method further comprising: restoring the mask to obtain a restored mask having a value transmitted in a prior query that caused at least one slave device to assert the response; modifying a bit configuration transmitted in the prior query to obtain a fourth bit configuration; and transmitting the fourth query on the control data bus, wherein the fourth query includes the fourth bit configuration and the restored mask.

6. The method of claim 5, wherein a different slave device responds to the fourth query, and wherein the different slave device asserts the response when the fourth bit configuration matches a corresponding bit configuration in a different slave identifier associated with the different slave device.

7. The method of claim 1, wherein a plurality of slave devices respond to the first query, and wherein the plurality of slave devices assert the same response when the first bit configuration matches a corresponding bit configuration in each of the respective slave identifiers of the plurality of slave devices.

8. The method of claim 7, wherein the response is asserted using a first line of the control data bus.

9. The method of claim 1, wherein the control data bus is a two-wire bus, and wherein both lines of the two-wire bus are used to communicate the first query.
10. The method of claim 1, wherein, after all bits of the slave identifier are determined, the method further comprises repeatedly transmitting additional queries having different bit configurations on the control data bus until all slave identifiers of all slave devices coupled to the control data bus have been determined.

11. The method of claim 1, wherein the first query is directed to all slave devices coupled to the control data bus.

12. The method of claim 1, wherein the first query is directed to a previously unidentified slave device coupled to the control data bus.

13. The method of claim 1, wherein the first query defines a response period, and wherein, if there is a match between the second bit configuration and the first bit configuration, the slave device must respond on the control data bus within the response period.

14. The method of claim 1, wherein the response is asserted by the slave device temporarily pulling down a first line of the control data bus when there is a match between the second bit configuration and the first bit configuration.

15. The method of claim 14, wherein other devices coupled to the control data bus mask their inputs to the first line of the control data bus during a response period.

16. A device, comprising: a slave device coupled to a control data bus; and a master device coupled to the control data bus and adapted to manage communications on the control data bus, the master device configured to: transmit a first query on the control data bus, wherein the first query comprises a first bit configuration; determine the presence of a slave device having a slave identifier that includes a second bit configuration that matches the first bit configuration; and repeatedly transmit additional queries having different bit configurations on the control data bus until all bits of the slave identifier are determined; wherein the slave device asserts a response for each query that includes a bit configuration that matches a corresponding bit configuration in the slave identifier.

17. The device of claim 16, wherein the slave device identifies a match between the first bit configuration and the second bit configuration by comparing a word transmitted in the first query with a copy of the slave identifier that has been masked by application of a mask transmitted in the first query.

18. The device of claim 17, wherein the additional queries comprise a second query, and wherein the master device is further configured to: modify the mask to obtain a modified mask that exposes additional bits of the slave identifier for comparison; and transmit the second query on the control data bus, wherein the second query includes the first bit configuration and the modified mask.

19. The device of claim 17, wherein the additional queries comprise a third query transmitted when no response to a previous query is received, and wherein the master device is further configured to: modify the first bit configuration to obtain a third bit configuration by flipping a value of an active most significant bit (MSB) of the first bit configuration, wherein the active MSB is defined as the bit of the slave identifier corresponding to the highest value bit that is not suppressed by application of the mask transmitted in the previous query; modify the mask to obtain a modified mask that exposes additional bits of the slave identifier for comparison; and transmit the third query on the control data bus, wherein the third query includes the third bit configuration and the mask transmitted in the previous query.
20. The device of claim 17, wherein the additional queries comprise a fourth query transmitted after all bits of the slave identifier have been determined, and wherein the master device is further configured to: restore the mask to obtain a restored mask having a value transmitted in a prior query that caused at least one slave device to assert the response; modify a bit configuration transmitted in the prior query to obtain a fourth bit configuration; and transmit the fourth query on the control data bus, wherein the fourth query includes the fourth bit configuration and the restored mask; and wherein a different slave device responds to the fourth query, the different slave device asserting the response when the fourth bit configuration matches a corresponding bit configuration in a different slave identifier associated with the different slave device.

21. The device of claim 16, wherein a plurality of slave devices respond to the first query, and wherein the plurality of slave devices assert the response when the first bit configuration matches a corresponding bit configuration in each of the respective slave identifiers of the plurality of slave devices.

22. The device of claim 16, wherein the first query defines a response period, and wherein, if there is a match between the second bit configuration and the first bit configuration, the slave device must respond on the control data bus within the response period.

23. The device of claim 16, wherein the response is asserted by the slave device temporarily pulling down a first line of the control data bus when there is a match between the second bit configuration and the first bit configuration.

24. The device of claim 23, wherein other devices coupled to the control data bus mask their inputs to the first line of the control data bus during a response period.

25. A device, comprising: means for transmitting a first query on a control data bus, wherein the first query comprises a first bit configuration; and means for determining the presence of a slave device having a slave identifier that includes a second bit configuration that matches the first bit configuration; wherein the means for transmitting is configured to repeatedly transmit additional queries having different bit configurations on the control data bus until all bits of the slave identifier are determined; wherein the slave device asserts a response for each query that includes a bit configuration that matches a corresponding bit configuration in the slave identifier; and wherein the slave device identifies a match between the first bit configuration and the second bit configuration by comparing a word transmitted in the first query with a copy of the slave identifier that has been masked by application of a mask transmitted in the first query.
26. The device of claim 25, wherein the additional queries comprise a second query, the device further comprising: means for modifying the mask to obtain a modified mask that exposes additional bits of the slave identifier for comparison; and means for transmitting the second query on the control data bus, wherein the second query comprises the first bit configuration and the modified mask.

27. The device of claim 25, wherein the additional queries comprise a third query transmitted when no response to a previous query is received, the device further comprising: means for modifying the first bit configuration to obtain a third bit configuration by flipping a value of an active most significant bit (MSB) of the first bit configuration, wherein the active MSB is defined as the bit of the slave identifier corresponding to the highest value bit that is not suppressed by application of the mask transmitted in the previous query; means for modifying the mask to obtain a modified mask that exposes additional bits of the slave identifier for comparison; and means for transmitting the third query on the control data bus, wherein the third query includes the third bit configuration and the mask transmitted in the previous query.

28. The device of claim 25, wherein a plurality of slave devices respond to the first query, the plurality of slave devices asserting the response when the first bit configuration matches a corresponding bit configuration in each of the respective slave identifiers of the plurality of slave devices, and wherein the response is asserted on the control data bus shared by the plurality of slave devices and within a response period defined by the first query.

29. A machine-readable storage medium having stored thereon one or more instructions that, when executed by at least one processor, cause the at least one processor to: transmit a first query on a control data bus, wherein the first query comprises a first bit configuration; determine the presence of a slave device having a slave identifier that includes a second bit configuration that matches the first bit configuration; and repeatedly transmit additional queries having different bit configurations on the control data bus until all bits of the slave identifier are determined; wherein the slave device asserts a response for each query that includes a bit configuration that matches a corresponding bit configuration in the slave identifier, and wherein the slave device identifies a match between the first bit configuration and the second bit configuration by comparing a word transmitted in the first query with a copy of the slave identifier that has been masked by application of a mask transmitted in the first query.

30. The machine-readable storage medium of claim 29, wherein a plurality of slave devices respond to the first query, wherein the plurality of slave devices assert the response when the first bit configuration matches a corresponding bit configuration in each of the respective slave identifiers of the plurality of slave devices, and wherein the response is asserted on the control data bus shared by the plurality of slave devices and within a response period defined by the first query.
Slave identifier scanning and hot plug capability on the CCIe bus

Cross-reference to related applications

The present application claims priority to and the benefit of U.S. Provisional Patent Application Serial No. 61/889, the entire disclosure of which is incorporated herein by reference.

Field

The present disclosure relates to enabling efficient operation on a shared bus, and more particularly to techniques for efficiently identifying slave devices coupled to a shared bus and facilitating hot plugging of devices to a shared bus.

Background

I2C (also known as I²C) is a multi-master serial single-ended bus that is used to attach low-speed peripherals to a motherboard, embedded system, cellular phone, or other electronic device. The I2C bus includes clock (SCL) and data (SDA) lines with 7-bit addressing. The bus has two roles for a device/node: master and slave. The master device is a device that generates the clock and initiates communication with slave devices. A slave device is a device that receives the clock and responds when it is addressed by the master. The I2C bus is a multi-master bus, which means that any number of master devices can be present. In addition, the master and slave roles can be changed between messages (after a STOP is sent). I2C defines basic message types, each of which begins with a START and ends with a STOP.

In the context of a camera implementation, one-way transmission can be used to capture images from sensors and transfer such image data to memory in a baseband processor, while control data can be exchanged between the baseband processor and the sensors and other peripherals. In one example, a Camera Control Interface (CCI) protocol can be used for such control data between a baseband processor and an image sensor (and/or one or more slave devices). In one example, the CCI protocol can be implemented on an I2C serial bus between the image sensor and the baseband processor.

There is a need for techniques that allow a master device to identify slave devices and/or other devices that are coupled to a shared bus.

Overview

Embodiments disclosed herein provide systems, methods, and apparatus for data communication. In particular, certain aspects of the present disclosure relate to scanning for slave identifiers (SIDs) on a CCIe bus.

In certain aspects of the present disclosure, a method for scanning SIDs includes transmitting a first query on a control data bus, wherein the first query includes a first bit configuration; determining the presence of a slave device having a slave identifier that includes a second bit configuration that matches the first bit configuration; and repeatedly transmitting additional queries with different bit configurations on the control data bus until all bits of the slave identifier are determined. The slave device can assert the response for each query that includes a bit configuration that matches the corresponding bit configuration in the slave identifier.

In one aspect, the slave device identifies a match between the first bit configuration and the second bit configuration by comparing a word transmitted in the first query with a copy of the slave identifier that has been masked by application of a mask transmitted in the first query.

In another aspect, the additional queries can include a second query. The method can also include modifying the mask to obtain a modified mask that exposes additional bits of the slave identifier for comparison, and transmitting the second query on the control data bus.
The second query can include the first bit configuration and the modified mask.

In another aspect, the additional query can include a third query that is transmitted when no response to a previous query is received. The method can further include modifying the first bit configuration to obtain a third bit configuration by flipping a value of an active most significant bit (MSB) of the first bit configuration, wherein the active MSB is defined as the bit of the slave identifier corresponding to the highest-value bit that is not suppressed when the mask transmitted in the previous query is applied. The third query may include the third bit configuration and the mask transmitted in the previous query.

In another aspect, the additional query includes a fourth query transmitted after all bits of the slave identifier have been determined. The method can further include: restoring the mask to obtain a restored mask having a value transmitted in a prior query that caused at least one slave device to assert the response; modifying the bit configuration transmitted in the prior query to obtain a fourth bit configuration; and transmitting the fourth query on the control data bus. The fourth query includes the fourth bit configuration and the restored mask. A different slave device can respond to the fourth query. The different slave device can assert the response when the fourth bit configuration matches corresponding bits of a different slave identifier associated with the different slave device.

In another aspect, a plurality of slave devices respond to the first query. The plurality of slave devices can assert the same response when the first bit configuration matches corresponding bits of their respective slave identifiers. The response can be asserted using a first line of the control data bus.

In another aspect, the control data bus is a two-wire bus. Both lines of the two-wire bus can be used to transmit the first query.

In another aspect, after all bits of the slave identifier are determined, the method further includes repeatedly transmitting additional queries having different bit configurations on the control data bus until all slave identifiers of all slave devices coupled to the control data bus have been determined.

In another aspect, the first query is directed to all of the slave devices coupled to the control data bus. The first query can be directed to slave devices coupled to the control data bus that have not been previously identified. The first query may define a response period during which the slave device must respond on the control data bus if there is a match between the second bit configuration and the first bit configuration. The response may be asserted by the slave device by temporarily pulling down a first line of the control data bus if there is a match between the second bit configuration and the first bit configuration. Other devices coupled to the control data bus mask their inputs from the first line of the control data bus during the response period.

In certain aspects of the present disclosure, an apparatus adapted to scan SIDs includes a slave device coupled to a control data bus, and a master device coupled to the control data bus and adapted to manage communication on the control data bus.
The master device can be configured to: transmit a first query on the control data bus, wherein the first query includes a first bit configuration; determine the presence of a slave device having a slave identifier that includes a second bit configuration matching the first bit configuration; and repeatedly transmit additional queries having different bit configurations on the control data bus until all bits of the slave identifier are determined. The slave device can assert a response for each query that includes a bit configuration that matches a corresponding bit configuration in the slave identifier.

In certain aspects of the present disclosure, an apparatus adapted to scan SIDs includes means for transmitting a first query on a control data bus, wherein the first query comprises a first bit configuration; and means for determining the presence of a slave device having a slave identifier that includes a second bit configuration matching the first bit configuration. The means for transmitting can be configured to repeatedly transmit additional queries having different bit configurations on the control data bus until all bits of the slave identifier are determined. The slave device can assert a response for each query that includes a bit configuration that matches a corresponding bit configuration in the slave identifier. The slave device may identify a match between the first bit configuration and the second bit configuration by comparing the word transmitted in the first query with a copy of the slave identifier that has been masked by applying a mask transmitted in the first query.

In some aspects of the disclosure, one or more instructions are stored on a machine-readable storage medium. The one or more instructions, when executed by at least one processor, cause the at least one processor to scan SIDs by: transmitting a first query on a control data bus, wherein the first query includes a first bit configuration; determining the presence of a slave device having a slave identifier that includes a second bit configuration matching the first bit configuration; and repeatedly transmitting additional queries having different bit configurations on the control data bus until all bits of the slave identifier are determined. The slave device can assert a response for each query that includes a bit configuration that matches a corresponding bit configuration in the slave identifier.
The slave device may identify a match between the first bit configuration and the second bit configuration by comparing the word transmitted in the first query with a copy of the slave identifier that has been masked by applying a mask transmitted in the first query.

Drawings
The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings.
FIG. 1 is a block diagram illustrating an apparatus having a baseband processor and an image sensor and implementing an image data bus and a multimode control data bus.
FIG. 2 illustrates how the clock can be embedded in the symbol-to-symbol transitions in CCIe mode, thereby allowing the two lines of the I2C bus (i.e., the SDA line and the SCL line) to be used for data transmission.
FIG. 3 is a block diagram illustrating an exemplary method for transcoding data bits into transcoded symbols at a transmitter to embed a clock signal within the transcoded symbols.
FIG. 4 illustrates an exemplary conversion between transition numbers and sequential symbols.
FIG. 5 illustrates the conversion between transition numbers and sequential symbols.
FIG. 6 illustrates a method for converting binary bits into a ternary number from the most significant bit to the least significant bit.
FIG. 7 illustrates transmitter-side logic for converting binary bits into a ternary number from the most significant bit to the least significant bit.
FIG. 8 illustrates a method for converting a ternary number into binary bits from the most significant digit to the least significant digit.
FIG. 9 illustrates a receiver-side logic circuit for converting a 12-digit ternary number into 20 bits.
FIG. 10 conceptually illustrates that bit 19 (i.e., the 20th bit when the bit count begins with bit 0 as the first bit) is not used in the CCIe protocol in most cases and can be used for commands between devices on a shared bus.
FIG. 11 illustrates an exemplary general call with a CCIe mode entry indicator that can be transmitted by a master device on a shared bus to indicate to the slave devices that the shared bus is switching from I2C mode to CCIe mode.
FIG. 12 illustrates an exemplary CCIe call that may be issued by a CCIe master device (e.g., the master device in I2C mode of FIG. 1) to indicate a transition from CCIe mode to I2C mode to all CCIe-capable devices.
FIG. 13 illustrates an exemplary CCIe slave identifier (SID) word format.
It illustrates the use of a 16-bit slave identifier (SID) as part of the CCIe SID word format.
FIG. 14 illustrates an exemplary CCIe address word format.
FIG. 15 illustrates an exemplary write data word format.
FIG. 16 illustrates an exemplary read specification word format.
FIG. 17 illustrates an exemplary read data word format.
FIG. 18 illustrates an exemplary timing diagram of an I2C one-byte write data operation.
FIG. 19 illustrates an exemplary CCIe transmission in which data bits have been transcoded into twelve symbols for transmission on the SDA line and the SCL line.
FIG. 20 illustrates an exemplary mapping of the 20th bit (bit 19) resulting from the encoding scheme illustrated in FIGS. 2-10.
FIG. 21 illustrates details of sub-regions within the exemplary mapping of the 20th bit (bit 19) region of FIG. 20.
FIG. 22 illustrates one example of a "SID Scan All" command that may be issued by a master device in accordance with certain aspects disclosed herein.
FIG. 23 illustrates an example of an algorithm that can be used to scan SIDs in accordance with certain aspects disclosed herein.
FIG. 24 illustrates a timing diagram of SID scanning on a shared bus including an SDA line and an SCL line.
FIG. 25 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 26 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 27 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 28 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 29 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 30 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 31 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 32 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 33 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 34 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 35 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 36 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 37 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 38 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 39 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 40 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 41 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 42 illustrates an example of various steps of a SID scan performed in accordance with certain aspects disclosed herein.
FIG. 43 illustrates an example of a "SID Scan New" command that can be issued by a master device.
FIG. 44 is a block diagram illustrating an example of an apparatus employing a processing system that can be adapted in accordance with certain aspects disclosed herein.
FIG. 45 is a flow diagram illustrating a method for performing slave identifier scanning over a communication link.
FIG. 46 is a conceptual diagram illustrating an example of a hardware implementation of an apparatus employing processing circuitry configured to perform slave identifier scanning over a communication link.

Detailed Description
In the following description, specific details are given to provide a thorough understanding of the embodiments. However, those of ordinary skill in the art will understand that the embodiments can be practiced without these specific details. For example, circuits may be shown in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, structures, and techniques may not be shown in detail in order not to obscure the embodiments.

Exemplary Operating Environment
FIG. 1 is a block diagram illustrating an apparatus 102 having a baseband processor 104 and an image sensor 106 and implementing an image data bus 116 and a multimode control data bus 108. Although FIG. 1 illustrates the multimode control data bus 108 within a camera device, it should be apparent that the control data bus 108 can be implemented in a variety of different devices and/or systems. Image data may be sent from the image sensor 106 to the baseband processor 104 over the image data bus 116 (e.g., a high-speed differential DPHY link).

In one example, the control data bus 108 can be an I2C bus that includes two wires: a clock line (SCL) and a serial data line (SDA). The clock line SCL can be used to transmit a clock that is used to synchronize all data transfers over the I2C bus (control data bus 108). The data line SDA and clock line SCL are coupled to all devices 112, 114, and 118 on the I2C bus (control data bus 108). In this example, control data can be exchanged between the baseband processor 104 and the image sensor 106 as well as other peripherals 118, 122, and/or 124 via the control data bus 108. The standard I2C clock (SCL) speed is up to 100 kHz. The SCL speed is up to 400 kHz in I2C fast mode and up to 1 MHz in I2C fast mode plus (Fm+). These modes of operation on the I2C bus can be referred to as Camera Control Interface (CCI) modes when used in camera applications.

According to one aspect, an improved mode of operation (i.e., with a control data bus transmission frequency greater than 1 MHz) can be implemented on the multimode control data bus 108 to support camera operation. This improved mode of operation on the I2C bus can be referred to as Camera Control Interface extension (CCIe) mode when used in camera applications. In CCIe mode, both the SCL line and the SDA line can be used to transmit data, and the clock is embedded in the symbol-to-symbol transitions on the two lines. In this example, the baseband processor 104 includes a master device 112 and the image sensor 106 includes a slave device 114, both of which can operate on the control data bus 108 in accordance with the camera control interface extension (CCIe) mode without affecting the proper operation of other legacy I2C devices coupled to the control data bus 108.
According to one aspect, this improved mode on the control data bus 108 can be implemented without any bridge device between CCIe devices and legacy I2C slave devices.

A protocol is provided that permits I2C-compatible devices and CCIe-compatible devices to be concurrently coupled to the shared control data bus 108. The control data bus 108 can dynamically switch between operation in accordance with different communication protocols (e.g., I2C mode and CCIe mode). As mentioned previously, the multimode master device 112 manages communication over and/or access to the shared control data bus 108. The master device transmits an entry call to indicate that the control data bus 108 is to switch its communication protocol from a first protocol mode (e.g., I2C mode) to a second protocol mode (e.g., CCIe mode). Similarly, the master device transmits an exit call to indicate that the control data bus 108 is to switch its communication protocol from the second protocol mode (e.g., CCIe mode) back to the first protocol mode (e.g., I2C mode). The slave devices coupled to the shared bus 108 monitor these entry and exit calls to ascertain when they can operate on the shared bus 108.

Exemplary CCIe Coding Technique
FIG. 2 illustrates how the clock can be embedded in the symbol-to-symbol transitions in CCIe mode, thereby allowing the two lines of the I2C bus (i.e., the SDA line and the SCL line) to be used for data transmission. In one example, this embedding of the clock can be achieved by transcoding. For example, data 204 to be transmitted over a physical link (wires) is transcoded so that the transmitted symbols 206 are guaranteed to change state at each symbol cycle. In one example, a bit sequence is converted into a ternary number, and each digit of the ternary number is converted into a symbol for transmission. Even if two sequential digits of the ternary number are the same, the corresponding sequential symbols are guaranteed to be different. Thus, the original clock 202 can be embedded in the symbol state change at each symbol cycle. The receiver recovers clock information 208 from the state transition at each symbol (in the transmitted symbols 206) and then reverses the transcoding of the transmitted symbols 206 to obtain the original data 210. At the receiver, each symbol is converted back into a digit, the digits together constitute a ternary number, and the ternary number is then converted back into bits. This allows both wires of the I2C bus (the SDA line and the SCL line of the control data bus 108 in FIG. 1) to be used to transmit data information. In addition, the symbol rate can be doubled because there is no longer a need for setup and hold time between the clock signal and the data signal.

FIG. 3 is a block diagram illustrating an exemplary method for transcoding data bits into transcoded symbols at a transmitter to embed a clock signal within the transcoded symbols. At the transmitter 302, a data bit sequence 304 is converted into a ternary (base-3) number (i.e., a "transition number"), and the digits of the ternary number are then converted into (sequential) symbols that are sent on the clock line SCL 312 and the data line SDA 314.

In one example, the original 20 bits of binary data are input into a bit-to-transition-number converter block 308 for conversion into a 12-digit ternary number. Each digit of the 12-digit ternary number represents a "transition number".
Two consecutive transition numbers can have the same value (i.e., consecutive digits of the ternary number can be the same). Each transition number is converted into a sequential symbol at a transition-to-symbol block 310 such that no two consecutive sequential symbols have the same value. Because a transition is guaranteed at every sequential symbol, such symbol transitions can be used to embed a clock signal. Each sequential symbol 316 is then transmitted over a two-wire physical link (e.g., an I2C bus including the SCL line 312 and the SDA line 314).

FIG. 4 illustrates an exemplary conversion between transition numbers 402 and sequential symbols 404. An individual digit of a ternary (base-3) number, also referred to as a transition number, can have one of three possible states: 0, 1, or 2. Although the same value may appear in two consecutive digits of a ternary number, no two consecutive sequential symbols have the same value. The conversion between transition numbers and sequential symbols ensures that the sequential symbol always changes (from symbol to symbol) even when consecutive transition numbers are the same.

The conversion function is illustrated in FIG. 5. At the transmitter side (TX: T to S) 502, a transition number (T) is converted into a sequential symbol (S). The current sequential symbol (Cs) is obtained from the previous sequential symbol (Ps) and a temporary transition number (Ttmp) that is a function of the current transition number (T). The temporary transition number (Ttmp) is obtained by comparing the current transition number T with 0: when T = 0, the temporary transition number (Ttmp) is set to 3; otherwise (when T is not equal to 0), Ttmp is set to T (i.e., Ttmp = T == 0 ? 3 : T). The current sequential symbol is then obtained as the sum of the previous sequential symbol (Ps) and the temporary transition number (Ttmp) (i.e., Cs = Ps + Ttmp), wrapping around when the sum exceeds the four symbol states.

At the receiver side (RX: S to T) 504, the conversion is inverted to obtain the transition number from the current sequential symbol (Cs) and the previous sequential symbol (Ps). The temporary transition number (Ttmp) is obtained as the current sequential symbol (Cs) plus 4 minus the previous symbol (Ps) (i.e., Ttmp = Cs + 4 − Ps, wrapping around the four symbol states). The current transition number (T) is then obtained by comparing the temporary transition number (Ttmp) with 3: when Ttmp = 3, the transition number T is set to 0; otherwise (when Ttmp is not equal to 3), T is set to Ttmp (i.e., T = Ttmp == 3 ? 0 : Ttmp).

Table 506 illustrates the conversion between transition numbers and sequential symbols.

Referring again to FIG. 4, an example of the conversion between transition numbers and sequential symbols is illustrated. In the first cycle 406, the current transition number (Ta) is 2, so Ttmp is also 2, and with the previous sequential symbol Ps being 1, the new current sequential symbol Cs is now 3.

In the second cycle 408, the transition number (Tb) is 1. Since the transition number (Tb) is not equal to 0, the temporary transition number Ttmp is equal to the transition number (Tb), i.e., 1. The current sequential symbol (Cs) is obtained by adding the previous sequential symbol (Ps) value of 3 to the temporary transition number Ttmp of 1. Since the result of this addition is 4, i.e., greater than 3, the value wraps around to 0, which becomes the current sequential symbol (Cs).

In the third cycle 410, the current transition number (T) is 1. Since the transition number T is 1, the temporary transition number Ttmp is also 1. The current sequential symbol (Cs) is obtained by adding the previous sequential symbol (Ps) value of 0 to the temporary transition number Ttmp of 1. Since the result of the addition is 1, i.e., not greater than 3, the current symbol (Cs) is 1.

In the fourth cycle 412, the current transition number (T) is 0. Since the transition number T is 0, the temporary transition number Ttmp is 3. The current sequential symbol (Cs) is obtained by adding the previous sequential symbol (Ps) value of 1 to the temporary transition number Ttmp of 3. Since the result of the addition is 4, i.e., greater than 3, the value wraps around to 0, which becomes the current sequential symbol (Cs).

Note that even when two consecutive ternary digits Tb and Tc have the same value, the conversion ensures that two consecutive sequential symbols have different state values. As such, the guaranteed transition in the sequential symbols 404 can be used to embed the clock signal, thereby freeing the clock line SCL of the I2C bus for data transfer.

Referring again to FIG. 3, at the receiver 320 the process is reversed to convert the transcoded symbols back into bits and, in the process, the clock signal is extracted from the symbol transitions. The receiver 320 receives a sequence of sequential symbols 322 over the two-wire physical link (e.g., an I2C bus including the SCL line 324 and the SDA line 326). The received sequential symbols 322 are input into a clock-data recovery (CDR) block 328 to recover the clock timing and to sample the transcoded symbols (S). A symbol-to-transition-number converter block 330 then converts the transcoded (sequential) symbols into transition numbers (i.e., the digits of a ternary number). Next, a transition-number-to-bits converter 332 converts the 12 transition numbers back into the 20 bits of original data.

The example of the two-wire bus and 12 transition numbers in FIGS. 3 and 4 can be generalized to an n-wire system and m transition numbers. If there are r possible symbol transition states for each transition number T (T0 to Tm−1), then m transitions can convey r^m different states, where r = 2^n − 1. Therefore, the transitions T0 ... Tm−1 contain data that can have (2^n − 1)^m different states.

This technique can be used to increase the link rate of the control data bus 108 (FIG. 1) beyond the link rate provided by the standard I2C bus, and is referred to herein as CCIe mode. In one example, a master device and/or a slave device coupled to the control data bus 108 may implement a transmitter and/or a receiver that embeds a clock signal within symbol transmissions (as illustrated in FIGS. 2, 3, 4, and 5) to achieve a higher bit rate on the same control data bus than is possible with the standard I2C bus.
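By way of illustration, the conversion functions of FIG. 5 can be expressed in a few lines of code. The following Python sketch is an illustrative rendering only (the function names are ours, and the modulo-4 arithmetic makes explicit the wraparound over the four symbol states described above); it reproduces the four example cycles of FIG. 4:

def transition_to_symbol(t, prev_symbol):
    # TX side (T to S): Ttmp = (T == 0) ? 3 : T, then Cs = Ps + Ttmp,
    # wrapping around the four symbol states 0-3.
    t_tmp = 3 if t == 0 else t
    return (prev_symbol + t_tmp) % 4

def symbol_to_transition(cur_symbol, prev_symbol):
    # RX side (S to T): Ttmp = Cs + 4 - Ps (mod 4), then
    # T = (Ttmp == 3) ? 0 : Ttmp.
    t_tmp = (cur_symbol + 4 - prev_symbol) % 4
    return 0 if t_tmp == 3 else t_tmp

# The four cycles 406-412 of FIG. 4: transition numbers 2, 1, 1, 0,
# starting from a previous sequential symbol of 1.
prev = 1
for t in (2, 1, 1, 0):
    cur = transition_to_symbol(t, prev)
    assert cur != prev                           # a transition is guaranteed
    assert symbol_to_transition(cur, prev) == t  # the receiver recovers T
    prev = cur
# The produced symbols are 3, 0, 1, 0: consecutive symbols always differ,
# so the receiver can recover a clock edge at every symbol boundary.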
FIG. 6 illustrates a method for converting binary bits into a ternary number from the most significant bit to the least significant bit. Each digit of the ternary number can be transcoded (converted) into a symbol that is transmitted to the recipient device. For a 12-digit ternary number 602, T0, T1 ... T11 represent the digits of the ternary number, where T0 represents the 3^0 digit (and is the least significant digit) and T11 represents the 3^11 digit (and is the most significant digit). Starting from the received bits (e.g., a 20-bit sequence), the most significant digit T11 of the ternary number 602 is obtained first. The next most significant digit T10 is obtained next. This process continues until the least significant digit T0 is obtained. Each digit of the ternary number 602 can also be referred to as a "transition number".

FIG. 7 illustrates transmitter-side logic for converting binary bits into a ternary number from the most significant bit to the least significant bit. FIGS. 6 and 7 illustrate a 12-digit ternary number 602 transmitted in the order T11, T10, T9, ..., T0. Obtaining and transmitting the most significant digits first simplifies the logic and circuitry involved. In the approach of FIGS. 6 and 7, the most significant sequential symbol is transmitted to the recipient device first, which is therefore referred to as MSS-first (most significant symbol first) transmission. As used herein, "least significant symbol" refers to the transcoded symbol that corresponds to the least significant digit of the ternary number 602. For example, and referring to the description of FIGS. 4 and 5, when T0 is transcoded into a sequential symbol, that symbol is the least significant symbol because it originates from the least significant ternary digit. Similarly, as used herein, "most significant symbol" refers to the transcoded symbol that corresponds to the most significant digit of the ternary number 602. For example, and referring to the description of FIGS. 4 and 5, when T11 is transcoded into a sequential symbol, that symbol is the most significant symbol because it originates from the most significant ternary digit. When the symbol-to-transition-number converter block 330 (FIG. 3) subsequently receives and converts the transcoded (sequential) symbols into transition numbers (i.e., the digits of a ternary number), it receives the most significant digit T11 first and the least significant digit T0 last.

Referring back to FIG. 3, the 20 bits of original data are converted into a ternary number most-significant-bit-first (i.e., the most significant bits are supplied to the converter first), each digit of the ternary number (i.e., each transition number) is converted (i.e., transcoded) into a sequential symbol, and the transcoded symbols are transmitted on the bus most-significant-symbol-first.

FIG. 8 illustrates a method for converting a ternary number into binary bits from the most significant digit to the least significant digit. This receiver-side conversion reverses the operations performed in the transmitter-side conversion illustrated in FIGS. 6 and 7. The receiving device (e.g., a slave device) receives the transmission, performs clock recovery and symbol sampling to convert the transcoded symbols back into a ternary number, and the digits of the ternary number are then provided, most significant digit first, to a logic circuit that converts the ternary number back into the 20 bits of binary original data.
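The MSB-first conversions of FIGS. 6-9 can likewise be sketched in code. The sketch below is illustrative only; it converts a 20-bit word into the 12 ternary transition numbers and back, and verifies the capacity relationship that underlies the bit 19 region discussed below (3^12 = 531441 codes exceed the 2^19 = 524288 codes needed for bits 0-18, leaving the codes from 0x80000 upward unused for ordinary data):

def bits_to_transition_numbers(word):
    # FIGS. 6 and 7: derive the 12 ternary digits T11 .. T0 from a
    # 20-bit word, most significant digit first.
    return [(word // 3**i) % 3 for i in range(11, -1, -1)]

def transition_numbers_to_bits(digits):
    # FIGS. 8 and 9: rebuild the word from the ternary digits,
    # received most significant digit first.
    word = 0
    for d in digits:
        word = word * 3 + d
    return word

# Round trip over the ordinary data region (bits 0-18):
assert transition_numbers_to_bits(bits_to_transition_numbers(0x7FFFF)) == 0x7FFFF

# Capacity: 12 ternary digits carry 3**12 = 531441 codes, more than the
# 2**19 = 524288 codes needed for bits 0-18, so ternary 2221_2201_2002
# (= 0x80000) through 2222_2222_2222 (= 0x81BF0) form the bit 19 region:
assert 3**12 == 531441
assert int("222122012002", 3) == 0x80000
assert int("222222222222", 3) == 0x81BF0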
FIG. 7 also illustrates a multiplexer having 12 inputs coupled through a single output to the conversion logic.

FIG. 9 illustrates a receiver-side logic circuit for converting a 12-digit ternary number into 20 bits.

FIG. 10 conceptually illustrates that bit 19 (i.e., the 20th bit when the bit count begins with bit 0 as the first bit) is not used in the CCIe protocol in most cases and can be used for commands between devices on a shared bus. That is, as a result of the coding scheme illustrated in FIGS. 3-9, an extra bit (i.e., bit 19) in the transmitted symbols is available. More specifically, FIG. 10 illustrates bit 19 (i.e., the 20th bit): as is typical in computer science, the bits are counted from zero, so bit 19 is the 20th bit. Here, bits 0-18 are represented by the ternary range 0000_0000_0000₃ to 2221_2201_2001₃. The ternary numbers in the range 2221_2201_2002₃ to 2222_2222_2222₃ are not used for bits 0-18. Therefore, the ternary range 2221_2201_2002₃ to 2222_2222_2222₃ can be used to represent bit 19 (i.e., the 20th bit). In other words, ternary 2221_2201_2002₃ is binary 1000_0000_0000_0000_0000 (hex 0x80000), and ternary 2222_2222_2222₃ (hex 0x81BF0) is the largest possible 12-digit ternary number.

Exemplary Protocol for CCIe Mode
FIG. 11 illustrates an exemplary general call with a CCIe mode entry indicator that can be transmitted by a master device on a shared bus to indicate to the slave devices that the shared bus is switching from I2C mode to CCIe mode. The general call 1102 can be issued by the CCIe master device on the shared bus (e.g., by the master device 112 of FIG. 1 operating in I2C mode over the SDA line and the SCL line) to indicate the transition from I2C mode to CCIe mode to all I2C-compatible devices.

In I2C mode, the CCIe master device issues this I2C general call 1102 with a "CCIe mode" byte or indicator 1104. The CCIe-compatible slave devices acknowledge receipt of the general call 1102. A CCIe-compatible slave device can insert a wait cycle (if necessary) by holding the SCL line (of the control data bus 108) low during the general call.

Once in CCIe mode, all CCIe-compatible devices are able to respond to requests from the CCIe master device. The operational state and functionality of I2C-compatible legacy slave devices on the shared control data bus that do not support CCIe mode are not affected by any CCIe transaction.

FIG. 12 illustrates an exemplary CCIe call 1202 that may be issued by a CCIe master device (e.g., the master device 112 of FIG. 1) to indicate a transition from CCIe mode to I2C mode to all CCIe-capable devices. The CCIe master device can issue this exit call 1202 in place of a CCIe SID.

In CCIe mode, after the last data word sent in CCIe mode, the CCIe master device sends a special CCIe SID code (the "exit" code/indicator 1204) to indicate (e.g., to the CCIe-compatible devices) that CCIe mode ends and the bus transitions back to I2C mode. Additionally, after the "exit" code/indicator 1204, the CCIe master device transmits an S (start condition) followed by a "general call" 1206 according to the I2C protocol, where the "exit" code 1208 is the second byte within the I2C protocol. All CCIe-capable slave devices must acknowledge this general call 1206.

FIG. 13 illustrates an exemplary CCIe slave identifier (SID) word format. It illustrates the use of a 16-bit slave identifier (SID) 1304 as part of the CCIe SID word format 1302. Such a SID word is used to identify a particular slave device when the word is placed on the control data bus.
FIG. 14 illustrates an exemplary CCIe address word format 1402. Each address word 1406 includes a 16-bit address 1404. The address word 1406 also includes a 2-bit control code 1408 and a 1-bit error detection constant 1410. Table 1412 illustrates the possible values of the control code.

Multiple address words can be sent sequentially. If the control code of the current word is '00', another address word follows. If the control code is '01', the next data word is a write data word. If the control code is '10', the next data word is a read data word. The control code '11' is disabled.

FIG. 15 illustrates an exemplary write data word format. Each write data word 1500 includes a 16-bit write data portion 1502. The write data word 1500 also includes a 2-bit control code 1504 and a 1-bit error detection constant 1510. Table 1514 illustrates the possible values of the control code.

Multiple write data words can be sent sequentially. If the control code of the current write data word is '00' (symbol C0), the data is to be written to the previous address. If the control code of the current write data word is '01' (symbol C1), the data is to be written to the previous address + 1. If the control code is '10' (symbol E), the next word is a SID or the exit code.

FIG. 16 illustrates an exemplary read specification word format 1600. The read specification word 1600 can include a 16-bit read data value portion 1604, a 2-bit control code 1608, and a 1-bit error detection constant 1610.

The "Read Specification" (RS) word 1612 follows the last address word 1607. The read specification (RS) word 1612 specifies the number of read data words that follow. As illustrated in Table 1616, the control code '00' is used to indicate reading words from the same address. The control code '01' is used to indicate reading words from incremented addresses. The slave device (from which data is being read) shall not transmit more data words (excluding the CHK word) than the number specified by the "Read Specification" (RS) word 1612. The slave device shall send at least one read data word (excluding the CHK word). The slave device can end the read transfer before transmitting the number of words specified by the "Read Specification" (RS) word 1612.

FIG. 17 illustrates an exemplary read data word format 1702. The read data word 1702 can include a 16-bit read data value portion 1704, a 2-bit control code 1706, and a 1-bit error detection constant 1708. The slave device addressed by SID 1707 determines the number of words to return to the requesting master device. As illustrated in Table 1716, if the read words continue from the same address, the control code is '00' (symbol R0). If the read words continue from an incremented address, the control code is '01' (symbol R1). If the word is the last read data word and no CHK word follows it, the control code is '10' (symbol E). The control code '11' is disabled.
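For illustration, the word formats above can be modeled as a 19-bit packing of a payload, a control code, and the error detection constant (the 20th bit, bit 19, being reserved for the command region). The field layout chosen in the following Python sketch is an assumption made purely for illustration; the actual bit ordering of the fields is given by the word format figures:

# Hypothetical field layout (an assumption for this sketch): 16-bit
# payload in bits 18..3, 2-bit control code in bits 2..1, and the 1-bit
# error detection constant in bit 0.
CTRL_C0 = 0b00   # e.g., for a write data word: write to previous address
CTRL_C1 = 0b01   # write to previous address + 1
CTRL_E  = 0b10   # next word is a SID or the exit code

def pack_word(payload16, control2, err_const=1):
    assert 0 <= payload16 <= 0xFFFF and 0 <= control2 <= 0b10  # '11' disabled
    return (payload16 << 3) | (control2 << 1) | err_const

def unpack_word(word19):
    return (word19 >> 3) & 0xFFFF, (word19 >> 1) & 0b11, word19 & 1

word = pack_word(0x1234, CTRL_C1)
payload, ctrl, err = unpack_word(word)
assert (payload, ctrl) == (0x1234, CTRL_C1)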
Exemplary I2C Transmission and CCIe Transmission on a Shared Bus
FIG. 18 illustrates an exemplary timing diagram of an I2C one-byte write data operation. In this example, the shared control data bus 108 (FIG. 1) includes a serial data line SDA 1802 and a serial clock line SCL 1804. The transmission scheme illustrated in FIG. 18 may be referred to as "I2C mode". The SCL line 1804 is used to send a clock from the master device to all slave devices, while the SDA line 1802 carries the data bits. The I2C master device transmits a 7-bit slave ID 1808 on the SDA line 1802 to indicate which slave device on the I2C bus the master device wishes to access, followed by one bit indicating a write operation. Only the slave device whose ID matches the 7-bit slave ID 1808 acts on the transfer. In order for an I2C slave device to detect its own ID, the master device must transmit at least 8 bits on the SDA line (i.e., 8 clock pulses on the SCL line 1804).

The I2C standard requires all I2C-compatible slave devices to reset their bus logic upon receipt of a START condition 1806 (e.g., indicated by a high-to-low transition on the SDA line while the SCL line is high).

The CCIe protocol uses both the SDA line 1802 and the SCL line 1804 for data transmission while embedding the clock signal within the data transmission. For example, the data bits can be transcoded into a plurality of symbols that are then transmitted over the two lines. Both the SDA line 1802 and the SCL line 1804 can be used for data transmission because the clock signal (carried by the SCL line of the I2C bus in FIG. 18) is embedded within the symbol transitions.

FIG. 19 illustrates an exemplary CCIe transmission in which data bits have been transcoded into 12 symbols for transmission on the SDA line 1902 and the SCL line 1904. The transmission scheme illustrated in FIG. 19 may be referred to as "CCIe mode". The CCIe mode is source synchronous, driven by push-pull drivers. Any device that sends data on the shared control data bus also sends clock information embedded in the data (e.g., embedded in the symbol-to-symbol transitions). Therefore, only one device on the control data bus is allowed to drive the shared control data bus at any one time.

In order to support both legacy I2C devices and CCIe devices on the same bus, CCIe mode operation uses START conditions 1906, 1908, 1910, which prevent legacy I2C slave devices from reacting to any CCIe operation (e.g., a START condition during CCIe mode resets the bus logic of the legacy I2C slave devices). In this example, a START condition 1906, 1908, 1910 (indicated by a high-to-low transition on the SDA line 1902 while the SCL line 1904 is high) is issued before a full slave ID (i.e., a full 7 bits) has been transmitted, so only an incomplete slave ID (fewer than 7 bits) appears on the bus. If the master device sends 6 SCL pulses and then issues a START condition 1906, 1908, 1910, all legacy I2C slave devices reset their bus logic before they can interpret the data as an I2C slave ID. Since these 6-bit sequences (e.g., corresponding to every two symbols) are transmitted between two START conditions 1906, 1908, 1910, they are not decoded by any I2C slave device as a valid slave ID. Therefore, legacy I2C slave devices do not act on the incomplete slave IDs.

In this system, the master device controls access to the bus. Therefore, any device wishing to transmit on the control data bus typically must request such access from the master device, for example by issuing an interrupt request. Prior mechanisms for issuing interrupts rely on a dedicated interrupt line or a dedicated interrupt bus. However, such a dedicated interrupt line or interrupt bus means that each device must include at least one additional pin to accommodate the interrupt line or interrupt bus. In order to eliminate the need for such dedicated interrupt pins and interrupt lines/buses, a mechanism for in-band interrupts within CCIe is needed.

The use of in-band interrupts should also avoid bus contention or collisions.
For example, to avoid collisions, a slave device should not be allowed to drive the control data bus (e.g., the SDA line 1802 or the SCL line 1904) to assert an IRQ while the master device is driving the control data bus.

Exemplary Bit 19 Region and Checksum
FIG. 20 illustrates an exemplary mapping of the 20th bit (bit 19) resulting from the encoding scheme illustrated in FIGS. 2-10. As can be appreciated, the available ternary codes can be used to extend the features and capabilities exchanged between the master and slave devices. For example, the ternary space available in the bit 19 region (i.e., the data region in which bit 19 is '1') can be used to facilitate or indicate: (a) slave-device-to-slave-device communication, (b) a checksum of the transmitted data, (c) a switchover of master operation to a slave device, (d) a heartbeat clock, etc.

FIG. 21 illustrates details of sub-regions within the exemplary mapping of the 20th bit (bit 19) region of FIG. 20.

SID Scanning Functionality
According to one aspect, it may be desirable for the master device to efficiently scan all devices coupled to the control data bus. For example, the master device can scan all slave devices coupled to the control data bus when the master device boots.

FIG. 22 illustrates an example 2200 of a CCIe transmission including a SID "Scan All" command 2202 and its corresponding payload 2204. The SID "Scan All" command 2202 (identified by the "0x4" code) can be issued by the master device. The payload 2204 can include a plurality of unit scan IDs 2210. Each unit scan ID 2210 includes a SID/mask pair 2208 and a response period 2206. The SID/mask pair 2208 can define a mask that identifies a single bit position within the SID to be queried.

As illustrated in Table 2220, the 32-bit SID/mask pair 2208 (spread over the two 16-bit data words D0 and D1) is used to identify whether a particular bit position of the 16-bit SID is being queried and, if so, which value is being queried for it (i.e., 0 or 1). For example, bit [1] of the SID/mask pair 2208 can define whether bit [0] of the SID is to be checked or masked (i.e., not checked). If bit [1] indicates "check", bit [0] of the SID/mask pair 2208 defines whether the query is for a '0' or a '1'.

The time period defined by the response period 2206 allows the slave devices to respond in-band to the SID query on the shared bus. For each unit SID query 2210, each slave device whose unmasked SID bits match the query bits (i.e., whose SID has bits matching the query bits at the queried bit positions) sends a query response in-band on at least one line of the shared bus. This allows the master device to ascertain whether any slave device on the bus has a partially matching SID (i.e., a SID having bits matching the query bits at the queried bit positions).

A plurality of unit SID queries 2210 are sent by the master device to fully identify the SIDs of all devices coupled to the shared bus.

The "Scan All" command 2202, or variations thereof, may be issued at times not directly related to the boot of the master device. In one example, the master device can scan all slave devices coupled to the control data bus to check whether all slave devices are in sync. In this example, the master device does not necessarily need to perform a complete "blind scan" (see, e.g., FIGS. 26-42); the master device can issue queries with no mask and/or with no SID bits excluded from the comparison, because the master device may already know which slave devices are coupled to the bus.
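The encoding of a unit SID query can be sketched as follows. This Python sketch is illustrative only; the "check = 1" polarity of the mask bits is an assumption chosen to match the check/mask description of Table 2220, and the example SIDs are those used in the scan walkthrough below:

def build_unit_query(check_positions, sid_value):
    # For each SID bit i, pair bit (2*i + 1) selects whether bit i is
    # checked (1) or masked (0), and pair bit (2*i) carries the queried
    # value when the bit is checked.
    pair = 0
    for i in check_positions:
        pair |= 1 << (2 * i + 1)                   # check SID bit i
        pair |= ((sid_value >> i) & 1) << (2 * i)  # queried value for bit i
    return pair  # 32-bit SID/mask pair, carried as D0 and D1

def matches_query(pair, sid):
    # Slave-side test: compare only the bit positions marked "check".
    for i in range(16):
        if (pair >> (2 * i + 1)) & 1:
            if ((sid >> i) & 1) != ((pair >> (2 * i)) & 1):
                return False
    return True

# First step of the scan example described below with reference to
# FIG. 26: query for a '0' in SID bit 0. All three example devices
# (SIDs 0x402A, 0x113E, 0x0908) would assert a response.
q = build_unit_query({0}, 0x0000)
assert all(matches_query(q, sid) for sid in (0x402A, 0x113E, 0x0908))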
In another example, the master device can scan the slave devices coupled to the control data bus to check whether one or more particular devices are in sync. In this example, the master device may send only one unit SID query for each slave device to be scanned.

FIG. 23 illustrates an example of an algorithm 2300 that can be used to scan SIDs. The algorithm 2300 can operate by iteratively unmasking the SID bits and requesting each slave device having bits that match the unmasked bits to send a response. In one example, the SID bits can be iteratively unmasked from the least significant bit to the most significant bit, although other masking sequences can be employed as well.

FIG. 24 illustrates a timing diagram of the SID scan response word "RESP" 2206 on the shared bus including the SDA line and the SCL line. In this example, the SID scan response 2430 is identified by the ternary number 2222_2221_2101₃, or hexadecimal 0x81B8F (which corresponds to the 12-symbol sequence 3131_3130_2323). These symbols are transmitted on the SDA line 2426 and the SCL line 2427. In order to allow a slave device to respond to a SID scan query using the SDA line 2426 during the response period 2406, the master device releases the SDA line. Each receiver device then masks the SDA line input to its clock-data recovery (CDR) circuit for the response period 2406. The master device toggles the SCL line (changing its state) so that each receiver device can recover the clock from those toggles on the SCL line while the SDA line is in use.

According to the CCIe protocol, a receiving slave device can detect, for example, the nth RXCLK 2414 after the start of the S indicator 2412. The nth RXCLK 2414 can trigger the internal SDA mask 2424 to mask the SDA line 2426 internally (e.g., within the receiving slave device).

At the (n+1)th RXCLK 2416, the slave device can assert/issue the response by pulling the SDA line 2426 low. The SDA line 2426 is weakly pulled up by the master device, and indicates a positive response to the SID scan query when it is pulled low by a slave device. Pulling the SDA line 2426 up only weakly allows a slave device to pull the SDA line 2426 low to assert a response to the SID scan query.

Between the (n+1)th RXCLK 2416 and the (n+2)th RXCLK 2418, instead of waiting until the next clock cycle, the master device can monitor the SDA line 2426 to see whether and/or when it goes low (meaning the response has been asserted/issued). Such monitoring of the SDA line 2426 by the master device can be performed only during the response period 2406 to asynchronously detect any asserted/issued responses from the slave devices.

At the (n+2)th RXCLK 2418, the slave device can release the SDA line 2426.

Between the (n+2)th RXCLK 2418 and the (n+3)th RXCLK 2420, the master device can re-enable its SDA line driver and begin driving the SDA line 2426 high. The receiver devices (e.g., the asserting slave device) can then safely release the SDA mask 2424 at the (n+3)th RXCLK 2420.

At the (n+3)th RXCLK 2420, the slave devices release the SDA mask 2424. In this manner, a SID scan response may be transmitted by a slave device on the SDA line 2426 during the response period 2406.

FIG. 25 is a flow diagram 2500 illustrating a scan sequence that can be iteratively performed to find the SIDs of a plurality of devices coupled to a serial bus. The scan includes transmitting a query including SID bits 2530 and a mask 2532 on the serial bus. A slave device coupled to the serial bus applies the mask 2532 to its own unique SID such that, for example, the masked bits are forced to binary zero.
The slave device can then compare its masked SID with the SID bits 2530 transmitted in the query. If the comparison produces a match, the slave device drives the SDA line 2426 low during the response period 2406 to signal to the master device that a match occurred. The master device can then transmit a new query after modifying the mask to reduce the number of bits excluded from the comparison at the slave device. The master device can thus discover the SID of a slave device bit by bit. In the example illustrated in the flowchart 2500, the master device first searches for SID bit positions having a value of 0, and if no response occurs, the master device scans for slave devices having a binary 1 value in the SID bit position.

It will be appreciated that, when the masking operation masks at least one bit position, multiple devices can respond to the query. When the mask allows all bits of the SID to be tested, the SID bits correspond to a unique SID, and it is expected that only the slave device associated with that unique SID will respond to the query.

When the SDA line 2426 is driven low in response to any query, the master device learns that at least one slave device is responding to its SID scan query. The master device can then test the remaining SID bits until the unique SID of a first slave device is found. The master device can then "backtrack" to find other slave devices that may have responded to the same queries that the first slave device responded to. For example, a second slave device can have a SID in which a certain number of least significant bits are the same as the corresponding bits in the SID of the first slave device. The master device can return to each previous query in which the most significant bit (MSB) of the SID bits that is not masked (the active MSB) was set to 0, and can transmit a query with the active MSB set to 1. This approach can discover one or more slave devices that responded to one or more of the queries issued before the first slave device was discovered. In other words, the master device backtracks and searches the paths that were not covered during the previous scan.

When the master device receives no response to a query, it can avoid sending further queries that include the bit configuration of that query.

The flowchart 2500 illustrates the scanning process from a given point in the scan sequence. Initially, all SID bits can be set to zero, and the mask can be set such that the slave devices compare no bits except the least significant bit (LSB) of their unique SIDs.

The process begins at block 2502, where the master device determines the starting point of the scan. The starting point can be the initial starting point or the resumption of the scan after a SID has been found. The starting point can be defined by the current mask 2532 and the current SID bits 2530. At block 2504, the master device sends a query with the current mask 2532 and the current SID bits 2530. The slave devices receive the query and mask their respective unique SIDs using the mask 2532 provided in the query. In one example, the masking leaves a certain number of least significant bits as the masked SID. If the masked SID matches the SID bits 2530 received in the query, the slave device can drive the SDA line 2426 low.
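The slave-side check of blocks 2502-2504 amounts to a masked comparison. A minimal Python sketch, assuming the mask carries a '1' in each bit position exposed for comparison (the polarity is our assumption for illustration), using the example SIDs of the scan walkthrough below:

def slave_matches(sid, sid_bits, mask):
    # Force the masked-out bits of the local SID to zero and compare
    # the result with the SID bits carried by the query (block 2504).
    return (sid & mask) == (sid_bits & mask)

# Early steps of the scan against the example device SID2 = 0x0908:
assert slave_matches(0x0908, 0b00, 0b01)        # bit 0 queried as '0'
assert slave_matches(0x0908, 0b000, 0b011)      # bits 1..0 queried as '00'
assert not slave_matches(0x402A, 0b000, 0b011)  # SID0 carries a '1' in bit 1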
The master device treats an assertion of the SDA line 2426 as an acknowledgment that a slave device possesses a unique SID having at least some bits that match the SID bits 2530 transmitted in the query. Accordingly, the master device may determine at block 2506 that the SDA line 2426 has been asserted and may proceed to block 2516. At block 2516, the master device determines whether the mask 2532 causes any bits of the slave SIDs to be ignored during the comparison. If not, then at block 2518 the master device determines that a slave device has a SID that exactly matches the SID bits sent in the query. If it is determined at block 2516 that the mask 2532 prevents one or more bits of the slave SIDs from being considered, the process continues at block 2520. At block 2520, the master device may modify the mask such that the next more significant bit of the slave SIDs is compared with the SID bits 2530 to be transmitted in the next query. The master device may clear the current MSB of the SID bits 2530 at block 2522. The scan can then resume at block 2504 with the updated mask 2532 and SID bits 2530.

In some examples, the master device may skip sending a query for the bit configuration generated at block 2524. At block 2526, the master device can determine whether a previous positive response was received. If it is determined that no previous response was received, the scan continues at block 2504. If it is determined at block 2526 that a previous response was received, then at least one slave device has the bit configuration generated at block 2524, because the alternative value (= 0) of the MSB did not trigger a response from that slave device. Accordingly, the scan can continue at block 2512.

If at block 2506 the master device determines that the SDA line 2426 was not asserted in response to the query, the process continues at block 2508. At block 2508, the master device can optionally mark the current SID bits 2530 as a bit pattern that does not occur in the SID of any slave device coupled to the serial bus. The master device can then avoid scanning for any devices that have these current SID bits in the bit positions exposed by the mask 2532. The process then continues at block 2510, where the master device determines whether the current MSB, defined by the operation of the mask 2532 on the SID bits 2530, is set to zero. If it is determined at block 2510 that the current MSB of the SID bits 2530 is set to 1, it can be assumed that no device has responded to queries with the current MSB set to either 0 or 1, and the scan can be terminated at block 2528. It will be appreciated that block 2528 may be reached when no slave device responds to any of the queries in the scan sequence.

If it is determined at block 2510 that the current MSB of the SID bits 2530 is set to zero, then at block 2524 the current MSB of the SID bits 2530 is set to a binary value of 1. The scan then proceeds to block 2526 with the updated SID bits 2530.

If it is determined at block 2526 that no previous response was received in the current scan, the scan continues at block 2504, where a query is generated with the current MSB set to 1. If a previous response was received at block 2526, it can be assumed that a slave device that did not respond to the query with the current MSB set to 0 will respond to a query with the current MSB set to 1. Accordingly, there is no need to send the query with the current MSB set to 1, and the scan proceeds to block 2512. At block 2512, the master device determines whether the mask 2532 causes any bits of the slave SIDs to be ignored during the comparison. If not, it is determined that the SID has been identified, and the current scan can be terminated at block 2514.

If it is determined at block 2512 that the mask 2532 prevents one or more bits of the slave SIDs from being considered, the process continues at block 2520. At block 2520, the master device may modify the mask such that the next more significant bit of the slave SIDs is compared with the SID bits 2530 to be transmitted in the next query. The master device may clear the current MSB of the SID bits 2530 at block 2522. The scan then resumes at block 2504 with the updated mask 2532 and SID bits 2530.

The master device can track any mask 2532 and SID bits 2530 combination that resulted in an assertion of the SDA line 2426, and can initiate a new scan based on one or more of these mask 2532 and SID bits 2530 combinations. In one example, the master device may initiate a new scan after discovering the first SID, or upon realizing that no slave device has a given subset of SID bit patterns (see, e.g., the flow 2500). The new scan begins at the last assertion of the SDA line 2426 in response to SID bits 2530 having an active MSB value of 0, as determined by the action of the corresponding mask 2532.
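The complete scan of flowchart 2500, including the backtracking behavior and the skipped-query optimization, can be summarized in a compact depth-first sketch. The following Python code is illustrative only; real bus timing, response periods, and the SID/mask pair encoding are abstracted into a single wired-AND query function:

def scan_sids(sids, width=16):
    # Master-side sketch of flowchart 2500: walk the SID bits LSB-first,
    # trying '0' before '1' in each newly exposed bit position, and
    # backtrack to the untested '1' branches after each SID is resolved.
    found = []

    def bus_query(sid_bits, mask):
        # Wired-AND bus model: the SDA line is pulled low (a response is
        # asserted) if any slave's masked SID matches the query.
        return any((sid & mask) == sid_bits for sid in sids)

    def explore(prefix, nbits):
        if nbits == width:
            found.append(prefix)                 # all 16 bits determined
            return
        mask = (1 << (nbits + 1)) - 1            # expose one more bit
        if bus_query(prefix, mask):              # try '0' in the new bit
            explore(prefix, nbits + 1)
            # Backtrack: other devices may share this prefix with a '1'
            # in the new bit position (FIGS. 35-42).
            if bus_query(prefix | (1 << nbits), mask):
                explore(prefix | (1 << nbits), nbits + 1)
        else:
            # No response for '0': every device matching the prefix has
            # a '1' here, so the query can be skipped (FIGS. 30 and 33).
            explore(prefix | (1 << nbits), nbits + 1)

    if sids:                                     # at least one device present
        explore(0, 0)
    return found

# The walkthrough of FIGS. 26-42 discovers SID2, then SID0, then SID1:
print([hex(s) for s in scan_sids([0x402A, 0x113E, 0x0908])])
# ['0x908', '0x402a', '0x113e']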
FIGS. 26-42 illustrate various stages of an iterative SID scan in accordance with certain aspects disclosed herein. The SID scan can be performed using, for example, the process illustrated in FIG. 25, and the progress of the scan sequence is depicted in FIGS. 26-42 with respect to a SID bit index 2602. For the purposes of this description, the scan is performed while three slave devices are coupled to the serial bus. The first slave device has a device identifier SID0 equal to the hexadecimal number 0x402A, or the binary number 0100_0000_0010_1010. The second slave device has a device identifier SID1 equal to the hexadecimal number 0x113E, or the binary number 0001_0001_0011_1110. The third slave device has a device identifier SID2 equal to the hexadecimal number 0x0908, or the binary number 0000_1001_0000_1000.

As illustrated in FIG. 26, starting from the least significant bit 2604 of the 16-bit SID, the master device sends a query requesting all slave devices having a '0' in the least significant bit (bit 0) of their SIDs to respond, for example by holding the SDA line 2426 low. In the example of the three SIDs illustrated in FIG. 26, all three slave devices respond.

As illustrated in FIG. 27, the query process is repeated for bit 1 2704, with a query requesting all slave devices having a '0' in that bit position of their SIDs. Only the slave device with SID 0x0908 (SID2) responds by holding the SDA line 2426 low.

As illustrated in FIG. 28, the query process is repeated for bit 2 2804, with a query requesting all slave devices having a '0' in that bit position of their SIDs. The slave device with SID 0x0908 (SID2) responds.

As illustrated in FIG. 29, the query process is repeated for bit 3 2904, with a query requesting all slave devices having a '0' in that bit position of their SIDs. No slave device responds.

As illustrated in FIG. 30, the query process is repeated for bit 3 2904, with a query requesting all slave devices having a '1' in that bit position of their SIDs. The slave device with SID 0x0908 (SID2) would respond. However, since the master device knows that a positive response would occur here, the master device does not need to issue the query.
Instead, the master device can simply set a '1' value at that position of the SID bit index 2602 and continue to bit 4 3104 (FIG. 31).

As illustrated in FIG. 31, the query process repeats iteratively for bits 4 3104 through 7 3106, with queries requesting all slave devices having a '0' in the respective bit positions of their SIDs. The slave device with SID 0x0908 (SID2) responds to each of these individual queries.

As illustrated in FIG. 32, the query process is repeated for bit 8 3204, with a query requesting all slave devices having a '0' in that bit position of their SIDs. No slave device responds.

As illustrated in FIG. 33, since the master device knows that a query for a '1' at bit 8 3204 would produce a positive response, the master device does not need to issue the query. Instead, the master device can simply set a '1' value at that position of the SID bit index 2602 and continue to bit 9.

In general, the query process repeats using a bit value of '0' until no response is received, and then switches to a bit value of '1', until the full SID of the first device is found.

As illustrated in FIG. 34, the query process repeats iteratively for bits 9 3404 through 15 3406. The slave device with SID 0x0908 (SID2) responds to each of these individual queries. Since all 16 bits of the SID have now been checked, the slave device associated with SID2 0x0908 is positively identified.

With the slave device SID2 identified, FIG. 35 illustrates that the query process now backtracks to attempt to identify another slave device along the path. This backtracking process is repeated until no slave device responds. At each bit position being backtracked, the master device issues the query with the previously untested value. For example, at bit 15 the backtracking query uses a '1'. Subsequently, at bit 14, the backtracking query uses a '1'.

FIG. 36 illustrates that, once all possible paths down to bit 1 2704 have been backtracked and eliminated, the query process continues at bit 1 2704, which is the first backtracked query to obtain a response. The query requests a response from all slave devices having a '1' in that bit position of their SIDs. The slave device with SID 0x402A (SID0) and the slave device with SID 0x113E (SID1) respond by holding the SDA line 2426 low.

FIGS. 37 and 38 illustrate that the query process continues until all 16 bits of the SID have been checked and the slave device associated with SID0 0x402A is positively identified.

FIG. 39 again illustrates the backtracking process of searching for more matching SIDs.

FIG. 40 illustrates that, once all possible paths down to bit 2 2804 have been backtracked and eliminated, a query is issued at bit 2 2804 requesting all slave devices having a '1' in that bit position of their SIDs to respond.
Only the slave device with SID 0x113E (SID1) responds by holding the SDA line 2426 low.
FIG. 41 illustrates that the query process continues until all 16 bits of the SID have been checked, and the slave device associated with SID1 0x113E is positively identified.
FIG. 42 again illustrates the backtracking process of searching for more matching SIDs, which continues until all alternative bit values along the path have been tested.
The master device can map or otherwise track the SID values used in queries that result in a positive response, as well as the SID values used in queries that did not generate a response.
The master device can employ the "SID Scan All" command to scan the SIDs of all newly inserted and pre-existing devices coupled to the shared control data bus. It will be appreciated that the master device is not limited to a particular order in which it traverses the SID bit index 2602. In the example described herein, the SID bit index 2602 is traversed from bit 0 to bit 15. In another example, the SID bit index 2602 is traversed from bit 15 to bit 0. In other examples, the SID bit index 2602 can be traversed in a non-linear and/or random or pseudo-random order, where, for example, the slave devices are expected to have been provided with SIDs assigned according to a particular order or scheme.
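The scan sequence of FIGS. 26-42 amounts to a depth-first traversal of a binary tree of SID bits: each query exposes one more bit position through the mask, and the master backtracks to any untested '1' branch once a SID is fully identified. The following Python sketch simulates that traversal under stated assumptions; `bus_query()` is a hypothetical stand-in for transmitting a unit SID query and sensing the in-band response on the SDA line, and is not part of any protocol API.

```python
# Illustrative simulation of the iterative SID scan of FIGS. 25-42.
# bus_query() is hypothetical; a real master would transmit a unit SID
# query on the serial bus and sense assertion of the SDA line in-band.

NUM_BITS = 16

def bus_query(slave_sids, query_bits, mask):
    """True if any slave's masked SID matches the masked query word
    (i.e., at least one slave holds the SDA line low)."""
    return any((sid & mask) == (query_bits & mask) for sid in slave_sids)

def scan_all(slave_sids):
    """Depth-first scan: extend a matching prefix one bit at a time,
    then backtrack to discover the remaining SIDs."""
    found = []

    def descend(prefix, mask, bit):
        if bit == NUM_BITS:
            found.append(prefix)        # all 16 bits checked: SID identified
            return
        probe = mask | (1 << bit)       # expose the next bit position
        if bus_query(slave_sids, prefix, probe):
            descend(prefix, probe, bit + 1)              # '0' branch answered
        else:
            # No response for '0': a '1' here must match, so the master
            # sets the bit without transmitting a query (see FIG. 30).
            descend(prefix | (1 << bit), probe, bit + 1)
            return
        # Backtrack: test the previously untested '1' value at this bit.
        one = prefix | (1 << bit)
        if bus_query(slave_sids, one, probe):
            descend(one, probe, bit + 1)

    if slave_sids:
        descend(0, 0, 0)
    return found

print([hex(s) for s in scan_all([0x402A, 0x113E, 0x0908])])
# ['0x908', '0x402a', '0x113e'] -- SID2, then SID0, then SID1,
# matching the discovery order walked through in FIGS. 26-42.
```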
Support for hot plugging with SID scanning
According to one aspect, it may be desirable to "hot plug" a device (e.g., a slave device) onto a shared control data bus after the shared bus and/or the master device is operational. However, allowing a device to be coupled to the control data bus after initial booting of the master device controlling the bus requires some way for the master device to detect the newly inserted slave device (or inactive master device). To accomplish such "hot plug" functionality, the newly inserted slave device can transmit an IRQ signal to the master device (e.g., via a dedicated IRQN line), using the longest possible signal period. The master device can then issue a "SID Scan New" command to scan all previously unscanned slave devices coupled to the shared control data bus and identify the newly added slave device.
According to one aspect, a slave device that supports SID scanning can store information indicating whether its SID has been scanned since power-up. For example, once a slave device has had its SID scanned, it can set an internal register indicating that it has been scanned. This register allows a previously scanned slave device to ignore the "SID Scan New" command, so that only newly inserted devices respond.
Devices that are newly coupled to the shared control data bus (including devices with SIDs that have not been previously scanned) are typically required and/or expected to respond to the "SID Scan New" command. A device with a previously scanned SID need not respond to the "SID Scan New" command.
FIG. 43 illustrates an example 4300 of a CCIe transmission including a SID "Scan New" command 4302 and its corresponding payload 4304. The SID "Scan New" command 4302 (identified by the "0x5" code) can be issued by the master device. The payload 4304 can include a plurality of unit scan IDs 4310. Each unit scan ID 4310 includes a SID mask pair 4308 and a response period 4306. The SID mask pair 4308 can define a mask that identifies a single bit position within the SID to be queried. Since this is the SID "Scan New" command 4302, all previously scanned devices can ignore it. Therefore, only devices newly inserted onto the shared bus are identified.
As illustrated in Table 4320, a 32-bit SID mask pair 4308 (spread over two 20-bit words D0 and D1) is used to identify whether a particular bit position of the 16-bit SID is being queried and, if so, which value (i.e., 0 or 1) is being queried. For example, bit [1] of the SID mask pair 4308 can define whether bit [0] of the SID is to be checked or masked (i.e., not checked). If bit [1] indicates "check," then bit [0] of the SID mask pair 4308 defines whether the query is for "0" or "1".
The response period 4306 defines a time period within which a slave device may respond in-band to the SID query on the shared bus. For each unit SID query 4310, each slave device whose unmasked SID bits match the query bits (i.e., whose SID matches the query bits at the queried bit positions) sends a query response in-band on at least one line of the shared bus. This allows the master device to ascertain whether any slave device on the bus has a partially matching SID (i.e., a SID with bits matching the query bits at the queried bit positions). A plurality of unit SID queries 4310 are sent by the master device to fully identify the SIDs of all devices coupled to the shared bus. Scanning or discovering the SID of a newly hot-inserted slave device can be performed as illustrated in FIGS. 25-42.
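As described for Table 4320, each SID bit is represented in the mask pair by two bits: an enable bit selecting whether the position is checked, and a value bit giving the queried value. The sketch below packs a mask pair under the assumption that this bit [0]/bit [1] pairing generalizes to even/odd positions across all 16 SID bits; the split into the two 20-bit words D0 and D1 is shown only schematically, since the exact word layout of FIG. 43 is not reproduced here.

```python
# Illustrative packing of a 32-bit SID mask pair (Table 4320), assuming
# value bits occupy even positions and check/mask flags odd positions.

def pack_mask_pair(checked_bits):
    """checked_bits: dict mapping SID bit position -> queried value (0 or 1).
    Returns the 32-bit mask pair word."""
    word = 0
    for pos, value in checked_bits.items():
        word |= 1 << (2 * pos + 1)       # odd bit: this SID position is checked
        if value:
            word |= 1 << (2 * pos)       # even bit: queried value is '1'
    return word

def split_d0_d1(word):
    """Schematic split of the 32-bit pair across two 20-bit words."""
    return word & 0xFFFFF, (word >> 20) & 0xFFFFF

# Query '0' at SID bit 0 and '1' at SID bit 3:
pair = pack_mask_pair({0: 0, 3: 1})
print(hex(pair))   # 0xc2 -> bit 1 (check bit 0), bits 6 and 7 (check bit 3, value 1)
```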
FIG. 44 is a conceptual diagram 4400 illustrating a simplified example of a hardware implementation of an apparatus employing a processing circuit 4402 that can be configured to perform one or more of the functions disclosed herein. In accordance with various aspects of the disclosure, the elements disclosed herein, or any portion or any combination of those elements, can be implemented using the processing circuit 4402. The processing circuit 4402 can include one or more processors 4404 that are controlled by some combination of hardware and software modules. Examples of the processor 4404 include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, sequencers, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. The one or more processors 4404 can include special purpose processors that perform particular functions, and that can be configured, augmented, or controlled by one of the software modules 4416. The one or more processors 4404 can be configured through a combination of software modules 4416 loaded during initialization, and can be further configured by loading or unloading one or more software modules 4416 during operation.
In the illustrated example, the processing circuit 4402 can be implemented with a bus architecture, represented generally by the bus 4410. Depending on the particular application and overall design constraints of the processing circuit 4402, the bus 4410 can include any number of interconnecting buses and bridges. The bus 4410 links together various circuits, including the one or more processors 4404 and the storage 4406. The storage 4406 can include both memory devices and mass storage devices, and can be referred to herein as computer-readable media and/or processor-readable media. The bus 4410 can also link various other circuits such as timing sources, timers, peripherals, voltage regulators, and power management circuits.
A bus interface 4408 can provide an interface between the bus 4410 and one or more transceivers 4412. A transceiver 4412 can be provided for each networking technology supported by the processing circuit. In some examples, multiple networking technologies may share some or all of the circuitry or processing modules found in a transceiver 4412. Each transceiver 4412 provides a means for communicating with various other devices over a transmission medium. A user interface 4418 (e.g., keypad, display, speaker, microphone, joystick) may also be provided depending on the nature of the device, and the user interface 4418 may be communicatively coupled to the bus 4410 either directly or through the bus interface 4408.
A processor 4404 can be responsible for managing the bus 4410 and for general processing, including the execution of software stored in a computer-readable medium, which can include the storage 4406. In this regard, the processing circuit 4402, including the processor 4404, can be utilized to implement any of the methods, functions, and techniques disclosed herein. The storage 4406 can be used to store data that the processor 4404 manipulates while executing software, and the software can be configured to implement any of the methods disclosed herein.
One or more processors 4404 in the processing circuit 4402 can execute software. Software should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, algorithms, and so on, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may reside in the storage 4406 in computer-readable form, or may reside in an external computer-readable medium. The external computer-readable medium and/or the storage 4406 can comprise a non-transitory computer-readable medium. By way of example, a non-transitory computer-readable medium includes: a magnetic storage device (e.g., a hard disk, a floppy disk, a magnetic strip), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a "flash drive," card, stick, or key drive), random access memory (RAM), read only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), registers, removable disks, and any other suitable medium for storing software and/or instructions that can be accessed and read by a computer. By way of example, the computer-readable medium and/or storage 4406 may also include carrier waves, transmission lines, and any other suitable medium for transmitting software and/or instructions that can be accessed and read by a computer. The computer-readable medium and/or storage 4406 can reside in the processing circuit 4402, in the processor 4404, external to the processing circuit 4402, or be distributed across multiple entities including the processing circuit 4402. The computer-readable medium and/or storage 4406 can be embodied in a computer program product. By way of example, a computer program product can include a computer-readable medium in packaging material.
Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure, depending on the particular application and the overall design constraints imposed on the overall system.
The storage 4406 can maintain software in loadable code segments, modules, applications, programs, and so on, which may be referred to herein as software modules 4416. Each of the software modules 4416 can include instructions and data that, when installed or loaded onto the processing circuit 4402 and executed by the one or more processors 4404, contribute to a run-time image 4414 that controls the operation of the one or more processors 4404. When executed, certain instructions may cause the processing circuit 4402 to perform functions in accordance with certain methods, algorithms, and processes described herein.
Some of the software modules 4416 may be loaded during initialization of the processing circuit 4402, and these software modules 4416 may configure the processing circuit 4402 to perform the various functions disclosed herein. For example, some software modules 4416 may configure internal devices and/or logic circuits 4422 of the processor 4404, and may manage access to external devices such as the transceiver 4412, the bus interface 4408, the user interface 4418, timers, math coprocessors, and so on. The software modules 4416 can include a control program and/or an operating system that interacts with interrupt handlers and device drivers, and that controls access to various resources provided by the processing circuit 4402. These resources may include memory, processing time, access to the transceiver 4412, the user interface 4418, and the like.
One or more processors 4404 of the processing circuit 4402 may be multifunctional, whereby some of the software modules 4416 are loaded and configured to perform different functions or different instances of the same function. The one or more processors 4404 can additionally be adapted to manage background tasks initiated in response to inputs from, for example, the user interface 4418, the transceiver 4412, and device drivers. To support the execution of multiple functions, the one or more processors 4404 can be configured to provide a multitasking environment, whereby each of the plurality of functions is implemented as a set of tasks serviced by the one or more processors 4404 as needed or desired. In one example, the multitasking environment can be implemented using a time-sharing program 4420 that passes control of a processor 4404 between different tasks, whereby each task returns control of the one or more processors 4404 to the time-sharing program 4420 upon completing any pending operations and/or in response to an input such as an interrupt. When a task has control of the one or more processors 4404, the processing circuit is effectively specialized for the purposes addressed by the function associated with the controlling task. The time-sharing program 4420 may include an operating system, a main loop that transfers control on a round-robin basis, a function that allocates control of the one or more processors 4404 according to a prioritization of the functions, and/or an interrupt-driven main loop that responds to external events by providing control of the one or more processors 4404 to a handling function.
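As a toy illustration of the round-robin variant of the time-sharing program 4420 described above, the following sketch cycles control among tasks, each task returning control to the scheduler after a unit of work; the task structure and names are assumptions for illustration, not details from this disclosure.

```python
# Minimal round-robin time-sharing sketch: each task runs until it yields
# control back to the scheduler loop. Task contents are hypothetical.

from collections import deque

def make_task(name, work_items):
    """A task is a generator that yields control after each unit of work."""
    def task():
        for item in work_items:
            print(f"{name}: handling {item}")
            yield                      # return control to the scheduler
    return task()

def round_robin(tasks):
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)                 # give the task control for one step
            ready.append(task)         # task still has pending operations
        except StopIteration:
            pass                       # task completed all its operations

round_robin([make_task("bus-io", ["query", "response"]),
             make_task("ui", ["keypress"])])
```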
FIG. 45 is a flowchart 4500 illustrating a method for data communication on a communication link. The method may be performed by a master device on a control data bus, such as a CCIe bus.
At block 4502, a first query can be transmitted on the control data bus. The first query can include a first bit configuration.
At block 4504, the presence of a slave device can be determined, where the slave device has a slave identifier that includes a second bit configuration matching the first bit configuration.
At block 4506, it may be determined whether the SID of the slave device has been identified. If the SID has been identified, the method can terminate. If the SID has not been identified, the method continues at block 4508.
At block 4508, additional queries having different bit configurations can be repeatedly transmitted on the control data bus until all bits of the slave identifier have been determined.
In one example, the slave device asserts a response to each query that includes a bit configuration matching the corresponding bit configuration in its slave identifier.
The slave device may identify a match between the first bit configuration and the second bit configuration by comparing a word transmitted in the first query with a copy of the slave identifier that has been masked by applying a mask transmitted in the first query. The additional queries can include a second query. The mask may be modified to obtain a modified mask exposing additional bits of the slave identifier for comparison, and the second query may be transmitted on the control data bus, where the second query includes the first bit configuration and the modified mask. The additional queries may include a third query that is transmitted when no response to a previous query is received. The first bit configuration can be modified to obtain a third bit configuration, and the third query can be transmitted on the control data bus, where the third query includes the third bit configuration and the mask transmitted in the previous query.
The additional queries may include a fourth query transmitted after all bits of the slave identifier have been determined. The mask can be restored to obtain a restored mask having the values transmitted in a prior query that caused at least one slave device to assert a response. The bit configuration transmitted in that prior query can be modified to obtain a fourth bit configuration. The fourth query can be transmitted on the control data bus, where the fourth query includes the fourth bit configuration and the restored mask.
A different slave device can respond to the fourth query. The different slave device can assert a response when the fourth bit configuration matches a corresponding bit configuration in a different slave identifier associated with the different slave device.
In some examples, modifying the first bit configuration includes flipping the value of the active MSB of the first bit configuration. The active MSB can be defined as the bit corresponding to the highest-value bit in the slave identifier that is not suppressed by application of the mask.
In some examples, a plurality of slave devices can respond to a query within a response period defined by the first query. The plurality of slave devices can respond by asserting a response when the first bit configuration matches a corresponding bit configuration in their respective slave identifiers. The response can be asserted, for example, on a control data bus shared by the plurality of slave devices.
In one example, the control data bus is a two-wire bus.
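On the slave side, the comparison described above reduces to masking the local SID and comparing it with the query word. A minimal sketch of that check follows, assuming a query carries a word of SID bit values and a mask of exposed positions; the function name is illustrative, not a protocol-defined API.

```python
# Illustrative slave-side check for a SID query (context of blocks 4502-4508).
# 'word' carries the queried bit configuration; 'mask' exposes the bit
# positions to be compared. Both names are illustrative.

def slave_should_respond(sid: int, word: int, mask: int) -> bool:
    """Assert a response when the masked copy of the local SID matches
    the masked query word."""
    return (sid & mask) == (word & mask)

# A slave with SID 0x0908 matches a query for '0' in bits 0-2 and '1' in bit 3:
assert slave_should_respond(0x0908, word=0b1000, mask=0b1111)
# ...but not a query for '0' in bit 3:
assert not slave_should_respond(0x0908, word=0b0000, mask=0b1111)
```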
The first query may be transmitted in response to a power-up or reset event, or in response to an interrupt generated by a slave device when it is first coupled to the control data bus. The slave device can operate in accordance with the CCIe protocol. The queries can be transmitted in a scan command. The scan command can be directed to all slave devices coupled to the control data bus (e.g., a "SID Scan All" command). The scan command can be directed to previously unidentified slave devices coupled to the control data bus (e.g., a "SID Scan New" command).
FIG. 46 is a conceptual diagram illustrating an example of a hardware implementation of an apparatus 4600 employing a processing circuit 4602. In this example, the processing circuit 4602 can be implemented with a bus architecture, represented generally by the bus 4616. Depending on the particular application and overall design constraints of the processing circuit 4602, the bus 4616 can include any number of interconnecting buses and bridges. The bus 4616 links together various circuits, including one or more processors (represented generally by the processor 4612) and computer-readable media (represented generally by the processor-readable storage medium 4614). The bus 4616 can also link various other circuits such as timing sources, timers, peripherals, voltage regulators, and power management circuitry. A bus interface 4618 provides an interface between the bus 4616 and a transceiver 4620. The transceiver 4620 can include a bus interface that provides a means for communicating with various other devices over a transmission medium. Depending on the nature of the device, a user interface 4622 (e.g., keypad, display, speaker, microphone, joystick) may also be provided. One or more clock generation circuits or modules 4624 may be provided within, or be controlled by, the processing circuit 4602 and/or the one or more processors 4612. In one example, the clock generation circuits or modules 4624 can include one or more crystal oscillators, one or more phase-locked loop devices, and/or one or more configurable clock trees.
The processor 4612 is responsible for managing the bus 4616 and for general processing, including the execution of software stored on the processor-readable storage medium 4614. The software, when executed by the processor 4612, causes the processing circuit 4602 to perform the various functions described above for any particular device. The processor-readable storage medium 4614 can also be used to store data that is manipulated by the processor 4612 when executing software.
In one configuration, the processing circuit can include modules and/or circuits 4604 for communicating information on a control data bus, such as a CCIe bus. The processing circuit can include one or more modules and/or circuits 4606 configured to modify the mask and/or SID bits transmitted in a query. The processing circuit can include modules and/or circuits 4608 for configuring queries and using sequences of queries to scan SIDs. The processing circuit can include modules and/or circuits 4610 for detecting and handling IRQ assertions, such as on an IRQ bus.
In one example, the modules and/or circuits 4604, 4606, 4608, 4610 and the bus interface 4618 can cooperate to: transmit a first query on a control data bus, the first query including a first bit configuration; determine the presence of a slave device having a slave identifier that includes a second bit configuration matching the first bit configuration; and repeatedly transmit additional queries having different bit configurations on the control data bus until all bits of the slave identifier have been identified. The slave device can assert a response to each query that includes a bit configuration matching the corresponding bit configuration in its slave identifier. The slave device may identify a match between the first bit configuration and the second bit configuration by comparing a word transmitted in the first query with a copy of the slave identifier that has been masked by applying a mask transmitted in the first query.
It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. It should be understood that, based on design preferences, the specific order or hierarchy of steps in these processes may be rearranged. The appended method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known, or later come to be known, to those of ordinary skill in the art are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public, regardless of whether such disclosure is explicitly recited in the claims. No claim element should be construed as a means plus function unless the element is explicitly recited using the phrase "means for."
One or more of the components, steps, features, and/or functions illustrated in the figures may be rearranged and/or combined into a single component, step, feature, or function, or may be embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may be added without departing from the novel features disclosed herein. The apparatus, devices, and/or components illustrated in the figures may be configured to perform one or more of the methods, features, or steps described herein. The novel algorithms described herein can also be efficiently implemented in software and/or embedded in hardware.
Additionally, it should be noted that these embodiments may be described as a process depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
Furthermore, a storage medium may represent one or more devices for storing data, including read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and/or other machine-readable media for storing information. The term "machine-readable medium" includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels, and various other media capable of storing, containing, or carrying instructions and/or data.
Moreover, embodiments may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium, such as a storage medium or other storage. A processor may perform these necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, or a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, and the like may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, and the like.
The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of a processing unit, programming instructions, or other directions, and may be contained in a single device or distributed across multiple devices. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium.
Alternatively, the storage medium can be integral to the processor.
Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the overall system.
The various features of the invention described herein can be implemented in different systems without departing from the invention. It should be noted that the above embodiments are merely examples and should not be construed as limiting the invention. The description of the embodiments is intended to be illustrative, and not to limit the scope of the claims. Thus, the present teachings can be readily applied to other types of devices, and many alternatives, modifications, and variations will be apparent to those skilled in the art.
A multi-function input interface for an electronic device. The multi-function input interface includes a conductive portion to transceive a signal through the input interface. The input interface includes a positional element to detect a user input to the input interface.
CLAIMS
What is claimed is:
1. A multi-function input interface for an electronic device, the input interface comprising:
a conductive portion to transceive a signal through the input interface; and
a positional element to detect a user input to the input interface.
2. The multi-function input interface of claim 1, further comprising a transceiver communicatively coupled to the conductive portion.
3. The multi-function input interface of claim 2, wherein the transceiver is communicatively coupled to the conductive portion by an antenna feed, the antenna feed slidably coupled between the conductive portion and the transceiver.
4. The multi-function input interface of claim 3, wherein the conductive portion includes a plurality of electrically isolated segments, a plurality of antenna feeds communicatively couple each segment respectively to the transceiver.
5. The multi-function input interface of any of the claims 1-3, wherein the input interface is a Planar Inverted-F antenna.
6. The multi-function input interface of claim 1, wherein the positional element is to detect a position of the input interface relative to the electronic device.
7. The multi-function input interface of claim 1, wherein the positional element includes a sensor to detect the user input to the input interface.
8. The multi-function input interface of claim 1, wherein the positional element is magnetic and movement of the positional element is identifiable by a sensor to detect the user input.
9. The multi-function input interface of claim 1, further comprising a sensor to identify a location of the positional element, wherein the positional element is a fiducial marker and the sensor is an optical sensor.
10. The multi-function input interface of claim 1, wherein the positional element is a capacitive electrode to detect the user input to the input interface.
11. A method for a multi-function input interface for an electronic device, wherein the input interface includes a conductive portion, the method comprising:
transceiving a wireless signal through the conductive portion of the input interface, wherein the conductive portion is an antenna; and
detecting a user input to the input interface based on a parameter from a sensor.
12. The method of claim 11, wherein detecting the user input includes detecting movement of the input interface with the sensor.
13. The method of claim 12, wherein detecting the user input includes identifying movement of a magnetic element of the input interface with the sensor.
14. The method of claim 12, wherein detecting the user input includes identifying movement of a fiducial marker with an optical sensor to detect movement of the input interface.
15. The method of claim 11, wherein detecting the user input includes sensing the user input with a capacitive electrode.
16. The method of claim 11, wherein transceiving the wireless signal through the conductive portion includes transceiving the signal from the conductive portion, wherein the conductive portion is a loop antenna.
17. The method of claim 11, wherein transceiving a wireless signal through the conductive portion includes transceiving a plurality of different wireless signals through a plurality of electrically isolated segments of the conductive portion, a plurality of antenna feeds corresponding to the plurality of signals respectively communicatively couple each segment respectively to the transceiver.
18. The method of claim 11, further comprising changing a setting of an electronic device based on the parameter.
19.
The method of claim 11, further comprising changing a menu based on the parameter.20. At least one machine-readable medium including instructions, which when executed by a machine, cause the machine to perform operations of any of the methods of claims 11-19.21. An apparatus comprising means for performing any of the methods of claims 11-19. 22. A method for a multi-function input interface for an electronic device, the method comprising:tuning a conductive portion of an input interface to transceive a signal through the input interface; anddisposing a positional element on the input interface to detect a user input to the input interface.23. The method of claim 22, further comprising movably coupling the input interface to the electronic device.24. The method of claim 22, wherein disposing a positional element includes disposing a magnet on the input interface, the magnet detectable by a sensor to identify movement of the positional element. 25. The method of claim 22, wherein disposing a positional element includes disposing a fiducial marker on the input interface, the fiducial marker detectable by an optical sensor to identify a location of the fiducial marker.
MULTI-FUNCTION ANTENNA AND INPUT INTERFACE
PRIORITY
[0001] This Application claims the benefit of priority to U.S. Application Serial No. 15/280,060, filed September 29, 2016, which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
[0002] This document pertains generally, but not by way of limitation, to electronic devices, such as electronic devices for wireless communication.
BACKGROUND
[0003] Electronic apparatuses for wireless communication may include one or more antennas. Examples of electronic apparatuses may include mobile phones, smart watches, computers (e.g., laptops, tablets, or others), radios, music players, Internet of Things (IOT) devices, activity trackers, digital cameras, electronic entertainment devices, home security or smart home devices, remote controls for appliances (e.g., televisions), heating and cooling systems, or the like. Antennas may be used for wireless communication with other electronic apparatuses or systems. For instance, the antennas are often used for sending and receiving cellular, wireless local area network, Wi-Fi, or other wireless signals. The antennas may be internal or external to the electronic apparatus. For instance, antennas are sometimes located within a housing of the electronic apparatus, or other times may be fastened to or extendable from an exterior of the housing. The antennas may be used to send and receive messages, voice data, commands to change settings, or other commands. Separate mechanical or electrical controls may be used by the user to provide commands to the electronic apparatus. For instance, the mechanical or electrical controls may include buttons, switches, knobs, touch screens, or other controls. The mechanical or electrical controls may be located on the exterior of electronic apparatuses for the user to touch or grasp in order to enter one or more commands. In response, the user may instruct the electronic apparatus to conduct certain operations, such as making a voice call, sending a text message, browsing the internet, taking photos, or other operations.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
[0005] FIG. 1 illustrates an example of an electronic device including a multi-function input interface, according to an embodiment.
[0006] FIG. 2 is a system diagram of an electronic device including a multi-function input interface, according to an embodiment.
[0007] FIG. 3 illustrates an example of a cross section of an electronic device including a multi-function input interface, according to an embodiment.
[0008] FIG. 4 is a flowchart illustrating an exemplary method for a multi-function input interface, according to an embodiment.
[0009] FIG. 5 is a flowchart illustrating an exemplary method for making a multi-function input interface, according to an embodiment.
[0010] FIG. 6 is a system diagram of an exemplary electronic device including a multi-function input interface, according to an embodiment.
DETAILED DESCRIPTION
[0011] The present application relates to devices and techniques for a multi-function input interface, such as an input interface adapted to transceive a signal and detect a user input.
The following detailed description and examples are illustrative of the subject matter disclosed herein; however, the subject matter disclosed is not limited to the following description and examples provided. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims.
[0012] As electronic devices continue to decrease in size, the space for components of the electronic device decreases accordingly. Cosmetic design considerations may impact the size and placement of various components of electronic devices as well. In some examples, the technological capabilities of electronic devices are increasing, even for more traditional electronic devices, such as electronic wristwatches. For instance, smart watches may include mobile phone, radio, mobile app, graphical display, touch screen, camera, activity tracking, GPS, and GNSS capabilities. Consequently, designers of electronic devices are faced with incorporating more components and more functionality into increasingly smaller spaces.
[0013] According to the present disclosure, an electronic device may include a multi-function input interface. The multi-function input interface may increase the functionality of a component of the electronic device, such as a cosmetic component. For instance, the functionality of the input interface may be increased by combining the functions of two or more components, such as an input interface and an antenna, into one multi-function input interface. Accordingly, the multi-function input interface may reduce the number of components of the electronic device and correspondingly reduce the size of the electronic device.
[0014] For example, the multi-function input interface for an electronic device may include a conductive portion to transceive a signal through the input interface, and the input interface may include a positional element to detect a user input. A transceiver may be communicatively coupled, for instance by an antenna feed, to the conductive portion to transceive the signal. The conductive portion may be an antenna or part of an antenna, including, but not limited to, a monopole antenna, a Planar Inverted-F antenna, a loop antenna, or another type of antenna. In an example, the electronic device may include a circuit, such as a circuit board, communicatively coupled between the conductive portion and a processing unit. The circuit may include a ground element, and a ground feed may be coupled between the conductive portion and the ground element. A dielectric may be disposed between the conductive portion and the ground element. The ground element can also be electrically coupled to other conductive parts of the electronic device. In various examples, the input interface may be moveable. For instance, the input interface may be an annular bezel and may be rotatable about a center axis of the bezel to receive the user input. In a further example, the input interface may be a slider. The positional element of the input interface may include, but is not limited to, a sensor to detect pressure, capacitance, resistance, inductance, an electric field, or a magnetic field, a magnet, a fiducial marker, or another component contributing to the detection of a user input to the input interface. In an example, an optical sensor may detect a fiducial marker attached to the input interface. In a further example, a sensor may detect a position of a magnet attached to the input interface.
Accordingly, the multi-function input interface may function as both an antenna and an input interface to reduce the size of the electronic device. Placing the antenna away from the body of the user may reduce near-field electromagnetic (EM) radiation absorption and thus improve antenna performance.
[0015] FIG. 1 illustrates an example of an electronic device 100 including a multi-function input interface, according to an embodiment. The electronic device 100 may include, but is not limited to, a mobile device, wearable device, Internet of Things (IOT) device, thermostat, or other electronic device. In the example of FIG. 1, the electronic device 100 is a wristwatch. The wristwatch may include an input interface 102 configured as a rotatable bezel; for instance, the rotatable bezel may be positioned around a lens 108 (e.g., crystal, polymer, or glass lens) of the wristwatch. The user 106 may rotate the input interface 102 (e.g., clockwise or counter-clockwise) to provide the user input.
[0016] The input interface 102 may include at least two functional capabilities. For example, in operation, the input interface 102 may transceive a signal 104 and detect a user input. As shown in the example of FIG. 1, the input interface 102 may include a cosmetic component of the electronic device 100. The user 106 may interact with the input interface 102 to provide a user input. For instance, the user 106 may touch, tap, or swipe along the input interface 102 to provide the user input. In another example, the user 106 may move the input interface 102 to provide the user input, for example, by rotating or translating the input interface 102. In various examples, the input interface 102 may be a rocker or a slider. The user input may be an input to navigate menus, select items, change settings, input or send messages, or perform other functions. The input interface 102 may include a conductive portion to transceive (e.g., transmit, receive, or radiate) electro-magnetic signals. In various examples, the input interface may supplement or substitute for other antennas, input interfaces (e.g., buttons), user interfaces (e.g., touch screens), or the like.
[0017] FIG. 2 is a system diagram 200 of an electronic device including a multi-function input interface, according to an embodiment. The system diagram 200 illustrates, for example, the electronic device 100 including the input interface 102. In the example of FIG. 2, the system diagram 200 may include the input interface 102, a transceiver 202, and a processing unit 204. The transceiver 202 may be communicatively coupled between the input interface 102 and the processing unit 204. For instance, as shown in FIG. 2, the processing unit 204 may be communicatively coupled to the transceiver 202, input interface 102, positional element 206, or sensor 208 through a circuit 210. The transceiver 202 may transmit or receive the signal 104 through the conductive portion of the input interface 102. The transceiver 202 may include an audio processor, oscillator, at least one amplifier, frequency selector, mixer, tunable RF switches, or other transceiver components. In an example, the transceiver 202 may generate the signal (e.g., signal 104 shown in FIG. 1) that is transmitted by the input interface 102. In a further example, the signal 104 may be received by the transceiver 202 through the conductive portion of the input interface 102. The signal 104 may be any type of wireless signal. The signal 104 may be a cellular phone signal or a wireless data signal for a smartwatch.
In a further example, the processing unit 204 may actively control the transceiver 202 to increase the quality of received or transmitted signals.
[0018] The input interface includes a positional element 206. The positional element 206 may be used to detect the user input to the input interface 102. For instance, the positional element 206 may include, but is not limited to, a sensor, such as a touch sensor, to detect the user input to the input interface 102. In operation, the touch sensor may detect a change in pressure, capacitance, resistance, or inductance corresponding to the user input. In an example, the positional element 206 may include a capacitive touch sensor to detect a touch or gesture of the user 106 on the input interface 102. For instance, the input interface 102 may include an array of electrodes to detect a location of a user input on the input interface 102. The array of electrodes may be part of a capacitive sensing system, such as a surface capacitance system or a projected capacitance system. In an example, the input interface may be fixably attached to the electronic device 100, and the user 106 may move a finger along the input interface 102 to produce the user input (e.g., a swipe along the input interface 102). In a further example, the positional element 206 may detect a tap or other gesture to the input interface 102. The positional element 206 may output a parameter based on the user input for communication with the processing unit 204. The parameter may include positional information regarding the user input received from the touch-sensor. Positional information may include a degree of rotation, an amount of travel (e.g., in mm, pixels, number of electrodes, or another degree of measure), or the like.
[0019] Optionally, the electronic device 100 may include a sensor 208 adapted to detect the positional element 206, as shown in FIG. 2. In an example, the positional element 206 may include a magnet, fiducial marker, or other positional element. A sensor 208 may detect the positional element 206 and, correspondingly, the position of the input interface 102. For instance, the sensor 208 may include an optical sensor configured to detect a fiducial marker attached to the input interface 102. The fiducial marker may be any type of symbol, indicia, or other optically detectable marking. In another example, the sensor 208 may detect a position of a magnet attached to the input interface. For instance, the sensor 208 may include an inductive sensor. In operation, the sensor 208 may detect when the positional element 206 is aligned with the sensor 208. In a further example, the input interface 102 may include a plurality of positional elements 206. One or more of the positional elements 206 may be different than other positional elements 206 to represent various positions of the input interface 102. For instance, the various positional elements may represent a length of travel, degree of rotation, or other measure of movement of the input interface 102. The sensor 208 may detect the various positional elements 206 and output different parameters based on which positional element 206 is detected or aligned with the sensor 208. In a further example, the electronic device 100 may include a plurality of sensors 208 to detect one or more positional elements 206. For instance, movement of the input interface 102 may be determined by which sensor detects the positional element 206, as shown in the sketch below.
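As one illustration of how a parameter might be derived from the positional elements 206 and sensors 208, the sketch below converts a sequence of detected positional-element indices into a signed degree of rotation. The 24-position resolution, index format, and function name are assumptions for illustration, not details from this disclosure.

```python
# Illustrative decoding of bezel rotation from positional-element detections.
# Assumes 24 distinguishable positional elements around the bezel and a
# sensor that reports which element is currently aligned; all hypothetical.

DETENTS_PER_REV = 24
DEG_PER_DETENT = 360 / DETENTS_PER_REV

def decode_rotation(element_indices):
    """element_indices: successive positional-element indices reported by
    the sensor as the bezel rotates (0..DETENTS_PER_REV-1).
    Returns signed rotation in degrees, taking the shortest direction."""
    degrees = 0.0
    half = DETENTS_PER_REV // 2
    for prev, curr in zip(element_indices, element_indices[1:]):
        step = (curr - prev + half) % DETENTS_PER_REV - half
        degrees += step * DEG_PER_DETENT
    return degrees

print(decode_rotation([0, 1, 2, 3]))   # 45.0 degrees clockwise
print(decode_rotation([3, 2, 1]))      # -30.0 degrees counter-clockwise
```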
[0020] The processing unit 204 may include, but is not limited to, a processor, microcontroller (MCU), system-on-chip (SOC), application specific integrated circuit (ASIC), central processing unit (CPU), graphics processing unit (GPU), field programmable gate array (FPGA), display driver, controller, computer, or the like. In an example, the processing unit 204 may include the transceiver 202. For instance, the transceiver 202 may be integrated into the processing unit 204. The processing unit 204 may be communicatively coupled to the transceiver 202, the input interface 102, the positional element 206, and the sensor 208. For instance, the processing unit 204 may be communicatively coupled through a circuit 210. The processing unit 204 may receive various inputs including, but not limited to, the signal 104, the parameter, or both. The processing unit 204 may be configured to perform operations within the electronic device 100 to navigate menus, change settings, transmit or receive signals (e.g., signal 104), present information on a display, or perform other functions.
[0021] FIG. 3 is an example of a cross section of an electronic device 300 including an input interface, according to an embodiment. The electronic device 300 may include an input interface, such as the input interface 102 of FIG. 1. The electronic device 300 may include a housing 304. The input interface 102 may be coupled to the housing 304, for instance, fixably or movably coupled to the housing 304. The electronic device 300 may include a processing unit 316 (e.g., the processing unit 204), transceiver 202, and sensor 208, as previously discussed and shown in the example of FIG. 2. The processing unit, transceiver 202, and sensor 208 may be communicatively coupled in a circuit 210, such as on a circuit board 306, as shown in the example of FIG. 3. An antenna feed 308 may communicatively couple a conductive portion 318 of the input interface 102 to the transceiver 202. For instance, in the example of FIG. 3, the transceiver 202 may be integrated into the processing unit 316. In an example, the antenna feed 308 may be communicatively coupled to the transceiver 202 through the circuit board 306. Accordingly, the transceiver 202 may transceive the signal 104 to and from the conductive portion 318. A second feed 310, such as a ground feed, may be communicatively coupled between the conductive portion 318 of the input interface 102 and the circuit board 306 to ground electrical energy from the conductive portion 318. In a further example, the second feed 310 may be a second antenna feed to transceive a second signal from the conductive portion 318. For instance, the second feed 310 may support the transmission of multiple communication protocols from the conductive portion 318 simultaneously. The circuit board 306 may be grounded to the housing 304, for instance, to electrically couple the ground element to the housing 304. In the example of FIG. 3, the circuit board 306 may be electrically coupled to the housing 304 by one or more contacts, such as a spring contact 324. In other examples, the circuit board 306 may be electrically isolated from the housing 304.
[0022] As shown in the example of FIG. 3, the input interface 102 is a bezel that is rotatably coupled to the housing 304.
The conductive portion 318 may be constructed of a material including, but not limited to, silver, copper, gold, aluminum, tungsten, zinc, nickel, iron, or another conductive material. The input interface 102 may be constructed of a solid conductive material (e.g., the conductive portion 318 is the input interface 102) or may be a composite construction including the conductive portion 318 and other materials. In the example of FIG. 3, the conductive portion 318 is the input interface 102. In other words, the input interface 102 is a solid conductive material. The size and shape of the conductive portion 318 may be configured to radiate electro-magnetic energy in order to transceive the signal 104. For instance, the impedance of the conductive portion 318 may be adjusted with respect to the impedance of the circuit to radiate the signal 104. As previously stated, the conductive portion 318 may be configured as a Planar Inverted-F antenna (PIFA), loop, dipole, monopole, patch, slot, or other configuration of antenna. In various examples, the conductive portion 318 may be a continuous loop or may include a plurality of electrically isolated segments. For instance, the segments may include equal lengths or different lengths. The segments may be communicatively coupled to the transceiver or ground element by one or more feeds, such as the feed 308 or the feed 310. In an example where the conductive portion 318 is a loop antenna, the annular bezel may include a gap separating a first end of the conductive portion 318 from a second end of the conductive portion 318. The antenna feed 308 may be communicatively coupled to the first end, and a second antenna feed or ground feed (e.g., a feed 310) may be coupled to the second end of the conductive portion 318. A dielectric material, such as the dielectric 302, may electrically isolate the first end from the second end. In various examples, the conductive portion 318 may be configured as a Near-Field Communication (NFC), Bluetooth, Wi-Fi, Global Positioning System (GPS), Global Navigation Satellite System (GNSS), or other antenna type.
[0023] In an example, a positional element, such as the positional element 206 (referred to as positional element 312 in the example of FIG. 3), may be a magnet or a fiducial marker. The positional element 312 may be attached to the input interface 102. For instance, the magnet or fiducial marker may be attached by welding, ultra-sonic welding, or adhesive, or fastened to the input interface 102. In an example, the positional element 312 may be integrated visual indicia, including, but not limited to, an engraved marking, or a molded, machined, painted, or other feature attached or integral to the input interface 102. In a further example, the conductive portion 318 of the input interface 102 may be locally magnetized. For instance, a segment of the conductive portion 318 may be magnetized to construct the positional element 312.
[0024] As shown in the example of FIG. 3, the sensor 314 may be located on the circuit board 306. The sensor, such as sensor 208 (also referred to in FIG. 3 as sensor 314), may be located on a path along which the positional element 312 may travel. In operation, the positional element 312 may move with respect to the sensor 314. The sensor 314 may detect the positional element 312 when the positional element 312 and the sensor 314 are aligned. As previously described with regard to the sensor 208, the sensor 314 may output a parameter.
The parameter may indicate a position of the positional element 312, a number of positional elements 312 that have been detected by the sensor 314 along a direction (e.g., clockwise or counter-clockwise), or which positional element 312 is detected by the sensor 314 at a given time. Accordingly, the sensor 314 may provide a parameter that is indicative of the position of the input interface 102 and, correspondingly, the user input. For instance, detecting the user input may include detecting a rotation of the input interface 102, such as a rotation of the annular bezel about a center axis of the bezel in the example of FIG. 3. In other examples, detecting the user input includes detecting a translation of the input interface 102, for instance, where the input interface 102 is a slider.
[0025] In a further example, the positional element 312 may include a capacitive, inductive, resistive, or other type of touch-sensor attached to the input interface 102. In operation, the sensor 208 (e.g., the touch-sensor as previously described herein) may detect the user input to the input interface 102. For instance, the touch-sensor may include a capacitive sensor as previously described. The touch-sensor may be located along an upper surface 320 of the input interface 102, a lower surface 322 of the input interface 102, or any location therebetween. Accordingly, the touch-sensor may detect a position of a touch or a gesture of the user 106. Because the positional element 312 may be wirelessly detected by the sensor 314, the effects of environmental conditions, such as shock, vibration, thermal cycling, humidity, or the like, may be mitigated.
[0026] The circuit board 306 may include a Copper Clad Laminate (CCL). The CCL may include a conductive layer (e.g., metallic foil) that may be attached to (e.g., laminated on) one or more dielectric layers of the circuit board 306. The circuit board 306 may include a single sided, double sided, or multi-layer construction. For instance, the circuit board 306 may have dielectric layers fabricated from materials including, but not limited to, FR-4, prepreg, ceramic, epoxy, other glass or fiber filled resin, or the like. In an example, the conductive layer may be electrodeposited (electroplated) onto the circuit board 306. The circuit board 306 may include a ground plane, such as one or more of the conductive layers of the circuit board 306.
[0027] A dielectric 302 is located between the conductive portion 318 and the housing 304 to electrically isolate the conductive portion 318 from the housing 304. In an example, the dielectric 302 may include features to fasten the input interface 102 (fixably or movably) to the housing 304. The dielectric 302 may be fabricated from materials including, but not limited to, ABS, FR-4, prepreg, ceramic, epoxy, other glass or fiber filled resin, or the like. The dielectric 302 may provide a radio frequency gap (RF gap) between the conductive portion 318 and ground, such as the ground plane, the housing 304 (e.g., a metallic or conductive housing 304), or another ground. The RF gap may alter the antenna gain of the input interface 102. For instance, the RF gap may be tuned for a quarter-wave antenna or a half-wave antenna. In other words, the size of the RF gap may be increased or decreased to adjust the antenna properties of the conductive portion 318. In an example, the efficiency of the antenna may be increased as the RF gap is increased. The RF gap may be, but is not limited to, 0.50 to 2.0 mm in some examples. In an example where the conductive portion 318 includes a plurality of isolated segments, one or more dielectrics 302 may electrically isolate the segments of the conductive portion 318. The dielectric 302 between each segment and the housing 304 may be of equal or different lengths. For instance, each dielectric 302 can be tuned for each respective segment. In another example, the dielectric may provide electrostatic discharge (ESD) protection to the circuit board 306.
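To make the quarter-wave tuning remark concrete, a quarter-wave resonant element is sized at roughly c/(4f). The short calculation below is a sketch with assumed frequency bands and free-space propagation; a real bezel antenna would be shortened further by the dielectric 302 and surrounding structure.

```python
# Rough quarter-wavelength estimate for candidate antenna bands.
# Free-space propagation is assumed; dielectric loading (e.g., by the
# dielectric 302) would shorten the physical length in practice.

C = 299_792_458  # speed of light, m/s

for name, freq_hz in [("GPS L1", 1.57542e9), ("Bluetooth/Wi-Fi", 2.4e9)]:
    quarter_wave_mm = C / (4 * freq_hz) * 1000
    print(f"{name}: quarter wave ~ {quarter_wave_mm:.1f} mm")
# GPS L1: ~47.6 mm; Bluetooth/Wi-Fi: ~31.2 mm -- comparable to the
# span of a watch-bezel segment.
```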
In an example where the conductive portion 318 includes a plurality of isolated segments, one or more dielectrics 302 may electrically isolate the segments of the conductive portion 318. The dielectric 302 between each segment and the housing 304 may be of equal or different lengths. For instance, each dielectric 302 can be tuned for each respective segment. In another example, the dielectric may provide electrostatic discharge (ESD) protection to the circuit board 306.

[0028] In the example of FIG. 3, the antenna feed 308 is communicatively coupled between the conductive portion 318 and the circuit board 306. In other examples, the antenna feed 308 may be communicatively coupled between the conductive portion 318 and the circuit 210, transceiver 202, or processing unit 204 of FIG. 2, or another component. The antenna feed 308 may include, but is not limited to, a spring contact, spring probe (e.g., pogo pin), or other slidable electrical connector. For instance, the antenna feed 308 may be slidably coupled between the conductive portion 318 and the circuit board 306, circuit 210, processing unit 204, transceiver 202, or other component. In an example, the antenna feed 308 may include a ratcheting coupling to the conductive portion 318. For instance, the input interface 102 may be rotatable in one direction (e.g., clockwise). The conductive portion 318 may include teeth that are engageable with the antenna feed 308. As the input interface 102 is rotated (e.g., clockwise), a bias element of the antenna feed 308 may compress and the antenna feed 308 may advance to an adjacent tooth of the conductive portion 318. In a further example, the antenna feed 308 may include a ball bearing located between the conductive portion 318 and the bias element. The bias element may provide a contact force between the conductive portion 318 and the antenna feed 308 to reduce the electrical contact resistance of the slidable coupling.

[0029] In another example, the conductive portion 318 may be capacitively or inductively coupled to the transceiver 202. For instance, the electronic device 300 may include a primary antenna and the conductive portion 318 may be a parasitic antenna. The signal 104 may be communicated capacitively between the primary antenna and the parasitic antenna (e.g., the conductive portion 318). In a further example, the electronic device may include an inductive coil to induce the signal in the conductive portion 318 for transceiving the signal 104 from the conductive portion 318. Accordingly, the signal 104 may be communicated to the conductive portion 318 wirelessly (e.g., contact-free), and the number of moving parts may be reduced, mechanical wear may be reduced, space constraints may be reduced, or any combination thereof. The conductive portion 318 can be used as an NFC antenna or to boost near-field coupling of an antenna (such as an NFC antenna) integrated in the electronic device 300.

[0030] In further examples, the electronic device 300 may include other feeds, such as the feed 310, to communicatively couple the conductive portion 318. For instance, the other feed 310 may include, but is not limited to, a ground feed, sensor feed, or other electronically conductive feed. The other feed 310 may include a slidable coupling as described with regard to the antenna feed 308 above. The ground feed may provide a ground path from the conductive portion 318 for ESD protection or for antenna grounding.
For instance, the ground feed may disperse electrostatic charge build-up from repeated touching of the conductive portion 318 by the user 106. In an example, the feed 310 may be communicatively coupled to an electronic radio-frequency switch to actively control antenna radiation characteristics of the conductive portion 318.

[0031] FIG. 4 is a flowchart illustrating an exemplary method 400 for a multi-function input interface, according to an embodiment. The multi-function input interface may be that of the electronic device previously described in the examples herein and shown, for instance, in FIGS. 1-3. In describing the method 400, reference is made to one or more components, features, functions, and processes previously described herein. Where convenient, reference is made to the components, features, processes, and the like with reference numerals. Reference numerals provided are exemplary and are nonexclusive. For instance, features, components, functions, processes, and the like described in the method 400 include, but are not limited to, the corresponding numbered elements provided herein. Other corresponding features described herein (both numbered and unnumbered) as well as their equivalents are also considered.

[0032] At 402, a wireless signal may be transceived through a conductive portion (e.g., conductive portion 318) of an input interface, such as input interface 102. The conductive portion includes at least one antenna. In various examples, transceiving the wireless signal through the conductive portion may include transceiving the signal from a conductive portion including or configured as a loop antenna, Planar Inverted-F antenna, NFC antenna, or other type of antenna. In an example, the signal, such as signal 104, may be generated or processed by a transceiver, such as transceiver 202. The signal may then be communicated between the transceiver and the conductive portion to radiate or receive the signal through the conductive portion. In an example, the signal may be communicated through an antenna feed, such as antenna feed 308, communicatively coupled between the conductive portion and the transceiver. As previously described, the antenna feed may include a slidable coupling to the conductive portion. In a further example, the antenna feed may include a bias element.

[0033] At 404, a user input at the input interface may be detected based on a parameter from a sensor, such as the touch-sensor or the sensor 208 as previously described herein. In an example, the user input may be a tap, touch, swipe, or other gesture along the input interface, for instance, along the conductive portion of the input interface. In a further example, the user input may be movement of the input interface, such as a translation or rotation of the input interface. The user input may be used to navigate menus, select items, change settings, input or send messages, or perform other functions. The user input may be detected in various ways including, but not limited to, identifying movement of a magnetic element of the input interface with the sensor, identifying movement of a fiducial marker of the input interface with an optical sensor, detecting a translation of the input interface (e.g., where the input interface is a slider), detecting movement of the input interface with an array of sensors, sensing the user input with a capacitive electrode (e.g., touch-sensor), sensing the user input with a resistive touch-sensor, sensing the user input with an inductive touch-sensor, or the like.
The parameter may include, but is not limited to, a signal corresponding to an open or closed switch, a value of a measured characteristic, image data, or other output of the sensor. The parameter may be received by the circuit (e.g., circuit 210), the processing unit (e.g., processing unit 204), or another component of the electronic device, such as electronic device 100. In an example, the processing unit 204 (or a circuit of passive components) may be configured to perform operations within the electronic device 100 to navigate menus (e.g., change a menu), change settings, transmit or receive signals (e.g., signal 104), present information on a display, or perform other functions.

[0034] FIG. 5 is a flowchart illustrating an exemplary method 500 for making a multi-function input interface, according to an embodiment. In describing the method 500, reference is made to one or more components, features, functions, and processes previously described herein. Where convenient, reference is made to the components, features, processes, and the like with reference numerals. Reference numerals provided are exemplary and are nonexclusive. For instance, features, components, functions, processes, and the like described in the method 500 include, but are not limited to, the corresponding numbered elements provided herein. Other corresponding features described herein (both numbered and unnumbered) as well as their equivalents are also considered.

[0035] At 502, a conductive portion of an input interface may be tuned to transceive a signal through the input interface. For instance, the conductive portion may be the conductive portion 318, the input interface may be the input interface 102, and the signal may be signal 104 as previously described herein. The signal may include a radiated frequency and wavelength. Tuning the conductive portion may include, but is not limited to, configuring the conductive portion to radiate the signal at the frequency and at one-quarter, one-half, or another fraction of the wavelength. The conductive portion may be provided with a size and shape that is configured to radiate the signal at the frequency or wavelength from the conductive portion. For instance, a length or width of the conductive portion may be adapted to match the impedance of the conductive portion to an impedance of a circuit coupling a transceiver, such as the transceiver 202, to the conductive portion. The conductive portion may be isolated from the ground element with a dielectric between the conductive portion and the ground element. A thickness of the dielectric may be adjusted to tune the conductive portion to radiate the signal at the frequency or wavelength. In an example, the feed may be communicatively coupled to an electronic radio-frequency switch to actively control antenna radiation characteristics of the conductive portion.

[0036] In an example, the conductive portion may be configured as a loop antenna having a first antenna feed located adjacent to a first end of an open loop. In other examples, the conductive portion may be configured as a Planar Inverted-F, dipole, monopole, patch, or other type of antenna. In various examples, the conductive portion may be configured as a GSM, WCDMA, LTE, NFC, Bluetooth, Wi-Fi, GPS, Global Navigation Satellite System (GNSS), or other antenna type.

[0037] At 504, a positional element, such as positional element 206 or 312, may be disposed on the input interface to detect a user input to the input interface.
In an example, the positional element may include, but is not limited to, a touch-sensor, such as a capacitive, inductive, resistive, or other sensor to detect the user input. Disposing the positional element on the input interface may include attaching the positional element to the input interface by welding, ultrasonic welding, adhesive, a fastener, or the like. The touch-sensor may be located along a top surface of the input interface, a bottom surface of the input interface, or any location therebetween. Accordingly, the touch-sensor may detect a position of a touch or a gesture of the user.

[0038] In a further example, the input interface may be movably coupled to the electronic device, such as electronic device 100. For instance, the input interface may be rotatably coupled to the electronic device. The input interface may be an annular bezel that is rotatable about a center axis of the bezel to receive the user input. In a further example, the input interface may be slidably coupled to the electronic device. For instance, the input interface may be configured to translate with respect to the electronic device.

[0039] In an example, a positional element, such as positional element 206 or 312, may be a magnet or a fiducial marker. The positional element may be attached to the input interface. For instance, the magnet or fiducial marker may be welded, bonded with adhesive, or fastened to the input interface. In an example, the positional element may be an integrated visual indicia, including, but not limited to, an engraved, molded, machined, or painted marking, or other feature attached or integral to the input interface. In a further example, the conductive portion may be locally magnetized. For instance, a segment of the conductive portion may be magnetized to construct the positional element. In a further example, a plurality of positional elements may be disposed on the input interface. One or more of the positional elements may be different from the other positional elements to represent various positions of the input interface. For instance, the various positional elements may represent a length of travel, degree of rotation, or other measure of movement of the input interface.

[0040] The sensor, such as sensor 208 or sensor 314, may be located on a path along which the positional element travels when the input interface is moved. For instance, the sensor may be located on the circuit board, such as circuit board 306. In a further example, the sensor may be located on the input interface and the positional element may be disposed on the circuit board, housing, or other component of the electronic device. In operation, the positional element may move with respect to the sensor. The sensor may detect the positional element when the positional element and the sensor are aligned. As previously described, the sensor may output a parameter. The parameter may indicate a position of the positional element, a number of positional elements that have been detected by the sensor along a direction (e.g., clockwise or counterclockwise), or which positional element is detected by the sensor at a given time. Accordingly, the sensor may provide a parameter that is indicative of the position of the input interface and, correspondingly, communicate the user input to the processing unit. For instance, detecting the user input may include detecting a rotation of the input interface, such as a rotation of the annular bezel about a center axis of the bezel.
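The position-and-direction bookkeeping just described can be made concrete with a short sketch. This is a hypothetical decoder, not the described device's firmware: the sensor interface, the twelve-element count, and the 30-degree detent spacing are assumptions made for illustration.

```python
# Hypothetical decoder for bezel rotation from positional-element
# detections. Assumes the sensor reports the index of the currently
# aligned element, or None when no element is aligned.

from typing import Optional

class BezelDecoder:
    def __init__(self, num_elements: int = 12, degrees_per_step: float = 30.0):
        self.num_elements = num_elements
        self.degrees_per_step = degrees_per_step
        self.last_index: Optional[int] = None
        self.angle = 0.0  # accumulated rotation, degrees

    def update(self, detected_index: Optional[int]) -> float:
        """Fold one sensor parameter into the accumulated angle."""
        if detected_index is None or detected_index == self.last_index:
            return 0.0  # nothing aligned, or no movement since last sample
        if self.last_index is None:
            self.last_index = detected_index  # first detection: just latch
            return 0.0
        # Signed step count around the ring, choosing the shorter path;
        # positive steps are treated as clockwise here.
        diff = (detected_index - self.last_index) % self.num_elements
        if diff > self.num_elements // 2:
            diff -= self.num_elements  # counter-clockwise
        self.last_index = detected_index
        step = diff * self.degrees_per_step
        self.angle += step
        return step

decoder = BezelDecoder()
for reading in [0, 1, 2, None, 2, 1]:  # simulated sensor parameters
    decoder.update(reading)
print(decoder.angle)  # 30.0: net one detent clockwise in this simulation
```

The same accumulation works whether one sensor watches many elements or many sensors watch one element; only the source of the index changes.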
In other examples, detecting the user input includes detecting a translation of the input interface, for instance, where the input interface is a slider. In an example, the sensor may detect one or more positional elements and output different parameters based on which positional element is detected. In a further example, the electronic device may include a plurality of sensors to detect one or more positional elements. For instance, movement of the input interface may be determined by which sensor detects the positional element.

[0041] The positional element or the sensor may be calibrated to detect the user input. For instance, where the positional element includes a touch-sensor, the sensitivity of the sensor may be adjusted to determine the user input. In a further example, the position of an electrode or a plurality of electrodes may be adjusted to calibrate the positional element to detect the user input.

[0042] The method may include communicatively coupling the conductive portion to one or more of a circuit (e.g., circuit board 306), transceiver (e.g., transceiver 202), processing unit (e.g., processing unit 204), sensor, positional element, or other component of the electronic device. A slidable coupling may communicatively couple the conductive portion to the circuit, transceiver, processing unit, sensor, positional element, or other component. The slidable coupling may include, but is not limited to, a spring probe, spring contact, or other type of slidable electrical contact. A bias element may be included in the slidable coupling to provide contact force and reduce contact resistance between the conductive portion and the circuit, transceiver, processing unit, sensor, positional element, or other component.

[0043] In an example, the conductive portion may be communicatively coupled to the transceiver with an antenna feed, such as antenna feed 308. The antenna feed may be configured to communicate the signal from the transceiver to the input interface to radiate the signal. The antenna feed may be configured to communicate the signal received by the conductive portion to the transceiver. The conductive portion may be grounded to a ground element, such as a ground plane on the circuit board 306 or the housing, by a grounding feed. In various examples, the antenna feed or the grounding feed may include, but is not limited to, a spring probe, spring contact, slidable electrical contact, conductive foam, conductive pad, or other type of electrical connection. In a further example, the antenna feed may be inductively or capacitively coupled to the conductive portion. For instance, the antenna feed may include a radiating element (e.g., microstrip antenna), inductive element (e.g., inductive coil), or the like to wirelessly transmit the signal from the transceiver to the conductive portion.
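Before turning to the example machine of FIG. 6, the flow of method 400 (transceive at 402, then act on a sensor parameter at 404) can be summarized in a brief sketch. Every name and the parameter format here are assumptions made for illustration, not an API of the described device.

```python
# Illustrative dispatch for method 400: act on sensor parameters while
# the conductive portion transceives the signal. All names are
# assumptions for this sketch.

def navigate_menu(steps: int) -> None:
    print(f"menu moved {steps} step(s)")  # stand-in for a real UI operation

def select_item(position: int) -> None:
    print(f"selected item at position {position}")

def handle_parameter(parameter: dict) -> None:
    """Map one sensor parameter to a device operation, per paragraph [0033]."""
    if parameter.get("kind") == "rotation":
        navigate_menu(steps=parameter["steps"])      # bezel rotation scrolls a menu
    elif parameter.get("kind") == "touch":
        select_item(position=parameter["position"])  # a tap selects an item

# Simulated parameters such as sensor 208/314 might report at 404:
for parameter in [{"kind": "rotation", "steps": 2},
                  {"kind": "touch", "position": 3}]:
    handle_parameter(parameter)
```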
[0044] FIG. 6 is a block diagram illustrating an example machine 600 upon which any one or more of the devices (e.g., electronic devices, such as electronic device 100) or techniques (e.g., methods, such as method 400 or 500) discussed herein may be implemented. In alternative embodiments, the machine 600 may operate as a standalone device or may be connected (e.g., networked) to other machines. The machine 600 may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, watch, smartwatch, smart home system, internet-of-things device, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.

[0045] Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.

[0046] Accordingly, the term "module" is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.

[0047] The machine (e.g., computer or computer system) 600 may include a hardware processor 602 (e.g., a CPU, a GPU, a hardware processor core, or any combination thereof), a main memory 604, and a static memory 606, some or all of which may communicate with each other via an interlink (e.g., bus) 608. The machine 600 may further include a display device 610, an alphanumeric input device 612 (e.g., a keyboard), and a user interface (UI) navigation device 614 (e.g., a mouse). In an example, the display device 610, input device 612, and UI navigation device 614 may be a touch screen display. The machine 600 may additionally include a mass storage device (e.g., drive unit) 616, a signal generation device 618 (e.g., a speaker), a network interface device 620, and one or more sensors 621, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.
The machine 600 may include an output controller 628, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR)) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

[0048] The mass storage device 616 may include a machine readable medium 622 on which is stored one or more sets of data structures or instructions 624 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 624 may also reside, completely or at least partially, within the main memory 604, within the static memory 606, or within the hardware processor 602 during execution thereof by the machine 600. In an example, one or any combination of the hardware processor 602, the main memory 604, the static memory 606, or the mass storage device 616 may constitute machine readable media.

[0049] While the machine readable medium 622 is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that are arranged to store the one or more instructions 624.

[0050] The term "machine readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 600 and that cause the machine 600 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine readable medium comprises a machine readable medium with a plurality of particles having resting mass. Specific examples of massed machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

[0051] The instructions 624 may further be transmitted or received (e.g., transceived) over a communications network 626 using a transmission medium via the network interface device 620 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone Service (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMAX®), peer-to-peer (P2P) networks, among others. In an example, the network interface device 620 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 626. In an example, the network interface device 620 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 600, and includes digital or analogcommunications signals or other intangible medium to facilitate communication of such software.Various Notes & Examples[0052] Each of these non-limiting examples may stand on its own, or may be combined in various permutations or combinations with one or more of the other examples. To better illustrate the method and apparatuses disclosed herein, a non-limiting list of embodiments is provided here:[0053] Example 1 is a multi-function input interface for an electronic device, the input interface comprising: a conductive portion to transceive a signal through the input interface; and a positional element to detect a user input to the input interface.[0054] In Example 2, the subject matter of Example 1 optionally includes a transceiver communicatively coupled to the conductive portion.[0055] In Example 3, the subject matter of Example 2 optionally includes wherein the transceiver is communicatively coupled to the conductive portion by an antenna feed, the antenna feed slidably coupled between the conductive portion and the transceiver.[0056] In Example 4, the subject matter of Example 3 optionally includes wherein the conductive portion includes a plurality of electrically isolated segments, a plurality of antenna feeds communicatively couple each segment respectively to the transceiver.[0057] In Example 5, the subject matter of any one or more of Examples3-4 optionally include wherein the antenna feed includes a bias element. [0058] In Example 6, the subject matter of any one or more of Examples3-5 optionally include wherein the antenna feed is a capacitive coupling to the conductive portion.[0059] In Example 7, the subject matter of any one or more of Examples 3-6 optionally include wherein the antenna feed includes an inductive coupling to the conductive portion.[0060] In Example 8, the subject matter of any one or more of Examples1-7 optionally include wherein the input interface includes a grounding feed coupled between the conductive portion and a ground element.[0061] In Example 9, the subject matter of any one or more of Examples1-8 optionally include wherein the system includes a dielectric between the conductive portion and a ground element.[0062] In Example 10, the subject matter of any one or more ofExamples 1-9 optionally include wherein the conductive portion includes a plurality of electrically isolated segments, and a plurality of dielectric elements are disposed between the respective segments and a housing, each dielectric element including a different thickness.[0063] In Example 11, the subject matter of any one or more ofExamples 1-10 optionally include a circuit, wherein the circuit iscommunicatively coupled to the conductive portion and the positional element.[0064] In Example 12, the subject matter of any one or more ofExamples 1-11 optionally include wherein the input interface includes a loop antenna, wherein the loop antenna includes a first end and a second loop end and the loop antenna is communicatively coupled to a first antenna feed located adjacent to the first end.[0065] In Example 13, the subject matter of any one or more ofExamples 1-12 optionally include wherein the input interface is a Planar Inverted-F antenna.[0066] In Example 14, the subject matter of any one or more ofExamples 1-13 optionally include wherein the input interface is a near-field 
communication antenna.

[0067] In Example 15, the subject matter of any one or more of Examples 1-14 optionally include wherein the input interface is movable to receive the user input.

[0068] In Example 16, the subject matter of any one or more of Examples 1-15 optionally include wherein the input interface is an annular bezel and is rotatable about a center axis of the bezel to receive the user input.

[0069] In Example 17, the subject matter of any one or more of Examples 1-16 optionally include wherein the positional element is to detect a position of the input interface relative to the electronic device.

[0070] In Example 18, the subject matter of any one or more of Examples 1-17 optionally include wherein the input interface is a slider to receive the user input.

[0071] In Example 19, the subject matter of any one or more of Examples 1-18 optionally include wherein the positional element includes a sensor to detect the user input to the input interface.

[0072] In Example 20, the subject matter of any one or more of Examples 1-19 optionally include wherein the positional element is magnetic and movement of the positional element is identifiable by a sensor to detect the user input.

[0073] In Example 21, the subject matter of any one or more of Examples 1-20 optionally include a sensor to identify a location of the positional element, wherein the positional element is a fiducial marker and the sensor is an optical sensor.

[0074] In Example 22, the subject matter of any one or more of Examples 1-21 optionally include wherein the electronic device includes an array of sensors to detect the user input.

[0075] In Example 23, the subject matter of any one or more of Examples 1-22 optionally include wherein the positional element is a capacitive electrode to detect the user input to the input interface.

[0076] In Example 24, the subject matter of any one or more of Examples 1-23 optionally include wherein the positional element is an inductive sensor to detect the user input.
[0077] In Example 25, the subject matter of any one or more of Examples 1-24 optionally include a processing unit, wherein the processing unit includes a transceiver and receives input from a sensor.

[0078] Example 26 is a method for a multi-function input interface for an electronic device, wherein the input interface includes a conductive portion, the method comprising: transceiving a wireless signal through the conductive portion of the input interface, wherein the conductive portion is an antenna; and detecting a user input to the input interface based on a parameter from a sensor.

[0079] In Example 27, the subject matter of Example 26 optionally includes wherein detecting the user input includes detecting movement of the input interface with the sensor.

[0080] In Example 28, the subject matter of Example 27 optionally includes wherein detecting the user input includes identifying movement of a magnetic element of the input interface with the sensor.

[0081] In Example 29, the subject matter of any one or more of Examples 27-28 optionally include wherein detecting the user input includes identifying movement of a fiducial marker with an optical sensor to detect movement of the input interface.

[0082] In Example 30, the subject matter of any one or more of Examples 27-29 optionally include wherein detecting the user input includes detecting rotation of the input interface, the input interface including an annular bezel rotatable about a center axis of the bezel to receive the user input.

[0083] In Example 31, the subject matter of any one or more of Examples 27-30 optionally include wherein detecting the user input includes detecting a translation of the input interface, wherein the input interface is a slider to receive the user input.

[0084] In Example 32, the subject matter of any one or more of Examples 27-31 optionally include wherein detecting the user input includes detecting movement of the input interface with an array of sensors.

[0085] In Example 33, the subject matter of any one or more of Examples 26-32 optionally include wherein detecting the user input includes sensing the user input with a capacitive electrode.
[0086] In Example 34, the subject matter of any one or more of Examples 26-33 optionally include wherein transceiving the wireless signal through the conductive portion includes transceiving the signal from the conductive portion, wherein the conductive portion is a loop antenna.

[0087] In Example 35, the subject matter of any one or more of Examples 26-34 optionally include wherein transceiving the wireless signal through the conductive portion includes transceiving the signal from the conductive portion, wherein the conductive portion is a Planar Inverted-F antenna.

[0088] In Example 36, the subject matter of any one or more of Examples 26-35 optionally include wherein transceiving the wireless signal through the conductive portion includes transceiving the signal from the conductive portion, wherein the conductive portion is a near-field communication antenna.

[0089] In Example 37, the subject matter of any one or more of Examples 26-36 optionally include transceiving the wireless signal between a transceiver and the conductive portion by an antenna feed, wherein the antenna feed includes a slidable coupling.

[0090] In Example 38, the subject matter of any one or more of Examples 26-37 optionally include transceiving the wireless signal between a transceiver and the conductive portion by an antenna feed, wherein the antenna feed includes a slidable coupling and the slidable coupling includes a bias element.

[0091] In Example 39, the subject matter of any one or more of Examples 26-38 optionally include wherein transceiving a wireless signal through the conductive portion includes transceiving a plurality of different wireless signals through a plurality of electrically isolated segments of the conductive portion, and a plurality of antenna feeds corresponding to the plurality of signals communicatively couple each segment respectively to the transceiver.

[0092] In Example 40, the subject matter of any one or more of Examples 26-39 optionally include changing a setting of an electronic device based on the parameter.
[0093] In Example 41, the subject matter of any one or more of Examples 26-40 optionally include changing a menu based on the parameter.

[0094] Example 42 is at least one machine-readable medium including instructions, which when executed by a machine, cause the machine to perform operations of any of the methods of Examples 26-41.

[0095] Example 43 is an apparatus comprising means for performing any of the methods of Examples 26-41.

[0096] Example 44 is a method for a multi-function input interface for an electronic device, the method comprising: tuning a conductive portion of an input interface to transceive a signal through the input interface; and disposing a positional element on the input interface to detect a user input to the input interface.

[0097] In Example 45, the subject matter of Example 44 optionally includes calibrating the positional element to detect the user input.

[0098] In Example 46, the subject matter of any one or more of Examples 44-45 optionally include communicatively coupling the conductive portion to a transceiver through an antenna feed.

[0099] In Example 47, the subject matter of Example 46 optionally includes wherein the antenna feed includes a slidable coupling.

[00100] In Example 48, the subject matter of Example 47 optionally includes wherein the slidable coupling includes a bias element.

[00101] In Example 49, the subject matter of any one or more of Examples 44-48 optionally include grounding the conductive portion to a ground element with a grounding feed.

[00102] In Example 50, the subject matter of any one or more of Examples 44-49 optionally include isolating the conductive portion from a ground element with a dielectric between the conductive portion and the ground element.

[00103] In Example 51, the subject matter of any one or more of Examples 44-50 optionally include isolating the conductive portion from a ground element with a dielectric between the conductive portion and the ground element, wherein the conductive portion includes a plurality of electrically isolated segments, and a plurality of dielectric elements are disposed between the respective segments and a housing, each dielectric element including a different thickness.

[00104] In Example 52, the subject matter of any one or more of Examples 44-51 optionally include capacitively coupling an antenna feed to the conductive portion.

[00105] In Example 53, the subject matter of any one or more of Examples 44-52 optionally include inductively coupling an antenna feed to the conductive portion.

[00106] In Example 54, the subject matter of any one or more of Examples 44-53 optionally include communicatively coupling a circuit to the conductive portion and the positional element.

[00107] In Example 55, the subject matter of any one or more of Examples 44-54 optionally include communicatively coupling a processing unit to a transceiver and to the positional element.

[00108] In Example 56, the subject matter of any one or more of Examples 44-55 optionally include communicatively coupling a transceiver to the conductive portion.

[00109] In Example 57, the subject matter of any one or more of Examples 44-56 optionally include communicatively coupling a circuit to the positional element.

[00110] In Example 58, the subject matter of any one or more of Examples 44-57 optionally include communicatively coupling a processing unit to the conductive portion and the positional element.

[00111] In Example 59, the subject matter of any one or more of Examples 44-58 optionally include wherein tuning the conductive portion includes tuning
a loop antenna, wherein the loop antenna includes a first end and a second end, and a first antenna feed is communicatively coupled adjacent to the first end.

[00112] In Example 60, the subject matter of any one or more of Examples 44-59 optionally include wherein tuning the conductive portion includes tuning the conductive portion, wherein the conductive portion is a Planar Inverted-F antenna.

[00113] In Example 61, the subject matter of any one or more of Examples 44-60 optionally include wherein tuning the conductive portion includes tuning the conductive portion, wherein the conductive portion is a near-field communication antenna.

[00114] In Example 62, the subject matter of any one or more of Examples 44-61 optionally include wherein tuning the conductive portion includes tuning the conductive portion, wherein the conductive portion includes a plurality of electrically isolated segments, and a plurality of antenna feeds communicatively couple each segment respectively to a transceiver.

[00115] In Example 63, the subject matter of any one or more of Examples 44-62 optionally include movably coupling the input interface to the electronic device.

[00116] In Example 64, the subject matter of Example 63 optionally includes wherein movably coupling the input interface includes rotatably coupling the input interface to the electronic device, wherein the input interface is an annular bezel and is rotatable about a center axis of the bezel to receive the user input.

[00117] In Example 65, the subject matter of any one or more of Examples 44-64 optionally include wherein disposing a positional element includes disposing a sensor to detect the user input to the input interface.

[00118] In Example 66, the subject matter of any one or more of Examples 44-65 optionally include wherein disposing a positional element includes disposing a magnet on the input interface, the magnet detectable by a sensor to identify movement of the positional element.

[00119] In Example 67, the subject matter of any one or more of Examples 44-66 optionally include wherein disposing a positional element includes disposing a fiducial marker on the input interface, the fiducial marker detectable by an optical sensor to identify a location of the fiducial marker.

[00120] In Example 68, the subject matter of any one or more of Examples 44-67 optionally include communicatively coupling an array of sensors to the electronic device to detect the user input.
[00121] In Example 69, the subject matter of any one or more of Examples 44-68 optionally include wherein disposing a positional element includes disposing a capacitive electrode on the input interface.

[00122] In Example 70, the subject matter of any one or more of Examples 44-69 optionally include wherein disposing a positional element includes disposing an inductive sensor to detect the user input.

[00123] Example 71 is a multi-function input interface comprising: a means for transceiving a wireless signal through a conductive portion of an input interface, wherein the conductive portion is an antenna; and a means for detecting a user input to the input interface based on a parameter from a sensor.

[00124] In Example 72, the subject matter of Example 71 optionally includes wherein the means for detecting the user input includes means for detecting movement of the input interface with the sensor.

[00125] In Example 73, the subject matter of Example 72 optionally includes wherein the means for detecting the user input includes means for identifying movement of a magnetic element of the input interface with the sensor.

[00126] In Example 74, the subject matter of any one or more of Examples 72-73 optionally include wherein the means for detecting the user input includes means for identifying movement of a fiducial marker with an optical sensor to detect movement of the input interface.

[00127] In Example 75, the subject matter of any one or more of Examples 72-74 optionally include wherein the means for detecting the user input includes means for detecting rotation of the input interface, the input interface including an annular bezel rotatable about a center axis of the bezel to receive the user input.

[00128] In Example 76, the subject matter of any one or more of Examples 72-75 optionally include wherein the means for detecting the user input includes means for detecting a translation of the input interface, wherein the input interface is a slider to receive the user input.

[00129] In Example 77, the subject matter of any one or more of Examples 72-76 optionally include wherein the means for detecting the user input includes means for detecting movement of the input interface with an array of sensors.

[00130] In Example 78, the subject matter of any one or more of Examples 71-77 optionally include wherein the means for detecting the user input includes means for sensing the user input with a capacitive electrode.

[00131] In Example 79, the subject matter of any one or more of Examples 71-78 optionally include wherein the means for transceiving the wireless signal through the conductive portion includes means for transceiving the signal from the conductive portion that is a loop antenna.

[00132] In Example 80, the subject matter of any one or more of Examples 71-79 optionally include wherein the means for transceiving the wireless signal through the conductive portion includes means for transceiving the signal from the conductive portion that is a Planar Inverted-F antenna.

[00133] In Example 81, the subject matter of any one or more of Examples 71-80 optionally include wherein the means for transceiving the wireless signal through the conductive portion includes means for transceiving the signal from the conductive portion that is a near-field communication antenna.

[00134] In Example 82, the subject matter of any one or more of Examples 71-81 optionally include a means for transceiving the wireless signal between a transceiver and the conductive portion with a slidable coupling.

[00135] In
Example 83, the subject matter of any one or more of Examples 71-82 optionally include a means for transceiving the wireless signal between a transceiver and the conductive portion by a slidable coupling, wherein the slidable coupling includes a bias element.

[00136] In Example 84, the subject matter of any one or more of Examples 71-83 optionally include a means for transceiving a plurality of different wireless signals between a transceiver and a plurality of electrically isolated segments of the conductive portion, wherein a plurality of antenna feeds corresponding to the plurality of segments communicatively couple each segment respectively to the transceiver.

[00137] In Example 85, the subject matter of Example 84 optionally includes wherein the means for electrically isolating the segments includes electrically isolating the plurality of segments from a ground element with a plurality of respective dielectric elements located between the respective segments and the ground element, wherein the plurality of dielectric elements include different thicknesses.

[00138] In Example 86, the subject matter of any one or more of Examples 71-85 optionally include a means for changing a setting of an electronic device based on the parameter.

[00139] In Example 87, the subject matter of any one or more of Examples 71-86 optionally include a means for changing a menu based on the parameter.

[00140] Each of these non-limiting examples may stand on its own, or may be combined in various permutations or combinations with one or more of the other examples.

[00141] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.

[00142] In the event of inconsistent usages between this document and any documents so incorporated by reference, the usage in this document controls.

[00143] In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In this document, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc.
are used merely as labels, and are not intended to impose numerical requirements on their objects.

[00144] Method examples described herein may be machine or computer-implemented at least in part. Some examples may include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods may include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code may include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code may be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.

[00145] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments may be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
A data storage system is described with an array of front fans and moving doors for airflow control. In one example, an enclosure is configured to mount in a rack. A horizontal plane board in the enclosure has memory connectors aligned in a row and external interfaces. Memory cards connect to a respective memory connector of the board. Removable fans at the front of the enclosure push air along the memory cards to the rear, and doors at the front of the enclosure each have an open position to accommodate a corresponding fan and a closed position to block airflow when the corresponding fan is removed.
What is claimed is:
1. A memory array chassis comprising:
an enclosure configured to mount in a rack, the enclosure having a front configured to receive airflow and a rear configured for cabling;
a plane board in the enclosure having a plurality of memory connectors aligned in a row;
a plurality of memory cards, each having an edge connector at one end of the memory card to connect to a respective memory connector of the board, each memory card extending parallel to each other memory card;
a plurality of removable fans at the front of the enclosure to push air along the memory cards to the rear; and
a plurality of doors at the front of the enclosure, each door having an open position to accommodate a corresponding fan and a closed position to block airflow when the corresponding fan is removed.
2. The chassis of Claim 1, wherein each door is connected to the enclosure by a hinge to allow the door to move between the open position and the closed position.
3. The chassis of Claim 2, wherein the hinge is attached to the enclosure over the corresponding fan and wherein the door pivots about the hinge to move downward when the fan is removed.
4. The chassis of any one or more of the above claims, wherein the door blocks air loss through the former position of the removed fan when the door is in the closed position.
5. The chassis of any one or more of the above claims, wherein the corresponding fan holds the door in the open position.
6. The chassis of any one or more of the above claims, wherein the corresponding fan has a top edge configured to push the door to the open position when the fan is pushed into the front of the enclosure.
7. The chassis of any one or more of the above claims, further comprising a biasing means to push the door into the closed position when the corresponding fan is removed from the front of the enclosure.
8. The chassis of any one or more of the above claims, wherein the door further comprises a tab to prevent the door from moving toward the front of the enclosure when the door is in the closed position.
9. The chassis of Claim 8, wherein the tab is on a bottom edge of the door to engage a slot in a fan controller board below the corresponding fan.
10. The chassis of Claim 8, wherein the tab is on a side edge of the door to engage a vertical post in the enclosure.
11. The chassis of any one or more of the above claims, further comprising:
a handle attached to a memory card behind the corresponding fan, the handle configured to pull the memory card out of the front of the enclosure through the position of the corresponding fan after the corresponding fan is removed; and
a biasing means to push the handle from a back position to a forward position when the fan is removed from the chassis.
12. The chassis of Claim 11, wherein the handle includes a status display to indicate a memory card fault.
13.
A memory array chassis comprising:
an enclosure configured to mount in a rack, the enclosure having a front configured to receive airflow and a rear configured for cabling;
a plane board in the enclosure having a plurality of memory connectors aligned in a row and a plurality of external interfaces;
a plurality of memory cards, each having an edge connector at one end of the memory card to connect to a respective memory connector of the board, each memory card extending parallel to each other memory card;
a plurality of interface connectors each to connect an edge connector to a respective board connector;
a plurality of removable fans at the front of the enclosure to push air along the memory cards to the rear;
a handle attached to a memory card behind the corresponding fan, the handle configured to pull the memory card out of the front of the enclosure through the position of the corresponding fan after the corresponding fan is removed; and
a biasing means to push the handle from a back position to a forward position when the fan is removed from the chassis.
14. The apparatus of Claim 13, wherein the handle includes a status display to indicate that the memory card is to be replaced.
15. The apparatus of Claim 14, wherein the status display includes a light pipe of the handle optically coupled to a status indicator on the memory card.
16. The apparatus of Claim 15, wherein the status indicator is an LED attached to the memory card and the light pipe extends from the LED to an end of the handle opposite the memory card.
17. The apparatus of any one or more of claims 13-16, wherein the biasing means comprises a spring and wherein the fan, when installed, holds the handle in the back position.
18. An all flash memory array chassis comprising:
an enclosure configured to mount in a rack, the enclosure having a front configured to receive airflow and a rear configured for cabling;
a horizontal plane board in the enclosure having a plurality of memory connectors to connect to a plurality of orthogonally mounted parallel memory cards and a plurality of external interfaces;
a plurality of interface connectors each to connect an edge connector to a respective board connector;
a plurality of removable fans at the front of the enclosure to push air along the memory cards to the rear;
a plurality of doors at the front of the enclosure, each door having an open position to accommodate a corresponding fan and a closed position to block airflow when the corresponding fan is removed;
a power supply proximate the rear of the enclosure to provide power to the memory cards through the memory card connectors and having a fan to pull air from the front of the enclosure between the memory cards and to push air out the rear of the enclosure;
a switch fabric card coupled to the external interfaces of the horizontal plane board to couple the memory cards to external devices; and
a cabling interface at the rear of the switch fabric coupled to the external connectors.
19. The chassis of Claim 18, wherein each door is connected to the enclosure by a hinge that is attached to the enclosure over the corresponding fan, and wherein the door pivots about the hinge downward to the closed position when the fan is removed and upward to the open position.
20.
The chassis of Claim 18 or 19, wherein the memory cards, the switch fabric, and the power supply are at a first level within the enclosure, the chassis further comprising a compute module coupled to the memory cards and having an external cabling interface, wherein the compute module is at a second level within the enclosure, and the horizontal plane board is between the first level and the second level.
DATA STORAGE SYSTEM WITH ARRAY OF FRONT FANS AND MOVING DOORS FOR AIRFLOW CONTROL

FIELD

The present description pertains to the field of data storage systems and, in particular, to a system with a moving door to block airflow when a fan is removed.

BACKGROUND

High capacity, high speed, and low power memory is in demand for many different high-powered computing systems, such as servers, entertainment distribution head ends for music and video distribution and broadcast, and supercomputers for scientific, prediction, and modeling systems. The leading approach to provide this memory is to mount a large number of spinning disk hard drives in a rack mounted chassis. The chassis has a backplane to connect to each hard drive and to connect the hard drives to other rack mounted chassis for computation or communication. The hard disk drives connect using SAS (Serial Attached SCSI (Small Computer System Interface)), SATA (Serial Advanced Technology Attachment), PCIe (Peripheral Component Interconnect express), or other storage interfaces.

Flash arrays are constructed at high volume in a 2.5" hard disk drive form factor and in an M.2 module form factor. These form factors have been specifically developed for notebook computers and provide an amount of storage, speed, power consumption, and cost that is best suited for notebook computers. An AFA (All Flash Array) could be built using these standard form factor SSDs (Solid State Drives). When off-the-shelf 2.5" SSDs are used for a large capacity solution and they are vertically mounted, there is a minimum rack-mount chassis size of 2U or 3U due to the size of the drives, the mounting connectors, and the need for airflow. M.2 SSDs have a lower capacity and so require many more devices and connectors.

In high speed memory arrays, the fans and the memory storage are most prone to failure. The fans are in constant use and the mechanical bearings and motors wear over time. The memory storage is in constant use and is stressed by high speed applications. Each memory cell has a limited number of read and write cycles in its lifetime, and the other components of a memory may also wear or fail from temperature and usage stress.

To service a flash array, the chassis slides forward out of the rack partly or fully and a lid is removed to provide access to the memory cards or SSDs. A special cable solution is provided to allow the chassis to move forward without being disconnected at the rear. In some cases front mounted 2.5" SSDs are used to allow the drives to be serviced without moving the chassis.
The front serviceable SSDs require middle-mounted fans to allow access to the SSDs from the front.BRIEF DESCRIPTION OF THE DRAWINGSEmbodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.Figure 1 is a top plan view of a memory system with a top cover removed according to an embodiment.Figure 2 is a top view diagram of a portion of a memory storage system enclosure according to an embodiment.Figure 3 is a front view diagram of a portion of the memory storage system enclosure of Figure 2 according to an embodiment.Figure 4 is a side view diagram of a single bay inside the memory storage enclosure in an operational status according to an embodiment.Figure 5 is a side view diagram of a single bay inside a memory storage enclosure with a fan removed according to an embodiment.Figure 6 is a front view of a bay door for use in the bay of Figure 5 according to an embodiment.Figure 7 is a process flow diagram of maintaining a memory system with front accessible fans according to an embodiment.Figure 8 is a side plan view of a memory card according to an embodiment.Figure 9 is a top plan view of the memory card of Figure 8.Figure 10 is a cross-sectional side view diagram of an alternative memory system according to an embodiment.Figure 11 is a block diagram of a computing device incorporating a memory system or capable of accessing a memory system according to an embodiment.DETAILED DESCRIPTIONCooling for a rack-mountable memory array is improved using fans in the front of the enclosure. Serviceability is improved using front serviceable fans and storage modules. The system and structure described herein have excellent airflow characteristics for an All Flash Array (AFA) memory storage array. The structure is described in the context of a 19" long solid state drive (SSD), but may be applied to other types and configurations of front serviceable storage modules. Keeping the fans in the front of the enclosure provides easy serviceability of the fans and the storage modules. In addition, the chassis need not be put on rails to slide out in a rack. This avoids the need for expensive rails and for complex cable management in the back of the enclosure.The described system and structure provide front serviceability of both memory drives and fans with excellent airflow characteristics. This is a highly dense, modular, redundant solution targeted at warm and cold storage markets. The system may include any of a variety of different features, including fans mounted in front of front serviceable SSDs, garage doors to keep inside pressurized air from leaking out the front when a fan is removed, a mechanism to pull the SSDs out of the chassis from the front, and an LED (Light Emitting Diode) indication of which SSD is to be removed. The doors are alternately referred to as "garage doors" or "doors." The term "garage" refers to the doors being hinged at the top and covering a bay, but these features are not necessary to obtain the benefits of the invention.As described, dense memory storage boxes have high airflow, heat dissipation, and storage density using a thin and long SSD form factor. This SSD will be referred to herein as a "Ruler Storage Module," "RSM," "ruler," "RSSD," or "memory card." Several RSMs may be used in a 19 inch wide rack-mount SSD system. However, other memory configurations may be used instead. 
The memory cards may be placed in a single-row, multiple-column arrangement, which helps guide the airflow and provides maximum surface area for the NAND media.Figure 1 is a top view diagram of an example memory system with a top cover removed. The doors described below are attached to the top cover and so are also removed. A high-level architecture is shown for a variation of a 19" SSD 108. An array of fans 110 at the front of the enclosure blows air across an array of SSDs 108. In this example there are 10 fans to blow air across 30 RSSD memory cards. The precise number of fans may be adapted to suit the dimensions of the enclosure and the particular type and configuration of fan and any other guides, shrouds, or other structures. The cards are placed vertically and aligned to be parallel to each other. The cards connect to a midplane 106 that has 30 connectors 183, one for each card. The connectors are at the rear end of the card. The connector may take a variety of different forms. The midplane is connected to a system module. The system module PCB is not visible in this view because it is covered by other components.The midplane is coupled through a power connector 136 on the left and right sides of the midplane (top and bottom as shown) to left and right side power supplies 112. These power supplies may be complementary or redundant, and the midplane may be wired so that both power supplies are coupled to each RSSD.The midplane base board is also coupled through an array of data connectors 130 to two switching modules 134. The left module serves the 16 RSSDs on the left and the right module serves the 16 RSSDs on the right. The RSSDs may also be cross-coupled so that each RSSD is coupled to both modules or connected in any of a variety of different patterns that include various types of redundancy.The switching modules may contain any of a variety of different components, depending on the implementation. In this example, there is a PCIe switch 126 for each module and a network interface card (NIC) 128 for each module. The NICs allow for an Ethernet connection to external components. The Ethernet connection is converted to PCIe lanes for the RSSDs. Each RSSD may use one or more lanes of a PCIe interface depending on the speed and the amount of data for the particular implementation. The switching modules may also include system management sensors and controllers to regulate temperature, monitor wear and failures, and report status. While switching modules are shown, other types of modules may be used, including server computers that use the RSSDs as a memory resource. There are also one or more fan controller boards under the fans and coupled to the system management bus to control rotational speed and provide status. The system management bus may also send status for the fans and for the RSSDs to a display or to an alert on a corresponding fan.Figure 2 is a top view diagram of a portion of a memory storage system enclosure. The chassis 202 may be seen as having three zones. There is a front fan zone 204. This zone is accessible from the front of the chassis when the chassis is installed into a storage rack. Even when other enclosures are stacked above and below the chassis 202, the front is still accessible. In this example, eight fans 212 are shown. These fans are configured to draw air into the chassis from the front and push the air across and between the RSSDs. The air is then exhausted out the rear of the chassis. 
There may be additional fans in the middle or the rear of the chassis to help move the air from front to rear.There is also a corresponding set of eight garage doors 210. The doors are above the fans when the fans are in place. Each door has a front hinge 209 to allow it to rotate up as shown to make room for the fan or to rotate down as shown in Figure 5 to close off the corresponding area of the fan zone when a fan has been removed. While eight separate fans and doors are shown, there may be more or fewer, depending on the particular dimensions of the chassis. With a standard 19" width, there may be many more fans.Storage modules 216 are behind the fans in a storage zone 206. The storage modules may take any of a variety of different forms. In some embodiments, the storage modules are RSSDs as shown, for example, in Figure 8, carrying memory chips, a memory controller, and related components. Each RSSD has a rear edge connector to connect to the next zone for data and control and a front handle to allow it to be removed and replaced from the front of the chassis 202. As shown, with a corresponding fan removed, the RSSD may be accessed from the front of the chassis.Behind the storage zone is a connection zone 208. This zone may include a middle row of fans, memory interfaces, processors, external interfaces to other chassis, power supplies, and rear fans. The particular configuration of the connection zone may be adapted to suit many different systems and uses.Figure 3 is a front view diagram of a portion of the memory storage system enclosure. The chassis has an array of fans 212 across the front of the chassis. For illustration purposes, each fan is shown with a different status. These different states correspond to various stages of an RSSD being removed. The first status, shown in the leftmost fan bay 251, corresponds to normal operation. A fan 212 is installed and the garage door (not visible in this view) is open and rests above the fan as shown in Figure 4. There are no status indicators because the system is operating normally. In some embodiments, there may be status indicators for normal operation for the fan, the RSSD, and any other components of the memory system.A second stage is shown in the second bay 252, where an LED 237 associated with a failing RSSD is illuminated to indicate which fan must be removed to access the RSSD. This LED may be an LED on the failing RSSD, attached to the RSSD, or it may be an LED on the fan housing. In some embodiments, a management system monitors the status of each RSSD and then, upon detecting a failure or any other error status, sends a signal to the corresponding fan to illuminate an LED. In other embodiments, status indicators on the RSSDs are visible through the fan or fan housing.A third stage is shown in the third bay 253, where the corresponding fan has been removed. As a result, the corresponding door 210 has dropped down from a horizontal to a vertical position as shown in Figure 5. The door falls into a closed vertical position with the fan removed to block air loss through the front of the chassis. The remaining fans pull air into the chassis to drive the air out the back after flowing across the memory and other components. Accordingly, the interior of the chassis is at a higher air pressure than the exterior. Without the door, some of the cooling air would escape out the front of the enclosure instead of flowing across the RSSDs to the back. 
This would reduce the effectiveness of the cooling for all of the other RSSDs.A fourth stage is shown in the fourth bay 254, where the fan is removed and the door is held open by hand or by some prop or latch to allow any of four RSSDs 220, 222, 224, 226 to be accessed. Each RSSD has a visible status light 236. The status light may be provided in any of a variety of different ways, including an LED soldered to the memory card or a combined light pipe and handle as described below. In this example, the LED color on the handle indicates the RSSD status. Green may be used to mean proper operation. Yellow may be used to indicate an error status, and red may be used to indicate a failed or failing status. Any other color code may be used instead. Alternatively, blinking or flashing may be used, or different combinations of multiple lights or text or symbols may be used, depending on the implementation.As shown, upon pushing open the corresponding door for the fourth bay 254, the failed RSSD 224 may easily and quickly be identified. The operator may then grab the RSSD by a handle, by hand, or using a tool and remove the defective RSSD. The RSSD may be repaired and reinstalled or replaced with another ready drive. In some embodiments, the light pipe may be used as a handle to remove the RSSD from the chassis.The fifth stage follows after replacing the defective memory and is similar to the first stage, in which the portion of the memory system is operating properly. In the fifth bay 255, the fan has been re-installed or replaced after one or more of the RSSDs behind the fan have been serviced. The door is pushed up to a horizontal position and the fan is inserted below the door to hold the door up. With all of the RSSDs behind the fan in the fifth bay in good operating condition, there is no status light indication. In some embodiments, there may be a green status light to indicate a correct operational status.Figure 4 is a side view diagram of a single bay inside a chassis such as that of Figure 3. There is a fan 212 at the front of the chassis, a memory card 216, such as an RSSD, immediately behind the fan, and other components such as memory controllers, input/output interfaces, processors, power supplies, and other components (not shown) behind the memory card. The fan 212 holds a door 210 in a horizontal position above the fan. The hinge 209 allows the door to swing up and down so that its far end defines an arc as indicated by a dotted line 211.When a fan 212 is inserted into the enclosure, the top back edge, or right edge as shown in Figure 4, pushes against the door 210. As the fan is pushed into the enclosure, the door is pushed back and up by the surface of the fan to reach the roughly horizontal position as shown. The fan is then snapped or fastened into place below the door so that the door stays open.Figure 5 is a side view diagram of the same bay with the fan removed. In this example, the memory card remains in place, but the door 210 has pivoted about its hinge 209, which is near the front of the bay, to allow the bottom of the door to swing down and forward in the arc 211 from a horizontal, open, up position to a vertical, closed, down position as shown. As shown, the hinge is a piano hinge, but one or more other types of hinges may be used as desired. As mentioned, the door may operate on gravity, simply falling down into the closed position. 
The higher air pressure inside the chassis, when the other fans are operating, applies a forward pressure on the door that will bias the door to the closed position.In addition to biasing the door to the closed position by gravity and air pressure, another bias source may be provided. As an example, the hinge 209 may include an integrated coil or leaf spring. Alternatively, a spring may be mounted between the door and the enclosure in another location to urge the door into the closed position. In some embodiments, the hinge is on the side of the door so that the door moves back and to the side to open. For a side hinge, gravity will not move the door to either the open or closed position, so a spring or other bias source may be used to push the door closed when a fan is removed.With the hinge near the front of the cabinet and the door swinging down and forward to close, the door is pushed back and up to open. With the door directly over the fan, a fan may be pushed against the door to open the door and pulled away from under the door to allow the door to close. The door may also be opened by hand or by any other means. The fan may have a back upper edge 260 that is configured to push against the door to push it up as the fan is pushed into the chassis. The fan may be attached with bendable tabs, with separate fasteners, or in any other way.Figure 6 is a front view of the door 210 without the chassis. This view shows a tab 213 at the bottom of the door. The tab engages a slot 262, a ledge, or a rail (see Figure 5) at the front of the chassis. The slot prevents the tab from moving forward past the slot. The slot is placed to meet the tab when the door is in the vertical position. The slot may be formed in a controller board for the fan, in the metal housing, or in any other suitable structure. The slot and the tab stop further forward motion caused by the inner air pressure or any other force, but they do not stop backward motion toward the RSSDs to open the door. The tab on the bottom of the door engages the slot in the fan board to stop the door from swinging outward; the door can still swing inward with a push from a hand, a fan, or any other object.While a single tab is shown, there may be multiple tabs to engage multiple slots. The number and position may be determined based on the internal air pressure and the material of the door. Alternatively, the tab may be made larger to distribute the force across more of the bottom of the door. A single tab may extend across all or most of the width of the door, depending on the materials used. If the slots are in a fan power controller board, then the slots may be minimized to reduce the impact on signal routing space in the fan controller board. For a slot in the metal chassis, a larger slot may be easier to provide. The tab on the bottom of the door allows the sides of each bay to be open. No vertical posts are required between each fan bay. However, in some embodiments, the tab is on one or both sides of the door and engages a vertical post of the enclosure to prevent the door from swinging beyond the vertical position of Figure 5. In other embodiments, the hinge 209 is configured with stops to prevent rotation of the door past the vertical position of Figure 5.Figure 4 also shows a light pipe 232 fastened to the memory card 216. 
The light pipe is coupled to a spring 230 to push it forward toward the front of the chassis away from the memory card. It also has a bend to create a handle 264 opposite the memory card. The handle carries one end of the light pipe so that the lights can be seen from the front of the enclosure behind the corresponding fan. The other end at the memory card is coupled to one or more LEDs to receive light and pass it to the front where it is visible to an operator.The garage door 210, the fan 212, and the light pipe 232 on the RSSD 216 are all functionally connected. The door is pushed up by the fan when the fan is inserted into its position in the chassis, and the fan holds the door up. When the fan is removed from the chassis, the door falls to its closed position until the tab hits the slot. In this position the door blocks air loss through the front of the chassis because the other fans that are in place in the chassis are still operational.The extended light pipe 232 shows the state of the RSSD. In Figure 4, the light pipe is pushed against the spring 230 toward the RSSD. With the fan removed, the light pipe extends forward toward the closed door as shown in Figure 5. The LED color or some other light feature indicates the status of the RSSD. In addition, the light pipe may be used as a handle to remove the RSSD from the chassis. The spring pushes the light pipe forward so that the status LEDs are easier to read and so that the handle 264 is easier to reach by an operator.The light pipes are narrow enough not to restrict airflow, but strong enough to allow the RSSD to be removed from the chassis by pulling on the handle. In some embodiments, the light pipes are aligned with the RSSD's PCB and mounted on a heatsink assembly to minimize impact to the RSSD and to airflow.Figure 7 is a process flow diagram of repairing a memory array as described herein. The process starts with normal operation of the memory system. At 702 a memory card fault is detected by a memory controller and at 704 the fault is sent out on a system management bus of the memory system. At 706 the fault is sent to the memory card to activate a warning on a status indicator of the memory card. This may be an LED on a light pipe at the front of the affected memory card or it may be some other visible status indicator. The fault may also be communicated to a control console, a remote management console, or some other management device. An operator may observe the fault by observing the indicator on the memory card or may first be alerted by a management console to inspect the affected memory system.An operator then acts to service the faulty memory card. At 708 the corresponding fan is removed and the corresponding door is released to close over the opening from which the fan was removed. At 710 the memory card handle extends forward from the affected memory card as the fan is removed. If there are multiple memory cards behind the fan, then all of the corresponding handles extend outward toward the front of the enclosure. The operator may then observe the status indicator to select the affected memory card. Alternatively, with no status indicator, the affected card may be indicated by position or by another indicator on the management console. The operator pushes the door open and grabs the handle to pull out the memory card. After the card is withdrawn, the system relies on redundant stored data to continue to operate using redundant memory resources at 712. With the fan and the affected memory card removed, the door is closed at 714. 
This maintains proper air flow for the other memory cards that remain in the enclosure.This may end the service of the system. In other cases, the memory card will be replaced. The operator then uses the same memory card after repairs or a different functional memory card, pushes the door open, and slides the new memory card into the appropriate slot. At 716 the door is closed after the replacement card is installed. At 718 the card is started, checked, and then integrated into the redundant memory array. To finish the replacement, the operator pushes the bay door open to allow the fan to be installed at 720. As described, the fan itself may be used to push the door open and make room for the fan. After the fan is installed, at 722 it holds the door in the open position and pushes all of the handles back toward the memory cards. The system has returned to normal operation. The fan may have electrical and mechanical connectors to the enclosure or to a special fan controller board not shown here. The operator will also restore these connections.Figure 8 is a side plan view diagram of an RSSD or memory card 108 suitable for use with the memory system as described herein. The card has a printed circuit board (PCB) structure 150 with a connector 181 to the midplane at one end. Multiple memory chips 154, in this case eighteen chips, are mounted to one side of the PCB structure. There may be more or fewer depending on the application. Each memory chip generates heat with use and consumes power with read and write operations. The number of chips may be determined based on power, cost, heat, and capacity budgets. In some embodiments 3D NAND flash memory chips are used. However, other types of solid state memory may be used, including PCM (Phase Change Memory), STTM (Spin Transfer Torque Memory), magnetic, polymer, and other types of memory.The memory card further includes memory controllers 156 to control operations, manage cells and mapping, and read and write between the connector 152 and the memory chips 154. Fan-out hubs 158 may be used to connect the memory controllers to the cells of each memory chip. Buffers 160 may also be used to support write, read, wear leveling, and move operations. A handle 159 that may include a light pipe is attached to a side of the PCB 150 for use in pulling the card out of the rear connector.Figure 9 is a top view of the memory card of Figure 8 showing the same components. The card may be configured to support more memory chips on the other side, or only one side may be used, depending on the budget for power, heat, and capacity. The memory card may have heat sinks and exposed chip package surfaces as shown, or may be covered with one or more larger heat sinks or heat spreaders as well as protective covers. The handle 159 is coupled to a sliding mechanism 157 that pushes the handle forward against a fan. This allows the handle to extend out to be more easily reached when a fan is removed.The particular configuration and arrangement of the chips and the handle may be modified to suit the requirements of different chips and to match up with wiring routing layers within the PCB. The buffers may be a part of the memory controllers or in addition to those in the memory controllers. There may be additional components (not shown) for system status and management. Sensors may be mounted to the RSSD to report conditions to the memory controller, through the connector to an external controller, or both. 
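The fault-signaling path just described (blocks 702-706 of Figure 7) and the per-card status reporting lend themselves to a brief illustration. The following sketch is not part of the patent; it is a minimal, hypothetical model of a management-bus monitor, and every name in it (CardStatus, MemoryCard, poll_and_signal, and the two callback parameters) is invented for illustration. It shows only the mapping from a card's condition to the handle light pipe color and the bay-level LED described above.

```python
# Illustrative sketch only: a hypothetical management-bus monitor. All names
# are invented for illustration and are not taken from the patent.
from dataclasses import dataclass
from enum import Enum


class CardStatus(Enum):
    OK = "green"        # normal operation
    WARNING = "yellow"  # error status, service soon
    FAILED = "red"      # failed or failing, replace now


@dataclass
class MemoryCard:
    slot: int                            # which card position on the midplane
    bay: int                             # which fan bay sits in front of this card
    status: CardStatus = CardStatus.OK


def poll_and_signal(cards, set_handle_led, set_bay_led):
    """Walk every card reported on the management bus, drive its handle light
    pipe color, and illuminate the bay indicator so the operator knows which
    fan to remove."""
    for card in cards:
        # Drive the LED at the card end of the light pipe; it becomes visible
        # at the handle once the fan is removed and the handle springs forward.
        set_handle_led(card.slot, card.status.value)
        if card.status is not CardStatus.OK:
            # Light the bay-level indicator (e.g., LED 237 of Figure 3) so the
            # correct fan can be identified from the front of the chassis.
            set_bay_led(card.bay, on=True)
```

In a real chassis the two callbacks would correspond to writes over the system management bus to the fan controller boards; the sketch captures only the fault-to-indicator mapping, not any particular bus protocol.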
The RSSD allows a large amount of NAND flash memory to be packed into a small design. In this example, with 1TB of memory per NAND chip 154, 36TB of memory may be carried on a single memory card. This amount may be reduced for lower cost, power, and heat and still use the same form factor. The Ruler Storage Module is shown with an end connector. This allows modules to be replaced without removing a top cover of the chassis, even for a top serviceable enclosure. The memory modules may be removed and replaced simply by moving fans or access panels at the front. The handle is then accessible behind the fan or door to grab the module and remove it. Typical equipment racks allow the enclosure to slide forward to allow access without removing the enclosure from the rack, but this is not required to service only the memory.The Ruler Storage Module provides optimized airflow and a maximal surface area for storage media. This new storage module allows for a 1U high, extremely dense SSD solution. This new storage module form factor does not hinder airflow in the system and yet is dense enough to provide a great advantage over existing form factors that were developed for other purposes, such as 2.5" notebook drives, AIC (Advanced Industrial Computer) memory, M.2 cards, and gum-stick memory (typical USB stick style configurations). Some of these form factors cannot be used in a 1U height enclosure in any arrangement.The RSSDs provide quick and secure connections and may be configured to be hot-swappable in some systems. Using modular compute and connectivity blocks for the 19" SSD system described herein, one can easily, without system shut-down, swap out a compute module and insert a new compute module with varying compute horsepower, depending upon the storage solution requirements, within the 19" SSD enclosure. For example, a low power compute module, such as an Intel® Atom® processor-based system, may be used for storage targets that need mid-range compute capabilities, such as simple block mode storage, NVMe over Fabrics, iSCSI/iSER, Fibre Channel, NAS (Network Attached Storage), NFS (Network File System), SMB3 (Server Message Block), object stores, distributed file systems, etc. A higher performance processor on the compute module may be used for Ceph nodes, OpenStack Object, custom storage services, and key/value stores. For very high performance, the computing module may be in a different enclosure on the same or another rack and connected using PCIe switches or another memory interface.In addition to providing interchangeable RSSDs, the same chassis and enclosure may allow for the system modules at the middle and rear of the enclosure to be interchangeable. This may allow for different connectivity modules to be used. The system may be upgraded to a different storage protocol (e.g., NVMe over Fabric RDMA (Remote Direct Memory Access), iSCSI (internet SCSI), NVMe, PCIe, or even Ethernet) without changing any of the RSSDs. This modularity also enables two modules to be used for redundancy and fail-over in some applications (e.g., traditional enterprise storage) and a single module for other applications (e.g., cloud computing).Figure 10 is a diagram of an alternative chassis enclosure for a 2U (3.5" or 89mm) rack height. In this enclosure, the same memory card configuration is used for more than 1 petabyte of storage. 
The additional height allows for additional computing and switching components to be included with short, fast connections to the memory. In this example, there is an array of memory cards 408 with handles 459 proximate the front of the enclosure coupled through connectors 481, 482, 483 to a midplane PCB 406 near the center of the enclosure. The midplane is coupled through connectors 414 to a system module PCB 404 at the rear of the enclosure. There is a front fan zone with an array of fans 410 to push air across the memory cards 408 and a rear power supply 412 fastened to or adjacent to the system module PCB 404 proximate the rear of the enclosure to pull air out of the chassis.In contrast to the 1U configuration, the system module may be on either the lower or upper side of the enclosure. The RSSDs have the same configuration and therefore use only one half of the 2U chassis. In this example, the RSSDs are in the lower half of the enclosure but could alternatively be in the upper half. The system module is in the upper half opposite the RSSDs. Due to the PCB structure of the midplane and the system module, the PCBs are in the center of the enclosure and horizontal, while the components on the PCBs extend vertically from the PCBs into the upper half of the enclosure. An additional system module (not shown) may also be added to the lower half of the enclosure at the rear of the enclosure.The 2U configuration also allows an additional system module PCB 216 to be added at the front of the enclosure above the RSSDs. As mentioned, the RSSDs may be in the upper half, in which case the additional system module may be in the lower half instead. The additional system module may be used to provide computing power or additional switch fabric. As an example, the rear system module may be used for the interface, switch fabric, and power supply, while the front system module is used as a computing zone with microprocessors and memory for low-power or high-power computing. Alternatively, the front system module or an additional rear module may be used for PCIe adapter cards for graphics rendering, audio or video processing, or other specialized tasks.Figure 11 is a block diagram of a computing device 100 in accordance with one implementation. The computing device 100 houses a system board 2. The board 2 may include a number of components, including but not limited to a processor 4 and at least one communication package 6. The communication package is coupled to one or more antennas 16. The processor 4 is physically and electrically coupled to the board 2.Depending on its applications, computing device 100 may include other components that may or may not be physically and electrically coupled to the board 2. These other components include, but are not limited to, volatile memory (e.g., DRAM) 8, non-volatile memory (e.g., ROM) 9, flash memory (not shown), a graphics processor 12, a digital signal processor (not shown), a crypto processor (not shown), a chipset 14, an antenna 16, a display 18 such as a touchscreen display, a touchscreen controller 20, a battery 22, an audio codec (not shown), a video codec (not shown), a power amplifier 24, a global positioning system (GPS) device 26, a compass 28, an accelerometer (not shown), a gyroscope (not shown), a speaker 30, a camera 32, a microphone array 34, and a mass storage device (such as a hard disk drive) 10, a compact disk (CD) drive (not shown), a digital versatile disk (DVD) drive (not shown), and so forth. 
These components may be connected to the system board 2, mounted to the system board, or combined with any of the other components.The communication package 6 enables wireless and/or wired communications for the transfer of data to and from the computing device 100. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication package 6 may implement any of a number of wireless or wired standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, Ethernet, derivatives thereof, as well as any other wireless and wired protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 100 may include a plurality of communication packages 6. For instance, a first communication package 6 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication package 6 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.The computing system may be configured to be used as the system module. The computing system also reflects the entire rack-mount memory system where the mass memory is formed from multiple memory cards, as described. The memory system may have multiple iterations of the computing system within a single enclosure for each system module and also for the overall system.In various implementations, the computing device 100 may be an entertainment front end unit or server, a music or video editing station or back end, a cloud services system, a database, or any other type of high performance or high density storage or computing system.Embodiments may include one or more memory chips, controllers, CPUs (Central Processing Unit), microchips or integrated circuits interconnected using a motherboard, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA).References to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiment(s) so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments. In the following description and claims, the term "coupled," along with its derivatives, may be used. "Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them.As used in the claims, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common element merely indicates that different instances of like elements are being referred to, and is not intended to imply that the elements so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.The drawings and the foregoing description give examples of embodiments. 
Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.The following examples pertain to further embodiments. The various features of the different embodiments may be variously combined with some features included and others excluded to suit a variety of different applications. Some embodiments pertain to an apparatus that includes, in one example, a memory array chassis that includes an enclosure configured to mount in a rack, the enclosure having a front configured to receive airflow and a rear configured for cabling, a plane board in the enclosure having a plurality of memory connectors aligned in a row, a plurality of memory cards, each having an edge connector at one end of the memory card to connect to a respective memory connector of the board, each memory card extending parallel to each other memory card, a plurality of removable fans at the front of the enclosure to push air along the memory cards to the rear, and a plurality of doors at the front of the enclosure, each door having an open position to accommodate a corresponding fan and a closed position to block airflow when the corresponding fan is removed.In further embodiments each door is connected to the enclosure by a hinge to allow the door to move between the open position and the closed position. 
In further embodiments the hinge is attached to the enclosure over the corresponding fan and the door pivots about the hinge to move downward when the fan is removed.In further embodiments the door blocks air loss through the former position of the removed fan when the door is in the closed position.In further embodiments the corresponding fan holds the door in the open position.In further embodiments the corresponding fan has a top edge configured to push the door to the open position when the fan is pushed into the front of the enclosure.Further embodiments include a biasing means to push the door into the closed position when the corresponding fan is removed from the front of the enclosure.In further embodiments the door further comprises a tab to prevent the door from moving toward the front of the enclosure when the door is in the closed position.In further embodiments the tab is on a bottom edge of the door to engage a slot in a fan controller board below the corresponding fan.In further embodiments the tab is on a side edge of the door to engage a vertical post in the enclosure.Further embodiments include a handle attached to a memory card behind the corresponding fan, the handle configured to pull the memory card out of the front of the enclosure through the position of the corresponding fan after the corresponding fan is removed, and a biasing means to push the handle from a back position to a forward position when the fan is removed from the chassis.In further embodiments the handle includes a status display to indicate a memory card fault.Some embodiments pertain to a memory array chassis that includes an enclosure configured to mount in a rack, the enclosure having a front configured to receive airflow and a rear configured for cabling, a plane board in the enclosure having a plurality of memory connectors aligned in a row and a plurality of external interfaces, a plurality of memory cards, each having an edge connector at one end of the memory card to connect to a respective memory connector of the board, each memory card extending parallel to each other memory card, a plurality of interface connectors each to connect an edge connector to a respective board connector, a plurality of removable fans at the front of the enclosure to push air along the memory cards to the rear, a handle attached to a memory card behind the corresponding fan, the handle configured to pull the memory card out of the front of the enclosure through the position of the corresponding fan after the corresponding fan is removed, and a biasing means to push the handle from a back position to a forward position when the fan is removed from the chassis. 
In further embodiments the handle includes a status display to indicate that the memory card is to be replaced.In further embodiments the status display includes a light pipe of the handle optically coupled to a status indicator on the memory card.In further embodiments the status indicator is an LED attached to the memory card and the light pipe extends from the LED to an end of the handle opposite the memory card.In further embodiments the biasing means comprises a spring and the fan, when installed, holds the handle in the back position.Some embodiments pertain to an all flash memory array chassis that includes an enclosure configured to mount in a rack, the enclosure having a front configured to receive airflow and a rear configured for cabling, a horizontal plane board in the enclosure having a plurality of memory connectors to connect to a plurality of orthogonally mounted parallel memory cards and a plurality of external interfaces, a plurality of interface connectors each to connect an edge connector to a respective board connector, a plurality of removable fans at the front of the enclosure to push air along the memory cards to the rear, a plurality of doors at the front of the enclosure, each door having an open position to accommodate a corresponding fan and a closed position to block airflow when the corresponding fan is removed, a power supply proximate the rear of the enclosure to provide power to the memory cards through the memory card connectors and having a fan to pull air from the front of the enclosure between the memory cards and to push air out the rear of the enclosure, a switch fabric card coupled to the external interfaces of the horizontal plane board to couple the memory cards to external devices, and a cabling interface at the rear of the switch fabric coupled to the external connectors.In further embodiments each door is connected to the enclosure by a hinge that is attached to the enclosure over the corresponding fan and the door pivots about the hinge to move downward to the closed position when the fan is removed and pivots upward to the open position.In further embodiments the memory cards, the switch fabric, and the power supply are at a first level within the enclosure, the apparatus further comprising a compute module coupled to the memory cards and having an external cabling interface, wherein the compute module is at a second level within the enclosure, and the horizontal plane board is between the first level and the second level.
PROBLEM TO BE SOLVED: To provide a system to achieve an application usage continuum across platforms.SOLUTION: A system 100 includes a first client device 120A, a second client device 120B, and a server 102 that generates a private domain. The first client device 120A generates state information and data associated with execution of a first instance of an application running on the first client device 120A, encrypts the state information, and causes it to be sent to the second client device 120B to enable the second client device 120B to continue operation of the application using the restored state information. The server 102 generates an application encryption key Ka for each of the first instance and a second instance of the application and a domain encryption key P for the private domain, and stores the domain encryption key P on each of the first and second client devices.SELECTED DRAWING: Figure 1
1. A system for providing application usage continuity between client devices, comprising a first client device configured to execute a first instance of an application and to present a list of a plurality of potential target devices, the first client device further configured to: receive a transfer instruction to transfer operation of the first instance of the application running on the first client device to a second instance of the application executed by a second client device from among the list of the plurality of target devices; generate data and state information associated with execution of the first instance of the application on the first client device; encrypt or sign a state blob using an application key Ka to form an application-encrypted state blob (state)Ka that includes the state information of the first instance of the application protected by the application key Ka; encrypt or sign (state)Ka using a secure processor of the first client device and a domain encryption key P to form a domain-encrypted state blob ((state)Ka)P; and send ((state)Ka)P to the second client device to enable the second instance of the application on the second client device to continue operation of the application on the second client device using the state information from the first client device; wherein the state information includes information indicating a relative position in an ordered set of data in the first instance of the application executing on the first client device.2. The system of claim 1, further comprising a server configured to generate a private domain comprising the identity of each of the first and second client devices and the identity of the application.3. The system of claim 2, wherein the server is configured to generate an application key Ka for each of the first and second instances of the application and a domain encryption key P for the private domain, and to store the domain encryption key P on each of the first and second client devices.4. The system of any one of claims 1 to 3, wherein the second client device is configured to decrypt or authenticate (state)Ka and to restore the state information on the second client device.5. The system of any one of claims 1 to 4, wherein the second client device has a secure processor configured to decrypt ((state)Ka)P to form (state)Ka.6. The system of any one of claims 1 to 5, wherein the state information includes data corresponding to a timing counter, a video frame, or a page number.7. The system of any one of claims 1 to 6, wherein the transfer instruction for the operation of the first instance of the application comprises a user input.8. The system of claim 7, wherein the user input is selected from the group consisting of gesture recognition, speech recognition, voice commands, and proximity sensing techniques.9. The system of any one of claims 1 to 8, wherein the first client device is further configured to determine the list of the plurality of potential target devices by identifying a plurality of devices that are within communication range of the first client device.10. The system of claim 9, wherein the first client device is further configured to arrange the list of the plurality of potential target devices based on the physical position of each target device relative to the first client device.11. The system of any one of claims 1 to 10, wherein the first client device is further configured to present the list of the plurality of potential target devices graphically using one or more icons.12. The system of claim 11, wherein the one or more icons represent one or more potential target devices selected from the group consisting of cellular phones, smart phones, personal media players, personal digital assistants, notebook computers, netbooks, and handheld electronic devices.13. A method of providing application usage continuity between client devices, comprising: presenting a list of a plurality of potential target devices at a first client device executing a first instance of an application; receiving a transfer instruction to transfer operation of the first instance of the application running on the first client device to a second instance of the application executed by a second client device from among the list of the plurality of target devices; in response to the transfer instruction for the operation of the first instance of the application, generating, by the first client device, data and state information associated with execution of the first instance of the application on the first client device; encrypting or signing, by the first client device, a state blob using an application key Ka to form an application-encrypted state blob (state)Ka including the state information of the first instance of the application protected by the application key Ka; encrypting or signing (state)Ka using the first client device and a domain encryption key P to form a domain-encrypted state blob ((state)Ka)P; sending, by the first client device, ((state)Ka)P and the data to the second client device to enable the second instance of the application on the second client device; and continuing operation of the application by the second client device using the state information from the first client device; wherein the state information includes information indicating a relative position in an ordered set of data in the first instance of the application executing on the first client device.14. The method of claim 13, further comprising generating a private domain using a server, the private domain including the identity of each of the first and second client devices and the identity of the application.15. The method of claim 14, further comprising: generating an application key Ka for each of the first and second instances of the application; generating a domain encryption key P for the private domain; and storing the domain encryption key P on each of the first and second client devices.16. The method of any one of claims 13 to 15, further comprising decrypting ((state)Ka)P using the second client device to form (state)Ka, and decrypting (state)Ka using the second client device to restore the state information for the second instance of the application on the second client device.17. The method of any one of claims 13 to 16, wherein the state information comprises data corresponding to a timing counter, a video frame, or a page number.18. The method of any one of claims 13 to 17, wherein the transfer instruction for the operation of the first instance of the application comprises a user input.19. The method of claim 18, wherein the user input is selected from the group consisting of gesture recognition, speech recognition, voice commands, and proximity sensing techniques.20. The method of any one of claims 13 to 19, further comprising determining, by the first client device, the list of the plurality of potential target devices by identifying a plurality of devices that are within communication range of the first client device.21. The method of claim 20, further comprising arranging, by the first client device, the list of the plurality of potential target devices based on the physical position of each target device relative to the first client device.22. The method of any one of claims 13 to 21, further comprising presenting, by the first client device, the list of the plurality of potential target devices graphically using one or more icons.23. The method of claim 22, wherein the one or more icons represent one or more potential target devices selected from the group consisting of cellular phones, smart phones, personal media players, personal digital assistants, notebook computers, netbooks, and handheld electronic devices.24. A program that causes a computer to execute procedures comprising: generating, at a first client device that executes a first instance of an application and is configured to present a list of a plurality of potential target devices, data and state information associated with execution of the first instance of the application running on the first client device in response to a transfer instruction for the operation of the first instance of the application; encrypting or signing, by the first client device, a state blob using an application key Ka to form an application-encrypted state blob (state)Ka including the state information of the first instance of the application protected by the application key Ka; encrypting or signing (state)Ka using the first client device and a domain encryption key P to form a domain-encrypted state blob ((state)Ka)P; and sending ((state)Ka)P from the first client device to a second client device to enable a second instance of the application on the second client device and to continue operation of the application on the second client device using the state information from the first client device; wherein the state information includes information indicating a relative position in an ordered set of data in the first instance of the application executing on the first client device.25. The program of claim 24, further causing the computer to execute a procedure of receiving, by the first client device, a transfer instruction to transfer the operation of the first instance of the application running on the first client device to the second instance of the application executed by the second client device.26. The program of claim 24 or 25, further causing the computer to execute a procedure of determining, by the first client device, the list of the plurality of potential target devices by identifying a plurality of devices that are within communication range of the first client device.27. The program of claim 26, further causing the computer to execute a procedure of arranging, by the first client device, the list of the plurality of potential target devices based on the physical position of each target device relative to the first client device.28. The program of any one of claims 24 to 27, further causing the computer to execute a procedure of presenting, by the first client device, the list of the plurality of potential target devices graphically using one or more icons.29. The program of claim 28, wherein the one or more icons represent one or more potential target devices selected from the group consisting of cellular phones, smart phones, personal media players, personal digital assistants, notebook computers, netbooks, and handheld electronic devices.30. A computer-readable recording medium storing the program of any one of claims 24 to 29.
Application usage continuity between platformsThe present disclosure relates to application usage continuity and, more particularly, to application usage continuity between platforms.Personal computing devices include desktop, notebook, netbook, tablet, and/or smart devices. In conventional methods for sharing information, such as documents and/or media content, between one or more devices, the user must transfer the entire file from the first device to the destination device using a temporary transfer means (e.g., a flash storage device, e-mail transfer, and/or IM file transfer). To access the transferred file on the destination device, the user must open the file in an appropriate application that exists on the destination device and manually restore the state of the transferred information from the stored data using bookmarks or contextual information about the content of the file.Features and advantages of the claimed subject matter will become apparent from the following detailed description of embodiments, which should be considered with reference to the accompanying drawings.Figure 1 shows an embodiment of an exemplary system according to the present disclosure.Figure 2 is a flowchart illustrating operations for setting up a private domain according to an exemplary embodiment of the present disclosure.Figure 3 is a flowchart illustrating application usage continuity operations between platforms according to an embodiment of the present disclosure.Figure 4 shows an embodiment of another exemplary system according to the present disclosure.Figure 5 shows an embodiment of yet another exemplary system according to the present disclosure.Figure 6 is a flowchart showing operations according to an exemplary embodiment of the present disclosure.Although the following detailed description proceeds with reference to exemplary embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.Generally, the present disclosure provides a system (and method) for application usage continuity between client platforms. An example of the system includes a first client device running a first instance of an application and a second client device running a second instance of the application. The user of the first device decides to transfer the operation of the application running on the first client device to the second client device. In response to an instruction from the user to transfer the application operation, the first client device generates state information about the operating parameters of the running application and transfers the state information to the second client device. The second client device starts the second instance of the application and advances the application from the point where the user left off on the first device. Advantageously, this allows the user to transfer a "live" application in real time (or near real time) between two different devices without loss of session data (e.g., a video stream, audio stream, etc.).In some embodiments, the user creates one or more private domains and registers devices and applications in each domain. An encryption mechanism may be used so that only trusted devices in the domain may be involved in the "live" transfer of application state information. 
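The state information contemplated above (a timing counter, a video frame, or a relative position in an ordered set of data) can be pictured as a small serializable record. The following sketch is illustrative only and is not part of the patent; the field names and the JSON serialization are assumptions chosen for clarity.

```python
# Illustrative sketch only: one plausible shape for the application state
# information. Field names are invented and do not come from the patent.
import json
from dataclasses import dataclass, asdict


@dataclass
class AppState:
    app_id: str            # identity of the registered application 118
    timing_counter: float  # e.g., playback position in seconds
    item_index: int        # relative position in the ordered set of data
                           # (video frame number, page number, etc.)

    def to_blob(self) -> bytes:
        """Serialize to the byte string that would be encrypted as (state)Ka."""
        return json.dumps(asdict(self)).encode("utf-8")


# Example: a video paused 272 seconds in, at frame 6528.
blob = AppState(app_id="video-player", timing_counter=272.0, item_index=6528).to_blob()
```

Serialized this way, the record is the plaintext "state blob" that the encryption steps described below would protect as (state)Ka and then ((state)Ka)P.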
In other embodiments, a server in a 'cloud' environment maintains the private domain and provides the encryption keys, thereby providing security within the private domain and preventing content from being shared outside the domain. The server and the client devices may use various communication standards, and the server may perform coordination to enable communication between the client devices.

FIG. 1 illustrates a system 100 according to various embodiments of the present disclosure. The system 100 includes a use continuity server 102 (hereinafter, server 102) and a plurality of client devices 120A, 120B. As a general overview of the system 100, each client device 120A, 120B may communicate with the other to facilitate transfer of the operation of a running application from one device to the other. The server 102 may be used to set up a private domain 114A that includes identifiers corresponding to the client devices 120A, 120B and at least one application 118 executable on the client devices 120A, 120B within the private domain 114A. The server 102 may also provide encryption keys for each application and/or for each domain (e.g., Ka 107 and/or P 113), enabling secure transfer of state information between the client devices 120A, 120B. Each of the components of FIG. 1 is described in greater detail herein.

The server 102 may include an application registration engine 104 and a private domain engine 108. The application registration engine 104 registers at least one application 118 with the server 102 and may determine whether the application 118 is executable on the first and second client devices 120A, 120B. The client devices 120A, 120B may each, independently of one another, include, without limitation, a mobile phone, smart phone, personal media player (PMP), personal digital assistant (PDA), netbook, notebook, desktop, and/or handheld electronic device. The application registration engine 104 may also include an application key generation unit 106. The key generation unit 106 generates an encryption key (hereinafter Ka 107) for each application 118 registered with the server 102. Ka 107 may include, for example, a public key, a secret key, or another known encryption key.

The private domain engine 108 may generate at least one private domain 114A on the server 102. In the illustrated embodiment, the private domain engine 108 generates a plurality of private domains 114A, 114B, ..., 114N on the server. To simplify the description, any individual private domain among the plurality of private domains is referred to herein as 'private domain 114'. The private domain engine 108 may include a device registration engine 110 for registering at least one of the first and second client devices 120A, 120B with the server 102, in particular with respect to a specific private domain 114. The device registration engine 110 may also include a domain key generation unit 112. The domain key generation unit 112 generates a domain key P 113 for each private domain 114 generated by the server 102. The domain key P 113 may include, for example, a public key, a secret key, or another known encryption key.

In the illustrated embodiment, each private domain 114 includes a registered application table 117, a registered user table 119, and a registered device table 121. The registered application table 117 may include the identity of each application registered in the private domain 114 on the server 102.
The registered user table 119 may include the identity of each user registered in the private domain 114 on the server 102. The registered device table 121 may include the identity of each client device 120A, 120B registered with the server 102 in a particular domain 114.

Once the domain key P 113 for each private domain 114 has been generated, the server 102 communicates via communication link 132 with each client device 120A, 120B that is registered with the server 102 and appears in the registered device table 121. In addition, each client device (e.g., client device 120A) communicates with other client devices (e.g., client device 120B) via communication link 132. Communication link 132 may include, without limitation, wired and/or wireless communication means, including WiFi, WiMax, one of the 802.1x standards, and/or Bluetooth (registered trademark) communication. The server 102 further transfers the domain key P 113 securely to each registered client device (e.g., client devices 120A, 120B), providing the domain key P 113 to a secure processor 124 of each client device 120A, 120B. The secure processor 124 may comprise a processor having general-purpose functionality and/or security functionality (i.e., the ability to hold key data securely and to compute digital signatures at high speed).

In one embodiment, the server 102 may allow the client devices 120A, 120B to be registered temporarily in a specific domain 114. Temporary registration of the client devices 120A, 120B in the private domain 114 enables a guest access mode. Temporary registration of the client devices 120A, 120B also allows the users of devices 120A, 120B originally registered in separate domains (e.g., 114A and 114N) to share time-limited temporary information between them, enabling an immediate collaboration network between the users.

The server 102, each of the client devices 120A, 120B, and/or the application 118 may include circuitry of any type for exchanging commands and data. For example, the server 102 may include commodity circuitry that may be found in a general-purpose computing system (e.g., a desktop PC, laptop, mobile PC, handheld mobile device, smart phone, etc.), such as a multicore CPU (which may include multiple processing cores and arithmetic logic units (ALUs)), memory, a memory controller unit, a video processor, a network processor, a bus controller, etc., and/or custom circuitry that may be found in a general-purpose computing system and/or an application-specific computing system (e.g., a trusted system, a supercomputing system, etc.).

The term 'circuitry', as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry.

The application 118 may comprise any type of software package, code module, firmware, and/or instruction set for exchanging commands and data with the server 102 and each client device 120A, 120B.
For example, the application 118 may include end-user general-purpose applications for a general-purpose computing system (e.g., software packages associated with Microsoft Word, Excel, etc.), network applications (e.g., web browser applications, e-mail applications, etc.), and/or custom software packages, custom code modules, custom firmware, and/or custom instruction sets written for a general-purpose computing system and/or an application-specific computing system (e.g., scientific computing packages, database packages, etc.).

For the purposes of this disclosure, the term 'source device' refers to the first client device from which the user wishes to transfer the running application 118 (e.g., client device 120A), and the term 'target device' refers to the second client device on which the user wishes to receive the running application 118 (e.g., client device 120B). Accordingly, in the following description, the term source device is used interchangeably with first client device 120A, and the term target device is used interchangeably with second client device 120B. Each client device 120A, 120B may include a host and/or open portion 128 and a secure portion 130. As may be appreciated, the host portion 128 has only limited access to the secure portion 130.

In one embodiment, at least a first instance of the application 118(1) is running on the first client device 120A. In addition, a second instance of the application 118(2) is included on the second client device 120B. The user of the first client device 120A (the source device) may wish to transfer state information and data from the application 118(1) running on the first client device 120A to the second client device 120B (the target device) included in the private domain 114. State information, as used herein, means information indicating a relative position within an ordered set of data of an application at the time the application running on the first client device 120A is transferred to another device. For example, the state information may include operating parameters of the application 118 that represent the point in the application 118 at which the user instructed the transfer, such that the application 118 on the target device 120B may begin executing at the same, or approximately the same, relative position.

As an example, suppose the user is listening to an audio application on the first client device 120A. At some point during execution of the audio application, the user may choose to instruct transfer of the running audio application to another client device (e.g., the second client device 120B). The state information of the audio application at the point of the transfer instruction may include the relative position within the audio file at the time the transfer instruction occurred (e.g., data corresponding to a timing counter).

As another example, suppose the user is watching a video application on the first client device 120A. While the video file is playing, the user may choose to instruct transfer of the playing video file to another client device (e.g., the second client device 120B). The state information of the video file may include the relative position at the time the transfer instruction occurred (e.g., data corresponding to a video frame).
In some cases, the state information may also include the relative position of the audio (if included) corresponding to the video frame.

As yet another example, the user may initiate transfer of an office suite application running on the first client device 120A to another client device (e.g., the second client device 120B). Office suite applications may include, without limitation, word processing applications, spreadsheet applications, presentation applications, and/or drawing applications. The state information of the office suite application at the point of the transfer instruction may include data corresponding to the relative position at the time the transfer instruction occurred (e.g., data corresponding to the page or sheet the user was viewing).

The first and second instances 118(1), 118(2) of the application may be registered with the server 102 according to the methods described herein. Once the first and second instances 118(1), 118(2) of the application are registered, an application-specific key Ka 107 (specific to the registered applications 118(1), 118(2)) is generated. The user may further generate a private domain 114A by registering the first and second client devices 120A, 120B with the server 102, as described herein. The registered application table 117 and registered device table 121 of each private domain 114A may hold the identifiers of the registered applications 118(1), 118(2) and of the registered client devices 120A, 120B. Once each of the devices 120A, 120B has been registered, the server 102 securely provides the domain key P 113 (specific to the private domain 114A) to the secure processor 124 of each registered client device 120A, 120B.

In some cases, the client devices 120A, 120B may also include copies of the registered application table 117, the registered user table 119, and/or the registered device table 121 (not shown for clarity). For example, the client devices 120A, 120B may use the registered application table 117 to determine whether a particular application running on the client device can support a transfer operation consistent with the present disclosure. The client devices 120A, 120B may use the registered user table 119 to determine whether a particular user of the client devices 120A, 120B has the right to transfer a specific application, and to which client devices the user may transfer the application. Further, the client devices 120A, 120B may use the registered device table 121 to determine and/or identify other client devices to which a specific running application can be forwarded. For example, the first client device 120A may use the registered device table 121 to create a list of potential client devices that are within communication range of the first client device 120A, and this list may be presented to the user for selection.
Once the applications 118(1), 118(2) and the client devices 120A, 120B have been registered, the user may choose to transfer the operation of the first instance of the application 118(1) running on the source device 120A to the target device 120B, as indicated by a user input 125. The user input 125 may include, without limitation, gesture recognition, motion recognition, and/or proximity-aware input means selected by the user, as well as other input means. As a non-exhaustive list, the user input 125 may include a swipe operation on a display device (a touch screen, etc.), a vibration action, entry of a password and/or PIN number, activation of an icon and/or menu, and so on. Each client device 120A, 120B may include a transfer module 134 for receiving the user input 125 and communicating an indication to the first and second instances 118(1), 118(2) of the application.

The transfer module 134 may identify and/or authenticate the user input 125 and/or may present a list of potential target devices for the transfer. For example, the transfer module 134 may present the list of potential target devices based at least in part on the registered user table 119, the registered device table 121, and/or the identities of devices within communication range of the source device 120A. In one embodiment, the transfer module 134 may graphically present a list of potential target devices based on the registered device table 121 using one or more types of icons. The icons may represent a plurality of different types of devices (e.g., including, without limitation, cellular phones, smart phones, personal media players (PMPs), personal digital assistants (PDAs), netbooks, notebooks, desktops, and/or handheld electronic devices). As may be appreciated, some potential target devices associated with the registered device table 121 may be out of reach because of limits on the performance of the communication link 132(2) used for communication between devices (e.g., when the communication link 132(2) is a short-range wireless communication, such as a wireless ad hoc network). Accordingly, the transfer module 134 may present (e.g., display) only the registered client devices that are within communication range. The transfer module 134 may, in some cases, arrange the icons on the display based on their relative physical positions with respect to the source device 120A. The user may select the desired target device 120B, for example using a gesture operation. The transfer module 134 may then forward a transfer instruction for the operation, together with data representing the identity of the target device 120B, to the first instance of the registered application 118(1).
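One plausible way to assemble such a candidate list is sketched below in Python; the table layout, the in_range_ids set, and the distance_m field are hypothetical stand-ins for whatever the registered device table 121 and the link 132(2) actually expose.

```python
def potential_targets(registered_device_table, in_range_ids, source_id):
    """Registered devices, other than the source, that are currently
    reachable over the device-to-device link (e.g., link 132(2))."""
    candidates = [dev for dev in registered_device_table
                  if dev["id"] != source_id and dev["id"] in in_range_ids]
    # Arrange for presentation by physical proximity to the source device.
    candidates.sort(key=lambda dev: dev.get("distance_m", float("inf")))
    return candidates
```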
The first instance of the registered application 118(1) may include a state information generation unit 122 that generates a state blob (state) upon receiving the operation transfer instruction. The term 'blob' may refer to a collection of data stored as a single entity, which may include image, audio, and/or multimedia objects. The first instance of the application 118(1) encrypts and/or signs the state blob (state) using the application-specific key Ka 107 to form an application-encrypted state blob (state)Ka. The key used for signing may be derived from the domain key P 113. The application-encrypted state blob (state)Ka includes the state information and, in some cases, may include data of the first instance of the registered application 118(1) running on the source device 120A.

For example, in one embodiment, the first instance of the registered application 118(1) includes a video file being played on the source device 120A. Assume that, when the operation transfer instruction is received, the user is viewing a particular mark or frame of the video file, such as 'frame F'. The state information included in (state)Ka may include data representing frame F (i.e., the frame at the time the operation transfer instruction was received). The state information included in (state)Ka may also include data representing the remaining frames (e.g., the frames from frame F through the last frame of the video file). Alternatively, the state information may include data indicating the entire video file (all frames of the video file) together with frame F, meaning that, on the target device 120B, the user may either view the entire video file or continue the file from the frame at which viewing left off (frame F). For example, the user and/or the application 118(1) may pause the video file at frame F on the source device 120A, transfer the video file to the target device 120B, and resume viewing the video from frame F on the target device 120B.

The first instance of the registered application 118(1) further communicates with the secure processor 124 of the first client device 120A via a secure communication link 126. The first instance of the registered application 118(1) transfers (state)Ka to the secure processor 124 of the source device 120A via the communication link 126. The first instance of the registered application 118(1) also requests that the secure processor 124 encrypt and/or sign (state)Ka with the domain key P 113. Upon receiving (state)Ka and the encryption request from the first instance of the registered application 118(1), the secure processor 124 encrypts and/or signs (state)Ka with the domain key P 113 to form a domain-encrypted state blob ((state)Ka)P (protected by the domain key P 113).
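The two protection layers applied on the source side can be pictured with a minimal Python sketch. Because the disclosure allows 'encrypt and/or sign' at each layer, the sketch uses an HMAC-SHA256 tag as a stand-in for whichever cipher or signature scheme a real implementation would use; the function names and blob framing are assumptions for illustration only.

```python
import hashlib
import hmac
import json

def protect(key: bytes, payload: bytes) -> bytes:
    """Stand-in for 'encrypt and/or sign': prepend an HMAC-SHA256 tag."""
    return hmac.new(key, payload, hashlib.sha256).digest() + payload

def make_domain_blob(state: dict, ka: bytes, p: bytes) -> bytes:
    state_blob = json.dumps(state).encode()  # (state), from unit 122
    state_ka = protect(ka, state_blob)       # (state)Ka, application layer
    return protect(p, state_ka)              # ((state)Ka)P, domain layer
```

In the described system, the second protect() call would run inside the secure processor 124, the only component that holds the domain key P 113.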
Once the client devices 120A, 120B have been registered in the private domain 114A, the first and second client devices 120A, 120B may communicate with each other via communication link 132(2) to communicate and transfer information, in particular the state information of a registered application running on one of the devices. In addition to the examples of communication link 132 described herein, communication link 132(2) may include short-range wireless communication, such as a wireless ad hoc network. Further, in a scenario in which they are used cooperatively, the client devices 120A, 120B may communicate with the server 102 to discover the identity, such as an IP address, of each registered client device.

Upon discovering a client device, the source device 120A establishes a secure connection to the target device 120B of the private domain 114A through communication link 132(2). The source device 120A then transfers ((state)Ka)P to the target device 120B via the communication link. Upon receiving ((state)Ka)P, the target device 120B (e.g., its transfer module 123(B)) initiates launch of the second instance of the registered application 118(2). The user and/or the second instance of the registered application 118(2) communicates with the secure processor 124B of the target device 120B via secure communication link 126(B) and forwards ((state)Ka)P. The second instance of the registered application 118(2) further requests that the secure processor 124B decrypt and/or authenticate ((state)Ka)P.

Upon receiving ((state)Ka)P and the decryption request from the second instance of the registered application 118(2), the secure processor 124B decrypts and/or authenticates ((state)Ka)P using the domain key P 113 to generate (state)Ka. The secure processor 124B further transfers (state)Ka to the second instance of the registered application 118(2) via secure communication link 126(B). Upon receiving (state)Ka, the second instance of the registered application 118(2) decrypts and/or authenticates (state)Ka using the application-specific key Ka 107 to generate and restore the state information of the first instance of the registered application 118(1) transferred in ((state)Ka)P. The second instance of the registered application 118(2) then resumes execution of the application 118 on the target device 120B using the restored state information. Note that the above description may alternatively be applied to other embodiments; for example, in those other embodiments the second client device 120B is the source device and the first client device 120A is the target device.
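Continuing the sketch above, the receive side peels the two layers off in reverse order; as before, the HMAC framing is an illustrative assumption standing in for the actual decryption and/or authentication.

```python
import hashlib
import hmac
import json

def unprotect(key: bytes, blob: bytes) -> bytes:
    """Verify and strip the HMAC-SHA256 tag added by protect()."""
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("blob failed authentication")
    return payload

def restore_state(domain_blob: bytes, ka: bytes, p: bytes) -> dict:
    state_ka = unprotect(p, domain_blob)  # secure processor 124B: domain layer
    state_blob = unprotect(ka, state_ka)  # application instance: Ka layer
    return json.loads(state_blob)         # restored state information
```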
FIG. 2 is a flowchart 200 illustrating operations for setting up a private domain according to an embodiment of the present disclosure. For example, method 200 may include operations of creating a new private domain D (operation 202), generating a registered device table including identifiers corresponding to one or more devices associated with the private domain D (operation 204), generating a registered user table including identifiers corresponding to one or more users associated with the private domain D (operation 206), and generating a registered application table including identifiers corresponding to one or more applications associated with the private domain D (operation 208).

According to one embodiment, the private domain D is initiated by a user through the server and may be generated, for example, by the private domain engine of the server (operation 202). When the private domain D is generated, a domain key P may be generated (operation 210). The domain key P may be generated by the server, for example by the domain key generation unit of the server, and may include, for example, a public key, a secret key, or another known encryption key. The domain key generation unit may generate the domain key P using any encryption method, including but not limited to the Advanced Encryption Standard (AES) offerings provided by Intel(TM), the assignee of the present disclosure. The domain key P may optionally be stored on each device associated with the private domain D (operation 212). For example, the domain key P may be stored in the secure processor of each device associated with the private domain D.

The registered device table may be generated by a user through the server (operation 204). For example, one or more users may register devices with the private domain D using the device registration engine, and the server may generate the registered device table. The registered device table may comprise a plurality of identifiers, each corresponding to one of the devices associated with the private domain D.

Similarly, one or more users may register with the private domain D, and a registered user table may be generated (operation 206). The registered user table may include a plurality of identifiers, each corresponding to one of the users associated with the private domain D. The registered user table may optionally include identifiers corresponding to the particular devices with which a user is associated.

One or more applications may be registered with the server, and a registered application table may be generated (operation 208). Applications may be registered by users, by manufacturers, and/or by third parties. For example, applications may be registered with the server using the application registration engine. When an application is registered, the server generates an application-specific key Ka, for example using the application key generation unit (operation 214). The application-specific key Ka may include, for example, a public key, a secret key, or another known encryption key, and may be generated using any encryption method, including but not limited to the Advanced Encryption Standard (AES) offerings provided by Intel(TM), the assignee of the present disclosure. The application-specific key Ka may be stored on each device associated with the private domain D (operation 216). For example, the application-specific key Ka may be stored in the registered application 118 of the client device.

According to another embodiment, the method 200 of the present disclosure may omit the server. In particular, one or more client devices may include the private domain engine, the device registration engine, and/or the application registration engine described herein. A user may generate the private domain D using the private domain engine of the first client device (operation 202) and may generate the domain key P. The domain key P may optionally be transferred to the other devices associated with the private domain D (operation 212).

Similarly, one or more client devices may generate the registered device table (operation 204). For example, the first client device may register the devices and function as a hub for generating the registered device table. The registered device table may then be transferred to the other devices associated with the private domain D. Alternatively, each client device may, independently of the others, register and/or generate the registered device table (e.g., the registered device table may be communicated to the other devices in a round-robin fashion, etc.).

The registered application table may be generated using the application registration engine of one or more client devices associated with the private domain D (operation 208), and the application-specific key Ka may be generated using the application key generation unit of one or more client devices associated with the private domain D (operation 214). For example, the first client device may register the applications and may serve as a hub for generating the registered application table. The registered application table may be transferred to the other devices in the registered device table (operation 216). The registered user table may be generated in the same manner as the other tables (operation 206).
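A compact Python sketch of the records that operations 202 through 216 establish might look as follows; the class layout and the choice of random 256-bit symmetric keys are assumptions made for illustration, since the disclosure permits public, secret, or other known key types.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class PrivateDomain:
    """Private domain D with its tables and keys (operations 202-216)."""
    name: str
    domain_key_p: bytes = field(default_factory=lambda: secrets.token_bytes(32))
    registered_devices: dict = field(default_factory=dict)  # operation 204
    registered_users: dict = field(default_factory=dict)    # operation 206
    registered_apps: dict = field(default_factory=dict)     # operation 208

    def register_device(self, device_id: str, info: dict) -> None:
        self.registered_devices[device_id] = info

    def register_app(self, app_id: str) -> bytes:
        ka = secrets.token_bytes(32)  # application-specific key Ka (operation 214)
        self.registered_apps[app_id] = ka
        return ka
```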
FIG. 3 is a flowchart 300 illustrating operations of application use continuity between platforms for transferring an application running on a first device to a second device, according to an embodiment of the present disclosure. A transfer instruction directing transfer of the state information of an application running on the first device to a second device is received by the first device (operation 302). An encrypted state blob (state)Ka may be generated by the first device with the application key (operation 304). (state)Ka is transferred from the first application to the secure processor of the first device (operation 306), and the secure processor may generate the encrypted state blob ((state)Ka)P with the domain key (operation 308). ((state)Ka)P is transferred to the second device (operation 310), and a second instance of the application may be launched by the second device (operation 312). ((state)Ka)P is transferred to the secure processor of the second device (operation 314), and the secure processor may decrypt and/or authenticate ((state)Ka)P to generate (state)Ka (operation 316). (state)Ka is then transferred to the second instance of the application running on the second device (operation 318), and the second instance may decrypt and/or authenticate (state)Ka to generate the state information for the second instance of the application (operation 320). The second instance may resume the application on the second device based on the state information, for example from the point at which the transfer function was initiated on the first device (operation 322).

FIG. 4 generally illustrates another embodiment of a system 400 according to various embodiments of the present disclosure. The system 400 is similar to the system 100 of FIG. 1, but differs in that the client devices 420A, 420B include a hypervisor and/or supervisor 410 that handles the transfer operation independently of the application 118. In particular, the supervisor 410 includes a state information generation unit that generates the state information associated with the first instance of the application 118(1) upon receiving, from the transfer module 134, an indication that the user wishes to transfer the application 118 to another device. The supervisor 410 further generates a state blob (state) upon receiving the operation transfer instruction and encrypts and/or signs the state blob (state) using the application-specific key Ka 107 to form the application-encrypted state blob (state)Ka. The application-encrypted state blob (state)Ka contains the state information and, optionally, data of the first instance of the registered application 118(1) running on the source device 420A.

The supervisor 410 further transfers (state)Ka to the secure processor 124 of the source device 120A via communication link 126 and requests that the secure processor 124 encrypt and/or sign (state)Ka using the domain key P 113. Upon receiving (state)Ka and the encryption request from the supervisor 410, the secure processor 124 encrypts and/or signs (state)Ka using the domain key P 113 to form the domain-encrypted state blob ((state)Ka)P (protected by the domain key P 113).

The domain-encrypted state blob ((state)Ka)P is received by the second client device 420B (e.g., via communication link 132(2)), and the second instance of the registered application 118(2) is launched. The supervisor 410(B) communicates with the secure processor 124B of the target device 420B via secure communication link 126(B), transfers ((state)Ka)P, and requests that the secure processor 124B decrypt and/or authenticate ((state)Ka)P.

Upon receiving ((state)Ka)P and the decryption request, the secure processor 124B decrypts and/or authenticates ((state)Ka)P using the domain key P 113 to generate (state)Ka. The secure processor 124B further transfers (state)Ka to the supervisor 410(B) via secure communication link 126(B). Upon receiving (state)Ka, the supervisor 410(B) decrypts and/or authenticates (state)Ka using the application-specific key Ka 107 to generate and restore the state information of the first instance of the registered application 118(1) transferred in ((state)Ka)P.
The second instance of the registered application 118(2) resumes execution of the application 118 on the target device 420B using the restored state information.

FIG. 5 generally illustrates yet another embodiment of a system 500 according to various embodiments of the present disclosure. The system 500 comprises one or more devices 520A, 520B that generate a private domain and/or transfer an application 118 running on a first client device 520A (the 'source device 520A') to a second client device 520B (the 'target device 520B'). The system 500 thus omits the use continuity server 102 of the system 100 (FIG. 1). As a general overview of the system 500, each client device 520A, 520B may communicate with the other to facilitate transfer of the operation of a running application from one device to the other. Aspects of the system 500 that are the same as in the embodiment of the system 100 (FIG. 1) are omitted to simplify the description.

One or more of the devices 520A, 520B of the system 500 may be similar to the devices 120A, 120B of the system 100; however, at least one of the devices 520A, 520B, as described herein, generates a new private domain D, generates a registered device table including identifiers corresponding to one or more devices associated with the private domain D, generates a registered user table including identifiers corresponding to one or more users associated with the private domain D, and generates a registered application table including identifiers corresponding to one or more applications associated with the private domain D. For ease of explanation, the first device 520A is described; however, any of the client devices of the system 500 may perform the following operations.

The first client device 520A includes a private domain engine 508 to generate one or more private domains (e.g., private domain D) and may include a device registration engine 510 with which a user registers one or more client devices to be associated with the private domain D and which generates a registered device table 121 (sent to one or more other devices associated with the private domain D). A domain key (P) generation unit 512 may generate a domain key P, which may be sent to one or more other devices associated with the private domain D on the basis of the registered device table 121. In addition, a registered user table 119 may be generated by the client device 520A, linking users to one or more applications and/or devices of the private domain.

An application 118 to be associated with the private domain D may be registered with an application registration engine 504 of the first device 520A. When the application 118 is registered, an application key generation unit 506 may generate an application-specific key Ka. A registered application table 117 is thereby created, and the application-specific key Ka may be stored in the application 118(1). Further, the first device 520A may transmit the application-specific key Ka to one or more of the other devices associated with the private domain D.

The user may then initiate a transfer operation facilitated substantially as described herein. Because the client devices 520A, 520B generate the private domain, register the applications, and generate the keys Ka, P, the system 500 eliminates the need for the server of FIG. 1.
Again, it is not necessary that all of the client devices associated with the system 500 perform these functions; instead, a single client device may act as a hub device, functioning substantially similarly to the server shown in FIG. 1, and the other devices differ only in that the keys Ka, P are generated by the hub device (e.g., client device 520A) and are received from the hub device.

FIG. 6 is a flowchart 600 illustrating operations according to an exemplary embodiment of the present disclosure. Operations of this embodiment may include receiving, at a first client device, a transfer instruction for transferring the operation of a first instance of an application running on the first client device to a second instance of the application to be executed by a second client device (operation 602). The operations may also include generating, by the first client device, in response to the transfer instruction for the operation of the first instance of the application from the first client device to the second client device, state information including current operating parameters and data about the execution of the application on the first client device (operation 604). The operations may further include sending, by the first client device, the state information to the second client device, enabling the second instance of the application on the second client device, and continuing the operation of the application on the second client device using the state information from the first client device (operation 606).
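Read end to end, operations 602 through 606 reduce to a short control sequence on the source device, sketched below; the app object and the send_to_target callable are hypothetical stand-ins for the application instance and the communication link.

```python
def transfer_operation(app, target_id, send_to_target):
    """Operations 602-606 of flowchart 600, as one sequence on the source."""
    # Operation 602: a transfer instruction for the running first instance
    # has been received (e.g., from the transfer module).
    state = app.capture_state()       # operation 604: operating parameters/data
    send_to_target(target_id, state)  # operation 606: the target enables its
                                      # second instance and resumes from `state`
```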
Having described various systems and methods, it will be appreciated that features of any of the embodiments may be combined with other embodiments. For example, the hypervisor of FIG. 4 may be combined with FIG. 1 and/or FIG. 5. Alternatively (or additionally), the systems of FIG. 1 or FIG. 4 may be modified to omit the server, as shown in FIG. 5. Indeed, in other embodiments of the present disclosure, the operations shown in FIGS. 2, 3, and/or 6 may be combined in manners not specifically shown in any of the figures without departing from aspects of the present disclosure. Accordingly, claims directed to features and/or operations not shown exactly in the drawings are considered to be within the scope and content of this disclosure.

The systems and/or methods of the present disclosure enable seamless, secure, dynamic, and reliable porting of a running application between devices in real time while maintaining state information and data. In addition, a secure communication infrastructure can be established that facilitates the reliable transfer of application state information and data.

In other embodiments, the systems and/or methods of the present disclosure may include additional usage schemes. A number of different usage schemes may be conceived to correspond to a number of different uses. For example, a Morning Commute (MC) scheme may include an Airport Morning Commute (AMC) scheme, a Work Morning Commute (WMC) scheme, and/or a Travel Morning Commute (TMC) scheme. Each scheme may include a collection of trusted applications for a particular use.

For example, the AMC scheme may include weather, traffic, radio content, media content, telephone, flight information, and/or work-related applications. In one embodiment, the AMC scheme may be aggregated on the user's home system. Before leaving the house, the user can transfer the state information of the user's AMC applications to another trusted IA device in the trusted domain (e.g., a smart phone, an in-vehicle infotainment system, a PC, etc.), for example using gesture and/or motion techniques, voice commands, and/or a one-touch pre-programmed button. When the user boards his or her vehicle to drive to the airport, the in-vehicle infotainment center, including GPS, is then, for example, already in a state displaying the route to the airport and traffic information.

The embodiments described herein may be implemented using hardware, software, and/or firmware to perform the methods and/or operations described herein. Certain embodiments described herein may be provided as a tangible machine-readable medium storing machine-executable instructions that, when executed by a machine, cause the machine to perform the methods and/or operations described herein. The tangible machine-readable medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, and magnetic or optical cards; or any type of tangible media suitable for storing electronic instructions. The machine may include any suitable processing platform, device, or system, or computing platform, device, or system, and may be implemented using any suitable combination of hardware and/or software. The instructions may include any suitable type of code and may be implemented using any suitable programming language.

Thus, in one embodiment, the present disclosure provides a system for providing application use continuity between client devices. The system includes a first client device running a first instance of an application and a second client device running a second instance of the application. The first client device further receives a transfer instruction for transferring the operation of the first instance of the application running on the first client device to the second instance of the application to be executed by the second client device, and generates state information and data about the execution of the first instance of the application running on the first client device. The first client device further sends the state information to the second client device, enables the second instance of the application on the second client device, and continues the operation of the application on the second client device using the state information from the first client device.

In another embodiment, the present disclosure provides a method for providing application use continuity between a first client device executing a first instance of an application and a second client device executing a second instance of the application.
The method includes receiving, at the first client device, a transfer instruction for transferring the operation of the first instance of the application running on the first client device to the second instance of the application to be executed by the second client device; generating, by the first client device, in response to the transfer instruction for the operation of the first instance of the application, state information and data about the execution of the first instance of the application running on the first client device; and sending, by the first client device, the state information and the data to the second client device, enabling the second instance of the application on the second client device, and continuing, by the second instance of the application, the operation of the application on the second client device using the state information from the first client device.

In another embodiment, the present disclosure provides a tangible computer-readable medium storing instructions that, when executed by one or more processors of a computer system, cause the one or more processors to perform procedures of: generating, by a first client device, in response to a transfer instruction for the operation of a first instance of an application, state information and data about the execution of the first instance of the application running on the first client device; sending, by the first client device, the state information to a second client device; and enabling a second instance of the application on the second client device and continuing the operation of the application on the second client device using the state information from the first client device.

The terms and expressions employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, to exclude any equivalents of the features shown and described (or portions thereof); various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents.

Various features, aspects, and embodiments have been described herein. Those skilled in the art will appreciate that the features, aspects, and embodiments are susceptible to combination with one another, as well as to variation and modification. The present disclosure should therefore be regarded as encompassing such combinations, variations, and modifications.

[Claim 1] A system for providing application use continuity between client devices, comprising: a first client device running a first instance of an application; and a second client device running a second instance of the application; wherein the first client device further receives a transfer instruction for transferring the operation of the first instance of the application running on the first client device to the second instance of the application to be executed by the second client device, generates state information and data about the execution of the first instance of the application running on the first client device, sends the state information to the second client device, and enables the second instance of the application on the second client device to continue the operation of the application on the second client device using the state information from the first client device.

[Claim 2] The system of claim 1, further comprising a server that generates a private domain including the identity of each of the first and second client devices and the identity of the application.
[Claim 3] The system of claim 2, wherein the server generates an application encryption key Ka for each of the first and second instances of the application and a domain encryption key P for the private domain, and stores the domain encryption key P in each of the first and second client devices.

[Claim 4] The system of claim 3, wherein the first instance of the application encrypts or signs a state blob using the application encryption key Ka to form an application-encrypted state blob (state)Ka, protected by the application encryption key Ka, that includes the state information of the first instance of the application.

[Claim 5] The system of claim 4, wherein the second instance of the application decrypts or authenticates the application-encrypted state blob (state)Ka and restores the state information on the second client device.

[Claim 6] The system of claim 4 or 5, wherein the first client device has a secure processor that encrypts or signs the application-encrypted state blob (state)Ka using the first client device and the domain encryption key P to form a domain-encrypted state blob ((state)Ka)P.

[Claim 7] The system of claim 6, wherein the second client device has a secure processor that decrypts the domain-encrypted state blob ((state)Ka)P using the second client device to form the application-encrypted state blob (state)Ka.

[Claim 8] The system of any one of claims 1 to 7, wherein the state information includes data corresponding to a timing counter, a video frame, or a page number.

[Claim 9] The system of any one of claims 1 to 8, wherein the transfer instruction for the operation of the first instance of the application comprises a user input.

[Claim 10] The system of claim 9, wherein the user input is selected from the group consisting of gesture recognition, motion recognition, speech recognition, voice commands, and proximity-aware technology.

[Claim 11] A method for providing application use continuity between a first client device executing a first instance of an application and a second client device executing a second instance of the application, the method comprising: receiving, at the first client device, a transfer instruction for transferring the operation of the first instance of the application running on the first client device to the second instance of the application to be executed by the second client device; generating, by the first client device, in response to the transfer instruction for the operation of the first instance of the application, state information and data about the execution of the first instance of the application on the first client device; and sending, by the first client device, the state information and the data to the second client device, enabling the second instance of the application on the second client device, and continuing, by the second instance of the application, the operation of the application on the second client device using the state information from the first client device.

[Claim 12] The method of claim 11, further comprising generating, by a server, a private domain including the identity of each of the first and second client devices and the identity of the application.
[Claim 13] The method of claim 12, further comprising: generating an application encryption key Ka for each of the first and second instances of the application; generating a domain encryption key P for the private domain; and storing the domain encryption key P in each of the first and second client devices.

[Claim 14] The method of claim 13, further comprising: encrypting, by the first instance of the application, a state blob using the application encryption key Ka to form an application-encrypted state blob (state)Ka including the state information of the first instance of the application protected by the application encryption key Ka; and encrypting the application-encrypted state blob (state)Ka using the first client device and the domain encryption key P to form a domain-encrypted state blob ((state)Ka)P.

[Claim 15] The method of claim 14, further comprising: decrypting, by the second client device, the domain-encrypted state blob ((state)Ka)P to form the application-encrypted state blob (state)Ka; and decrypting, using the second instance of the application, the application-encrypted state blob (state)Ka, the second instance of the application restoring the state information on the second client device.

[Claim 16] The method of any one of claims 11 to 15, wherein the state information includes data corresponding to a timing counter, a video frame, or a page number.

[Claim 17] The method of any one of claims 11 to 16, wherein the transfer instruction for the operation of the first instance of the application comprises a user input.

[Claim 18] The method of claim 17, wherein the user input is selected from the group consisting of gesture recognition, motion recognition, speech recognition, voice commands, and proximity-aware technology.

[Claim 19] A program for causing one or more processors to execute procedures of: generating, by a first client device, in response to a transfer instruction for the operation of a first instance of an application, state information and data about the execution of the first instance of the application running on the first client device; sending, by the first client device, the state information to a second client device; and enabling a second instance of the application on the second client device and continuing the operation of the application on the second client device using the state information from the first client device.

[Claim 20] The program of claim 19, further causing the one or more processors to execute a procedure of receiving, at the first client device, a transfer instruction for transferring the operation of the first instance of the application running on the first client device to the second instance of the application executed by the second client device.
Some disclosed methods involve controlling, via a control system of an apparatus, a touch sensor system to obtain touch sensor data in a touch sensor system active area of the apparatus. Some disclosed methods involve controlling, via the control system, a fingerprint sensor system of the apparatus to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus. Some disclosed methods involve determining, via the control system and based on the touch sensor data, n touch locations corresponding to n last user touches and controlling, via the control system, a size of the touch sensor system active area based, at least in part, on the n touch locations.
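Before turning to the claims, the active-area control summarized above can be illustrated with a speculative Python sketch; the exponential-decay weighting, the scale parameter, and the bounding-box shape are all assumptions made for illustration (the claims below only require that tile probabilities depend on distance from the n touch locations).

```python
import math

def touch_probabilities(tiles, last_touches, scale=50.0):
    """Probability that the next touch lands on each tile, decaying with
    distance from each of the n last touch locations (tiles and touches
    are (x, y) coordinates in the same units, e.g., pixels)."""
    weights = [sum(math.exp(-math.dist(tile, touch) / scale)
                   for touch in last_touches) or 1.0
               for tile in tiles]
    total = sum(weights)
    return [w / total for w in weights]

def active_area(tiles, probs, threshold):
    """Bounding box of the tiles whose probability exceeds the threshold;
    None means fall back to the entire touch sensor area (e.g., n = 0)."""
    hot = [t for t, p in zip(tiles, probs) if p > threshold]
    if not hot:
        return None
    xs, ys = [x for x, _ in hot], [y for _, y in hot]
    return (min(xs), min(ys), max(xs), max(ys))
```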
CLAIMS
1. An apparatus, comprising: a touch sensor system; a fingerprint sensor system; and a control system configured for communication with the touch sensor system and the fingerprint sensor system, the control system being further configured for: controlling the touch sensor system to obtain touch sensor data in a touch sensor system active area of the apparatus; controlling the fingerprint sensor system to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus; determining, based on the touch sensor data, n touch locations corresponding to n last user touches; and controlling a size of the touch sensor system active area based, at least in part, on the n touch locations.2. The apparatus of claim 1, wherein the control system is further configured for setting a number of last user touches to zero after an apparatus boot-up process.3. The apparatus of claim 2, wherein the control system is further configured for setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero.4. The apparatus of claim 3, wherein the control system is further configured for setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one. 5. The apparatus of claim 4, wherein the control system is further configured for setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two.6. The apparatus of claim 5, wherein the control system is further configured for setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.7. The apparatus of claim 1, wherein the control system is further configured for: determining a shape that encompasses the n touch locations; and setting the touch sensor system active area to correspond with the shape.8. The apparatus of claim 1, wherein the control system is further configured for: determining a shape that encompasses at least a threshold portion of the n touch locations; and setting the touch sensor system active area to correspond with the shape.9. The apparatus of claim 1, wherein the control system is further configured for: determining a touch probability for each tile of a plurality of touch sensor tiles of the touch sensor system, to determine a plurality of touch probabilities, each touch sensor tile of the plurality of touch sensor tiles including one or more touch sensor pixels, the touch probability being a probability that a next user touch will be on a particular touch sensor tile; and controlling the size of the touch sensor system active area based, at least in part, on the plurality of touch probabilities.10. The apparatus of claim 9, wherein the touch probability for each touch sensor tile is based on a distance from each of the n touch locations to each touch sensor tile.11.
The apparatus of claim 10, wherein the control system is further configured for: identifying touch sensor tiles having a touch probability greater than a touch probability threshold, to determine identified touch sensor tiles; finding an encompassing shape that will encompass at least a threshold percentage of identified touch sensor tiles; and determining the touch sensor system active area based on the encompassing shape.12. The apparatus of claim 1, wherein the control system is further configured for controlling the fingerprint sensor system to obtain fingerprint sensor data in each of the n touch locations.13. The apparatus of claim 12, wherein the control system is further configured for controlling the fingerprint sensor system to obtain the fingerprint sensor data in each of the n touch locations after receiving touch sensor data corresponding to user touches in each of the n touch locations.14. The apparatus of claim 13, wherein the control system is further configured for controlling the fingerprint sensor system active area to correspond with each of the n touch locations after receiving the touch sensor data corresponding to user touches in each of the n touch locations.15. A method, comprising: controlling, via a control system of an apparatus, a touch sensor system of the apparatus to obtain touch sensor data in a touch sensor system active area of the apparatus; controlling, via the control system, a fingerprint sensor system of the apparatus to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus; determining, via the control system and based on the touch sensor data, n touch locations corresponding to n last user touches; and controlling, via the control system, a size of the touch sensor system active area based, at least in part, on the n touch locations.16. The method of claim 15, further comprising setting a number of last user touches to zero after an apparatus boot-up process.17. The method of claim 16, further comprising setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero.18. The method of claim 17, further comprising setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one.19. The method of claim 18, further comprising setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two.20. The method of claim 19, further comprising setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.21. The method of claim 15, further comprising: determining a shape that encompasses the n touch locations; and setting the touch sensor system active area to correspond with the shape.22. The method of claim 15, further comprising: determining a shape that encompasses at least a threshold portion of the n touch locations; and setting the touch sensor system active area to correspond with the shape.23. 
23. The method of claim 15, further comprising: determining a touch probability for each tile of a plurality of touch sensor tiles of the touch sensor system, to determine a plurality of touch probabilities, each touch sensor tile of the plurality of touch sensor tiles including one or more touch sensor pixels, the touch probability being a probability that a next user touch will be on a particular touch sensor tile; and controlling the size of the touch sensor system active area based, at least in part, on the plurality of touch probabilities.

24. The method of claim 23, wherein the touch probability for each touch sensor tile is based on a distance from each of the n touch locations to each touch sensor tile.

25. The method of claim 24, further comprising: identifying touch sensor tiles having a touch probability greater than a touch probability threshold, to determine identified touch sensor tiles; finding an encompassing shape that will encompass at least a threshold percentage of identified touch sensor tiles; and determining the touch sensor system active area based on the encompassing shape.

26. The method of claim 15, further comprising controlling the fingerprint sensor system to obtain fingerprint sensor data in each of the n touch locations.

27. The method of claim 26, further comprising controlling the fingerprint sensor system to obtain the fingerprint sensor data in each of the n touch locations after receiving touch sensor data corresponding to user touches in each of the n touch locations.

28. The method of claim 27, further comprising controlling the fingerprint sensor system active area to correspond with each of the n touch locations after receiving the touch sensor data corresponding to user touches in each of the n touch locations.

29. One or more non-transitory media having software stored thereon, the software including instructions for controlling one or more devices to perform a method, the method comprising: controlling, via a control system of an apparatus, a touch sensor system of the apparatus to obtain touch sensor data in a touch sensor system active area of the apparatus; controlling, via the control system, a fingerprint sensor system of the apparatus to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus; determining, via the control system and based on the touch sensor data, n touch locations corresponding to n last user touches; and controlling, via the control system, a size of the touch sensor system active area based, at least in part, on the n touch locations.

30. The one or more non-transitory media of claim 29, wherein the method further comprises setting a number of last user touches to zero after an apparatus boot-up process.

31. The one or more non-transitory media of claim 30, wherein the method further comprises setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero.

32. The one or more non-transitory media of claim 31, wherein the method further comprises setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one.

33. The one or more non-transitory media of claim 32, wherein the method further comprises setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two.
34. The one or more non-transitory media of claim 33, wherein the method further comprises setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.

35. The one or more non-transitory media of claim 29, wherein the method further comprises: determining a shape that encompasses the n touch locations; and setting the touch sensor system active area to correspond with the shape.

36. The one or more non-transitory media of claim 29, wherein the method further comprises: determining a shape that encompasses at least a threshold portion of the n touch locations; and setting the touch sensor system active area to correspond with the shape.

37. An apparatus, comprising: a touch sensor system; a fingerprint sensor system; and control means for: controlling the touch sensor system to obtain touch sensor data in a touch sensor system active area of the apparatus; controlling the fingerprint sensor system to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus; determining, based on the touch sensor data, n touch locations corresponding to n last user touches; and controlling a size of the touch sensor system active area based, at least in part, on the n touch locations.

38. The apparatus of claim 37, wherein the control means includes means for setting a number of last user touches to zero after an apparatus boot-up process.

39. The apparatus of claim 38, wherein the control means includes means for setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero.

40. The apparatus of claim 39, wherein the control means includes means for setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one.

41. The apparatus of claim 40, wherein the control means includes means for setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two.
POWER SAVING FOR LARGE-AREA SENSOR

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority to United States Patent Application No. 17/249,352, filed on February 26, 2021 and entitled “POWER SAVING FOR LARGE-AREA SENSOR,” which is hereby incorporated by reference.

TECHNICAL FIELD

[0002] This disclosure relates generally to fingerprint sensor devices and related methods, including but not limited to touch sensor systems and fingerprint sensor systems, and methods for using such systems.

DESCRIPTION OF THE RELATED TECHNOLOGY

[0003] Touch sensor systems are commonly featured in a variety of devices. Biometric authentication, including but not limited to fingerprint-based authentication, can be an important feature for controlling access to devices, secured areas, etc. Although some existing touch sensor systems and fingerprint sensor systems provide satisfactory performance under some conditions, improved methods and devices would be desirable.

SUMMARY

[0004] The systems, methods and devices of the disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.

[0005] One innovative aspect of the subject matter described in this disclosure may be implemented in an apparatus. The apparatus may include a touch sensor system, a fingerprint sensor system and a control system configured for communication with (e.g., electrically or wirelessly coupled to) the touch sensor system and the fingerprint sensor system. In some examples, the control system may include a memory, whereas in other examples the control system may be configured for communication with a memory that is not part of the control system. According to some examples, the apparatus may be integrated into a mobile device. The control system may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof.

[0006] According to some examples, the control system may be configured for controlling the touch sensor system to obtain touch sensor data in a touch sensor system active area of the apparatus and for controlling the fingerprint sensor system to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus. In some examples, the control system may be configured for determining, based on the touch sensor data, n touch locations corresponding to n last user touches and for controlling a size of the touch sensor system active area based, at least in part, on the n touch locations.

[0007] In some examples, the control system may be configured for setting a number of last user touches to zero after an apparatus boot-up process. According to some such examples, the control system may be configured for setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero. In some such examples, the control system may be configured for setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one.
According to some such examples, the control system may be configured for setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two. In some such examples, the control system may be configured for setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.

[0008] According to some examples, the control system may be configured for determining a shape that encompasses the n touch locations and for setting the touch sensor system active area to correspond with the shape. In some examples, the control system may be configured for determining a shape that encompasses at least a threshold portion of the n touch locations and for setting the touch sensor system active area to correspond with the shape.

[0009] In some examples, the control system may be configured for determining a touch probability for each tile of a plurality of touch sensor tiles of the touch sensor system, to determine a plurality of touch probabilities. Each touch sensor tile of the plurality of touch sensor tiles may include one or more touch sensor pixels. The touch probability may be a probability that a next user touch will be on a particular touch sensor tile. In some such examples, the touch probability for each touch sensor tile may be based on a distance from each of the n touch locations to each touch sensor tile. In some such examples, the control system may be configured for controlling the size of the touch sensor system active area based, at least in part, on the plurality of touch probabilities. In some such examples, the control system may be configured for identifying touch sensor tiles having a touch probability greater than a touch probability threshold, to determine identified touch sensor tiles. In some such examples, the control system may be configured for finding an encompassing shape that will encompass at least a threshold percentage of identified touch sensor tiles and for determining the touch sensor system active area based on the encompassing shape.

[0010] According to some examples, the control system may be configured for controlling the fingerprint sensor system to obtain fingerprint sensor data in each of the n touch locations. In some such examples, the control system may be configured for controlling the fingerprint sensor system to obtain the fingerprint sensor data in each of the n touch locations after receiving touch sensor data corresponding to user touches in each of the n touch locations. In some such examples, the control system may be configured for controlling the fingerprint sensor system active area to correspond with each of the n touch locations after receiving the touch sensor data corresponding to user touches in each of the n touch locations.

[0011] Other innovative aspects of the subject matter described in this disclosure may be implemented in a method. In some examples, the method may involve controlling, via a control system of an apparatus, a touch sensor system of the apparatus to obtain touch sensor data in a touch sensor system active area of the apparatus. In some such examples, the method may involve controlling, via the control system, a fingerprint sensor system of the apparatus to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus.
In some such examples, the method may involve determining, via the control system and based on the touch sensor data, n touch locations corresponding to n last user touches. In some such examples, the method may involve controlling, via the control system, a size of the touch sensor system active area based, at least in part, on the n touch locations.

[0012] In some examples, the method may involve setting a number of last user touches to zero after an apparatus boot-up process. In some such examples, the method may involve setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero. In some such examples, the method may involve setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one. In some such examples, the method may involve setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two. In some such examples, the method may involve setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.

[0013] According to some examples, the method may involve determining a shape that encompasses the n touch locations and setting the touch sensor system active area to correspond with the shape. In some examples, the method may involve determining a shape that encompasses at least a threshold portion of the n touch locations and setting the touch sensor system active area to correspond with the shape.

[0014] In some examples, the method may involve determining a touch probability for each tile of a plurality of touch sensor tiles of the touch sensor system, to determine a plurality of touch probabilities. Each touch sensor tile of the plurality of touch sensor tiles may include one or more touch sensor pixels. The touch probability may be a probability that a next user touch will be on a particular touch sensor tile. In some such examples, the method may involve controlling the size of the touch sensor system active area based, at least in part, on the plurality of touch probabilities. In some such examples, the touch probability for each touch sensor tile may be based on a distance from each of the n touch locations to each touch sensor tile. In some such examples, the method may involve identifying touch sensor tiles having a touch probability greater than a touch probability threshold, to determine identified touch sensor tiles, finding an encompassing shape that will encompass at least a threshold percentage of identified touch sensor tiles and determining the touch sensor system active area based on the encompassing shape.

[0015] According to some examples, the method may involve controlling the fingerprint sensor system to obtain fingerprint sensor data in each of the n touch locations. In some such examples, the method may involve controlling the fingerprint sensor system to obtain the fingerprint sensor data in each of the n touch locations after receiving touch sensor data corresponding to user touches in each of the n touch locations.
In some such examples, the method may involve controlling the fingerprint sensor system active area to correspond with each of the n touch locations after receiving the touch sensor data corresponding to user touches in each of the n touch locations.

[0016] Some or all of the operations, functions and/or methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on one or more non-transitory media. Such non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc. Accordingly, some innovative aspects of the subject matter described in this disclosure can be implemented in one or more non-transitory media having software stored thereon. For example, the software may include instructions for controlling one or more devices to perform a method.

[0017] In some examples, the method may involve controlling, via a control system of an apparatus, a touch sensor system of the apparatus to obtain touch sensor data in a touch sensor system active area of the apparatus. In some such examples, the method may involve controlling, via the control system, a fingerprint sensor system of the apparatus to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus. In some such examples, the method may involve determining, via the control system and based on the touch sensor data, n touch locations corresponding to n last user touches. In some such examples, the method may involve controlling, via the control system, a size of the touch sensor system active area based, at least in part, on the n touch locations.

[0018] In some examples, the method may involve setting a number of last user touches to zero after an apparatus boot-up process. In some such examples, the method may involve setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero. In some such examples, the method may involve setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one. In some such examples, the method may involve setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two. In some such examples, the method may involve setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.

[0019] According to some examples, the method may involve determining a shape that encompasses the n touch locations and setting the touch sensor system active area to correspond with the shape. In some examples, the method may involve determining a shape that encompasses at least a threshold portion of the n touch locations and setting the touch sensor system active area to correspond with the shape.

[0020] In some examples, the method may involve determining a touch probability for each tile of a plurality of touch sensor tiles of the touch sensor system, to determine a plurality of touch probabilities.
Each touch sensor tile of the plurality of touch sensor tiles may include one or more touch sensor pixels. The touch probability may be a probability that a next user touch will be on a particular touch sensor tile. In some such examples, the method may involve controlling the size of the touch sensor system active area based, at least in part, on the plurality of touch probabilities. In some such examples, the touch probability for each touch sensor tile may be based on a distance from each of the n touch locations to each touch sensor tile. In some such examples, the method may involve identifying touch sensor tiles having a touch probability greater than a touch probability threshold, to determine identified touch sensor tiles, finding an encompassing shape that will encompass at least a threshold percentage of identified touch sensor tiles and determining the touch sensor system active area based on the encompassing shape.

[0021] According to some examples, the method may involve controlling the fingerprint sensor system to obtain fingerprint sensor data in each of the n touch locations. In some such examples, the method may involve controlling the fingerprint sensor system to obtain the fingerprint sensor data in each of the n touch locations after receiving touch sensor data corresponding to user touches in each of the n touch locations. In some such examples, the method may involve controlling the fingerprint sensor system active area to correspond with each of the n touch locations after receiving the touch sensor data corresponding to user touches in each of the n touch locations.

BRIEF DESCRIPTION OF THE DRAWINGS

[0022] Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale. Like reference numbers and designations in the various drawings indicate like elements.

[0023] Figure 1 is a block diagram that shows example components of an apparatus according to some disclosed implementations.

[0024] Figure 2A shows an example of an active touch area and an example of an active fingerprint sensor area according to one current mobile device implementation.

[0025] Figure 2B shows an example of an active touch area and an example of an active fingerprint sensor area according to one possible future mobile device implementation that is based on the same logic underlying the example of Figure 2A.

[0026] Figure 3A shows an example of an apparatus that has been powered on (also referred to herein as being booted up, or as having undergone a boot-up process), but has not yet been unlocked since being powered on.

[0027] Figure 3B shows an example of the apparatus of Figure 3A after the apparatus has been unlocked one time since being powered on.

[0028] Figure 3C shows an example of the apparatus of Figure 3B after the apparatus has been unlocked an additional time since being powered on.
[0029] Figure 3D shows an example of the apparatus of Figure 3C after the apparatus has been unlocked an additional time since being powered on.

[0030] Figure 3E shows an example of the apparatus of Figure 3D after the apparatus has been unlocked an additional time since being powered on.

[0031] Figure 3F shows an example of the apparatus of Figure 3E after the apparatus has been unlocked an additional time since being powered on.

[0032] Figure 3G shows an example of the apparatus of Figure 3F after the apparatus has been unlocked an additional time since being powered on.

[0033] Figure 3H shows an example of the apparatus of Figure 3G after the apparatus has been unlocked an additional time since being powered on.

[0034] Figure 3I shows an example of the apparatus of Figure 3H after the apparatus has been unlocked an additional time since being powered on.

[0035] Figures 4 and 5 are flow diagrams that provide examples of operations according to some disclosed methods.

DETAILED DESCRIPTION

[0036] The following description is directed to certain implementations for the purposes of describing the innovative aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein may be applied in a multitude of different ways. The described implementations may be implemented in any device, apparatus, or system that includes a biometric system as disclosed herein. In addition, it is contemplated that the described implementations may be included in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, multimedia Internet enabled cellular telephones, mobile television receivers, wireless devices, smartphones, smart cards, wearable devices such as bracelets, armbands, wristbands, rings, headbands, patches, etc., Bluetooth® devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, tablets, printers, copiers, scanners, facsimile devices, global positioning system (GPS) receivers/navigators, cameras, digital media players (such as MP3 players), camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (e.g., e-readers), mobile health devices, computer monitors, auto displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (such as the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, parking meters, packaging (such as in electromechanical systems (EMS) applications including microelectromechanical systems (MEMS) applications, as well as non-EMS applications), aesthetic structures (such as display of images on a piece of jewelry or clothing) and a variety of EMS devices. The teachings herein also may be used in applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, steering wheels or other automobile parts, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes and electronic test equipment.
Thus, the teachings are not intended to be limited to the implementations depicted solely in the Figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.

[0037] The use of fingerprint sensors for authentication is now commonplace. (As used herein, the term “finger” may refer to any digit, including a thumb. Accordingly, a thumbprint will be considered a type of “fingerprint.”) In some examples, a control system of an apparatus will obtain a target object location (e.g., a digit location) for fingerprint sensor scanning via input from a touch sensor system.

[0038] In some implementations, at least a portion of a device’s touch sensor system (e.g., a portion corresponding with the fingerprint sensor) will remain active or “on” even when the apparatus is locked and/or in a sleep state. If the fingerprint sensor area occupies a relatively small portion of the overall touch sensor system area, the power consumption caused by an “always on” touch sensor system portion can be mitigated. For example, the 4mm x 9mm or 8mm x 8mm fingerprint sensors that are currently deployed by the present assignee occupy a relatively small portion of the overall touch sensor system area of a cell phone, which generally corresponds to a display area of the cell phone. However, some large-format fingerprint sensor systems under development by the present assignee may extend underneath a substantial portion (e.g., half or more) of a cell phone display area. If a corresponding portion of the touch sensor system will remain active even when the apparatus is locked and/or in a sleep state, the power consumption caused by an “always on” touch sensor system portion will increase substantially.

[0039] Some disclosed methods involve dynamically changing the size of the touch sensor system active area. Some such methods involve dynamically changing the size of the touch sensor system active area based, at least in part, on recent touch sensor data received from a touch sensor system. In some examples, the size of the touch sensor system active area may be based, at least in part, on n touch locations corresponding to n last user touches. A maximum value for the number n may, for example, be configurable at a factory in which the apparatus is assembled, by a device vendor, and/or by an end user. According to some such examples, fingerprint sensor scanning will only be performed in a target object location.

[0040] Particular implementations of the subject matter described in this disclosure may be implemented to realize one or more of the following potential advantages. Dynamically changing the size of the touch sensor system active area can result in a substantially lower power consumption caused by an “always on” touch sensor system portion. For example, in some tests conducted by the present inventors, dynamically changing the size of the touch sensor system active area according to the five last user touch locations resulted in approximately 70% lower power consumption, as compared to an “always on” touch sensor system portion that corresponded to the entire fingerprint sensor area.

[0041] Figure 1 is a block diagram that shows example components of an apparatus according to some disclosed implementations. In this example, the apparatus 101 includes a fingerprint sensor system 102, a touch sensor system 103 and a control system 106.
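By way of a non-limiting illustration of the dynamic resizing summarized in the preceding paragraphs, the following minimal Python sketch keeps the last n touch locations, clears them at boot-up, and derives an active area from them. This is only a sketch under stated assumptions: the names (TouchGate, Rect, FULL_AREA, MARGIN_MM), the 70 mm x 120 mm panel and the padded bounding box are hypothetical, and the disclosed examples use arcuate or elliptical areas rather than rectangles.

```python
# Illustrative sketch only, not the disclosed implementation.
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float

    def area(self) -> float:
        return (self.x1 - self.x0) * (self.y1 - self.y0)

FULL_AREA = Rect(0.0, 0.0, 70.0, 120.0)  # assumed entire touch sensor area, in mm
MARGIN_MM = 10.0                          # assumed padding around recent touches

class TouchGate:
    def __init__(self, n_max: int = 5):
        # Bounded history: once n_max touches are stored, appending another
        # discards the oldest location (the sliding-window behavior).
        self.touches = deque(maxlen=n_max)

    def reset_on_boot(self) -> None:
        # The number of last user touches is set to zero after boot-up.
        self.touches.clear()

    def record_touch(self, x: float, y: float) -> None:
        self.touches.append((x, y))

    def active_area(self) -> Rect:
        # Zero recorded touches: the entire touch sensor area stays active.
        if not self.touches:
            return FULL_AREA
        xs = [p[0] for p in self.touches]
        ys = [p[1] for p in self.touches]
        # Padded bounding box around the last n touches, clipped to the panel.
        return Rect(max(FULL_AREA.x0, min(xs) - MARGIN_MM),
                    max(FULL_AREA.y0, min(ys) - MARGIN_MM),
                    min(FULL_AREA.x1, max(xs) + MARGIN_MM),
                    min(FULL_AREA.y1, max(ys) + MARGIN_MM))

gate = TouchGate()
gate.reset_on_boot()
for touch in [(35.0, 100.0), (33.0, 98.0), (36.0, 103.0)]:
    gate.record_touch(*touch)
    area = gate.active_area()
    print(area, f"{area.area() / FULL_AREA.area():.1%} of the panel")
```

A bounded deque is a natural fit for this kind of history because appending the (n+1)th touch automatically discards the oldest location, which mirrors the sliding-window behavior described below for Figures 3G-3I.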
Some implementations may include an interface system 104, a memory system 108 and/or a display system 110.

[0042] According to some examples, the fingerprint sensor system 102 may be, or may include, an ultrasonic fingerprint sensor. Alternatively, or additionally, in some implementations the fingerprint sensor system 102 may be, or may include, an optical fingerprint sensor. In some examples, an ultrasonic version of the fingerprint sensor system 102 may include an ultrasonic receiver and a separate ultrasonic transmitter. In some such examples, the ultrasonic transmitter may include an ultrasonic plane-wave generator. However, various examples of ultrasonic fingerprint sensors are disclosed herein, some of which may include a separate ultrasonic transmitter and some of which may not. For example, in some implementations, the fingerprint sensor system 102 may include a piezoelectric receiver layer, such as a layer of polyvinylidene fluoride (PVDF) polymer or a layer of polyvinylidene fluoride-trifluoroethylene (PVDF-TrFE) copolymer. In some implementations, a separate piezoelectric layer may serve as the ultrasonic transmitter. In some implementations, a single piezoelectric layer may serve as both a transmitter and a receiver. The fingerprint sensor system 102 may, in some examples, include an array of ultrasonic transducer elements, such as an array of piezoelectric micromachined ultrasonic transducers (PMUTs), an array of capacitive micromachined ultrasonic transducers (CMUTs), etc. In some such examples, PMUT elements in a single-layer array of PMUTs or CMUT elements in a single-layer array of CMUTs may be used as ultrasonic transmitters as well as ultrasonic receivers.

[0043] Data received from the fingerprint sensor system 102 may sometimes be referred to herein as “fingerprint sensor data,” “fingerprint image data,” etc., although the data will generally be received from the fingerprint sensor system in the form of electrical signals. Accordingly, without additional processing such image data would not necessarily be perceivable by a human being as an image.

[0044] The touch sensor system 103 may be, or may include, a resistive touch sensor system, a surface capacitive touch sensor system, a projected capacitive touch sensor system, a surface acoustic wave touch sensor system, an infrared touch sensor system, or any other suitable type of touch sensor system. In some implementations that include a display system 110, the area of the touch sensor system 103 may extend over most or all of a display portion of the display system 110.

[0045] The control system 106 may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof. According to some examples, the control system 106 also may include one or more memory devices, such as one or more random access memory (RAM) devices, read-only memory (ROM) devices, etc. In this example, the control system 106 is configured for communication with, and for controlling, the fingerprint sensor system 102 and the touch sensor system 103. According to some examples, the control system 106 may include a dedicated component for controlling the fingerprint sensor system 102 and/or a dedicated component for controlling the touch sensor system 103.
If the apparatus includes a display system 110, the control system 106 may be configured for communication with, and for controlling, the display system 110. If the apparatus includes a memory system 108 that is separate from the control system 106, the control system 106 also may be configured for communication with the memory system 108. In some implementations, functionality of the control system 106 may be partitioned between one or more controllers or processors, such as between a dedicated sensor controller and an applications processor of a mobile device.

[0046] In some examples, the memory system 108 may include one or more memory devices, such as one or more RAM devices, ROM devices, etc. In some implementations, the memory system 108 may include one or more computer-readable media and/or storage media. Computer-readable media include both computer storage media and communication media including any medium that may be enabled to transfer a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. In some examples, the memory system 108 may include one or more non-transitory media. By way of example, and not limitation, non-transitory media may include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disc ROM (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer.

[0047] Some implementations of the apparatus 101 may include an interface system 104. In some examples, the interface system 104 may include a wireless interface system. In some implementations, the interface system 104 may include a user interface system, one or more network interfaces, one or more interfaces between the control system 106 and the fingerprint sensor system 102, one or more interfaces between the control system 106 and the touch sensor system 103, one or more interfaces between the control system 106 and the memory system 108, one or more interfaces between the control system 106 and the display system 110, and/or one or more interfaces between the control system 106 and one or more external device interfaces (e.g., ports or applications processors).

[0048] The interface system 104 may be configured to provide communication (which may include wired or wireless communication, electrical communication, radio communication, etc.) between components of the apparatus 101. In some such examples, the interface system 104 may be configured to provide communication between the control system 106 and the fingerprint sensor system 102. According to some such examples, the interface system 104 may couple at least a portion of the control system 106 to the fingerprint sensor system 102 and the interface system 104 may couple at least a portion of the control system 106 to the touch sensor system 103, e.g., via electrically conducting material (e.g., via conductive metal wires or traces). According to some examples, the interface system 104 may be configured to provide communication between the apparatus 101 and other devices and/or human beings. In some such examples, the interface system 104 may include one or more user interfaces.
The interface system 104 may, in some examples, include one or more network interfaces and/or one or more external device interfaces (such as one or more universal serial bus (USB) interfaces or a serial peripheral interface (SPI)).

[0049] In some implementations, the apparatus 101 includes a display system 110. In some such examples, the display system 110 may include layers, which may be referred to collectively as a “display stack.” In some examples, the display system 110 may be, or may include, a light-emitting diode (LED) display, such as an organic light-emitting diode (OLED) display.

[0050] The apparatus 101 may be used in a variety of different contexts, some examples of which are disclosed herein. For example, in some implementations a mobile device may include at least a portion of the apparatus 101. In some implementations, a wearable device may include at least a portion of the apparatus 101. The wearable device may, for example, be a bracelet, an armband, a wristband, a ring, a headband or a patch. In some implementations, the control system 106 may reside in more than one device. For example, a portion of the control system 106 may reside in a wearable device and another portion of the control system 106 may reside in another device, such as a mobile device (e.g., a smartphone). The interface system 104 also may, in some such examples, reside in more than one device.

[0051] Figure 2A shows an example of an active touch area and an example of an active fingerprint sensor area according to one current mobile device implementation. In this example, the apparatus is locked and in a sleep state. The apparatus 200a has a fingerprint sensor area 210a that occupies a relatively small portion of the overall touch sensor system area, which occupies most of the area of the display 220a in this example. In some examples, the fingerprint sensor area 210a may be 4mm x 9mm or 8mm x 8mm, whereas the area of the display 220a may be approximately 7 cm x 12 cm or larger. According to this implementation, the active touch area 205a during a locked and/or sleep state corresponds with, and is slightly larger than, the fingerprint sensor area 210a. Because the fingerprint sensor area 210a occupies a relatively small portion of the overall touch sensor system area, the power consumption caused by the active touch area 205a during the locked and sleep state may be a relatively small fraction of the power consumption caused by the touch sensor system during an unlocked and awake mode.

[0052] Figure 2B shows an example of an active touch area and an example of an active fingerprint sensor area according to one possible future mobile device implementation that is based on the same logic underlying the example of Figure 2A. According to this example, the apparatus 200b has been in use and, while still powered up, has reverted to a locked state and sleep state due to lack of activity within a threshold time interval. In this instance, the apparatus 200b has a fingerprint sensor area 210b that occupies most of the overall touch sensor system area. In this implementation, the active touch area 205b during the locked and sleep state corresponds with, and is slightly larger than, the fingerprint sensor area 210b.
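To make the contrast concrete, consider a back-of-the-envelope estimate; the figures below simply reuse the approximate dimensions given above and are not measurements from any particular device. An always-on region matching an 8 mm x 8 mm sensor covers roughly

(8 mm × 8 mm) / (70 mm × 120 mm) = 64 mm² / 8400 mm² ≈ 0.8%

of the touch sensor area, whereas a large-format sensor extending under half of the display would keep roughly 50% of the touch sensor circuitry active in the locked and sleep state.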
Because the fingerprint sensor area 210b occupies most of the overall touch sensor system area, the power consumption caused by the active touch area 205b during the locked and sleep mode is equal to, or nearly equal to, the power consumption caused by the touch sensor system during an unlocked and awake mode.

[0053] In order to avoid the high power consumption caused by the touch sensor system in implementations such as that of Figure 2B, some disclosed methods involve dynamically changing the size of the touch sensor system active area based, at least in part, on touch locations corresponding to the last n user touches. In some examples, the number n may be reset to zero as part of an apparatus logoff/power down process or as part of an apparatus boot-up process. A maximum value for the number n may, for example, be configurable at a factory in which the apparatus is assembled, by a device vendor, and/or by an end user.

[0054] Figures 3A-3I show example states of an apparatus that is configured to avoid the high power consumption caused by the touch sensor system in implementations such as that of Figure 2B. As with other disclosed implementations, the types, numbers and arrangements of elements, as well as the dimensions of elements, that are shown in Figures 3A-3I are merely examples.

[0055] Figure 3A shows an example of an apparatus that has been powered on (also referred to herein as being booted up, or as having undergone a boot-up process), but has not yet been unlocked since being powered on. According to this example, the apparatus 101 includes instances of the fingerprint sensor system 102, the touch sensor system 103, the control system 106 and the display system 110 that are described above with reference to Figure 1. In this example, the relative proportions of the touch sensor system active area 305a and the fingerprint sensor system active area 310a are similar to those of the active touch area 205b and the fingerprint sensor area 210b that are shown in Figure 2B. However, unlike the control system of the apparatus 200b, the control system 106 of the apparatus 101 is configured to implement some or all of the methods disclosed herein. In the example shown in Figure 3A, the control system 106 is configured for setting the touch sensor system active area 305a to the entire touch sensor area (e.g., to all sensor cells or sensor pixels of the touch sensor system 103) upon determining that the number of last user touches since the most recent boot-up process is zero.

[0056] Figure 3B shows an example of the apparatus of Figure 3A after the apparatus has been unlocked one time since being powered on.
According to this example, the apparatus 101 has been unlocked after the successful completion of a fingerprint authentication process that involved obtaining fingerprint sensor data from the touch location 315a, which is also referred to in Figure 3B as a “current FP (fingerprint) touch area.” The successful fingerprint authentication process may, for example, have involved extracting fingerprint features (such as fingerprint minutiae, keypoints and/or sweat pores) from the currently-obtained fingerprint sensor data from the touch location 315a and comparing the extracted fingerprint features with fingerprint features that were previously obtained during an enrollment process.

[0057] In this example, the control system 106 determined the touch location 315a according to signals from the touch sensor system 103 that resulted from a corresponding user touch, then activated at least a portion of the fingerprint sensor system 102 to obtain fingerprint sensor data from the touch location 315a for the fingerprint authentication process.

[0058] According to the example shown in Figure 3B, the control system 106 is configured for setting the touch sensor system active area 305b to a touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches (in this example, the number of user touches since the time of the last boot-up process) is one. In this example, the touch sensor system active area 305b encompasses the touch location 315a and has an arcuate outline, except for the boundaries of the touch sensor system active area 305b that correspond with the edges of the entire touch sensor area. However, the specific shape and dimensions of the touch sensor system active area 305b that are shown in Figure 3B, the specific shapes and dimensions of the touch sensor system active areas 305c-305i that are shown in Figures 3C-3I, and the underlying methods used to produce the shapes and dimensions of the touch sensor system active areas 305b-305i, are merely provided by way of example. Various examples of determining the shapes and dimensions of touch sensor system active areas are provided in the present disclosure.

[0059] Figure 3C shows an example of the apparatus of Figure 3B after the apparatus has been unlocked an additional time since being powered on. According to this example, the apparatus 101 has been unlocked after the successful completion of a second fingerprint authentication process since being powered on. Here, the second fingerprint authentication process involved obtaining fingerprint sensor data from the touch location 315b. The former touch location 320a is the penultimate touch location, corresponding to the touch location 315a of Figure 3B. According to the example shown in Figure 3C, the control system 106 is configured for setting the touch sensor system active area 305c to an area that is smaller than the touch sensor system active area 305b upon determining that the number n of last user touches (in this example, the number of user touches since the time of the last boot-up process) is two. In this example, the control system 106 is configured for setting the touch sensor system active area 305c to the particular size and shape indicated in Figure 3C based on the last two user touch locations that are indicated in Figure 3C.
According to this example, the touch sensor system active area 305c encompasses the touch location 315b and the former touch location 320a, and has an elliptical shape.

[0060] Figure 3D shows an example of the apparatus of Figure 3C after the apparatus has been unlocked an additional time since being powered on. According to this example, the apparatus 101 has been unlocked after the successful completion of a third fingerprint authentication process since being powered on. Here, the third fingerprint authentication process involved obtaining fingerprint sensor data from the touch location 315c. The former touch location 320b is the penultimate touch location, corresponding to the touch location 315b of Figure 3C. According to the example shown in Figure 3D, the control system 106 is configured for setting the touch sensor system active area 305d to an area that is smaller than the touch sensor system active area 305c upon determining that the number n of last user touches (in this example, the number of user touches since the time of the last boot-up process) is three. In this example, the control system 106 is configured for setting the touch sensor system active area 305d to the particular size and shape indicated in Figure 3D based on the last three user touch locations that are indicated in Figure 3D: according to this example, the touch sensor system active area 305d encompasses the touch location 315c and the former touch locations 320a and 320b, and has an elliptical shape.

[0061] Figure 3E shows an example of the apparatus of Figure 3D after the apparatus has been unlocked an additional time since being powered on. According to this example, the apparatus 101 has been unlocked after the successful completion of a fourth fingerprint authentication process since being powered on. Here, the fourth fingerprint authentication process involved obtaining fingerprint sensor data from the touch location 315d. The former touch location 320c is the penultimate touch location, corresponding to the touch location 315c of Figure 3D. In the example shown in Figure 3E, the control system 106 is configured for setting the touch sensor system active area 305e to an area that is smaller than the touch sensor system active area 305d upon determining that the number n of last user touches (in this example, the number of user touches since the time of the last boot-up process) is four. In this example, the control system 106 is configured for setting the touch sensor system active area 305e to the particular size and shape indicated in Figure 3E based on the last four user touch locations since boot-up that are indicated in Figure 3E: according to this example, the touch sensor system active area 305e encompasses the touch location 315d and the former touch locations 320a-320c, and has an elliptical shape.

[0062] Figure 3F shows an example of the apparatus of Figure 3E after the apparatus has been unlocked an additional time since being powered on. According to this example, the apparatus 101 has been unlocked after the successful completion of a fifth fingerprint authentication process since being powered on. Here, the fifth fingerprint authentication process involved obtaining fingerprint sensor data from the touch location 315e. The former touch location 320d is the penultimate touch location, corresponding to the touch location 315d of Figure 3E.
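Although the disclosure does not specify how the elliptical areas of Figures 3C-3F are computed, one simple construction consistent with the described behavior is sketched below in Python: center an axis-aligned ellipse on the centroid of the last n touch locations and scale its semi-axes just enough to cover every location. The scaling factor, the minimum semi-axis and the function names are assumptions for illustration only.

```python
# Illustrative construction of a covering ellipse; not the disclosed method.
import math

MIN_SEMI_AXIS_MM = 6.0  # assumed floor, so the area never shrinks below a fingertip

def covering_ellipse(points):
    """Return (cx, cy, a, b) for an axis-aligned ellipse covering all points."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dx = max(abs(x - cx) for x, _ in points)
    dy = max(abs(y - cy) for _, y in points)
    # Scaling the max deviations by sqrt(2) guarantees coverage, since for
    # every point (dx_i/a)^2 + (dy_i/b)^2 <= 1/2 + 1/2 = 1.
    a = max(dx * math.sqrt(2.0), MIN_SEMI_AXIS_MM)
    b = max(dy * math.sqrt(2.0), MIN_SEMI_AXIS_MM)
    return cx, cy, a, b

def contains(ellipse, x, y):
    cx, cy, a, b = ellipse
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0

touches = [(30.0, 95.0), (34.0, 101.0), (31.0, 99.0)]  # hypothetical locations, mm
e = covering_ellipse(touches)
assert all(contains(e, x, y) for x, y in touches)
print(e)
```

Note that a fit of this kind naturally reproduces the behavior described for Figures 3F-3I: once an old touch drops out of the window, the recomputed ellipse shifts and may cover only part of a former touch location.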
In the example shown in Figure 3F, the control system 106 is configured for setting the touch sensor system active area 305f to an area that is smaller than the touch sensor system active area 305e upon determining that the number n of last user touches (in this example, the number of user touches since the time of the last boot-up process) is five. In this example, the control system 106 is configured for setting the touch sensor system active area 305f to the particular size and shape indicated in Figure 3F based on the last five user touch locations since boot-up that are indicated in Figure 3F: according to this example, the touch sensor system active area 305f has an elliptical shape and encompasses the touch location 315e, the former touch locations 320a, 320b and 320d, and part but not all of the former touch location 320c.

[0063] Figure 3G shows an example of the apparatus of Figure 3F after the apparatus has been unlocked an additional time since being powered on. According to this example, the apparatus 101 has been unlocked after the successful completion of a sixth fingerprint authentication process since being powered on. Here, the sixth fingerprint authentication process involved obtaining fingerprint sensor data from the touch location 315f. The former touch location 320e is the penultimate touch location, corresponding to the touch location 315e of Figure 3F.

[0064] According to this example, the control system 106 is configured for determining that the number of last user touches (in this example, the number of user touches since the time of the last boot-up process) has exceeded a determined maximum value for the number n, which is five in this example. Therefore, according to this example, the touch sensor system active area 305g is no longer based in part on the former touch location 320a, but instead is based only upon the last five touch locations: these include the former touch locations 320b-320e and the touch location 315f. In the example shown in Figure 3G, the control system 106 is configured for setting the touch sensor system active area 305g to an area that is approximately the same size as, but a different shape than, the touch sensor system active area 305f due to the different last five touch locations at the times corresponding to Figures 3F and 3G. In this example, the touch sensor system active area 305g has an elliptical shape and is based at least in part on the last five touch locations: the touch sensor system active area 305g encompasses most but not all of the touch location 315f, the former touch locations 320d and 320e, and part but not all of the former touch locations 320b and 320c.

[0065] As noted elsewhere herein, the maximum value for the number n may be determined at different times, by different people and/or entities, depending on the particular implementation (e.g., at the factory, at a warehouse, at a retail location, at an end user location, etc.). Moreover, the maximum value for the number n may differ according to the particular implementation. In some alternative implementations the maximum value for the number n may be less than 5 (e.g., 2, 3 or 4), whereas in other implementations the maximum value for the number n may be more than 5 (e.g., 6, 7, 8, 9 or 10).

[0066] Figure 3H shows an example of the apparatus of Figure 3G after the apparatus has been unlocked an additional time since being powered on.
According to this example, the apparatus 101 has been unlocked after the successful completion of a seventh fingerprint authentication process since being powered on. Here, the seventh fingerprint authentication process involved obtaining fingerprint sensor data from the touch location 315g. The former touch location 320f is the penultimate touch location, corresponding to the touch location 315f of Figure 3G.

[0067] According to this example, the control system 106 is configured for determining that the number of last user touches (in this example, the number of user touches since the time of the last boot-up process) has once again exceeded a maximum value for n, which is five in this example. Therefore, according to this example, the touch sensor system active area 305h is no longer based in part on the former touch locations 320a or 320b, but instead is based only upon the last five touch locations: these include the former touch locations 320c-320f and the touch location 315g. In the example shown in Figure 3H, the control system 106 is configured for setting the touch sensor system active area 305h to an area that is somewhat smaller than, and a different shape than, the touch sensor system active area 305g due to the different last five touch locations at the time corresponding to Figure 3H, as compared to the last five touch locations at the time corresponding to Figure 3G. In this example, the touch sensor system active area 305h has an elliptical shape and encompasses most but not all of the former touch locations 320c-320f and all of the touch location 315g.

[0068] Figure 3I shows an example of the apparatus of Figure 3H after the apparatus has been unlocked an additional time since being powered on. According to this example, the apparatus 101 has been unlocked after the successful completion of an eighth fingerprint authentication process since being powered on. Here, the eighth fingerprint authentication process involved obtaining fingerprint sensor data from the touch location 315h. The former touch location 320g is the penultimate touch location, corresponding to the touch location 315g of Figure 3H.

[0069] According to this example, the control system 106 is configured for determining that the number of last user touches (in this example, the number of user touches since the time of the last boot-up process) has once again exceeded a maximum value for n, which is five in this example. Therefore, according to this example, the touch sensor system active area 305i is no longer based in part on the former touch locations 320a-320c, but instead is based only upon the last five touch locations: these include the former touch locations 320d-320g and the touch location 315h. In the example shown in Figure 3I, the control system 106 is configured for setting the touch sensor system active area 305i to an area that is slightly smaller than, and a different shape than, the touch sensor system active area 305h due to the different last five touch locations at the time corresponding to Figure 3I, as compared to the last five touch locations at the time corresponding to Figure 3H. In this example, the touch sensor system active area 305i has an elliptical shape and encompasses the former touch locations 320e and 320g, most but not all of the former touch locations 320d and 320f, and most of the touch location 315h.

[0070] Figures 4 and 5 are flow diagrams that provide examples of operations according to some disclosed methods.
The blocks of Figures 4 and 5 may, for example, be performed by the apparatus 101 of Figure 1 or by a similar apparatus. As with other methods disclosed herein, the methods outlined in Figures 4 and 5 may include more or fewer blocks than indicated. Moreover, the blocks of methods disclosed herein are not necessarily performed in the order indicated. In some implementations, one or more blocks may be performed concurrently.

[0071] Referring first to Figure 4, in this example block 405 involves controlling, via a control system of an apparatus, a touch sensor system of the apparatus to obtain touch sensor data in a touch sensor system active area of the apparatus. For example, block 405 may involve the control system 106 of Figure 1 controlling the touch sensor system 103 to obtain touch sensor data in a touch sensor system active area of the apparatus 101.

[0072] In some instances (e.g., as described above with reference to Figures 3A-3I), the apparatus may be in a locked mode and/or a sleep mode at the time of receiving the touch sensor data obtained in block 405. According to some such examples, the control system may be configured to activate one or more components of the apparatus 101, such as the fingerprint sensor system 102, in response to the touch sensor data obtained in block 405.

[0073] In some examples, block 405 (or another aspect of method 400) may involve determining touch location data that corresponds with the touch sensor data obtained in block 405. In some instances, the touch location may correspond with a contact area of a target object, such as a digit, that is in contact with the touch sensor system active area. The touch location data may, in some examples, include one or more x,y coordinates of a touch sensor coordinate system. For example, the touch location data may include a plurality of coordinate pairs that define a contact area of a target object, a single coordinate pair that defines a centroid of the contact area, etc. Alternatively, or additionally, the touch location data may include one or more touch sensor pixel locations that correspond with a contact area of a target object, such as a plurality of touch sensor pixel locations that define a contact area of a target object, a single touch sensor pixel location that defines a centroid of the contact area, etc. Alternatively, or additionally, the touch location data may include one or more touch sensor tile locations that correspond with a contact area of a target object, such as a plurality of touch sensor tile locations that define a contact area of a target object, a single touch sensor tile location that defines a centroid of the contact area, etc. As described in more detail below, each touch sensor tile may include one or more touch sensor pixels. According to some such examples, block 405 (or another aspect of method 400) may involve storing the touch location data in a memory, such as a memory of the control system or a separate memory, such as a memory device of the memory system 108 of Figure 1.

[0074] In this example, block 410 involves controlling, via the control system, a fingerprint sensor system of the apparatus to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus. For example, block 410 may involve the control system 106 of Figure 1 controlling the fingerprint sensor system 102 to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus 101.
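As a hedged illustration of the touch location bookkeeping described for block 405 above, the sketch below reduces a contact area (a set of touched sensor pixels) to a single centroid coordinate pair and maps pixels to touch sensor tiles. The pixel pitch, tile size and helper names are assumptions, not values from this disclosure.

```python
# Illustrative sketch of touch-location representations; all constants assumed.
PIXEL_PITCH_MM = 0.07   # assumed touch sensor pixel pitch
TILE_SIZE_PX = 32       # assumed tile edge length, in pixels

def contact_centroid(pixels):
    """Centroid of a contact area given (row, col) pixel indices, in mm (x, y)."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (sum(cols) / len(cols) * PIXEL_PITCH_MM,
            sum(rows) / len(rows) * PIXEL_PITCH_MM)

def tile_of(pixel):
    """Tile index (tile_row, tile_col) containing a given touch sensor pixel."""
    r, c = pixel
    return (r // TILE_SIZE_PX, c // TILE_SIZE_PX)

contact = [(100, 200), (100, 201), (101, 200), (101, 201), (102, 201)]
print(contact_centroid(contact))      # approximately (14.04, 7.06) mm
print({tile_of(p) for p in contact})  # the set of tiles the contact touches
```

Either representation (a centroid pair, the raw pixel set, or the set of tile indices) could be what gets stored in memory for later retrieval in block 415; the disclosure allows any of the three.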
In some implementations, the control system may be configured to use touch sensor data obtained in block 405 to determine the fingerprint sensor system active area. However, in some examples the fingerprint sensor system active area may include the entire fingerprint sensor system area.[0075] According to some implementations, the fingerprint sensor data may be used for an authentication process. In some such examples, the control system may be configured to unlock the apparatus 101 if the authentication process concludes successfully.[0076] According to this example, block 415 involves determining, via the control system and based on the touch sensor data, n touch locations corresponding to n last user touches. As described elsewhere herein, in some examples of method 400, a control system may be configured to set the value of n to zero as part of, or after, a power-down or boot-up process.[0077] In some implementations in which n > 1, block 415 may involve retrieving touch location data corresponding to former touch locations from a memory. For example, referring to Figure 3D, n equals three. In this example, the last three touch locations since a boot-up process included the current touch location 315c and the two former touch locations 320a and 320b. In some such implementations, block 415 may involve retrieving touch location data corresponding to former touch locations 320a and 320b from a memory.[0078] In this example, block 420 involves controlling, via the control system, a size of the touch sensor system active area based, at least in part, on the n touch locations. According to some examples, block 420 (or another aspect of method 400) may involve controlling, via the control system, a shape of the touch sensor system active area based, at least in part, on the n touch locations. For example, referring again to Figure 3D, the last three touch locations since a boot-up process included the current touch location 315c and the two former touch locations 320a and 320b. In this example, the size and shape of the touch sensor system active area 305d are based, at least in part, on the current touch location 315c and the two former touch locations 320a and 320b. In some examples the shape of the touch sensor system active area may be arcuate or elliptical, as shown in Figures 3B-3I. In other examples, the shape of the touch sensor system active area may be square or rectangular, or may correspond to another geometric shape. In some examples the shape of the touch sensor system active area may correspond to an outline of touch sensor tiles that are included in the touch sensor system active area.[0079] According to some implementations, method 400 may involve imposing a maximum value for n, which may vary according to the particular implementation. In the examples described above with reference to Figures 3A-3I, the maximum value for n was set to five. In the example described above with reference to Figure 3G, when the control system determined that the number of touch locations since the last boot-up process had exceeded five, only the last five touch locations were used to determine the size and shape of the touch sensor system active area 305g.[0080] According to some examples, block 420 may involve controlling the size of the touch sensor system active area to be the entire touch sensor area upon determining that the number of last user touches since a boot-up process is zero. Figure 3A and the corresponding description provide one such example.
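Purely by way of illustration, the bookkeeping of blocks 405, 415 and 420 might be implemented along the following lines. This is a minimal sketch; the class names, field names and the choice of a centroid representation are assumptions of the sketch, not requirements of this disclosure.

from collections import deque
from dataclasses import dataclass

@dataclass
class TouchLocation:
    """One user touch, reduced to the centroid of its contact area."""
    x: float  # x coordinate in the touch sensor coordinate system
    y: float  # y coordinate in the touch sensor coordinate system

class TouchHistory:
    """Stores the last n touch locations used by blocks 415 and 420."""

    def __init__(self, max_n: int = 5):
        # A deque with maxlen discards the oldest touch automatically once
        # max_n touches have accumulated. The history starts empty (n = 0),
        # as after a boot-up process.
        self.touches: deque[TouchLocation] = deque(maxlen=max_n)

    def record(self, contact_area: list[tuple[float, float]]) -> TouchLocation:
        # Reduce a contact area (a plurality of x,y coordinate pairs) to a
        # single coordinate pair defining its centroid, one of the
        # representations described in block 405.
        cx = sum(x for x, _ in contact_area) / len(contact_area)
        cy = sum(y for _, y in contact_area) / len(contact_area)
        loc = TouchLocation(cx, cy)
        self.touches.append(loc)
        return loc

Because the deque is bounded, recording a sixth touch silently discards the oldest one, which mirrors the maximum-n behavior described above with reference to Figures 3G-3I.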
[0081] In some examples, block 420 may involve controlling the size of the touch sensor system active area to be a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches since a boot-up process is one. Figure 3B and the corresponding description provide one such example.[0082] In some examples, block 420 may involve controlling the size of the touch sensor system active area to be a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches since a boot-up process is two. Figure 3C and the corresponding description provide one such example. [0083] In some examples, block 420 may involve controlling the size of the touch sensor system active area to be a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches since a boot-up process is three. Figure 3D and the corresponding description provide one such example.[0084] According to some implementations, method 400 may involve controlling the fingerprint sensor system to obtain fingerprint sensor data in each of the n touch locations. In some such examples, method 400 may involve controlling the fingerprint sensor system to obtain the fingerprint sensor data in each of the n touch locations after receiving touch sensor data corresponding to user touches in each of the n touch locations. According to some implementations, method 400 may involve controlling the fingerprint sensor system active area to correspond with each of the n touch locations after receiving the touch sensor data corresponding to user touches in each of the n touch locations. In other words, in some examples the control system 106 may be configured to obtain fingerprint sensor data only in fingerprint sensor areas that correspond with the n touch locations, without activating all sensor pixels of the fingerprint sensor system 102.[0085] In some such implementations, method 400 may involve performing n authentication processes based on fingerprint sensor data obtained in each of the n touch locations. According to some such examples, the n touch locations only correspond to instances during which a person is placing a digit (or other target object) on the apparatus 101 in order to initiate a fingerprint-based authentication process, e.g., in an attempt to unlock the apparatus 101 after the apparatus 101 has reverted to a sleep/locked mode.[0086] According to some implementations, method 400 may involve determining a shape that encompasses the n touch locations and setting the touch sensor system active area to correspond with the shape. Figures 3B-3E and the corresponding descriptions above provide relevant examples.[0087] In some implementations, method 400 may involve determining a shape that encompasses at least a threshold portion of the n touch locations and setting the touch sensor system active area to correspond with the shape. Figures 3F-3I and the corresponding descriptions above provide relevant examples. In some such examples, the control system may be configured to include at least a threshold percentage of each of the last n touch locations, e.g., at least 50%, at least 60%, at least 70%, at least 80%, at least 90%, etc. In some alternative examples, the threshold may differ according to whether the touch location is the current touch location or former touch location. In some such examples, the control system may be configured to include at least a first threshold percentage of the current touch location (e.g., at least 70%, at least 80%, at least 90%, etc.) and at least a second threshold percentage of each of the last n-1 former touch locations (e.g., at least 50%, at least 60%, at least 70%, etc.). According to some examples, the threshold percentage may be higher for relatively more recent former touch locations and relatively lower for less recent former touch locations.
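As one concrete, purely illustrative construction for such a shape, the sketch below grows a circular active area, centered on the mean of the touch centroids, until each of the last n touches has at least its required fraction of contact points inside the area. The disclosure does not prescribe this construction; an elliptical or tile-outline shape could be produced analogously.

import math

def covering_circle(touches, thresholds):
    # touches:    the last n contact areas, oldest first, each a list of
    #             (x, y) points
    # thresholds: required coverage fraction per touch, e.g. a higher value
    #             for the current touch than for former touches
    # Returns ((cx, cy), r): a circular active area encompassing at least
    # the threshold portion of each touch location.
    centroids = [(sum(x for x, _ in t) / len(t),
                  sum(y for _, y in t) / len(t)) for t in touches]
    cx = sum(x for x, _ in centroids) / len(centroids)
    cy = sum(y for _, y in centroids) / len(centroids)

    def coverage(points, r):
        inside = sum(1 for x, y in points
                     if math.hypot(x - cx, y - cy) <= r)
        return inside / len(points)

    # Grow the radius until every touch meets its coverage threshold.
    r = 1.0
    while any(coverage(t, r) < need for t, need in zip(touches, thresholds)):
        r *= 1.1
    return (cx, cy), r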
[0088] Referring now to Figure 5, in this example block 505 involves determining, via a control system of an apparatus that includes a touch sensor system, n touch locations corresponding to n last user touches. As described elsewhere herein, in some examples method 500 may involve setting the value of n to zero as part of, or after, a power-down or boot-up process. In some examples, method 500 may involve setting a maximum value for n, e.g., as described above. In some examples, block 505 may correspond with block 415 of method 400. According to some such examples, blocks 510 and 515 may be regarded as specific examples of block 420. However, in some examples the method 500 is not linked to method 400. For example, in some examples the method 500 may not include block 410 or a comparable process.[0089] According to this example, block 510 involves determining, via the control system, a touch probability for each tile of a plurality of touch sensor tiles of the touch sensor system, to determine a plurality of touch probabilities. In this example, each touch sensor tile of the plurality of touch sensor tiles includes one or more touch sensor pixels. According to this example, the touch probability is a probability that a next touch of the user will be in an area that includes a particular touch sensor tile. Greater computational efficiency may be obtained in some implementations by grouping multiple touch sensor pixels into one touch sensor tile, e.g., by grouping multiple touch sensor pixels into one square touch sensor tile having 4, 9, 16, 25, 36, 49 or 64 touch sensor pixels. Other implementations may group multiple touch sensor pixels into different shapes of touch sensor tiles, e.g., non-square rectangles or other geometric shapes.[0090] In some examples, the touch probability for each touch sensor tile may be based on a distance from each of the n touch locations to each touch sensor tile. According to some examples, the distances may be calculated from single points that represent the locations of each touch sensor tile (e.g., a centroid of each touch sensor tile) to single points that represent the locations of each of the n touch locations (e.g., a centroid of each of the n touch locations). For example, the distances between a single point that represents the location of a single touch sensor tile and single points that represent the locations of each of the n touch locations may be calculated and averaged. The same process may be performed for all other touch sensor tiles. Some such examples may involve determining the touch probability for each touch sensor tile by implementing, via the control system, a weighted random map method. According to some such examples, a weight may be assigned to each touch sensor tile. The weight may, for example, be inversely proportional to the distance from the touch sensor tile to each of the last n touch locations, e.g., inversely proportional to the average distance from the touch sensor tile to each of the last n touch locations.
Because the weight for each touch sensor tile is inversely proportional to the average distance from the touch sensor tile to each of the last n touch locations in this example, the greater the average distance the lower the weight. In some examples, the weights may be normalized in order to range from zero to one, so as to correspond with a range of probabilities ranging from a minimum of zero to a maximum of one. According to some such examples, the touch probability for each touch sensor tile may correspond to a normalized weight value.[0091] In this example, block 515 involves controlling, via the control system, a size of a touch sensor system active area based, at least in part, on the plurality of touch probabilities. According to some such examples, block 515 may involve identifying touch sensor tiles having a touch probability greater than a touch probability threshold (e.g., 50%/.5, 60%/.6, 70%/.7, 80%/.8, 90%/.9, etc.), to determine identified touch sensor tiles. In some examples, block 515 may involve finding an encompassing shape that will encompass at least a threshold percentage (e.g., 50%, 60%, 70%, 80%, 90%, etc.) of identified touch sensor tiles. According to some such examples, block 515 may involve determining the touch sensor system active area based on the encompassing shape.[0092] According to some examples, block 515 may be based, at least in part, on the touch sensor system active area being a target percentage of the entire touch sensor system area. In some such examples, the target percentage may vary according to the number n of last user touches since a boot-up process. Table 1, below, shows one such example.

Table 1
n (touches since boot-up)    Active touch sensor area    Threshold probability
0                            100% (entire area)          0
1                            90%                         0.1
2                            80%                         0.2
...                          ...                         ...
5 (maximum n)                30%                         0.8

[0093] As with other disclosed implementations, the particular values shown in Table 1 are provided merely by way of example. Other implementations may involve other percentages and/or other probabilities. In the example shown in Table 1, if there have been no user touches since the last boot-up process (n = 0), the entire touch sensor area is active. In this example, the threshold probability corresponding to n = 0 is zero, which in this context means that all touch sensor areas (e.g., touch sensor pixels or touch sensor tiles) for which a probability of receiving the next user touch is greater than zero are activated.[0094] According to some examples, if there has been one user touch since the last boot-up process (n = 1), 90% of the entire touch sensor area may be active. In this example, the threshold probability corresponding to n = 1 is 0.1, which in this context means that all touch sensor areas for which a probability of receiving the next user touch is greater than 0.1 may be activated. In some implementations, either the percentage or the probability corresponding to n = 1 may be selected. For example, the percentage or the probability corresponding to n = 1 that results in the larger touch sensor active area may be selected.[0095] In some examples, if there have been two user touches since the last boot-up process (n = 2), 80% of the entire touch sensor area may be active. In this example, the threshold probability corresponding to n = 2 is 0.2, which in this context means that all touch sensor areas for which a probability of receiving the next user touch is greater than 0.2 may be activated. In some implementations, either the percentage or the probability corresponding to n = 2 may be selected. For example, the percentage or the probability corresponding to n = 2 that results in the larger touch sensor active area may be selected.
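A minimal sketch of blocks 510 and 515, combining the inverse-average-distance weighting described above with the two selection criteria of Table 1, might look as follows. The function names and the min-max normalization are assumptions of the sketch.

import math

def tile_probabilities(tile_centroids, touch_centroids):
    # Weighted-map sketch for block 510: weight each tile by the inverse of
    # its average distance to the last n touch locations (greater distance,
    # lower weight), then min-max normalize the weights to the range 0..1
    # so that they can be read as touch probabilities.
    weights = []
    for tx, ty in tile_centroids:
        avg = sum(math.hypot(tx - ux, ty - uy)
                  for ux, uy in touch_centroids) / len(touch_centroids)
        weights.append(1.0 / (avg + 1e-9))
    lo, hi = min(weights), max(weights)
    if hi == lo:  # degenerate case: all tiles are equally distant
        return [1.0] * len(weights)
    return [(w - lo) / (hi - lo) for w in weights]

def select_active_tiles(tile_centroids, touch_centroids,
                        target_fraction, prob_threshold):
    # Apply both Table 1 criteria and keep whichever yields the larger
    # active area, as suggested above for n = 1 and n = 2.
    probs = tile_probabilities(tile_centroids, touch_centroids)
    ranked = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)
    by_fraction = set(ranked[:round(target_fraction * len(probs))])
    by_threshold = {i for i, p in enumerate(probs) if p > prob_threshold}
    return max(by_fraction, by_threshold, key=len)

Here select_active_tiles returns the identified tiles under whichever criterion, the target percentage or the threshold probability, yields the larger touch sensor active area.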
[0096] In the example shown in Table 1, a maximum value of n can be specified and implemented. According to this example, if there have been a maximum n value of user touches since the last boot-up process (e.g., n = 5), 30% of the entire touch sensor area may be active. In this example, the threshold probability corresponding to a maximum n value is 0.8, which in this context means that all touch sensor areas for which a probability of receiving the next user touch is greater than 0.8 may be activated. In some implementations, either the percentage or the probability corresponding to the maximum n value may be selected. For example, the percentage or the probability corresponding to the maximum n value that results in the larger touch sensor active area may be selected.[0097] As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.[0098] The various illustrative logics, logical blocks, modules, circuits and algorithm processes described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and processes described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.[0099] The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular processes and methods may be performed by circuitry that is specific to a given function.[0100] In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents, or in any combination thereof.
Implementations of the subject matter described in this specification also may be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage media for execution by, or to control the operation of, data processing apparatus.[0101] If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium, such as a non-transitory medium. The processes of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that may be enabled to transfer a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, non-transitory media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection may be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine readable medium and computer-readable medium, which may be incorporated into a computer program product.[0102] Various modifications to the implementations described in this disclosure may be readily apparent to those having ordinary skill in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the disclosure is not intended to be limited to the implementations shown herein, but is to be accorded the widest scope consistent with the claims, the principles and the novel features disclosed herein. The word “exemplary” is used exclusively herein, if at all, to mean “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations.[0103] Certain features that are described in this specification in the context of separate implementations also may be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also may be implemented in multiple implementations separately or in any suitable subcombination.
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.[0104] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results.[0105] It will be understood that unless features in any of the particular described implementations are expressly identified as incompatible with one another or the surrounding context implies that they are mutually exclusive and not readily combinable in a complementary and/or supportive sense, the totality of this disclosure contemplates and envisions that specific features of those complementary implementations may be selectively combined to provide one or more comprehensive, but slightly different, technical solutions. It will therefore be further appreciated that the above description has been given by way of example only and that modifications in detail may be made within the scope of this disclosure.[0106] Implementation examples are described in the following numbered clauses:[0107] 1. An apparatus, comprising: a touch sensor system; a fingerprint sensor system; and a control system configured for communication with the touch sensor system and the fingerprint sensor system, the control system being further configured for: controlling the touch sensor system to obtain touch sensor data in a touch sensor system active area of the apparatus; controlling the fingerprint sensor system to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus; determining, based on the touch sensor data, n touch locations corresponding to n last user touches; and controlling a size of the touch sensor system active area based, at least in part, on the n touch locations.[0108] 2. The apparatus of clause 1, wherein the control system is further configured for setting a number of last user touches to zero after an apparatus boot-up process.[0109] 3. The apparatus of clause 2, wherein the control system is further configured for setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero.[0110] 4. The apparatus of clause 3, wherein the control system is further configured for setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one.[0111] 5.
The apparatus of clause 4, wherein the control system is further configured for setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two.[0112] 6. The apparatus of clause 5, wherein the control system is further configured for setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.[0113] 7. The apparatus of any one of clauses 1-6, wherein the control system is further configured for: determining a shape that encompasses the n touch locations; and setting the touch sensor system active area to correspond with the shape.[0114] 8. The apparatus of any one of clauses 1-6, wherein the control system is further configured for: determining a shape that encompasses at least a threshold portion of the n touch locations; and setting the touch sensor system active area to correspond with the shape.[0115] 9. The apparatus of any one of clauses 1-8, wherein the control system is further configured for: determining a touch probability for each tile of a plurality of touch sensor tiles of the touch sensor system, to determine a plurality of touch probabilities, each touch sensor tile of the plurality of touch sensor tiles including one or more touch sensor pixels, the touch probability being a probability that a next user touch will be on a particular touch sensor tile; and controlling the size of the touch sensor system active area based, at least in part, on the plurality of touch probabilities.[0116] 10. The apparatus of clause 9, wherein the touch probability for each touch sensor tile is based on a distance from each of the n touch locations to each touch sensor tile.[0117] 11. The apparatus of clause 10, wherein the control system is further configured for: identifying touch sensor tiles having a touch probability greater than a touch probability threshold, to determine identified touch sensor tiles; finding an encompassing shape that will encompass at least a threshold percentage of identified touch sensor tiles; and determining the touch sensor system active area based on the encompassing shape.[0118] 12. The apparatus of any one of clauses 1-11, wherein the control system is further configured for controlling the fingerprint sensor system to obtain fingerprint sensor data in each of the n touch locations.[0119] 13. The apparatus of clause 12, wherein the control system is further configured for controlling the fingerprint sensor system to obtain the fingerprint sensor data in each of the n touch locations after receiving touch sensor data corresponding to user touches in each of the n touch locations.[0120] 14. The apparatus of clause 13, wherein the control system is further configured for controlling the fingerprint sensor system active area to correspond with each of the n touch locations after receiving the touch sensor data corresponding to user touches in each of the n touch locations.[0121] 15. 
A method, comprising: controlling, via a control system of an apparatus, a touch sensor system of the apparatus to obtain touch sensor data in a touch sensor system active area of the apparatus; controlling, via the control system, a fingerprint sensor system of the apparatus to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus; determining, via the control system and based on the touch sensor data, n touch locations corresponding to n last user touches; and controlling, via the control system, a size of the touch sensor system active area based, at least in part, on the n touch locations.[0122] 16. The method of clause 15, further comprising setting a number of last user touches to zero after an apparatus boot-up process.[0123] 17. The method of clause 16, further comprising setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero.[0124] 18. The method of clause 17, further comprising setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one.[0125] 19. The method of clause 18, further comprising setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two.[0126] 20. The method of clause 19, further comprising setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.[0127] 21. The method of any one of clauses 15-20, further comprising: determining a shape that encompasses the n touch locations; and setting the touch sensor system active area to correspond with the shape.[0128] 22. The method of any one of clauses 15-20, further comprising: determining a shape that encompasses at least a threshold portion of the n touch locations; and setting the touch sensor system active area to correspond with the shape.[0129] 23. The method of any one of clauses 15-22, further comprising: determining a touch probability for each tile of a plurality of touch sensor tiles of the touch sensor system, to determine a plurality of touch probabilities, each touch sensor tile of the plurality of touch sensor tiles including one or more touch sensor pixels, the touch probability being a probability that a next user touch will be on a particular touch sensor tile; and controlling the size of the touch sensor system active area based, at least in part, on the plurality of touch probabilities.[0130] 24. The method of clause 23, wherein the touch probability for each touch sensor tile is based on a distance from each of the n touch locations to each touch sensor tile.[0131] 25. The method of clause 24, further comprising: identifying touch sensor tiles having a touch probability greater than a touch probability threshold, to determine identified touch sensor tiles; finding an encompassing shape that will encompass at least a threshold percentage of identified touch sensor tiles; and determining the touch sensor system active area based on the encompassing shape.[0132] 26. The method of any one of clauses 15-25, further comprising controlling the fingerprint sensor system to obtain fingerprint sensor data in each of the n touch locations.[0133] 27. 
The method of clause 26, further comprising controlling the fingerprint sensor system to obtain the fingerprint sensor data in each of the n touch locations after receiving touch sensor data corresponding to user touches in each of the n touch locations.[0134] 28. The method of clause 27, further comprising controlling the fingerprint sensor system active area to correspond with each of the n touch locations after receiving the touch sensor data corresponding to user touches in each of the n touch locations.[0135] 29. One or more non-transitory media having software stored thereon, the software including instructions for controlling one or more devices to perform a method, the method comprising: controlling, via a control system of an apparatus, a touch sensor system of the apparatus to obtain touch sensor data in a touch sensor system active area of the apparatus; controlling, via the control system, a fingerprint sensor system of the apparatus to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus; determining, via the control system and based on the touch sensor data, n touch locations corresponding to n last user touches; and controlling, via the control system, a size of the touch sensor system active area based, at least in part, on the n touch locations.[0136] 30. The one or more non-transitory media of clause 29, wherein the method further comprises setting a number of last user touches to zero after an apparatus boot-up process.[0137] 31. The one or more non-transitory media of clause 30, wherein the method further comprises setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero.[0138] 32. The one or more non-transitory media of clause 31, wherein the method further comprises setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one.[0139] 33. The one or more non-transitory media of clause 32, wherein the method further comprises setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two.[0140] 34. The one or more non-transitory media of clause 33, wherein the method further comprises setting the touch sensor system active area to a third touch sensor area that is smaller than the second touch sensor area upon determining that the number of last user touches is three.[0141] 35. The one or more non-transitory media of any one of clauses 29-34, wherein the method further comprises: determining a shape that encompasses the n touch locations; and setting the touch sensor system active area to correspond with the shape.[0142] 36. The one or more non-transitory media of any one of clauses 29-34, wherein the method further comprises: determining a shape that encompasses at least a threshold portion of the n touch locations; and setting the touch sensor system active area to correspond with the shape.[0143] 37.
An apparatus, comprising: a touch sensor system; a fingerprint sensor system; and control means for: controlling the touch sensor system to obtain touch sensor data in a touch sensor system active area of the apparatus; controlling the fingerprint sensor system to obtain fingerprint sensor data in a fingerprint sensor system active area of the apparatus; determining, based on the touch sensor data, n touch locations corresponding to n last user touches; and controlling a size of the touch sensor system active area based, at least in part, on the n touch locations.[0144] 38. The apparatus of clause 37, wherein the control means includes means for setting a number of last user touches to zero after an apparatus boot-up process.[0145] 39. The apparatus of clause 38, wherein the control means includes means for setting the touch sensor system active area to an entire touch sensor area upon determining that the number of last user touches is zero.[0146] 40. The apparatus of clause 39, wherein the control means includes means for setting the touch sensor system active area to a first touch sensor area that is smaller than the entire touch sensor area upon determining that the number of last user touches is one.[0147] 41. The apparatus of clause 40, wherein the control means includes means for setting the touch sensor system active area to a second touch sensor area that is smaller than the first touch sensor area upon determining that the number of last user touches is two.
An interface for a semiconductor fabrication facility which includes a real time dispatch system that provides near real time information regarding processing of semiconductor wafers and a middleware component. The interface includes a real time dispatcher application program interface coupled between the fabrication facility and the middleware component. The real time dispatcher application program interface provides a common interface which provides information to the middleware component.
1. An interface for a semiconductor fabrication facility including a real time dispatch system providing near real time information regarding processing of semiconductor wafers and a middleware component, the interface comprising:a real time dispatcher application program interface coupled between the fabrication facility and the middleware component, the real time dispatcher application program interface providing a common interface for providing information to the middleware component. 2. The interface of claim 1 wherein:the real time dispatcher application program interface publishes lot data to the middleware component. 3. The interface of claim 1 wherein:the real time dispatcher application program interface publishes lot history data to the middleware component. 4. The interface of claim 1 wherein:the real time dispatcher application program interface publishes lot attribute data to the middleware component. 5. The interface of claim 1 wherein:the real time dispatcher application program interface is a client of the real time dispatcher. 6. The interface of claim 1 wherein:the fabrication facility includes a manufacturing execution system; and, the real time dispatcher application program interface publishes manufacturing execution system data to the middleware component. 7. The interface of claim 1 wherein:the fabrication facility fabricates semiconductor devices. 8. A method for interfacing between a fabrication facility and a middleware component comprising:providing a real time dispatcher application program interface coupled between the fabrication facility and the middleware component, the real time dispatcher application program interface providing a common interface for providing information to the middleware component from the fabrication facility; and publishing information from the fabrication facility to the middleware component via the real time dispatcher application program interface. 9. The method of claim 8 wherein:the publishing information includes publishing lot data to the middleware component. 10. The method of claim 8 wherein:the publishing information includes publishing lot history data to the middleware component. 11. The method of claim 8 wherein:the publishing information includes publishing lot attribute data to the middleware component. 12. The method of claim 8 wherein:the fabrication facility includes a real time dispatcher; and, the real time dispatcher application program interface is a client of the real time dispatcher. 13. The method of claim 8 wherein:the fabrication facility includes a manufacturing execution system; and, the real time dispatcher application program interface publishes manufacturing execution system data to the middleware component. 14. The method of claim 8 wherein:the fabrication facility fabricates semiconductor devices. 15. A computer readable media encoded with an application for providing an interface for a semiconductor fabrication facility including a real time dispatch system providing near real time information regarding processing of semiconductor wafers and a middleware component, the application comprising:a real time dispatcher application program interface module coupled between the fabrication facility and the middleware component, the real time dispatcher application program interface providing a common interface for providing information to the middleware component. 16.
The application of claim 15 wherein:the real time dispatcher application program interface module includes instructions for publishing lot data to the middleware component. 17. The application of claim 15 wherein:the real time dispatcher application program interface module includes instructions for publishing lot history data to the middleware component. 18. The application of claim 15 wherein:the real time dispatcher application program interface module includes instructions for publishing lot attribute data to the middleware component. 19. The application of claim 15 wherein:the fabrication facility includes a real time dispatcher; and, the real time dispatcher application program interface module is a client of the real time dispatcher. 20. The application of claim 15 wherein:the fabrication facility includes a manufacturing execution system; and, the real time dispatcher application program interface module publishes manufacturing execution system data to the middleware component. 21. The application of claim 15 wherein:the fabrication facility fabricates semiconductor devices.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to semiconductor manufacturing and more particularly to a real time dispatcher program interface.
2. Description of the Related Art
Manufacturing semiconductor devices uses a plurality of discrete process steps to create a semiconductor circuit from raw semiconductor material. The discrete process steps, from the initial melt and refinement of the semiconductor material, the slicing of the semiconductor crystal into individual wafers, the fabrication stages (e.g., etching, doping, ion implanting or the like), to the packaging and final testing of the completed device may be performed in different facilities in remote regions of the globe.One issue which arises in semiconductor manufacturing is that the various processes which may take place at discrete locations may make it difficult to track a semiconductor device through the fabrication process. Such tracking may be desirable for quality control as well as inventory management.In known semiconductor fabrication facilities, individual fabrication machines may provide and receive data regarding operating conditions during the fabrication process in many different data formats. Some of the data that is provided and received by the fabrication machines includes intrinsic data such as, for example, lot numbers, device model number or the like as well as extrinsic data such as production test data, production conditions or the like.
SUMMARY OF THE INVENTION
In one embodiment, the invention relates to an interface for a semiconductor fabrication facility which includes a real time dispatch system that provides near real time information regarding processing of semiconductor wafers and a middleware component. The interface includes a real time dispatcher application program interface coupled between the fabrication facility and the middleware component. The real time dispatcher application program interface provides a common interface for providing information to the middleware component.In another embodiment, the invention relates to a method for interfacing between a fabrication facility and a middleware component which provides a real time dispatcher application program interface coupled between the fabrication facility and the middleware component. The real time dispatcher application program interface provides a common interface for providing information to the middleware component from the fabrication facility and publishes information from the fabrication facility to the middleware component via the real time dispatcher application program interface.In another embodiment, the invention relates to a computer readable media encoded with an application which provides an interface for a semiconductor fabrication facility and includes a real time dispatch system which provides near real time information regarding processing of semiconductor wafers and a middleware component. The application includes a real time dispatcher application program interface module coupled between the fabrication facility and the middleware component. The real time dispatcher application program interface provides a common interface which provides information to the middleware component.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art by referencing the accompanying drawings.
The use of the same reference number throughout the several figures designates a like or similar element.
FIG. 1 shows a block diagram of a semiconductor fabrication architecture including ERP integration.
FIGS. 2A and 2B show a more detailed block diagram of the semiconductor fabrication architecture of FIG. 1.
FIG. 3 shows a flow chart of an intrafacility shipping process flow.
FIG. 4 shows a flow chart of an interfacility shipping process flow.
FIG. 5 shows another interfacility shipping process flow.
FIG. 6 shows a process flow for a lot history upload.
FIG. 7 shows a process flow for a lot attribute upload.
FIG. 8 shows a process flow for a wafer data upload.
FIG. 9 shows a process flow for a post shipping lot create in a destination facility.
FIG. 10 shows the process flow for creating a product that corresponds to a new ERP material.
FIG. 11 shows the process flow for associating an MES route to an ERP material.
FIGS. 12A and 12B show an alternate fabrication architecture.
DETAILED DESCRIPTION
Referring to FIG. 1, a block diagram of a semiconductor fabrication architecture 100 is shown. More specifically, the front end fabrication architecture 100 includes a plurality of individual fabrication locations 110, which may be distributed across various manufacturing facilities. Each individual fabrication facility 110 represents a black box within the architecture 100. That is, data provided to and received from the fabrication facility 110 is in a format which is common to the architecture 100, whether or not this format is understood by an individual fabrication facility 110 or machines within the individual fabrication facility 110.Each individual fabrication facility 110 includes a manufacturing execution system (MES) 120 for tracking the overall processing of semiconductor wafers as well as a real time dispatch (RTD) system 122 for providing near real time information regarding the processing of the semiconductor wafers. The MES may be, for example, a manufacturing execution system such as Workstream available from Applied Materials. The real time dispatch system 122 executes at the factories at an execution level. An APF/RTD real time dispatch system available from Brooks Automation is an example of one known RTD system 122.Each individual fabrication facility 110 includes a respective work in progress (WIP) management application program interface (API) 130 as well as a Real Time Dispatch (RTD) application program interface (API) 132. The WM API 130 provides a common interface with which to communicate with each individual fabrication facility 110. The RTD API 132 provides a common interface from which to receive information from each individual fabrication facility 110.The RTD API 132 performs a plurality of functions. More specifically, the RTD API 132 publishes lot data (WIPLOT) to the middleware component 140. The RTD API 132 publishes lot history data (WIPLTH) to the middleware component 140. The RTD API 132 publishes lot attribute data (WIPLTA) to the middleware component. The RTD API 132 is a client of the real time dispatcher 122. The RTD API 132 publishes MES data to the message bus 140. The MES data that is published includes lot data, lot history data and lot attribute data.The WM API 130 performs a plurality of functions. More specifically, the WM API 130 interfaces with the XML communicator of the middleware component 140. The WM API 130 subscribes to route update information. The WM API 130 packages and publishes route operations based on a trigger. The WM API 130 subscribes to ship transactions. The WM API 130 packages and publishes wafer IDs associated with ship transactions. The WM API 130 creates lot information.
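Purely as an illustration of the publishing side, the sketch below converts an intercepted MES record into an XML message and places it on a message bus in the manner described for the RTD API 132. The bus object is a stand-in for the middleware component 140 (anything exposing publish(subject, payload)); the subject naming scheme and record fields are invented for the sketch and do not represent the actual APF/RTD or TIBCO Rendezvous APIs.

import xml.etree.ElementTree as ET

class RtdApiPublisher:
    # Sketch of the publishing side of the RTD API 132.

    TABLES = ("WIPLOT", "WIPLTH", "WIPLTA")  # lot, lot history, lot attribute

    def __init__(self, bus, facility):
        self.bus = bus
        self.facility = facility

    def publish_record(self, table, record):
        # record is a dict of column name -> value taken from the
        # intercepted MES transaction.
        if table not in self.TABLES:
            raise ValueError(f"unknown table: {table}")
        root = ET.Element(table, facility=self.facility)
        for name, value in record.items():
            ET.SubElement(root, name).text = str(value)
        self.bus.publish(f"mes.{self.facility.lower()}.{table.lower()}",
                         ET.tostring(root, encoding="unicode"))

A ship lot transaction intercepted from the MES 120 might then be forwarded as, e.g., publisher.publish_record("WIPLTH", {"lot_id": "L123", "transaction": "SHLT"}).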
The individual fabrication facilities 110 communicate with each other via a middleware component 140. One example of such a middleware component is the TIBCO Rendezvous System available from TIBCO Software, Inc. The middleware component 140 performs a plurality of functions. More specifically, the middleware component 140 provides an interface subscriber function. The middleware component 140 provides an interface publisher component. The middleware component 140 implements certified messages. The middleware component 140 transfers messages in an XML message format.A SAPPHiRE (Systematic Approach to Product Performance History and REliability Engineering) system 150 is also coupled to the middleware component 140. The SAPPHiRE system 150 may be located remotely from one or more of the individual fabrication facilities.An enterprise resource planning system 160 is also coupled to the middleware component 140. One example of an enterprise resource planning system is the ERP R/3 system available from ERP. The enterprise resource planning system 160 may be located remotely from one or more of the individual fabrication facilities as well as from the SAPPHiRE system 150. The SAPPHiRE system 150 includes a database that collects engineering data and allows users to perform yield and engineering analysis. The SAPPHiRE system 150 also allows a user to trace product failures.An inventory management system 170 is also coupled to the middleware component 140. The inventory management system 170 includes a WIP management (WM) component 172 as well as a Lot Start component 174. One example of an inventory management system is available from Triniti Corporation. The inventory management system 170 provides a centralized lot database for global visibility into all WIP moves. The inventory management system also provides the following functions: real time integration of items between the ERP system 160, MES 120, and other systems; real time integration of Bill of Materials in the ERP system 160, MES 120, and other systems; real time integration of Routes from the ERP system 160 to the MES system 120 for Back-end facilities; real time integration of Routes from the MES system 120 to the ERP system 160 for Front-end facilities; real time access to all relevant MES lot Transactions; the ability to make ERP system BOM levels transparent to MES systems 120; real time updates to the ERP system 160 regarding costing data and inventory valuations; and a global real time view of inventory in the ERP system 160 and WIP in MES systems 120. The inventory management system 170 may be located remotely from one or more of the individual fabrication facilities as well as from the SAPPHiRE system 150 and the enterprise resource planning system 160.The WM component 172 provides a global view of WIP as well as static inventory. The WM component 172 also enables fallout fixing by matching WIP and inventory locations to physical lot locations. The WM component 172 is the system of record for all lot history, lot attribute and route/operation transactions performed and provides the capability to trace forward and backward from raw wafer to sort/die out.
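Complementing the publisher sketch above, the subscribing side might be wired as follows. The bus interface, subject names and XML element names mirror that sketch and are equally illustrative; they are not taken from this disclosure.

import xml.etree.ElementTree as ET

class WmApiSubscriber:
    # Sketch of the subscribing side: the WM API 130 listens for lot
    # history messages published by the RTD API 132 and reacts to ship
    # transactions addressed to its own facility.

    def __init__(self, bus, facility):
        self.facility = facility
        # Subscribe to lot history records from every facility; the
        # handler filters for relevant ship transactions.
        bus.subscribe("mes.*.wiplth", self.on_lot_history)

    def on_lot_history(self, subject, payload):
        record = ET.fromstring(payload)
        if (record.findtext("transaction") == "SHLT"
                and record.findtext("destination_facility") == self.facility):
            self.start_wafer_data_upload(record.findtext("lot_id"))

    def start_wafer_data_upload(self, lot_id):
        # Request wafer scribes and virtual wafer IDs from the source MES,
        # as in the wafer data upload flow of FIG. 8 (omitted here).
        pass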
The lot start component 174 provides lot start information relating to all WIP.By providing the WM API 130 and the RTD API 132, each of the components within the architecture 100 may communicate using a common language in near real time. That is, when a transaction is performed, information relating to the performance of the transaction is communicated to the other systems within the semiconductor fabrication architecture 100 as soon as the transaction occurs (there is no need for running another set of programs on a source system to obtain and transmit the data from the source system).Information from the MES 120 is interfaced to the database of the ERP system 160 via the WM component 172. The MES 120 sends all lot history, lot attribute and route operation information to the WM component 172. The WM component 172 filters, formats and forwards the data to the database of the ERP system 160. Thus, the database of the ERP system 160 reflects a subset of all MES activities implemented by the various modules of the individual fabrication facility 110.The WM API 130 subscribes to messages published by the RTD API 132 and by the WM component 172 and publishes messages back to the WM component 172. The RTD API 132 is an RTD client that publishes MES data to the middleware component 140. All lot, lot history, lot attribute, route and operation data is published via the RTD API 132.Referring to FIGS. 2A and 2B, a more detailed block diagram of the semiconductor fabrication architecture 100 is shown. More specifically, each individual fabrication facility 110 may be divided into a front end portion 200 and a back end portion 202. The front end portion 200 may be further divided into a front end MES portion 210 and a front end RTD portion 212. The back end portion 202 may be further divided into a back end MES portion 220 and a back end RTD portion 222.The front end MES portion 210 includes the MES 120 as well as a real time interceptor 230, a primary writer 232, and a front end WM API 234. The front end MES portion 210 also includes a script writer 236 and a transport layer 238. The interceptor 230 intercepts transactions from the MES 120 that occur during and after an extract of information from the MES 120. The primary writer 232 stores extracted and intercepted messages in a message buffer until the messages are transferred to the front end RTD portion 212. The primary writer keeps a message only until the message is successfully communicated to the front end RTD portion 212. The front end WM API 234 has the capability to enable execution of remote transactions via the MES API functions.The front end RTD portion 212 includes a front end real time dispatcher 240, a secondary writer 242 and a front end RTD API 244 as well as an MDS API 246 and an MDS database 248. The front end real time dispatcher 240 prioritizes lot movement based upon predefined rules. The secondary writer 242 requests messages from the primary writer 232. The secondary writer 242 stores the messages that are received in a secondary writer message buffer. The secondary writer 242 then provides the messages received from the primary writer to the middleware component 140 via the front end RTD API 244 under the control of the front end real time dispatcher 240. The messages are also provided to the MDS database 248 via the MDS API 246 under the control of the front end real time dispatcher 240.
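The primary writer/secondary writer pair is essentially a store-and-forward protocol: a message is never dropped from the primary buffer until the secondary writer has taken it, so an interrupted transfer can be resumed without loss. A minimal sketch under that assumption follows; the rtd_api and mds_api objects are illustrative stand-ins, not interfaces defined by this disclosure.

from collections import deque

class PrimaryWriter:
    # Sketch of the primary writer 232: buffer extracted and intercepted
    # MES messages, keeping each message only until it has been
    # successfully handed to the secondary writer.

    def __init__(self):
        self._buffer = deque()

    def store(self, message):
        self._buffer.append(message)

    def peek(self):
        return self._buffer[0] if self._buffer else None

    def acknowledge(self):
        # Called once the secondary writer has durably taken the message;
        # only then is it dropped, so an interrupted transfer loses nothing.
        self._buffer.popleft()

class SecondaryWriter:
    # Sketch of the secondary writer 242: pull messages from the primary
    # writer into its own buffer, then forward them to the message bus and
    # to the MDS database under the dispatcher's control.

    def __init__(self, primary, rtd_api, mds_api):
        self.primary = primary
        self.rtd_api = rtd_api
        self.mds_api = mds_api
        self._buffer = deque()

    def pump(self):
        while (message := self.primary.peek()) is not None:
            self._buffer.append(message)
            self.primary.acknowledge()
        while self._buffer:
            message = self._buffer.popleft()
            self.rtd_api.publish(message)  # on to the middleware component 140
            self.mds_api.write(message)    # and into the MDS database 248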
The back end MES portion 220 includes the MES 120. The back end MES portion 220 also includes a back end API 252 which includes a back end WM API 254 and a back end MES API 256.The back end RTD portion 222 includes a back end real time dispatcher 260 as well as a combo writer 262. The back end RTD portion 222 also includes a back end RTD API 264 as well as a back end MDS API 266 and a back end MDS database 268. The combination writer 262 is a combination of a primary writer and a secondary writer. Thus the combination writer 262 receives messages from the M_B database 250 and provides messages to the RTD API 264 as well as to the MDS API 266 under control of the back end real time dispatcher 260.Referring to FIGS. 3-5, a number of shipping process flows are shown. More specifically, FIG. 3 shows a flow chart of an intrafacility shipping process flow. FIG. 4 shows a flow chart of an interfacility shipping process flow. FIG. 5 shows another interfacility shipping process flow.With an intrafacility shipping process flow, a lot is shipped between MES facilities 110 that share the same database instance. For example, a lot is shipped from a fabrication portion of the facility 110 to a test portion of the facility or from a fabrication portion of a facility to a bump portion of the facility.With an interfacility shipping process flow, there are two instances. An interfacility shipping process is invoked when the facilities do not share the same database instance. In one instance, a lot is shipped from one facility that conforms to a particular MES to another facility that conforms to the particular MES. In another instance, a lot is shipped from one facility that conforms to a particular MES to another facility that conforms to another MES.Referring to FIG. 3, an intrafacility shipping process flow is shown. More specifically, when a lot is shipped from a first facility (facility 1) to another facility (facility 2), a ship lot (SHLT) transaction is initiated. A lot history WIPLTH record containing the SHLT transaction is received by the real time dispatcher 122 and is then intercepted by the RTD API 132. The RTD API 132 converts the record to XML and publishes the record to the middleware message bus 140. The WM API 130 of facility 2 subscribes to the message and initiates a wafer data upload process. (See FIG. 8). The WIP management system 172 subscribes to the message and updates its database to indicate that the lot has completed in the source facility (facility 1). If attributes associated with the lot were changed, these changed attributes are uploaded to the second facility via the WM API 130 of the second facility. The WIP management system 172 also initiates a goods receipt transaction within the ERP system 160 to place the lot in a storage location for the receiving facility. When the lot is received at the destination facility, a receive lot (RVLT) transaction is initiated by the destination facility. The receive lot transaction is received by the real time dispatcher 122 and is intercepted by the RTD API 132. The RTD API 132 publishes the updated WIPLTH record to the middleware message bus 140. The WIP management system 172 subscribes to the message and initiates a goods issue transaction to update the appropriate fields within the ERP system 160, thus completing the intrafacility shipping process.Referring to FIG. 4, the process flow for an interfacility shipping process is shown.
Referring to FIG. 4, the process flow for an interfacility shipping process is shown. More specifically, a process flow is shown for when a lot is shipped from one facility that conforms to a particular MES to another facility that conforms to the particular MES. When a lot is shipped from one facility to another facility, a ship lot (SHLT) transaction is initiated. The lot is moved to an ERP specific shipping facility and the lot status is terminated. For example, if the lot is shipped from a fab facility to a bump facility, the lot is removed from the fab facility and placed in a shipping facility, where the status of the lot is terminated. The WIPLTH record containing the SHLT transaction is received by the RTD 122 and intercepted by the RTD API 132. The RTD API 132 converts the record to an XML message and publishes the record to the middleware message bus 140. The message includes the names of the source and destination facilities. The WM API 130 of the other facility subscribes to the message and initiates a wafer data upload process (see FIG. 8). The WIP management system 172 subscribes to the message and updates the WM database to indicate that the lot has completed in the source facility. If the attributes associated with the lot were changed, then these attributes are also uploaded. The WM system 172 also initiates a goods receipt transaction to place the lot in an ERP storage location for the destination facility. When the lot is physically shipped, a stock transfer order is completed. Then a goods issue transaction updates the ERP storage location of the lot and the WM system database. When the lot is received at the destination, a goods receipt transaction is issued to the ERP system 160. This initiates a post shipping remote create lot process. Completion of the post shipping remote create lot process completes the interfacility shipping process flow.
Referring to FIG. 5, the process flow for another interfacility shipping process is shown. More specifically, a process flow is shown for when a lot is shipped from one facility that conforms to a particular MES to another facility that conforms to another MES. When a lot is shipped from one facility to another facility, a ship lot (SHLT) transaction is initiated. The lot is moved to an ERP specific shipping facility and the lot status is terminated. For example, if the lot is shipped from a fab facility to a bump facility, the lot is removed from the fab facility and placed in a shipping facility, where the status of the lot is terminated. The WIPLTH record containing the SHLT transaction is received by the RTD 122 and intercepted by the RTD API 132. The RTD API 132 converts the record to an XML message and publishes the record to the middleware message bus 140. The message includes the names of the source and destination facilities. The WM API 130 of the other facility subscribes to the message and initiates a wafer data upload process (see FIG. 8). The WIP management system 172 subscribes to the message and updates the WM database to indicate that the lot has completed in the source facility. If the attributes associated with the lot were changed, then these attributes are also uploaded. The WM system 172 also initiates a goods receipt transaction to place the lot in an ERP storage location for the destination facility. When the lot is physically shipped, a stock transfer order is completed. Then a goods issue transaction updates the ERP storage location of the lot and the WM system database.
When the lot is received at the destination, a goods receipt transaction is issued to the ERP system 160. This goods receipt initiates a post shipping remote create lot process. Completion of the post shipping remote create lot process completes the interfacility shipping process flow.
Referring to FIG. 6, a process flow for a lot history upload is shown. More specifically, the WM system 172 receives and stores all lot history transactions in its database. If a transaction is relevant for costing, then the WM system 172 uploads the pertinent lot history data to the ERP system 160. For example, the WM system 172 uploads pertinent data to the ERP system 160 for move out (MVOU) transactions that occur at reporting points. Likewise, the WM system 172 uploads data for ship lot (SHLT) transactions that indicate movement across bill of material (BOM) levels. However, the actual transactions are not necessarily uploaded to the ERP system 160.
When uploading lot history, a lot based transaction occurs in the facility MES 120 and is written to a WIP lot history (WIPLTH) table. The RTD API 132 intercepts the lot based transaction via the RTD 122. The RTD API 132 converts the record to an XML message and publishes the message to the message bus 140. The WM system 172 subscribes to the message and updates its database. If the transaction is relevant for costing, the WM system 172 sends the relevant data to the ERP system 160. The lot history upload process then completes.
Referring to FIG. 7, a process flow for a lot attribute upload is shown. More specifically, the WM system 172 receives and stores all lot attribute transactions in its database. If a transaction is relevant for the ERP system 160, then the WM system 172 uploads the pertinent lot attribute data to the ERP system 160. However, the actual transactions are not necessarily uploaded to the ERP system 160.
When uploading lot attribute data, a lot attribute is set or changed in the MES 120 via a set lot attribute (SLTA) transaction, which in turn updates a WIP lot attribute (WIPLTA) table. The RTD API 132 intercepts the lot attribute transaction via the RTD 122. The RTD API 132 converts the lot attribute transaction record to an XML message and publishes all lot attribute messages to the message bus 140. The WM system 172 subscribes to the message, writes the set lot attribute transaction to its lot history table and updates the value of the attribute in its lot attribute table. If the transaction is relevant to the ERP system 160, then the WM system 172 sends the pertinent data to the ERP system 160. The lot attribute upload process then completes.
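The two upload flows above share one pattern: every transaction is stored in the WM database, but only costing-relevant data is forwarded to the ERP system. A minimal sketch of that filtering follows; the table representations, the ERP stub and the relevance list are invented for illustration, as the actual WM schema and costing rules are not given here.

    # Transaction types treated as relevant for ERP costing (illustrative list;
    # the real rules live in the WM system's configuration).
    COSTING_RELEVANT = {"MVOU", "SHLT"}

    class ERPStub:
        def upload(self, record):
            print("ERP upload:", record)

    class WMSystem:
        def __init__(self, erp):
            self.erp = erp
            self.lot_history = []     # stand-in for the WM lot history table
            self.lot_attributes = {}  # stand-in for the WM lot attribute table

        def on_lot_history(self, record):
            # Store every transaction, but upload only costing-relevant data.
            self.lot_history.append(record)
            if record["transaction"] in COSTING_RELEVANT:
                self.erp.upload(record)

        def on_lot_attribute(self, record):
            # SLTA: log the transaction and update the attribute's value.
            self.lot_history.append(record)
            key = (record["lot_id"], record["attribute"])
            self.lot_attributes[key] = record["value"]

    wm = WMSystem(ERPStub())
    wm.on_lot_history({"transaction": "MVOU", "lot_id": "LOT42", "operation": "1000"})
    wm.on_lot_attribute({"transaction": "SLTA", "lot_id": "LOT42",
                         "attribute": "customer", "value": "ACME"})

The design point this illustrates is that the WM database remains the complete record of MES activity, while the ERP database sees only the filtered subset it needs for costing.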
Referring to FIG. 8, a process flow for a wafer data upload is shown. More specifically, whenever a lot is shipped from an MES facility 120 (e.g., intrafacility shipping, interfacility shipping), the wafer scribes and virtual wafer identifiers (i.e., the original wafer slot positions) are sent to the WM system 172.
When uploading wafer data, a lot is shipped from an MES facility 120. A ship lot (SHLT) transaction is received by the RTD 122 and intercepted by the RTD API 132. The RTD API 132 publishes the updated WIPLTH record to the message bus 140. This record includes the names of the source and destination facilities. The WM API 130 subscribes to the WIPLTH record and filters the record to identify the SHLT transaction. The WM API 130 then sends a request for wafer scribes and virtual wafer IDs to the MES 120 via the SR Tester 236 and the TL 238. The TL 238 issues a remote MES transaction to obtain the wafer data. The WM API 130 then receives and publishes the wafer data to the message bus 140. The WM system 172 subscribes to the wafer data message and updates its database. The wafer data upload process then completes. The wafer data is not actually sent from the WM system 172 to the ERP system 160; rather, the wafer data is stored in the WM system database until the WM system 172 publishes the data in response to a wafer data request for that lot. The ERP system 160 then subscribes to the published data.
Referring to FIG. 9, a process flow for a post shipping lot create in a destination facility is shown. More specifically, whenever a lot is shipped from one MES facility 120 to another MES facility 120 that does not share the same database instance, the lot is created in the receiving facility. The process begins when a goods receipt transaction is initiated within the ERP system 160. The process completes when the lot is created in the destination facility 120, the lot attributes are copied and the WM database is updated to reflect the new lot location.
When a lot is shipped from an MES facility 120 to another MES facility 120 that does not share the same database instance (e.g., interfacility shipping of semi-finished goods), a ship lot (SHLT) transaction is initiated. The lot is shipped from the source facility to an ERP specific shipping facility and the lot status is marked as terminated at the shipping facility. When the lot arrives at the destination facility 120, an ERP goods receipt transaction is issued for the lot and published to the message bus 140. The WM system 172 subscribes to the message, updates the lot status information in its database for that lot, and publishes the lot ID, the product ID, the operation, the lot quantity, the lot owner, a lot indicator and attributes in a lot create message to the message bus 140. The WM API 130 subscribes to the lot create message and sends a request to the MES 120 via the SR Tester 236 and the TL 238. The TL 238 issues a remote create lot transaction. The lot is then created in the destination facility 120, and the lot record WIPLOT, lot history WIPLTH and lot attribute WIPLTA tables of the destination facility are updated with records for the new lot. The WIPLTA table is also populated with attributes associated with the lot in the source facility. The remote create lot transaction is received by the real time dispatcher 122 and intercepted by the RTD API 132. The RTD API 132 publishes the create lot transaction in the WIPLTH record to the message bus 140. The WM system 172 subscribes to the message and updates the lot status in the WM system 172 to reflect the new facility. The process then completes.
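The lot create message in the flow above carries a fixed set of fields (lot ID, product ID, operation, quantity, owner, lot indicator and attributes). The sketch below models the message and the table updates performed by the remote create lot transaction; the field types, the "CRLT" history code and the table representations are illustrative assumptions, not the actual MES implementation.

    from dataclasses import dataclass, field

    @dataclass
    class LotCreateMessage:
        # Fields the WM system publishes for a post shipping lot create.
        lot_id: str
        product_id: str
        operation: str
        quantity: int
        owner: str
        lot_indicator: str
        attributes: dict = field(default_factory=dict)

    def remote_create_lot(mes_tables, msg):
        # Stand-in for the remote create lot transaction issued via the TL 238.
        mes_tables["WIPLOT"][msg.lot_id] = {"product": msg.product_id,
                                            "operation": msg.operation,
                                            "quantity": msg.quantity,
                                            "owner": msg.owner}
        # "CRLT" is an invented code for the create lot history entry.
        mes_tables["WIPLTH"].append({"lot_id": msg.lot_id, "transaction": "CRLT"})
        # Attributes carried over from the source facility populate WIPLTA.
        for name, value in msg.attributes.items():
            mes_tables["WIPLTA"].append({"lot_id": msg.lot_id,
                                         "attribute": name, "value": value})

    tables = {"WIPLOT": {}, "WIPLTH": [], "WIPLTA": []}
    msg = LotCreateMessage("LOT42", "PROD7", "1000", 25, "PROD", "P",
                           {"customer": "ACME"})
    remote_create_lot(tables, msg)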
Referring to FIG. 10, the process flow for creating a product that corresponds to a new ERP material is shown. More specifically, a new material is created in an ERP message and published to the message bus 140. The WM system 172 subscribes to the message and publishes a product create message to the message bus 140. The WM API 130 subscribes to the message and instructs the MES 120, via the SR Tester 236 and the TL 238, to initiate a remote update product transaction (RUPR). The remote update product transaction inserts an ERP material ID into an MES product table of the MES 120. The ERP material ID is used for associating MES routes with ERP materials. This ERP material ID is not used to manufacture lots within the MES 120.
Referring to FIG. 11, the process flow for associating an MES route with an ERP material is shown. The monetary value of a lot is based in part upon the position of the lot within a manufacturing route. Thus, for the ERP system 160 to accurately cost materials, each ERP material should be associated with a routing that includes reporting points. ERP routings are automatically uploaded from the facility routes. This flow does not address the MES route update or on demand routing updates within the ERP system 160.
An MES administrator is notified that a new product has been created (see, e.g., FIG. 10) or that a route has been modified in a way that impacts the ERP costing and inventory. For a route that has been modified in a way that impacts the ERP costing and inventory, the MES administrator initiates an associate route to product (ARTP) transaction to associate the ERP material with an MES route. The ARTP transaction updates the WIP product description (WIPPRD) table in the MES 120 to map the route to the new material. When a new product is created (or after the ARTP transaction is initiated), the MES administrator initiates an update product (UPRD) transaction. The update product transaction identifies ERP within a material type field. Submitting an update product transaction with an ERP material type triggers a route upload to the WM system 172. When the route upload occurs, the RTD API 132 receives a WIP product WIPPRD record from the real time dispatcher 122, converts the record to XML and publishes the message to the message bus 140. The WM API 130 subscribes to the message and forwards the request to the MES 120 via the SR Tester 236 and the TL 238. The WM API 130 receives the route information, converts the route information to XML and publishes the route information to the message bus 140. The routing information is provided in a plurality of records: a WIPPRD record, which includes a product ID, a facility and an ERP material ID; a WIP route information (WIPRTE) record, which includes the route name, the route description and a BOM level for the route; WIP route operation (WIPRTO) records for the sequenced operation steps of the route, which include the route name and operation for every operation on the route; and WIP operation product (WIPOPR) records, which include the operation number, short and long descriptions, work centers and reporting point indicators for each operation on the route. The WM system 172 subscribes to these messages, populates the relevant WM database tables and uploads the route, operation, BOM levels, work centers and reporting points to the ERP system 160. This completes the route upload process.
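The route upload publishes four record types (WIPPRD, WIPRTE, WIPRTO and WIPOPR). The sketch below models them as simple structures to make their relationship concrete; the field names and example values are illustrative assumptions, not the actual WM table schema.

    from dataclasses import dataclass

    @dataclass
    class WIPPRD:   # product record
        product_id: str
        facility: str
        erp_material_id: str

    @dataclass
    class WIPRTE:   # route record
        route_name: str
        description: str
        bom_level: str          # e.g. "FAB", "BUMP" or "SORT"

    @dataclass
    class WIPRTO:   # one record per operation on the route, in sequence
        route_name: str
        operation: str

    @dataclass
    class WIPOPR:   # operation detail record
        operation: str
        short_description: str
        long_description: str
        work_center: str
        reporting_point: bool   # the Y/N reporting point indicator

    # A route upload is one WIPPRD record, one WIPRTE record, and parallel
    # WIPRTO/WIPOPR records -- one pair per operation on the route.
    upload = {
        "product": WIPPRD("PROD7", "FAB1", "MAT-0007"),
        "route": WIPRTE("RTE-FAB7", "main fab route", "FAB"),
        "steps": [WIPRTO("RTE-FAB7", "1000"), WIPRTO("RTE-FAB7", "2000")],
        "operations": [WIPOPR("1000", "etch", "plasma etch", "WC-ETCH", False),
                       WIPOPR("2000", "test", "parametric test", "WC-TEST", True)],
    }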
Referring again to FIG. 2, the various MES facilities 120 are initialized to correspond to a plurality of ERP related functions. These facilities are initialized to include an ERP material create function, a routing upload function, a product validation at lot create function, a product mapping function and a shipping facility function.
For the material create function, when a new material is created within the ERP system 160, that material is then associated with an MES route. This association enables the ERP system 160 to accurately cost a product based upon the location of the product in the processing stream. Relevant points within the route include the BOM level, the reporting points and the work centers. The MES system 120 passes this information to the ERP system 160. For the BOM level, this information is conveyed via the route. For the reporting points and the work centers, this information is conveyed via the operation.
The BOM level for a lot does not change as long as the lot remains within the same facility 110. Thus the BOM level is associated with an MES route. There are at least three BOM levels: a FAB BOM level (which corresponds to a processed wafer), a BUMP BOM level (which corresponds to a bumped wafer) and a SORT BOM level (which corresponds to a sorted die). When setting up routes with BOM levels, an update user defined facility fields (UUFF) transaction specifies a user defined field as the BOM level field for all routes. An update user route field (UURF) transaction defines the value of the BOM level for each route, and the BOM level is defined for each route within a facility.
The work centers and reporting points may vary with each MES operation. Thus two user defined fields are associated with the operations and are used to specify these values. To set up the MES operations with work centers and reporting points, the UUFF transaction is used to specify a user defined field as a work center field and a user defined field as an ERP reporting field for operations. An update user defined operation field (UUOF) transaction is used to define the value of the work center and the reporting point for each operation. If the operation is an ERP reporting point, the value is yes (Y); otherwise the value is no (N). The UUOF transaction is repeated for each operation within a facility.
During lot create, if a product is salable, then a check is performed to confirm that the MES product corresponds to a valid ERP material. To determine whether a product is salable, an update table entry general (UTEG) transaction is used to update a general tables (GTS) owners table to include a salable indicator field. The GTS owners table is updated to specify a value of 1 for owner codes engineering (ENG) and product (PROD) (i.e., salable vs. not salable).
Other Embodiments
The present invention is well adapted to attain the advantages mentioned as well as others inherent therein. While the present invention has been depicted, described, and is defined by reference to particular embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts. The depicted and described embodiments are examples only, and are not exhaustive of the scope of the invention.
For example, referring to FIGS. 12A and 12B, an alternate fabrication architecture may include a front end WM API 1210 and a back end WM API 1220 which are coupled directly to the MES 120.
Also for example, referring again to FIG. 11, an alternate route upload process may be used. In the alternate route upload process, an ERP material is created in the ERP system 160. The WM system 172 publishes the ERP material to the middleware component 140. The WM API 130 receives the message and initiates a remote transaction (RSAP) to store the ERP material (referred to as an ERP ID) in an MES database table (called the ERP PRODUCTS database). The association between the ERP ID and the MES product is made via a web based product mapping application. With the product mapping application, a user goes to the web based product mapping application and associates the ERP ID with an MES product, routes and other characteristics.
Once the mapping association is completed, the user can select from a list of ERP IDs and trigger a route upload from the MES 120 to the ERP system 160 for every selected ID by actuating a Route Upload button on the product mapping application.
Once triggered, the product mapping application publishes a message for a route upload for the selected ERP ID to the middleware component 140. The message contains the ERP ID and its associated product. The WM API 130 receives the message and starts the upload process by initiating a remote transaction. Upon completion of the remote transaction, the WM API 130 initiates another remote transaction (RGOR) to obtain all operations for each route returned by the first remote transaction. Once the WM API 130 has all of the operations for each route, the WM API 130 publishes all routes and operations for an ERP ID to the middleware component 140. The WM system 172 receives the message, performs the appropriate actions to update its database tables and uploads the route and operations, the BOM and the work center information to the ERP system 160.
Also for example, the above-discussed embodiments include software modules that perform certain tasks. The software modules discussed herein may include script, batch, or other executable files. The software modules may be stored on a machine-readable or computer-readable storage medium such as a disk drive. Storage devices used for storing software modules in accordance with an embodiment of the invention may be magnetic floppy disks, hard disks, or optical discs such as CD-ROMs or CD-Rs, for example. A storage device used for storing firmware or hardware modules in accordance with an embodiment of the invention may also include a semiconductor-based memory, which may be permanently, removably or remotely coupled to a microprocessor/memory system. Thus, the modules may be stored within a computer system memory to configure the computer system to perform the functions of the module. Other new and various types of computer-readable storage media may be used to store the modules discussed herein. Additionally, those skilled in the art will recognize that the separation of functionality into modules is for illustrative purposes. Alternative embodiments may merge the functionality of multiple modules into a single module or may impose an alternate decomposition of functionality of modules. For example, a software module for calling sub-modules may be decomposed so that each sub-module performs its function and passes control directly to another sub-module.
Consequently, the invention is intended to be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.
An integrated circuit package includes a substrate/interposer assembly having a plurality of conductive contacts and a plurality of conductive posts, such as copper posts, electrically coupled to at least some of the conductive contacts in the substrate/interposer assembly. The conductive posts are surrounded by a protective dielectric, such as a photoimageable dielectric (PID). An integrated circuit die may be disposed on the substrate/interposer assembly within an interior space surrounded by the dielectric. An additional integrated circuit die may be provided in a package-on-package (POP) configuration.
1. A device comprising:
a first substrate including first and second surfaces;
a plurality of conductive contacts on the first surface of the first substrate;
a dielectric on the first surface of the first substrate and having a plurality of openings; and
a plurality of conductive posts coupled to at least some of the conductive contacts, the dielectric at least partially surrounding the plurality of conductive posts.
2. The device of claim 1, wherein the first substrate comprises a first interposer.
3. The device of claim 1, wherein the dielectric comprises a photoimageable dielectric (PID).
4. The device of claim 1, wherein the conductive posts comprise copper posts.
5. The device of claim 1, further comprising a plurality of conductive pads respectively coupled to the conductive posts.
6. The device of claim 5, wherein the conductive pads comprise copper pads.
7. The device of claim 1, further comprising:
a second substrate having first and second surfaces; and
a plurality of conductive contacts on the first surface of the second substrate.
8. The device of claim 7, wherein the second substrate comprises a second interposer.
9. The device of claim 7, wherein the conductive posts are electrically coupled to at least some of the conductive contacts.
10. The device of claim 9, further comprising a plurality of solder balls respectively coupled between the at least some of the conductive posts and the conductive contacts.
11. The device of claim 7, further comprising an integrated circuit die within an opening in the dielectric.
12. The device of claim 11, wherein the integrated circuit die is disposed on the first surface of the second substrate.
13. The device of claim 12, further comprising a second integrated circuit die disposed on the second surface of the first substrate.
14. The device of claim 13, further comprising a molding on the second surface of the first substrate and on the second integrated circuit die.
15. An integrated circuit package, comprising:
a first substrate having first and second surfaces;
a first plurality of conductive contacts on the first surface of the first substrate;
a second substrate having first and second surfaces;
a second plurality of conductive contacts on the first surface of the second substrate;
a dielectric disposed between the first surface of the first substrate and the first surface of the second substrate, the dielectric having a plurality of openings;
a plurality of conductive posts arranged in some, but not all, of the openings in the dielectric, the conductive posts being electrically coupled to at least some of the first plurality of conductive contacts on the first surface of the first substrate and to at least some of the second plurality of conductive contacts on the first surface of the second substrate; and
an integrated circuit die disposed on the first surface of the second substrate within an opening of the dielectric that is not occupied by one of the conductive posts.
16. The integrated circuit package of claim 15, wherein the first substrate comprises a first interposer and the second substrate comprises a second interposer.
17. The integrated circuit package of claim 15, wherein the dielectric comprises a photoimageable dielectric (PID).
18. The integrated circuit package of claim 15, wherein the conductive posts comprise copper posts.
19. The integrated circuit package of claim 15, further comprising a plurality of conductive pads respectively coupled to the conductive posts.
20. The integrated circuit package of claim 19, wherein the conductive pads comprise copper pads.
21. The integrated circuit package of claim 15, further comprising a plurality of solder balls respectively coupled between at least some of the conductive posts and at least some of the second plurality of conductive contacts on the first surface of the second substrate.
22. The integrated circuit package of claim 15, further comprising a second integrated circuit die disposed on the second surface of the first substrate.
23. The integrated circuit package of claim 22, further comprising a molding on the second surface of the first substrate and the second integrated circuit die.
24. A method of manufacturing a device, comprising:
providing a substrate having first and second surfaces;
forming a plurality of conductive contacts on at least the first surface of the substrate;
forming a dielectric on the first surface of the substrate;
forming a plurality of openings in the dielectric; and
forming a plurality of conductive posts in some, but not all, of the openings in the dielectric.
25. The method of claim 24, wherein the substrate comprises an interposer.
26. The method of claim 24, wherein forming a plurality of conductive posts in some but not all of the openings in the dielectric comprises plating with a metal.
27. The method of claim 26, wherein the metal comprises copper.
28. The method of claim 24, wherein the dielectric comprises a photoimageable dielectric (PID).
29. The method of claim 24, further comprising forming a plurality of conductive pads on the conductive posts, respectively.
30. The method of claim 29, wherein the conductive pads are each formed as an integral extension of the conductive posts.
Conductive Pillar Protection for Integrated Circuit Packaging
Claim of Priority under 35 U.S.C. § 119
This application claims the benefit of Provisional Application No. 62/118,886, entitled "COPPER POST PROTECTION FOR INTEGRATED CIRCUIT PACKAGES," filed on February 20, 2015, which is assigned to the assignee of this patent application and hereby expressly incorporated by reference herein.
Field of the Disclosure
Various examples described herein relate to integrated circuit packages, and more particularly to conductive pillar protection for integrated circuit packages.
Background
In conventional integrated circuit packages, such as conventional flip-chip packages, a plurality of conductive posts may be provided between top and bottom substrates or interposers. An integrated circuit die, such as a flip chip die, may be attached to one of these substrates or interposers, such as the bottom substrate or interposer, and positioned between the two substrates or interposers. A plurality of conductive posts are provided between the top and bottom substrates or interposers to provide mechanical support and electrical connections for the integrated circuit package. These conductive posts can be copper posts to achieve good conductivity. Each of the top and bottom substrates or interposers may include a plurality of conductive contacts, and the conductive posts may be coupled between some of the conductive contacts in the bottom substrate or interposer and some of the conductive contacts in the top substrate or interposer.
In conventional processes for fabricating such integrated circuit packages, a conductive post, such as a copper post, is provided directly on a corresponding conductive contact in one of the substrates or interposers, such as the top substrate or interposer. A plurality of solder balls may be provided on respective conductive contacts in the other substrate or interposer, such as the bottom substrate or interposer. The copper posts attached to the corresponding conductive contacts in the top substrate or interposer are soldered to the corresponding conductive contacts in the bottom substrate or interposer using the corresponding solder balls. After the top and bottom substrates or interposers are assembled with the copper posts to form an integrated circuit package (with the die seated on the bottom substrate or interposer), mechanical support for the top substrate or interposer may be provided only by the copper posts.
In such conventional integrated circuit packages, the copper posts may have a tendency to crack or break during the manufacture of top substrates or interposers carrying copper posts. The tendency of such copper posts to crack or break may lead to low yields in the fabrication of copper posts, low yields in the assembly of copper posts, and a high risk of failing reliability tests, thereby increasing manufacturing costs. Various schemes have been devised to try to increase the yield of copper posts and reduce the risk of failing the reliability tests. One such solution is to provide a mold or epoxy flux that fills the entire interior space between the top and bottom substrates or interposers of the integrated circuit package to protect the copper posts. However, such solutions may involve expensive manufacturing processes and may still not improve the yield of fabricating substrates with copper posts.
Overview
Examples of the present disclosure relate to integrated circuit devices and methods of making these integrated circuit devices.
Integrated circuit devices and methods according to various examples of the present disclosure seek to increase the yield of manufacturing a conductive post substrate and of assembling a conductive post package. In addition, tighter interposer pitch and tighter pitch in a package-on-package (POP) configuration may be achieved using the devices and methods according to various examples of the present disclosure, thereby allowing the overall package size to be reduced.
In one example, a device is provided that includes a first substrate including first and second surfaces; a plurality of conductive contacts on a first surface of the first substrate; a dielectric on the first surface of the first substrate and having a plurality of openings; and a plurality of conductive posts coupled to at least some of the conductive contacts, the dielectric at least partially surrounding the plurality of conductive posts.
In another example, an integrated circuit package is provided that includes a first substrate having first and second surfaces; a first plurality of conductive contacts on a first surface of the first substrate; a second substrate having first and second surfaces; a second plurality of conductive contacts on a first surface of the second substrate; a dielectric disposed between the first surface of the first substrate and the first surface of the second substrate, the dielectric having a plurality of openings; a plurality of conductive posts disposed in some, but not all, of the openings in the dielectric, the conductive posts being electrically coupled to at least some of the first plurality of conductive contacts on the first surface of the first substrate and to at least some of the second plurality of conductive contacts on the first surface of the second substrate; and an integrated circuit die disposed on the first surface of the second substrate within an opening of the dielectric that is not occupied by one of the conductive posts.
In still another example, a method of fabricating a device is provided, the method comprising: providing a substrate having first and second surfaces; forming a plurality of conductive contacts on at least a first surface of the substrate; forming a dielectric on the first surface; forming a plurality of openings in the dielectric; and forming a plurality of conductive posts within some but not all of the openings in the dielectric.
Brief Description of the Figures
The drawings are provided to assist in the description of examples of the disclosure, and are provided for the purpose of illustration only and not for the purpose of limiting the same.
FIG. 1 is a cross-sectional view illustrating an example of a substrate/interposer assembly having conductive posts protected by a dielectric.
FIG. 2 is a cross-sectional view illustrating another example of a substrate/interposer assembly having conductive pads added to the conductive posts.
FIG. 3 is a cross-sectional view illustrating an example of an integrated circuit package having conductive posts protected by a dielectric.
FIG. 4 is a cross-sectional view illustrating another example of an integrated circuit package similar to the example shown in FIG. 3, except that the conductive posts are provided on the conductive contacts of the bottom substrate/interposer assembly (instead of the top substrate/interposer assembly).
FIG. 5 is a cross-sectional view illustrating another example of an integrated circuit package in which the top and bottom substrate/interposer assemblies are each provided with a separate set of conductive posts.
FIG. 6 is a cross-sectional view illustrating yet another example in which top and bottom substrate/interposer assemblies are provided in a package-on-package (POP) configuration.
FIGS. 7A-7I are cross-sectional views illustrating an example of a process of fabricating a structure having a substrate/interposer assembly and conductive posts protected by a dielectric as shown in FIG. 1.
FIGS. 8A-8J are cross-sectional views illustrating an example of a process of fabricating a structure having a substrate/interposer assembly and dielectric-protected conductive posts with associated conductive pads as shown in FIG. 2.
Detailed Description
Aspects of the disclosure are described in the following description of specific examples and related drawings. Alternative examples can be devised without departing from the scope of this disclosure. In addition, well-known elements will not be described in detail or will be omitted so as not to obscure the relevant details of the present disclosure.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any example described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other examples. Also, the term "example" does not require that all examples include the feature, advantage, or mode of operation in question.
The terminology used herein is for the purpose of describing particular examples, and is not intended to be limiting of these examples. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. It will also be understood that the terms "comprises," "comprising," "includes" and/or "including" as used herein specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Also, it is to be understood that the word "or" has the same meaning as the Boolean "OR," i.e., it covers the possibilities of "either" and "both" and is not limited to the exclusive "or" ("XOR"), unless expressly stated otherwise. It is also to be understood that the symbol "/" between two adjacent words has the same meaning as "or," unless expressly stated otherwise. Additionally, phrases such as "connected to," "coupled to" or "in communication with" are not limited to direct connections unless expressly stated otherwise. In addition, terms such as "top," "bottom," "upper," "lower," "left," or "right" are used to describe only relative positions or orientations in the figures, and do not require that any element be positioned or oriented in a particular manner when it is manufactured or used. For example, if a device is flipped up and down, the "bottom substrate" in the device can become a "top substrate" and vice versa.
FIG. 1 is a cross-sectional view illustrating an example of a substrate/interposer assembly 100 having conductive posts protected by a dielectric.
In FIG. 1, a substrate or interposer structure 104 is provided, and a plurality of conductive contacts (e.g., through substrate vias) including conductive contacts 106a, 106b, 106c and 106d are implemented in the substrate or interposer structure 104 to provide electrical connections between the opposing surfaces of the substrate or interposer structure 104. In an example, the substrate or interposer structure 104 may include a solder resist 102. Such a substrate/interposer assembly 100 can be fabricated in a conventional manner. In an example, a plurality of conductive posts 108a, 108b, 108c and 108d, such as copper posts, are respectively coupled to the conductive contacts 106a, 106b, 106c and 106d in the substrate/interposer assembly 100. In an example, a dielectric 110 is provided that at least partially surrounds and protects the conductive posts 108a, 108b, 108c and 108d. In a further example, the dielectric 110 includes a photoimageable dielectric (PID) material.
In the example shown in FIG. 1, the height h1 of the dielectric 110 is slightly greater than the height h2 of the conductive posts 108a, 108b, 108c and 108d, such that the conductive posts 108a, 108b, 108c and 108d are slightly recessed within the surrounding dielectric 110. In addition, in the example shown in FIG. 1, the dielectric 110 does not cover the entire area 112 of the substrate/interposer assembly 100 between the conductive posts 108b and 108c. Instead, the inner walls of the dielectric 110 leave the interior space 112 open for receiving one or more integrated circuit dies (not shown in FIG. 1), as will be described in further detail below.
FIG. 2 is a cross-sectional view illustrating another example of a substrate/interposer assembly 100 having conductive posts 108a, 108b, 108c and 108d protected by a dielectric 110 in a manner similar to that of FIG. 1, except that a plurality of conductive pads 202a, 202b, 202c and 202d, such as copper pads, are added to the conductive posts 108a, 108b, 108c and 108d, respectively. Similar to FIG. 1, a plurality of conductive contacts (including conductive contacts 106a, 106b, 106c and 106d) are provided in the substrate/interposer assembly 100 in FIG. 2. Also, such a substrate/interposer assembly 100 can be fabricated in a conventional manner. A plurality of conductive posts 108a, 108b, 108c and 108d, such as copper posts, are respectively coupled to the conductive contacts 106a, 106b, 106c and 106d in the substrate/interposer assembly 100. In an example, a dielectric 110 is provided that surrounds and protects the conductive posts 108a, 108b, 108c and 108d. In a further example, the dielectric 110 includes a photoimageable dielectric (PID) material.
In the example shown in FIG. 2, the conductive pads 202a, 202b, 202c and 202d, such as copper pads, are directly coupled to the conductive posts 108a, 108b, 108c and 108d (such as copper posts), respectively. In an example, the conductive pads 202a, 202b, 202c and 202d may be fabricated as integral portions of the conductive posts 108a, 108b, 108c and 108d, respectively, that extend beyond the height h1 of the dielectric 110. As shown in FIG. 2, the height h1 of the dielectric 110 is slightly smaller than the combined height h3 of the conductive posts 108a, 108b, 108c and 108d and the conductive pads 202a, 202b, 202c and 202d. In addition, similarly to the example shown in FIG. 1, the dielectric 110 in the example shown in FIG. 2 also does not cover the entire area 112 of the substrate/interposer assembly 100 between the conductive posts 108b and 108c.
Instead, the inner walls of the dielectric 110 leave the interior space 112 open for receiving one or more integrated circuit dies (not shown in FIG. 2), as will be described in further detail below.
FIG. 3 is a cross-sectional view illustrating an example of an integrated circuit package having conductive posts protected by a dielectric. FIG. 3 shows a first (e.g., bottom) substrate/interposer assembly 302, an integrated circuit die 304 positioned on the first substrate/interposer assembly 302, and a second (e.g., top) substrate/interposer assembly 306. In the example shown in FIG. 3, a plurality of conductive posts 308a, 308b, 308c and 308d, such as copper posts, are in contact with conductive contacts 310a, 310b, 310c and 310d, respectively, in the top substrate/interposer assembly 306. In an example, a dielectric 312 (such as a PID) is provided that surrounds and protects each of the conductive posts 308a, 308b, 308c and 308d. In the example shown in FIG. 3, the combination of the top substrate/interposer assembly 306, the conductive posts 308a, 308b, 308c and 308d, and the dielectric 312 that protects the conductive posts 308a, 308b, 308c and 308d is similar to the assembly shown in FIG. 1 and described above.
In the example shown in FIG. 3, a plurality of solder balls 314a, 314b, 314c and 314d are provided on the conductive contacts 316a, 316b, 316c and 316d, respectively, of the bottom substrate/interposer assembly 302. In an example, the bottom substrate/interposer assembly 302 with the integrated circuit die 304, and the top substrate/interposer assembly 306 with the conductive posts 308a, 308b, 308c and 308d and the protective dielectric 312, may be separately manufactured before the conductive posts 308a, 308b, 308c and 308d are soldered, using the solder balls 314a, 314b, 314c and 314d, to the conductive contacts 316a, 316b, 316c and 316d of the bottom substrate/interposer assembly 302. In the example shown in FIG. 3, the bottom substrate/interposer assembly 302 may be considered the substrate of the integrated circuit package, while the top substrate/interposer assembly 306, whose associated conductive posts 308a, 308b, 308c and 308d are surrounded by the dielectric 312, may be considered an encapsulated interposer.
In FIG. 3, the conductive posts 308a, 308b, 308c and 308d, which are soldered to the conductive contacts 316a, 316b, 316c and 316d of the bottom substrate/interposer assembly 302, provide structural support and electrical connections between the bottom substrate/interposer assembly 302 and the top substrate/interposer assembly 306. In addition, in the example shown in FIG. 3, the integrated circuit die 304 is housed within the interior space between the bottom substrate/interposer assembly 302, the top substrate/interposer assembly 306 and the protective dielectric 312. The conductive posts 308a, 308b, 308c and 308d (such as copper posts) and the protective dielectric 312 (such as a PID) may be fabricated such that the copper posts have a relatively large aspect ratio (i.e., height to diameter ratio), providing the internal space for the die 304.
Although FIG. 3 illustrates an example in which solder balls are used to couple the conductive posts to corresponding conductive contacts on a substrate, other implementations of a substrate/interposer assembly, with or without solder balls, may also be designed within the scope of the present disclosure.
FIG. 4 is a cross-sectional view illustrating another example of an integrated circuit package similar to the example shown in FIG. 3 and described above, except that conductive posts 408a, 408b, 408c and 408d are provided on the conductive contacts 416a, 416b, 416c and 416d of the bottom substrate/interposer assembly 402 (instead of the top substrate/interposer assembly 406). In the example shown in FIG. 4, the integrated circuit die 404 is provided on the bottom substrate/interposer assembly 402 in the space between the portions of the dielectric 412 that protect the conductive posts 408b and 408c. The protective dielectric 412 may include, for example, a PID material. In the example shown in FIG. 4, a plurality of solder balls 414a, 414b, 414c and 414d are provided on the conductive contacts 410a, 410b, 410c and 410d, respectively, of the top substrate/interposer assembly 406. The conductive posts 408a, 408b, 408c and 408d, such as copper posts, may be soldered to the conductive contacts 410a, 410b, 410c and 410d of the top substrate/interposer assembly 406, respectively, using the solder balls 414a, 414b, 414c and 414d. After the bottom substrate/interposer assembly 402 and the top substrate/interposer assembly 406 are assembled together, an interior space 418 containing the integrated circuit die 404 is formed.
FIG. 5 is a cross-sectional view illustrating another example of an integrated circuit package in which the top and bottom substrate/interposer assemblies are each provided with a separate set of conductive posts. In FIG. 5, the integrated circuit die 504 is attached to the bottom substrate/interposer assembly 502, and the top substrate/interposer assembly 506 is initially provided separately from the bottom substrate/interposer assembly 502. In an example, a first plurality of conductive posts 508a, 508b, 508c and 508d (such as copper posts) are provided on the conductive contacts 510a, 510b, 510c and 510d of the bottom substrate/interposer assembly 502, respectively. In a similar manner, a second plurality of conductive posts 512a, 512b, 512c and 512d (such as copper posts) are provided on the conductive contacts 514a, 514b, 514c and 514d, respectively, of the top substrate/interposer assembly 506.
In addition, a dielectric 516, such as a PID, surrounds and protects each of the first plurality of conductive posts 508a, 508b, 508c and 508d coupled to the bottom substrate/interposer assembly 502, while another dielectric 518, such as a PID, surrounds and protects each of the second plurality of conductive posts 512a, 512b, 512c and 512d coupled to the top substrate/interposer assembly 506. In an example, a plurality of solder balls 520a, 520b, 520c and 520d are provided to solder the first plurality of conductive posts 508a, 508b, 508c and 508d to the second plurality of conductive posts 512a, 512b, 512c and 512d, respectively.
As shown in FIG. 5, the interior space 522 formed by the bottom substrate/interposer assembly 502, the top substrate/interposer assembly 506, the dielectric 516 that protects the first plurality of conductive posts 508a, 508b, 508c and 508d, and the dielectric 518 that protects the second plurality of conductive posts 512a, 512b, 512c and 512d houses the integrated circuit die 504.
FIG. 6 is a cross-sectional view illustrating yet another example in which top and bottom substrate/interposer assemblies are provided in a package-on-package (POP) configuration. In FIG. 6, a plurality of conductive posts 608a, 608b, 608c and 608d (such as copper posts) are provided on the conductive contacts 616a, 616b, 616c and 616d, respectively, of the bottom substrate/interposer assembly 602, which also supports the first integrated circuit die 604. A dielectric 612, which in one example includes a PID material, surrounds and protects each of the conductive posts 608a, 608b, 608c and 608d. A plurality of solder balls 614a, 614b, 614c and 614d are provided on the conductive contacts 610a, 610b, 610c and 610d of the top substrate/interposer assembly 606. The interior space 618 formed by the bottom substrate/interposer assembly 602, the top substrate/interposer assembly 606 and the dielectric 612 houses the first integrated circuit die 604. Prior to adding the second die to form the POP, the combination of the bottom substrate/interposer assembly 602 and the top substrate/interposer assembly 606, with the conductive posts 608a, 608b, 608c and 608d surrounded by the dielectric 612 and the interior space for receiving the first integrated circuit die 604, is similar to the configuration shown in FIG. 4 and described above.
In the example shown in FIG. 6, the second integrated circuit die 620 is formed on top of the top substrate/interposer assembly 606, outside the space 618 for receiving the first integrated circuit die 604. In a further example, a molding or epoxy flux 622 may be provided on the second integrated circuit die 620 in a conventional manner to form the POP configuration as shown in FIG. 6. Alternatively, additional conductive posts (such as copper posts) and a surrounding dielectric (such as a PID) for protecting those posts may be provided on top of the top substrate/interposer assembly 606, in a manner similar to the conductive posts 608a, 608b, 608c and 608d and the surrounding dielectric 612 provided on the bottom substrate/interposer assembly 602. In still further examples, one or more additional substrates or interposers may be provided on the second integrated circuit die 620 in a similar manner for stacking one or more additional dies in various POP configurations.
FIGS. 7A-7I are cross-sectional views illustrating an example of a process of manufacturing a structure having a substrate/interposer assembly and conductive posts protected by a dielectric as shown in FIG. 1 and described above. The process illustrated in FIGS. 7A-7I is merely one of many examples of processes for fabricating the structure of FIG. 1. Referring to FIG. 7A, a core material 702, and a top metal layer 704 and a bottom metal layer 706 on the opposite surfaces of the core material 702, are initially provided in preparation for fabricating an interposer. FIG. 7B illustrates the next stage of making an interposer, after the top and bottom metal layers have been patterned and vias have been formed through the core material.
In the example illustrated in FIG. 7B, the top metal layer 704 has been patterned and etched to form conductive contacts 704a, 704b, ..., 704f, and the bottom metal layer 706 has been patterned and etched to form conductive contacts 706a, 706b, ..., 706f. A plurality of through holes (vias) 708a, 708d and 708e are provided through the core material 702. Conductors, such as metal, are provided in the vias 708a, 708d and 708e to form electrical connections between conductive contacts 704a and 706a, conductive contacts 704d and 706d, and conductive contacts 704e and 706e, respectively. In the example shown in FIG. 7B, some but not all of the conductive contacts in the top metal layer are electrically coupled with corresponding conductive contacts in the bottom metal layer through the vias. FIG. 7C illustrates the substrate/interposer assembly 714 after the substrate materials 710 and 712 have been formed on the top and bottom surfaces of the core material 702. In an example, the substrate/interposer assembly 714 can be manufactured in a conventional manner up to the stage shown in FIG. 7C.
FIG. 7D is a cross-sectional view illustrating a dielectric coating 716 formed on the bottom of the substrate/interposer assembly. In an example, the dielectric coating 716 includes a PID material. Among the many types of dielectric materials that may be suitable for protecting conductive posts, such as copper posts, PIDs may be considered to have the advantage of relatively low material cost and ease of manufacture. After the dielectric coating 716 is applied to the substrate/interposer assembly 714, portions of the dielectric coating 716 are removed to form openings 718a, 718b, ..., 718e, as illustrated in FIG. 7E. In the example shown in FIG. 7E, the openings 718a, 718b, 718d and 718e are spaces reserved for conductive posts, such as copper posts, while the central opening 718c is retained as part of the interior space for receiving the integrated circuit die.
FIG. 7F is a cross-sectional view illustrating an example in which a backside mask is applied to the substrate/interposer assembly 714. In FIG. 7F, the backside mask 720 is applied to the top of the substrate/interposer assembly 714. FIG. 7G is a cross-sectional view illustrating the plating of metal (such as copper) within the openings 718a, 718b, 718d and 718e in the dielectric coating 716 as shown in FIG. 7E to form conductive posts 722a, 722b, 722c and 722d, respectively. As shown in FIG. 7G, the height of the conductive posts 722a, 722b, 722c and 722d is slightly less than the height of the dielectric coating 716, as illustrated in FIG. 1 and described above, such that the conductive posts 722a, 722b, 722c and 722d are slightly recessed in the dielectric coating 716. FIG. 7H is a cross-sectional view illustrating the substrate/interposer assembly after the backside mask has been stripped or removed. FIG. 7I is a cross-sectional view of the substrate/interposer assembly illustrating further processing, such as, for example, a surface finish process on the top of the substrate/interposer assembly 714.
FIGS. 8A-8J are cross-sectional views illustrating an example of a process of fabricating a structure having a substrate/interposer assembly and dielectric-protected conductive posts with associated conductive pads as shown in FIG. 2 and described above. The process illustrated in FIGS. 8A-8J is but one of many examples of processes for fabricating the structure of FIG. 2.
Referring to FIG. 8A, a core material 702, and a top metal layer 704 and a bottom metal layer 706 on the opposite surfaces of the core material 702, are initially provided in preparation for fabricating an interposer. FIG. 8B illustrates the next stage of making an interposer, after the top and bottom metal layers have been patterned and vias have been formed through the core material. In the example illustrated in FIG. 8B, the top metal layer 704 has been patterned and etched to form conductive contacts 704a, 704b, ..., 704f, and the bottom metal layer 706 has been patterned and etched to form conductive contacts 706a, 706b, ..., 706f. A plurality of through holes (vias) 708a, 708d and 708e are provided through the core material 702. Conductors, such as metal, are provided in the vias 708a, 708d and 708e to form electrical connections between conductive contacts 704a and 706a, conductive contacts 704d and 706d, and conductive contacts 704e and 706e, respectively. In the example shown in FIG. 8B, some but not all of the conductive contacts in the top metal layer are electrically coupled with corresponding conductive contacts in the bottom metal layer through the vias. FIG. 8C illustrates the substrate/interposer assembly 714 after the substrate materials 710 and 712 have been formed on the top and bottom surfaces of the core material 702. In the example shown in FIG. 8C, the substrate/interposer assembly 714 includes solder resists 710 and 712 and a substrate/interposer core 702.
FIG. 8D is a cross-sectional view illustrating a dielectric coating 716 formed on the bottom of the substrate/interposer assembly. In an example, the dielectric coating 716 includes a PID material. In an example, the PID may be selected as the material for the dielectric coating 716 because of its relatively low material cost and ease of manufacture. Alternatively, another material, such as a phenolic resin, a polyimide resin, an acrylic resin, or polyhydroxystyrene, may also be used as the material for the dielectric coating 716. After the dielectric coating 716 is applied to the substrate/interposer assembly 714, portions of the dielectric coating 716 are removed to form openings 718a, 718b, ..., 718e, as illustrated in FIG. 8E. In the example shown in FIG. 8E, the openings 718a, 718b, 718d and 718e are spaces reserved for conductive posts (such as copper posts), while the central opening 718c is retained as part of the interior space for receiving the integrated circuit die.
FIG. 8F is a cross-sectional view illustrating an example in which a seed layer 802 is coated on the top and bottom surfaces of the substrate/interposer assembly 714 and on the dielectric coating 716. FIG. 8G is a cross-sectional view illustrating an example in which masks are applied to the substrate/interposer assembly 714. In FIG. 8G, a backside mask 804 is applied over the seed layer 802 on the top of the substrate/interposer assembly 714, and another mask 806 is applied over the seed layer 802 on the bottom of the substrate/interposer assembly 714 and on the dielectric coating 716. FIG. 8H is a cross-sectional view illustrating the plating of metal (such as copper) within the openings 718a, 718b, 718d and 718e in the dielectric coating 716 as shown in FIG. 8E to form conductive posts 806a, 806b, 806c and 806d, respectively. In FIG. 8H, the metal plating is applied to form the conductive posts 806a, 806b, 806c and 806d while the masks 804 and 806 remain on the substrate/interposer assembly 714.
In the example shown in FIG. 8H, the height of the conductive posts 806a, 806b, 806c and 806d is slightly greater than the height of the dielectric coating 716, as illustrated in FIG. 2 and described above. In one example, the masks 804 and 806 above and below the substrate/interposer assembly 714 are stripped or removed after the conductive posts 806a, 806b, 806c and 806d (such as copper posts) are formed within the openings of the dielectric coating 716, followed by removal of the seed layer 802. FIG. 8I is a cross-sectional view illustrating the substrate/interposer assembly after the masks 804 and 806 and the seed layer 802 have been removed. FIG. 8J is a cross-sectional view of the substrate/interposer assembly illustrating further processing, such as, for example, a surface finish process on the top of the substrate/interposer assembly 714. In the example shown in FIG. 8J, bottom portions 808a, 808b, 808c and 808d of the conductive posts 806a, 806b, 806c and 806d respectively extend beyond the dielectric coating 716 and serve as conductive pads for the respective conductive posts. In this example, the conductive pads are formed as integral extensions of the respective conductive posts that are protected by the dielectric coating. Alternatively, conductive pads that extend beyond the dielectric coating may be formed on the respective conductive posts in a separate process.
Although the foregoing disclosure shows illustrative examples, it should be noted that various changes and modifications could be made herein without departing from the scope of the appended claims. The functions or acts of the method claims in accordance with the examples described herein need not be performed in any particular order unless expressly stated otherwise. In addition, while an element may be described or claimed as being singular, the plural is also contemplated unless the singular is explicitly stated.
In an embodiment, a processor includes a plurality of cores to independently execute instructions, the cores including a plurality of counters to store performance information, and a power controller coupled to the plurality of cores, the power controller having a logic to receive performance information from at least some of the plurality of counters, determine a number of cores to be active and a performance state for the number of cores for a next operation interval, based at least in part on the performance information and model information, and cause the number of cores to be active during the next operation interval, the performance information being associated with execution of a workload on one or more of the plurality of cores. Other embodiments are described and claimed.
What is claimed is:

1. A processor comprising:
a plurality of cores to independently execute instructions, each of the plurality of cores including a plurality of counters to store performance information; and
a power controller coupled to the plurality of cores, the power controller including:
a logic to receive performance information from at least some of the plurality of counters, determine a number of cores to be active and a performance state for the number of cores for a next operation interval, based at least in part on the performance information and model information, and cause the number of cores to be active during the next operation interval, the performance information associated with execution of a workload on one or more of the plurality of cores.

2. The processor of claim 1, further comprising a configuration storage including a plurality of entries each to store a number of cores to be enabled and one or more pairs of voltage/frequency at which the number of cores are to operate.

3. The processor of claim 2, wherein the logic is coupled to the configuration storage to access one or more of the plurality of entries and determine the number of cores to be active for the next operation interval based at least in part thereon.

4. The processor of claim 1, wherein the logic is to classify a workload based at least in part on the performance information and determine the number of cores to be active for the next operation interval based on the workload classification.

5. The processor of claim 4, wherein if the workload classification indicates a memory bound workload, the logic is to determine the number of cores to be active for the next operation interval to be less than a current number of active cores.

6. The processor of claim 4, wherein if the workload classification indicates a memory bound workload, the logic is to cause one or more threads to be migrated from a first type of core to a second type of core for the next operation interval.

7. The processor of claim 1, wherein the logic comprises a heuristic logic, and the model information is to be obtained from a heuristic storage of the processor to store power configuration information associated with the workload.

8. The processor of claim 1, wherein the logic comprises a machine learning logic, and the model information comprises training information to be stored in a storage of the processor during manufacture of the processor.

9. The processor of claim 8, wherein the machine learning logic includes an update logic to update at least some of the training information based on a history of operation of the processor and one or more configuration predictions by the machine learning logic during a lifetime of the processor.

10. The processor of claim 1, wherein the logic includes a history logic to receive a prediction of the number of cores to be active in the next operation interval and to enable the logic to cause the number of cores to be active in the next operation interval based on a history of prior predictions.

11. The processor of claim 10, wherein the history logic comprises a counter to maintain a count of a number of consecutive predictions for a first number of cores to be active in the next operation interval, wherein the history logic is to enable the logic to cause the first number of cores to be active in the next operation interval when the count exceeds a threshold, and otherwise to not enable the first number of cores to be active in the next operation interval.

12.
The processor of claim 1, wherein the logic is to maintain a current number of active cores for the next operation interval if a performance impact of execution of the workload on the determined number of cores would exceed a threshold level.

13. A system comprising:
a processor including:
a plurality of cores to independently execute instructions; and
a power controller coupled to the plurality of cores to:
receive workload characteristic information of a workload executed on a first number of active cores in a first operation interval, configuration information regarding the first number of active cores, and power state information of the first number of active cores;
classify the workload based on the workload characteristic information, the configuration information, and the power state information; and
schedule one or more threads to a different number of active cores for a next operation interval based at least in part on the workload classification, and update a power state of one or more of the plurality of cores to enable the different number of active cores for the next operation interval; and
a dynamic random access memory (DRAM) coupled to the processor.

14. The system of claim 13, wherein the power controller is to generate a power configuration prediction having a reduced number of active cores for the next operation interval if the workload is classified as a memory bounded workload.

15. The system of claim 14, wherein the power controller is to determine whether the power configuration prediction is consistent with history information, and if so schedule the one or more threads to the reduced number of active cores for the next operation interval, and otherwise maintain the first number of active cores for the next operation interval.

16. The system of claim 13, wherein the power controller is to obtain trained model parameter information from a storage of the processor based at least in part on the workload characteristic information.

17. The system of claim 16, wherein the power controller is to:
generate a power configuration prediction from the trained model parameter information, the power configuration prediction including a number of cores to be active in the next operation interval, a number of threads to be active in the next operation interval, and a performance state of the number of cores;
estimate a performance/energy impact of the power configuration prediction;
update at least some of the trained model parameter information for a classified workload type to reduce a performance impact if the estimated performance/energy impact exceeds a first impact threshold; and
update at least some of the trained model parameter information for the classified workload type to increase power savings if the estimated performance/energy impact is less than a second impact threshold.

18. A method comprising:
classifying, via a workload classifier, a workload executed on a multicore processor including a plurality of cores, and causing a reduced number of cores of the plurality of cores to be active in a next operation interval based at least in part on the workload classification;
determining an impact of the reduced number of cores on a performance metric of the multicore processor; and
if the impact is greater than a first threshold, updating one or more trained model parameters associated with the workload classifier for a workload type associated with the workload, wherein the updated trained model parameters are to enable a reduction of the impact on the performance metric.

19.
The method of claim 18, further comprising, if the impact is less than a second threshold, updating the one or more trained model parameters associated with the workload classifier for the workload type associated with the workload, wherein the updated trained model parameters are to enable a reduction in power consumption, wherein the second threshold is less than the first threshold.

20. The method of claim 18, wherein classifying the workload comprises obtaining trained model parameters from a storage of the multicore processor based at least in part on workload characteristic information obtained from one or more of the plurality of cores.

21. The method of claim 20, further comprising:
generating a power configuration prediction from the trained model parameters, the power configuration prediction to identify the reduced number of cores to be active in the next operation interval, a number of threads to be active in the next operation interval, and a performance state of the reduced number of cores; and
determining whether to enable a power management controller to cause the reduced number of cores to be active in the next operation interval based at least in part on history information.

22. The method of claim 18, wherein updating the one or more trained model parameters comprises increasing a number of cores to be active for the workload type.

23. The method of claim 18, further comprising causing one or more threads of the workload to be migrated from one or more first cores to at least one second core, wherein the at least one second core comprises a memory-biased core and the one or more first cores comprise a compute-biased core.

24. A machine-readable storage medium including machine-readable instructions that, when executed, implement a method as claimed in any one of claims 18 to 23.

25. An apparatus comprising means to perform a method as claimed in any one of claims 18 to 23.
PERFORMING POWER MANAGEMENT IN A MULTICORE PROCESSOR

Technical Field

[0001] Embodiments relate to power management of a system, and more particularly to power management of a multicore processor.

Background

[0002] Advances in semiconductor processing and logic design have permitted an increase in the amount of logic that may be present on integrated circuit devices. As a result, computer system configurations have evolved from a single or multiple integrated circuits in a system to multiple hardware threads, multiple cores, multiple devices, and/or complete systems on individual integrated circuits. Additionally, as the density of integrated circuits has grown, the power requirements for computing systems (from embedded systems to servers) have also escalated. Furthermore, software inefficiencies and their hardware requirements have also caused an increase in computing device energy consumption. In fact, some studies indicate that computing devices consume a sizeable percentage of the entire electricity supply for a country, such as the United States of America. As a result, there is a vital need for energy efficiency and conservation associated with integrated circuits. These needs will increase as servers, desktop computers, notebooks, Ultrabooks™, tablets, mobile phones, processors, embedded systems, etc. become even more prevalent (from inclusion in the typical computer, automobiles, and televisions to biotechnology).

Brief Description of the Drawings

[0003] FIG. 1 is a block diagram of a portion of a system in accordance with an embodiment of the present invention.
[0004] FIG. 2 is a block diagram of a processor in accordance with an embodiment of the present invention.
[0005] FIG. 3 is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention.
[0006] FIG. 4 is an embodiment of a processor including multiple cores.
[0007] FIG. 5 is a block diagram of a micro-architecture of a processor core in accordance with one embodiment of the present invention.
[0008] FIG. 6 is a block diagram of a micro-architecture of a processor core in accordance with another embodiment.
[0009] FIG. 7 is a block diagram of a micro-architecture of a processor core in accordance with yet another embodiment.
[0010] FIG. 8 is a block diagram of a micro-architecture of a processor core in accordance with a still further embodiment.
[0011] FIG. 9 is a block diagram of a processor in accordance with another embodiment of the present invention.
[0012] FIG. 10 is a block diagram of a representative SoC in accordance with an embodiment of the present invention.
[0013] FIG. 11 is a block diagram of another example SoC in accordance with an embodiment of the present invention.
[0014] FIG. 12 is a block diagram of an example system with which embodiments can be used.
[0015] FIG. 13 is a block diagram of another example system with which embodiments may be used.
[0016] FIG. 14 is a block diagram of a representative computer system.
[0017] FIG. 15 is a block diagram of a system in accordance with an embodiment of the present invention.
[0018] FIG. 16 is a block diagram of a power control logic in accordance with an embodiment of the present invention.
[0019] FIG. 17 is a block diagram of a processor including a hardware power control logic in accordance with another embodiment of the present invention.
[0020] FIG. 18 is a flow diagram of a method for controlling power consumption of a processor in accordance with an embodiment of the present invention.
[0021] FIG. 19 is a flow diagram of a method for controlling power consumption of a processor in accordance with another embodiment of the present invention.
[0022] FIG. 20 is a flow diagram of a method for updating trained model parameters in accordance with an embodiment of the present invention.

Detailed Description

[0023] In various embodiments, an intelligent multi-core power management controller for a processor is provided that learns workload characteristics on-the-fly and dynamically adjusts power configurations to provide optimal performance per unit of energy. In one embodiment, such power configurations include the number of active cores and threads, as well as an optimal voltage and frequency for each active core. In various embodiments, a machine learning-based performance and energy model identifies particular workload behaviors, such as intensive memory accesses, and predicts optimal power control, including placing one or more cores into an idle or low power state while saturating memory resources.

[0024] In an embodiment, a power management controller is configured with a policy that determines an optimal power configuration and a mechanism to apply the decided configuration to the underlying system. Such policies may include heuristics developed by experts, and/or offline/online machine learning schemes, and may further include a number of user-level and operating system (OS)-level core-to-thread management mechanisms.

[0025] A power management controller as described herein may be configured to allocate only needed resources to a workload, so that performance and energy efficiency can be maximized. As an example, memory bound workloads saturate memory resources (such as bandwidth or queues) before all compute resources are fully utilized. If such workloads are executed with all threads and cores active, poor efficiency results. Some compute bound workloads also suffer from compromised scalability due to various reasons such as increased synchronization overhead. Embodiments apply equally to other workloads that create slack in a core, such that the core becomes underutilized. Other example workloads include I/O or network bounded workloads. Embodiments may thus identify a best power configuration for different workloads. For example, particular workloads may be identified, and underutilized resources for those workloads can be powered off or operated at reduced consumption levels to enable significant energy savings without adversely affecting performance.

[0026] In an embodiment, the best power configuration for a workload defines the optimal number of threads and cores, execution units, voltages and frequencies, and so forth. This power configuration depends on many parameters, including both runtime workload behaviors and system power status. In addition, when considering the overheads incurred during transitions between power states, the selection process becomes even more complex. A single, fixed control policy is hard to adapt to various workloads and different systems. Embodiments thus provide a set of different models to evaluate, and an intelligent selector chooses from the identified models. This enables multiple control policies and a flexible selection at runtime; a minimal sketch of such a selector follows.
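By way of illustration only, the following Python sketch models such a multi-policy selector. It is not the claimed hardware logic: the candidate configurations, the 0.6 threshold, and the performance/energy proxy are invented for the example, and a real controller would implement the equivalent decision in PCU firmware or dedicated logic.

```python
from dataclasses import dataclass

@dataclass
class PowerConfig:
    active_cores: int   # cores to keep active in the next operation interval
    freq_mhz: int       # shared operating frequency for those cores

def perf_per_energy(cfg, mem_boundedness):
    # Toy proxy: throughput saturates as the workload becomes memory bound,
    # while energy grows with core count and super-linearly with frequency.
    throughput = min(cfg.active_cores, 8 * (1.0 - mem_boundedness) + 1) * cfg.freq_mhz
    energy = cfg.active_cores * (cfg.freq_mhz / 1000.0) ** 2
    return throughput / energy

def heuristic_policy(sample):
    # Expert rule: shed half the cores when memory resources look saturated.
    if sample["mem_boundedness"] > 0.6:          # hypothetical threshold
        return PowerConfig(max(1, sample["active_cores"] // 2), 1200)
    return PowerConfig(sample["active_cores"], 2400)

def model_policy(sample):
    # Stand-in for a learned model: score a small candidate space exhaustively.
    candidates = [PowerConfig(c, f) for c in (1, 2, 4, 8)
                  for f in (1200, 1800, 2400)]
    return max(candidates,
               key=lambda c: perf_per_energy(c, sample["mem_boundedness"]))

def select_config(sample, policies=(heuristic_policy, model_policy)):
    # The selector evaluates each policy's proposal and applies the one
    # with the best estimated performance per unit of energy.
    return max((p(sample) for p in policies),
               key=lambda c: perf_per_energy(c, sample["mem_boundedness"]))

# Example: a heavily memory-bound interval observed on 8 active cores.
print(select_config({"active_cores": 8, "mem_boundedness": 0.8}))
```

In this toy setting the selector converges on a single low-frequency core for a heavily memory-bound interval, which mirrors the intuition above: extra compute resources add energy cost without adding throughput once memory is saturated.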
Thus, embodiments may be used to determine an optimal power configuration (e.g., number of cores/threads and voltage/frequency) concurrently for each workload, rather than applying a predefined control policy based on a single performance/energy prediction model.

[0027] Embodiments operate to save energy without adversely affecting performance for memory-intensive workloads, which saturate memory resources before fully utilizing compute resources and can thereby waste energy in a multicore processor. Embodiments may identify such behaviors and turn off underutilized cores to provide energy savings without a performance sacrifice.

[0028] In some embodiments, a heterogeneous multiprocessor may include two different types of cores: one core type optimized for computation and another core type optimized for memory accesses. In one example, both types of cores implement the same instruction set architecture (ISA) but have different microarchitectures, which may facilitate thread migration between the core types.

[0029] Compute and memory bounded phases of a program may have very different processor requirements that cannot be optimized by a single core type. For example, a homogeneous multiprocessor optimized for compute workloads may target a highest core count running at a frequency that can sustain one fused multiply add (FMA) per cycle per core. However, this multiprocessor may not be very energy efficient during a program phase that is mostly waiting for memory return. This is so, as during memory bounded phases the cores are mostly idle waiting for memory accesses, yet the idle time may not be long enough to warrant placing the core into a low power state. As a result, an idling core at a high frequency can consume unnecessary power.

[0030] As such, embodiments provide a heterogeneous multiprocessor that includes two or more specialized core types that are optimized for different operating points. In the examples described herein, two core types are provided: a compute-optimized core (also referred to as a compute-biased core) and a memory-optimized core (also referred to as a memory-biased core). However, understand that the scope of the present invention is not limited to two core types, and in other cases additional core types optimized for other workload types may be present. A toy illustration of detecting memory bounded phases from performance counters appears below.
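The following minimal sketch shows one plausible way to flag a memory bounded interval from raw counter values. The counter names and thresholds are assumptions made for this example, not the encoding of performance information used by any particular processor.

```python
def classify_interval(counters):
    """Classify the last operation interval from raw performance counters.

    The counter names and the 0.5/0.9/0.1 thresholds are illustrative
    assumptions, not values taken from any real part.
    """
    stall_frac = counters["mem_stall_cycles"] / counters["cycles"]
    bw_util = counters["dram_bytes"] / counters["peak_dram_bytes"]
    if stall_frac > 0.5 or bw_util > 0.9:
        return "memory_bound"    # candidate for shedding cores / migrating threads
    if stall_frac < 0.1:
        return "compute_bound"   # keep cores active at high frequency
    return "mixed"

sample = {"cycles": 1_000_000, "mem_stall_cycles": 620_000,
          "dram_bytes": 7_500_000, "peak_dram_bytes": 8_000_000}
print(classify_interval(sample))   # -> memory_bound
```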
[0031] Although the following embodiments are described with reference to energy conservation and energy efficiency in specific integrated circuits, such as in computing platforms or processors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to any particular type of computer system. That is, disclosed embodiments can be used in many different system types, ranging from server computers (e.g., tower, rack, blade, micro-server and so forth), communications systems, storage systems, desktop computers of any configuration, laptop, notebook, and tablet computers (including 2:1 tablets, phablets and so forth), and may also be used in other devices, such as handheld devices, systems on chip (SoCs), and embedded applications. Some examples of handheld devices include cellular phones such as smartphones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications may typically include a microcontroller, a digital signal processor (DSP), network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, wearable devices, or any other system that can perform the functions and operations taught below. Further, embodiments may be implemented in mobile terminals having standard voice functionality such as mobile phones, smartphones and phablets, and/or in non-mobile terminals without a standard wireless voice function communication capability, such as many wearables, tablets, notebooks, desktops, micro-servers, servers and so forth. Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the embodiments of methods, apparatuses, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a 'green technology' future, such as for power conservation and energy efficiency in products that encompass a large portion of the US economy.

[0032] Referring now to FIG. 1, shown is a block diagram of a portion of a system in accordance with an embodiment of the present invention. As shown in FIG. 1, system 100 may include various components, including a processor 110 which, as shown, is a multicore processor. Processor 110 may be coupled to a power supply 150 via an external voltage regulator 160, which may perform a first voltage conversion to provide a primary regulated voltage to processor 110.

[0033] As seen, processor 110 may be a single die processor including multiple cores 120a-120n. In addition, each core may be associated with an integrated voltage regulator (IVR) 125a-125n which receives the primary regulated voltage and generates an operating voltage to be provided to one or more agents of the processor associated with the IVR. Accordingly, an IVR implementation may be provided to allow for fine-grained control of voltage and thus power and performance of each individual core. As such, each core can operate at an independent voltage and frequency, enabling great flexibility and affording wide opportunities for balancing power consumption with performance. In some embodiments, the use of multiple IVRs enables the grouping of components into separate power planes, such that power is regulated and supplied by the IVR to only those components in the group. During power management, a given power plane of one IVR may be powered down or off when the processor is placed into a certain low power state, while another power plane of another IVR remains active, or fully powered.

[0034] Still referring to FIG. 1, additional components may be present within the processor, including an input/output interface 132, another interface 134, and an integrated memory controller 136. As seen, each of these components may be powered by another integrated voltage regulator 125x. In one embodiment, interface 132 may enable operation for an Intel® Quick Path Interconnect (QPI) interconnect, which provides for point-to-point (PtP) links in a cache coherent protocol that includes multiple layers including a physical layer, a link layer and a protocol layer.
In turn, interface 134 may communicate via a Peripheral Component Interconnect Express (PCIe™) protocol.

[0035] Also shown is a power control unit (PCU) 138, which may include hardware, software and/or firmware to perform power management operations with regard to processor 110. As seen, PCU 138 provides control information to external voltage regulator 160 via a digital interface to cause the voltage regulator to generate the appropriate regulated voltage. PCU 138 also provides control information to IVRs 125 via another digital interface to control the operating voltage generated (or to cause a corresponding IVR to be disabled in a low power mode). In various embodiments, PCU 138 may include a variety of power management logic units to perform hardware-based power management. Such power management may be wholly processor controlled (e.g., by various processor hardware, and which may be triggered by workload and/or power, thermal or other processor constraints) and/or the power management may be performed responsive to external sources (such as a platform or power management source, or system software). As described further herein, PCU 138 may include control logic to perform a workload classification based on a type of workload being executed, and cause the workload to be executed on a potentially different number of cores (and at potentially different performance states) based at least in part on the workload type.

[0036] While not shown for ease of illustration, understand that additional components may be present within processor 110, such as uncore logic and other components such as internal memories, e.g., one or more levels of a cache memory hierarchy, and so forth. Furthermore, while shown in the implementation of FIG. 1 with an integrated voltage regulator, embodiments are not so limited.

[0037] Note that the power management techniques described herein may be independent of and complementary to an operating system (OS)-based power management (OSPM) mechanism. According to one example OSPM technique, a processor can operate at various performance states or levels, so-called P-states, namely from P0 to PN. In general, the P1 performance state may correspond to the highest guaranteed performance state that can be requested by an OS. In addition to this P1 state, the OS can further request a higher performance state, namely a P0 state. This P0 state may thus be an opportunistic or turbo mode state in which, when power and/or thermal budget is available, processor hardware can configure the processor or at least portions thereof to operate at a higher than guaranteed frequency. In many implementations, a processor can include multiple so-called bin frequencies above the P1 guaranteed maximum frequency, up to a maximum peak frequency of the particular processor, as fused or otherwise written into the processor during manufacture. An illustrative rendering of such a P-state table follows.
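The sketch below renders this P-state scheme as a simple table. The voltage and frequency values are invented for the example and do not describe any real part; they merely show the pairs-of-voltage/frequency form that a configuration storage might take.

```python
# Illustrative P-state table mapping each state to a (volts, MHz) pair.
# All numeric values here are hypothetical.
P_STATES = {
    "P0": (1.10, 3400),   # opportunistic/turbo mode, granted only on budget
    "P1": (1.00, 2800),   # highest guaranteed performance state
    "P2": (0.90, 2200),
    "Pn": (0.75, 1200),   # lowest active performance state
}

def grant_state(os_request, turbo_budget_available):
    # The OS may request P0, but hardware grants it only when power and
    # thermal budget allow; otherwise the guaranteed P1 state is used.
    if os_request == "P0" and not turbo_budget_available:
        return "P1"
    return os_request

volts, mhz = P_STATES[grant_state("P0", turbo_budget_available=False)]
print(volts, mhz)   # -> 1.0 2800
```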
In addition, according to one OSPM mechanism, a processor can operate at various power states or levels. With regard to power states, an OSPM mechanism may specify different power consumption states, generally referred to as C-states: C0, C1 to Cn states. When a core is active, it runs at a C0 state, and when the core is idle it may be placed in a core low power state, also called a core non-zero C-state (e.g., C1-C6 states), with each C-state being at a lower power consumption level (such that C6 is a deeper low power state than C1, and so forth).

[0038] Understand that many different types of power management techniques may be used individually or in combination in different embodiments. As representative examples, a power controller may control the processor to be power managed by some form of dynamic voltage frequency scaling (DVFS), in which an operating voltage and/or operating frequency of one or more cores or other processor logic may be dynamically controlled to reduce power consumption in certain situations. In an example, DVFS may be performed using Enhanced Intel SpeedStep™ technology available from Intel Corporation, Santa Clara, CA, to provide optimal performance at a lowest power consumption level. In another example, DVFS may be performed using Intel TurboBoost™ technology to enable one or more cores or other compute engines to operate at a higher than guaranteed operating frequency based on conditions (e.g., workload and availability).

[0039] Another power management technique that may be used in certain examples is dynamic swapping of workloads between different compute engines. For example, the processor may include asymmetric cores or other processing engines that operate at different power consumption levels, such that in a power constrained situation, one or more workloads can be dynamically switched to execute on a lower power core or other compute engine. Another exemplary power management technique is hardware duty cycling (HDC), which may cause cores and/or other compute engines to be periodically enabled and disabled according to a duty cycle, such that one or more cores may be made inactive during an inactive period of the duty cycle and made active during an active period of the duty cycle. Although described with these particular examples, understand that many other power management techniques may be used in particular embodiments. A toy model of such duty cycling follows.
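As a rough illustration of hardware duty cycling, the sketch below splits a fixed period into active and forced-idle portions per core. The period length and duty fraction are arbitrary example values; real HDC granularity and control are hardware-defined.

```python
def duty_cycle_schedule(num_cores, duty_fraction, period_us=1000):
    """Toy HDC schedule: each core is active for duty_fraction of every
    period and forced idle for the remainder. All values are illustrative."""
    active_us = int(period_us * duty_fraction)
    return [{"core": c, "active_us": active_us, "idle_us": period_us - active_us}
            for c in range(num_cores)]

# Example: run 4 cores at a 25% duty cycle over a 1 ms period.
for slot in duty_cycle_schedule(num_cores=4, duty_fraction=0.25):
    print(slot)
```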
[0040] Embodiments can be implemented in processors for various markets including server processors, desktop processors, mobile processors and so forth. Referring now to FIG. 2, shown is a block diagram of a processor in accordance with an embodiment of the present invention. As shown in FIG. 2, processor 200 may be a multicore processor including a plurality of cores 210a-210n. In one embodiment, each such core may be of an independent power domain and can be configured to enter and exit active states and/or maximum performance states based on workload. The various cores may be coupled via an interconnect 215 to a system agent or uncore 220 that includes various components. As seen, the uncore 220 may include a shared cache 230 which may be a last level cache. In addition, the uncore may include an integrated memory controller 240 to communicate with a system memory (not shown in FIG. 2), e.g., via a memory bus. Uncore 220 also includes various interfaces 250 and a power control unit 255, which may include a workload classification logic 256 (that may include or be associated with machine learning logic) to classify a workload being executed and perform dynamic control of a number of cores and/or performance state based at least in part thereon, as described herein.

[0041] In addition, by interfaces 250a-250n, connection can be made to various off-chip components such as peripheral devices, mass storage and so forth. While shown with this particular implementation in the embodiment of FIG. 2, the scope of the present invention is not limited in this regard.

[0042] Referring now to FIG. 3, shown is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention. As shown in the embodiment of FIG. 3, processor 300 includes multiple domains. Specifically, a core domain 310 can include a plurality of cores 310₀-310ₙ, a graphics domain 320 can include one or more graphics engines, and a system agent domain 350 may further be present. In some embodiments, system agent domain 350 may execute at a frequency independent of the core domain and may remain powered on at all times to handle power control events and power management, such that domains 310 and 320 can be controlled to dynamically enter into and exit high power and low power states. Each of domains 310 and 320 may operate at a different voltage and/or power. Note that while only shown with three domains, understand that the scope of the present invention is not limited in this regard and additional domains can be present in other embodiments. For example, multiple core domains may be present, each including at least one core.

[0043] In general, each core 310 may further include low level caches in addition to various execution units and additional processing elements. In turn, the various cores may be coupled to each other and to a shared cache memory formed of a plurality of units of a last level cache (LLC) 340₀-340ₙ. In various embodiments, LLC 340 may be shared amongst the cores and the graphics engine, as well as various media processing circuitry. As seen, a ring interconnect 330 thus couples the cores together, and provides interconnection between the cores, graphics domain 320 and system agent circuitry 350. In one embodiment, interconnect 330 can be part of the core domain. However, in other embodiments the ring interconnect can be of its own domain.

[0044] As further seen, system agent domain 350 may include display controller 352 which may provide control of and an interface to an associated display. System agent domain 350 may also include a power control unit 355 which can include a workload classification logic 356 (itself including machine learning logic) to perform the workload classification-based thread migration and power control techniques as described herein. A sketch of one such mechanism, history-based gating of core-count predictions, follows.
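The claims describe a history logic that applies a predicted core count only after a run of consecutive identical predictions. A minimal sketch of that hysteresis, with a hypothetical threshold of three intervals, might look as follows.

```python
class PredictionHistory:
    """Act on a predicted core count only after it has been produced for
    several consecutive intervals, damping noisy predictions."""

    def __init__(self, threshold=3):          # hypothetical threshold
        self.threshold = threshold
        self.last_prediction = None
        self.count = 0

    def confirm(self, predicted_cores):
        # Track the length of the current run of identical predictions.
        if predicted_cores == self.last_prediction:
            self.count += 1
        else:
            self.last_prediction = predicted_cores
            self.count = 1
        # True means the power controller may apply the prediction.
        return self.count >= self.threshold

history = PredictionHistory()
for prediction in (4, 4, 4, 8):
    print(prediction, history.confirm(prediction))
# -> 4 False, 4 False, 4 True, 8 False
```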
[0045] As further seen in FIG. 3, processor 300 can further include an integrated memory controller (IMC) 370 that can provide for an interface to a system memory, such as a dynamic random access memory (DRAM). Multiple interfaces 380₀-380ₙ may be present to enable interconnection between the processor and other circuitry. For example, in one embodiment at least one direct media interface (DMI) interface may be provided as well as one or more PCIe™ interfaces. Still further, to provide for communications between other agents such as additional processors or other circuitry, one or more QPI interfaces may also be provided. Although shown at this high level in the embodiment of FIG. 3, understand that the scope of the present invention is not limited in this regard.

[0046] Referring to FIG. 4, an embodiment of a processor including multiple cores is illustrated. Processor 400 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SoC), or other device to execute code. Processor 400, in one embodiment, includes at least two cores, cores 401 and 402, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 400 may include any number of processing elements that may be symmetric or asymmetric.

[0047] In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

[0048] A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

[0049] Physical processor 400, as illustrated in FIG. 4, includes two cores, cores 401 and 402. Here, cores 401 and 402 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 401 includes an out-of-order processor core, while core 402 includes an in-order processor core. However, cores 401 and 402 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated ISA, a co-designed core, or other known core. Yet to further the discussion, the functional units illustrated in core 401 are described in further detail below, as the units in core 402 operate in a similar manner.

[0050] As depicted, core 401 includes two hardware threads 401a and 401b, which may also be referred to as hardware thread slots 401a and 401b.
Therefore, software entities, such as an operating system, in one embodiment potentially view processor 400 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers 401a, a second thread is associated with architecture state registers 401b, a third thread may be associated with architecture state registers 402a, and a fourth thread may be associated with architecture state registers 402b. Here, each of the architecture state registers (401a, 401b, 402a, and 402b) may be referred to as processing elements, thread slots, or thread units, as described above. As illustrated, architecture state registers 401a are replicated in architecture state registers 401b, so individual architecture states/contexts are capable of being stored for logical processor 401a and logical processor 401b. In core 401, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 430, may also be replicated for threads 401a and 401b. Some resources, such as reorder buffers in reorder/retirement unit 435, I-TLB 420, load/store buffers, and queues, may be shared through partitioning. Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB 415, execution unit(s) 440, and portions of out-of-order unit 435, are potentially fully shared.

[0051] Processor 400 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 4, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 401 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 420 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 420 to store address translation entries for instructions.

[0052] Core 401 further includes decode module 425 coupled to fetch unit 420 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 401a, 401b, respectively. Usually core 401 is associated with a first ISA, which defines/specifies instructions executable on processor 400. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 425 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, decoders 425, in one embodiment, include logic designed or adapted to recognize specific instructions, such as transactional instructions. As a result of the recognition by decoders 425, the architecture of core 401 takes specific, predefined actions to perform tasks associated with the appropriate instruction.
It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions, some of which may be new or old instructions.

[0053] In one example, allocator and renamer block 430 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 401a and 401b are potentially capable of out-of-order execution, where allocator and renamer block 430 also reserves other resources, such as reorder buffers to track instruction results. Unit 430 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 400. Reorder/retirement unit 435 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out of order.

[0054] Scheduler and execution unit(s) block 440, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.

[0055] Lower level data cache and data translation buffer (D-TLB) 450 are coupled to execution unit(s) 440. The data cache is to store recently used/operated-on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.

[0056] Here, cores 401 and 402 share access to a higher-level or further-out cache 410, which is to cache recently fetched elements. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, higher-level cache 410 is a last-level data cache (the last cache in the memory hierarchy on processor 400), such as a second or third level data cache. However, higher level cache 410 is not so limited, as it may be associated with or include an instruction cache. A trace cache (a type of instruction cache) instead may be coupled after decoder 425 to store recently decoded traces.

[0057] In the depicted configuration, processor 400 also includes bus interface module 405 and a power controller 460, which may perform power management in accordance with an embodiment of the present invention. In this scenario, bus interface 405 is to communicate with devices external to processor 400, such as system memory and other components.

[0058] A memory controller 470 may interface with other devices such as one or many memories. In an example, bus interface 405 includes a ring interconnect with a memory controller for interfacing with a memory and a graphics controller for interfacing with a graphics processor.
In an SoC environment, even more devices, such as a network interface, coprocessors, memory, graphics processor, and any other known computer devices/interfaces, may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption.

[0059] Referring now to FIG. 5, shown is a block diagram of a micro-architecture of a processor core in accordance with one embodiment of the present invention. As shown in FIG. 5, processor core 500 may be a multi-stage pipelined out-of-order processor. Core 500 may operate at various voltages based on a received operating voltage, which may be received from an integrated voltage regulator or external voltage regulator.

[0060] As seen in FIG. 5, core 500 includes front end units 510, which may be used to fetch instructions to be executed and prepare them for use later in the processor pipeline. For example, front end units 510 may include a fetch unit 501, an instruction cache 503, and an instruction decoder 505. In some implementations, front end units 510 may further include a trace cache, along with microcode storage as well as a micro-operation storage. Fetch unit 501 may fetch macro-instructions, e.g., from memory or instruction cache 503, and feed them to instruction decoder 505 to decode them into primitives, i.e., micro-operations for execution by the processor.

[0061] Coupled between front end units 510 and execution units 520 is an out-of-order (OOO) engine 515 that may be used to receive the micro-instructions and prepare them for execution. More specifically, OOO engine 515 may include various buffers to re-order micro-instruction flow and allocate various resources needed for execution, as well as to provide renaming of logical registers onto storage locations within various register files such as register file 530 and extended register file 535. Register file 530 may include separate register files for integer and floating point operations. For purposes of configuration, control, and additional operations, a set of machine specific registers (MSRs) 538 may also be present and accessible to various logic within core 500 (and external to the core). For example, power limit information may be stored in one or more MSRs and be dynamically updated as described herein.

[0062] Various resources may be present in execution units 520, including, for example, various integer, floating point, and single instruction multiple data (SIMD) logic units, among other specialized hardware. For example, such execution units may include one or more arithmetic logic units (ALUs) 522 and one or more vector execution units 524, among other such execution units.

[0063] Results from the execution units may be provided to retirement logic, namely a reorder buffer (ROB) 540. More specifically, ROB 540 may include various arrays and logic to receive information associated with instructions that are executed. This information is then examined by ROB 540 to determine whether the instructions can be validly retired and result data committed to the architectural state of the processor, or whether one or more exceptions occurred that prevent a proper retirement of the instructions. Of course, ROB 540 may handle other operations associated with retirement.

[0064] As shown in FIG. 5, ROB 540 is coupled to a cache 550 which, in one embodiment, may be a low level cache (e.g., an L1 cache), although the scope of the present invention is not limited in this regard. Also, execution units 520 can be directly coupled to cache 550.
From cache 550, data communication may occur with higher level caches, system memory and so forth. While shown at this high level in the embodiment of FIG. 5, understand that the scope of the present invention is not limited in this regard. For example, while the implementation of FIG. 5 is with regard to an out-of-order machine such as one based on an Intel® x86 instruction set architecture (ISA), the scope of the present invention is not limited in this regard. That is, other embodiments may be implemented in an in-order processor, a reduced instruction set computing (RISC) processor such as an ARM-based processor, or a processor of another type of ISA that can emulate instructions and operations of a different ISA via an emulation engine and associated logic circuitry.

[0065] Referring now to FIG. 6, shown is a block diagram of a micro-architecture of a processor core in accordance with another embodiment. In the embodiment of FIG. 6, core 600 may be a low power core of a different micro-architecture, such as an Intel® Atom™-based processor having a relatively limited pipeline depth designed to reduce power consumption. As seen, core 600 includes an instruction cache 610 coupled to provide instructions to an instruction decoder 615. A branch predictor 605 may be coupled to instruction cache 610. Note that instruction cache 610 may further be coupled to another level of a cache memory, such as an L2 cache (not shown for ease of illustration in FIG. 6). In turn, instruction decoder 615 provides decoded instructions to an issue queue 620 for storage and delivery to a given execution pipeline. A microcode ROM 618 is coupled to instruction decoder 615.

[0066] A floating point pipeline 630 includes a floating point register file 632 which may include a plurality of architectural registers of a given bit width such as 128, 256 or 512 bits. Pipeline 630 includes a floating point scheduler 634 to schedule instructions for execution on one of multiple execution units of the pipeline. In the embodiment shown, such execution units include an ALU 635, a shuffle unit 636, and a floating point adder 638. In turn, results generated in these execution units may be provided back to buffers and/or registers of register file 632. Of course, understand that while shown with these few example execution units, additional or different floating point execution units may be present in another embodiment.

[0067] An integer pipeline 640 also may be provided. In the embodiment shown, pipeline 640 includes an integer register file 642 which may include a plurality of architectural registers of a given bit width such as 128 or 256 bits. Pipeline 640 includes an integer scheduler 644 to schedule instructions for execution on one of multiple execution units of the pipeline. In the embodiment shown, such execution units include an ALU 645, a shifter unit 646, and a jump execution unit 648. In turn, results generated in these execution units may be provided back to buffers and/or registers of register file 642. Of course, understand that while shown with these few example execution units, additional or different integer execution units may be present in another embodiment.

[0068] A memory execution scheduler 650 may schedule memory operations for execution in an address generation unit 652, which is also coupled to a TLB 654.
As seen, these structures may couple to a data cache 660, which may be an L0 and/or L1 data cache that in turn couples to additional levels of a cache memory hierarchy, including an L2 cache memory.

[0069] To provide support for out-of-order execution, an allocator/renamer 670 may be provided, in addition to a reorder buffer 680, which is configured to reorder instructions executed out of order for retirement in order. Although shown with this particular pipeline architecture in the illustration of FIG. 6, understand that many variations and alternatives are possible.

[0070] Note that in a processor having asymmetric cores, such as in accordance with the micro-architectures of FIGs. 5 and 6, workloads may be dynamically swapped between the cores for power management reasons, as these cores, although having different pipeline designs and depths, may be of the same or related ISA. Such dynamic core swapping may be performed in a manner transparent to a user application (and possibly the kernel also).

[0071] Referring to FIG. 7, shown is a block diagram of a micro-architecture of a processor core in accordance with yet another embodiment. As illustrated in FIG. 7, a core 700 may include a multi-staged in-order pipeline to execute at very low power consumption levels. As one such example, processor 700 may have a micro-architecture in accordance with an ARM Cortex A53 design available from ARM Holdings, LTD., Sunnyvale, CA. In an implementation, an 8-stage pipeline may be provided that is configured to execute both 32-bit and 64-bit code. Core 700 includes a fetch unit 710 that is configured to fetch instructions and provide them to a decode unit 715, which may decode the instructions, e.g., macro-instructions of a given ISA such as an ARMv8 ISA. Note further that a queue 730 may couple to decode unit 715 to store decoded instructions. Decoded instructions are provided to an issue logic 725, where the decoded instructions may be issued to a given one of multiple execution units.

[0072] With further reference to FIG. 7, issue logic 725 may issue instructions to one of multiple execution units. In the embodiment shown, these execution units include an integer unit 735, a multiply unit 740, a floating point/vector unit 750, a dual issue unit 760, and a load/store unit 770. The results of these different execution units may be provided to a writeback unit 780. Understand that while a single writeback unit is shown for ease of illustration, in some implementations separate writeback units may be associated with each of the execution units. Furthermore, understand that while each of the units and logic shown in FIG. 7 is represented at a high level, a particular implementation may include more or different structures. A processor designed using one or more cores having a pipeline as in FIG. 7 may be implemented in many different end products, extending from mobile devices to server systems.

[0073] Referring to FIG. 8, shown is a block diagram of a micro-architecture of a processor core in accordance with a still further embodiment. As illustrated in FIG. 8, a core 800 may include a multi-stage multi-issue out-of-order pipeline to execute at very high performance levels (which may occur at higher power consumption levels than core 700 of FIG. 7). As one such example, processor 800 may have a micro-architecture in accordance with an ARM Cortex A57 design. In an implementation, a 15 (or greater)-stage pipeline may be provided that is configured to execute both 32-bit and 64-bit code.
In addition, the pipeline may provide for 3 (or greater)-wide and 3 (or greater)-issue operation. Core 800 includes a fetch unit 810 that is configured to fetch instructions and provide them to a decoder/renamer/dispatcher 815, which may decode the instructions, e.g., macro-instructions of an ARMv8 instruction set architecture, rename register references within the instructions, and dispatch the instructions (eventually) to a selected execution unit. Decoded instructions may be stored in a queue 825. Note that while a single queue structure is shown for ease of illustration in FIG. 8, understand that separate queues may be provided for each of the multiple different types of execution units.

[0074] Also shown in FIG. 8 is an issue logic 830 from which decoded instructions stored in queue 825 may be issued to a selected execution unit. Issue logic 830 also may be implemented in a particular embodiment with a separate issue logic for each of the multiple different types of execution units to which issue logic 830 couples.

[0075] Decoded instructions may be issued to a given one of multiple execution units. In the embodiment shown, these execution units include one or more integer units 835, a multiply unit 840, a floating point/vector unit 850, a branch unit 860, and a load/store unit 870. In an embodiment, floating point/vector unit 850 may be configured to handle SIMD or vector data of 128 or 256 bits. Still further, floating point/vector execution unit 850 may perform IEEE-754 double precision floating-point operations. The results of these different execution units may be provided to a writeback unit 880. Note that in some implementations separate writeback units may be associated with each of the execution units. Furthermore, understand that while each of the units and logic shown in FIG. 8 is represented at a high level, a particular implementation may include more or different structures.

[0076] Note that in a processor having asymmetric cores, such as in accordance with the micro-architectures of FIGs. 7 and 8, workloads may be dynamically swapped for power management reasons, as these cores, although having different pipeline designs and depths, may be of the same or related ISA. Such dynamic core swapping may be performed in a manner transparent to a user application (and possibly the kernel also).

[0077] A processor designed using one or more cores having pipelines as in any one or more of FIGs. 5-8 may be implemented in many different end products, extending from mobile devices to server systems. Referring now to FIG. 9, shown is a block diagram of a processor in accordance with another embodiment of the present invention. In the embodiment of FIG. 9, processor 900 may be an SoC including multiple domains, each of which may be controlled to operate at an independent operating voltage and operating frequency. As a specific illustrative example, processor 900 may be an Intel® Architecture Core™-based processor such as an i3, i5, i7 or another such processor available from Intel Corporation. However, other low power processors, such as those available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, CA, an ARM-based design from ARM Holdings, Ltd. or a licensee thereof, or a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, CA, or their licensees or adopters, may instead be present in other embodiments, such as an Apple A7 processor, a Qualcomm Snapdragon processor, or a Texas Instruments OMAP processor.
Such SoC may be used in a low power system such as a smartphone, tablet computer, phablet computer, Ultrabook™ computer or other portable computing device.

[0078] In the high level view shown in FIG. 9, processor 900 includes a plurality of core units 9100-910n. Each core unit may include one or more processor cores, one or more cache memories and other circuitry. Each core unit 910 may support one or more instruction sets (e.g., an x86 instruction set (with some extensions that have been added with newer versions); a MIPS instruction set; an ARM instruction set (with optional additional extensions such as NEON)) or other instruction set or combinations thereof. Note that some of the core units may be heterogeneous resources (e.g., of a different design). In addition, each such core may be coupled to a cache memory (not shown) which in an embodiment may be a shared level two (L2) cache memory. A non-volatile storage 930 may be used to store various program and other data. For example, this storage may be used to store at least portions of microcode, boot information such as a BIOS, other system software or so forth.

[0079] Each core unit 910 may also include an interface such as a bus interface unit to enable interconnection to additional circuitry of the processor. In an embodiment, each core unit 910 couples to a coherent fabric that may act as a primary cache coherent on-die interconnect that in turn couples to a memory controller 935. In turn, memory controller 935 controls communications with a memory such as a DRAM (not shown for ease of illustration in FIG. 9).

[0080] In addition to core units, additional processing engines are present within the processor, including at least one graphics unit 920 which may include one or more graphics processing units (GPUs) to perform graphics processing as well as to possibly execute general purpose operations on the graphics processor (so-called GPGPU operation). In addition, at least one image signal processor 925 may be present. Signal processor 925 may be configured to process incoming image data received from one or more capture devices, either internal to the SoC or off-chip.

[0081] Other accelerators also may be present. In the illustration of FIG. 9, a video coder 950 may perform coding operations including encoding and decoding for video information, e.g., providing hardware acceleration support for high definition video content. A display controller 955 further may be provided to accelerate display operations including providing support for internal and external displays of a system. In addition, a security processor 945 may be present to perform security operations such as secure boot operations, various cryptography operations and so forth.

[0082] Each of the units may have its power consumption controlled via a power manager 940, which may include control logic to perform the various power management techniques described herein. For example, power manager 940 may include a machine learning/workload classification logic to classify a workload being executed, and migration logic, which may cause at least some threads of the workload to be dynamically migrated to different cores (and/or core types), such that a different number of cores may be active in a next operating interval.

[0083] In some embodiments, SoC 900 may further include a non-coherent fabric coupled to the coherent fabric to which various peripheral devices may couple. One or more interfaces 960a-960d enable communication with one or more off-chip devices.
Such communications may be via a variety of communication protocols such as PCIe™, GPIO, USB, I2C, UART, MIPI, SDIO, DDR, SPI, HDMI, among other types of communication protocols. Although shown at this high level in the embodiment of FIG. 9, understand the scope of the present invention is not limited in this regard.

[0084] Referring now to FIG. 10, shown is a block diagram of a representative SoC. In the embodiment shown, SoC 1000 may be a multi-core SoC configured for low power operation to be optimized for incorporation into a smartphone or other low power device such as a tablet computer or other portable computing device. As an example, SoC 1000 may be implemented using asymmetric or different types of cores, such as combinations of higher power and/or low power cores, e.g., out-of-order cores and in-order cores. In different embodiments, these cores may be based on an Intel® Architecture™ core design or an ARM architecture design. In yet other embodiments, a mix of Intel and ARM cores may be implemented in a given SoC.

[0085] As seen in FIG. 10, SoC 1000 includes a first core domain 1010 having a plurality of first cores 10120-10123. In an example, these cores may be low power cores such as in-order cores. In one embodiment these first cores may be implemented as ARM Cortex A53 cores. In turn, these cores couple to a cache memory 1015 of core domain 1010. In addition, SoC 1000 includes a second core domain 1020. In the illustration of FIG. 10, second core domain 1020 has a plurality of second cores 10220-10223. In an example, these cores may be higher power-consuming cores than first cores 1012. In an embodiment, the second cores may be out-of-order cores, which may be implemented as ARM Cortex A57 cores. In turn, these cores couple to a cache memory 1025 of core domain 1020. Note that while the example shown in FIG. 10 includes 4 cores in each domain, understand that more or fewer cores may be present in a given domain in other examples.

[0086] With further reference to FIG. 10, a graphics domain 1030 also is provided, which may include one or more graphics processing units (GPUs) configured to independently execute graphics workloads, e.g., provided by one or more cores of core domains 1010 and 1020. As an example, GPU domain 1030 may be used to provide display support for a variety of screen sizes, in addition to providing graphics and display rendering operations.

[0087] As seen, the various domains couple to a coherent interconnect 1040, which in an embodiment may be a cache coherent interconnect fabric that in turn couples to an integrated memory controller 1050. Coherent interconnect 1040 may include a shared cache memory, such as an L3 cache, in some examples. In an embodiment, memory controller 1050 may be a direct memory controller to provide for multiple channels of communication with an off-chip memory, such as multiple channels of a DRAM (not shown for ease of illustration in FIG. 10).

[0088] In different examples, the number of the core domains may vary. For example, for a low power SoC suitable for incorporation into a mobile computing device, a limited number of core domains such as shown in FIG. 10 may be present. Still further, in such low power SoCs, core domain 1020 including higher power cores may have fewer numbers of such cores. For example, in one implementation two cores 1022 may be provided to enable operation at reduced power consumption levels.
In addition, the different core domains may also be coupled to an interrupt controller to enable dynamic swapping of workloads between the different domains.

[0089] In yet other embodiments, a greater number of core domains, as well as additional optional IP logic, may be present, in that an SoC can be scaled to higher performance (and power) levels for incorporation into other computing devices, such as desktops, servers, high performance computing systems, base stations or so forth. As one such example, 4 core domains each having a given number of out-of-order cores may be provided. Still further, in addition to optional GPU support (which as an example may take the form of a GPGPU), one or more accelerators to provide optimized hardware support for particular functions (e.g., web serving, network processing, switching or so forth) also may be provided. In addition, an input/output interface may be present to couple such accelerators to off-chip components.

[0090] Referring now to FIG. 11, shown is a block diagram of another example SoC. In the embodiment of FIG. 11, SoC 1100 may include various circuitry to enable high performance for multimedia applications, communications and other functions. As such, SoC 1100 is suitable for incorporation into a wide variety of portable and other devices, such as smartphones, tablet computers, smart TVs and so forth. In the example shown, SoC 1100 includes a central processor unit (CPU) domain 1110. In an embodiment, a plurality of individual processor cores may be present in CPU domain 1110. As one example, CPU domain 1110 may be a quad core processor having 4 multithreaded cores. Such processors may be homogeneous or heterogeneous processors, e.g., a mix of low power and high power processor cores.

[0091] In turn, a GPU domain 1120 is provided to perform advanced graphics processing in one or more GPUs to handle graphics and compute APIs. A DSP unit 1130 may provide one or more low power DSPs for handling low-power multimedia applications such as music playback, audio/video and so forth, in addition to advanced calculations that may occur during execution of multimedia instructions. In turn, a communication unit 1140 may include various components to provide connectivity via various wireless protocols, such as cellular communications (including 3G/4G LTE), wireless local area protocols such as Bluetooth™, IEEE 802.11, and so forth.

[0092] Still further, a multimedia processor 1150 may be used to perform capture and playback of high definition video and audio content, including processing of user gestures. A sensor unit 1160 may include a plurality of sensors and/or a sensor controller to interface to various off-chip sensors present in a given platform. An image signal processor 1170 may be provided with one or more separate ISPs to perform image processing with regard to captured content from one or more cameras of a platform, including still and video cameras.

[0093] A display processor 1180 may provide support for connection to a high definition display of a given pixel density, including the ability to wirelessly communicate content for playback on such display. Still further, a location unit 1190 may include a GPS receiver with support for multiple GPS constellations to provide to applications highly accurate positioning information obtained using such GPS receiver. Understand that while shown with this particular set of components in the example of FIG. 11, many variations and alternatives are possible.
[0094] Referring now to FIG. 12, shown is a block diagram of an example system with which embodiments can be used. As seen, system 1200 may be a smartphone or other wireless communicator. A baseband processor 1205 is configured to perform various signal processing with regard to communication signals to be transmitted from or received by the system. In turn, baseband processor 1205 is coupled to an application processor 1210, which may be a main CPU of the system to execute an OS and other system software, in addition to user applications such as many well-known social media and multimedia apps. Application processor 1210 may further be configured to perform a variety of other computing operations for the device.

[0095] In turn, application processor 1210 can couple to a user interface/display 1220, e.g., a touch screen display. In addition, application processor 1210 may couple to a memory system including a non-volatile memory, namely a flash memory 1230, and a system memory, namely a dynamic random access memory (DRAM) 1235. As further seen, application processor 1210 further couples to a capture device 1240 such as one or more image capture devices that can record video and/or still images.

[0096] Still referring to FIG. 12, a universal integrated circuit card (UICC) 1240 comprising a subscriber identity module and possibly a secure storage and cryptoprocessor is also coupled to application processor 1210. System 1200 may further include a security processor 1250 that may couple to application processor 1210. A plurality of sensors 1225 may couple to application processor 1210 to enable input of a variety of sensed information such as accelerometer and other environmental information. An audio output device 1295 may provide an interface to output sound, e.g., in the form of voice communications, played or streaming audio data and so forth.

[0097] As further illustrated, a near field communication (NFC) contactless interface 1260 is provided that communicates in an NFC near field via an NFC antenna 1265. While separate antennae are shown in FIG. 12, understand that in some implementations one antenna or a different set of antennae may be provided to enable various wireless functionality.

[0098] A power management integrated circuit (PMIC) 1215 couples to application processor 1210 to perform platform level power management. To this end, PMIC 1215 may issue power management requests to application processor 1210 to enter certain low power states as desired. Furthermore, based on platform constraints, PMIC 1215 may also control the power level of other components of system 1200.

[0099] To enable communications to be transmitted and received, various circuitry may be coupled between baseband processor 1205 and an antenna 1290. Specifically, a radio frequency (RF) transceiver 1270 and a wireless local area network (WLAN) transceiver 1275 may be present. In general, RF transceiver 1270 may be used to receive and transmit wireless data and calls according to a given wireless communication protocol such as a 3G or 4G wireless communication protocol, e.g., in accordance with a code division multiple access (CDMA), global system for mobile communication (GSM), long term evolution (LTE) or other protocol. In addition, a GPS sensor 1280 may be present. Other wireless communications, such as receipt or transmission of radio signals, e.g., AM/FM and other signals, may also be provided. In addition, via WLAN transceiver 1275, local wireless communications can also be realized.
[0100] Referring now to FIG. 13, shown is a block diagram of another example system with which embodiments may be used. In the illustration of FIG. 13, system 1300 may be a mobile low-power system such as a tablet computer, 2:1 tablet, phablet or other convertible or standalone tablet system. As illustrated, a SoC 1310 is present and may be configured to operate as an application processor for the device.

[0101] A variety of devices may couple to SoC 1310. In the illustration shown, a memory subsystem includes a flash memory 1340 and a DRAM 1345 coupled to SoC 1310. In addition, a touch panel 1320 is coupled to the SoC 1310 to provide display capability and user input via touch, including provision of a virtual keyboard on a display of touch panel 1320. To provide wired network connectivity, SoC 1310 couples to an Ethernet interface 1330. A peripheral hub 1325 is coupled to SoC 1310 to enable interfacing with various peripheral devices, such as may be coupled to system 1300 by any of various ports or other connectors.

[0102] In addition to internal power management circuitry and functionality within SoC 1310, a PMIC 1380 is coupled to SoC 1310 to provide platform-based power management, e.g., based on whether the system is powered by a battery 1390 or AC power via an AC adapter 1395. In addition to this power source-based power management, PMIC 1380 may further perform platform power management activities based on environmental and usage conditions. Still further, PMIC 1380 may communicate control and status information to SoC 1310 to cause various power management actions within SoC 1310.

[0103] Still referring to FIG. 13, to provide for wireless capabilities, a WLAN unit 1350 is coupled to SoC 1310 and in turn to an antenna 1355. In various implementations, WLAN unit 1350 may provide for communication according to one or more wireless protocols.

[0104] As further illustrated, a plurality of sensors 1360 may couple to SoC 1310. These sensors may include various accelerometer, environmental and other sensors, including user gesture sensors. Finally, an audio codec 1365 is coupled to SoC 1310 to provide an interface to an audio output device 1370. Of course understand that while shown with this particular implementation in FIG. 13, many variations and alternatives are possible.

[0105] Referring now to FIG. 14, shown is a block diagram of a representative computer system such as a notebook, Ultrabook™ or other small form factor system. A processor 1410, in one embodiment, includes a microprocessor, multi-core processor, multithreaded processor, an ultra low voltage processor, an embedded processor, or other known processing element. In the illustrated implementation, processor 1410 acts as a main processing unit and central hub for communication with many of the various components of the system 1400. As one example, processor 1410 is implemented as a SoC.

[0106] Processor 1410, in one embodiment, communicates with a system memory 1415. As an illustrative example, the system memory 1415 is implemented via multiple memory devices or modules to provide for a given amount of system memory.

[0107] To provide for persistent storage of information such as data, applications, one or more operating systems and so forth, a mass storage 1420 may also couple to processor 1410.
In various embodiments, to enable a thinner and lighter system design as well as to improve system responsiveness, this mass storage may be implemented via an SSD, or the mass storage may primarily be implemented using a hard disk drive (HDD) with a smaller amount of SSD storage to act as an SSD cache to enable non-volatile storage of context state and other such information during power down events, so that a fast power up can occur on re-initiation of system activities. Also shown in FIG. 14, a flash device 1422 may be coupled to processor 1410, e.g., via a serial peripheral interface (SPI). This flash device may provide for non-volatile storage of system software, including a basic input/output software (BIOS) as well as other firmware of the system.

[0108] Various input/output (I/O) devices may be present within system 1400. Specifically shown in the embodiment of FIG. 14 is a display 1424 which may be a high definition LCD or LED panel that further provides for a touch screen 1425. In one embodiment, display 1424 may be coupled to processor 1410 via a display interconnect that can be implemented as a high performance graphics interconnect. Touch screen 1425 may be coupled to processor 1410 via another interconnect, which in an embodiment can be an I2C interconnect. As further shown in FIG. 14, in addition to touch screen 1425, user input by way of touch can also occur via a touch pad 1430 which may be configured within the chassis and may also be coupled to the same I2C interconnect as touch screen 1425.

[0109] For perceptual computing and other purposes, various sensors may be present within the system and may be coupled to processor 1410 in different manners. Certain inertial and environmental sensors may couple to processor 1410 through a sensor hub 1440, e.g., via an I2C interconnect. In the embodiment shown in FIG. 14, these sensors may include an accelerometer 1441, an ambient light sensor (ALS) 1442, a compass 1443 and a gyroscope 1444. Other environmental sensors may include one or more thermal sensors 1446 which in some embodiments couple to processor 1410 via a system management bus (SMBus).

[0110] Also seen in FIG. 14, various peripheral devices may couple to processor 1410 via a low pin count (LPC) interconnect. In the embodiment shown, various components can be coupled through an embedded controller 1435. Such components can include a keyboard 1436 (e.g., coupled via a PS2 interface), a fan 1437, and a thermal sensor 1439. In some embodiments, touch pad 1430 may also couple to EC 1435 via a PS2 interface. In addition, a security processor such as a trusted platform module (TPM) 1438 may also couple to processor 1410 via this LPC interconnect.

[0111] System 1400 can communicate with external devices in a variety of manners, including wirelessly. In the embodiment shown in FIG. 14, various wireless modules, each of which can correspond to a radio configured for a particular wireless communication protocol, are present. One manner for wireless communication in a short range such as a near field may be via an NFC unit 1445, which may communicate, in one embodiment, with processor 1410 via an SMBus. Note that via this NFC unit 1445, devices in close proximity to each other can communicate.
[0112] As further seen in FIG. 14, additional wireless units can include other short range wireless engines including a WLAN unit 1450 and a Bluetooth unit 1452. Using WLAN unit 1450, Wi-Fi™ communications can be realized, while via Bluetooth unit 1452, short range Bluetooth™ communications can occur. These units may communicate with processor 1410 via a given link.

[0113] In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, can occur via a WWAN unit 1456 which in turn may couple to a subscriber identity module (SIM) 1457. In addition, to enable receipt and use of location information, a GPS module 1455 may also be present. Note that in the embodiment shown in FIG. 14, WWAN unit 1456 and an integrated capture device such as a camera module 1454 may communicate via a given link.

[0114] An integrated camera module 1454 can be incorporated in the lid. To provide for audio inputs and outputs, an audio processor can be implemented via a digital signal processor (DSP) 1460, which may couple to processor 1410 via a high definition audio (HDA) link. Similarly, DSP 1460 may communicate with an integrated coder/decoder (CODEC) and amplifier 1462 that in turn may couple to output speakers 1463 which may be implemented within the chassis. Similarly, amplifier and CODEC 1462 can be coupled to receive audio inputs from a microphone 1465 which in an embodiment can be implemented via dual array microphones (such as a digital microphone array) to provide for high quality audio inputs to enable voice-activated control of various operations within the system. Note also that audio outputs can be provided from amplifier/CODEC 1462 to a headphone jack 1464. Although shown with these particular components in the embodiment of FIG. 14, understand the scope of the present invention is not limited in this regard.

[0115] Embodiments may be implemented in many different system types. Referring now to FIG. 15, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown in FIG. 15, multiprocessor system 1500 is a point-to-point interconnect system, and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. As shown in FIG. 15, each of processors 1570 and 1580 may be multicore processors, including first and second processor cores (i.e., processor cores 1574a and 1574b and processor cores 1584a and 1584b), although potentially many more cores may be present in the processors. Each of the processors can include a PCU or other power management logic to perform processor-based power management as described herein, including workload classification and dynamic thread migration and core performance control based at least in part thereon.

[0116] Still referring to FIG. 15, first processor 1570 further includes a memory controller hub (MCH) 1572 and point-to-point (P-P) interfaces 1576 and 1578. Similarly, second processor 1580 includes an MCH 1582 and P-P interfaces 1586 and 1588. As shown in FIG. 15, MCHs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of system memory (e.g., DRAM) locally attached to the respective processors. First processor 1570 and second processor 1580 may be coupled to a chipset 1590 via P-P interconnects 1562 and 1564, respectively.
As shown in FIG. 15, chipset 1590 includes P-P interfaces 1594 and 1598.

[0117] Furthermore, chipset 1590 includes an interface 1592 to couple chipset 1590 with a high performance graphics engine 1538, via a P-P interconnect 1539. In turn, chipset 1590 may be coupled to a first bus 1516 via an interface 1596. As shown in FIG. 15, various input/output (I/O) devices 1514 may be coupled to first bus 1516, along with a bus bridge 1518 which couples first bus 1516 to a second bus 1520. Various devices may be coupled to second bus 1520 including, for example, a keyboard/mouse 1522, communication devices 1526 and a data storage unit 1528 such as a disk drive or other mass storage device which may include code 1530, in one embodiment. Further, an audio I/O 1524 may be coupled to second bus 1520. Embodiments can be incorporated into other types of systems including mobile devices such as a smart cellular telephone, tablet computer, netbook, Ultrabook™, or so forth.

[0118] In an embodiment, a prediction logic is adapted to determine an optimal power configuration, and a control logic is adapted to apply the determined power configuration. The prediction logic collects input statistics periodically by reading performance/energy counters, and then uses an intelligent model to evaluate the current status and predict the next optimal power configuration. The control logic applies this power control decision to the underlying system via thread migration and dynamic voltage and frequency scaling, in an embodiment. In some embodiments, one or more heterogeneous cores may be present that are specialized for memory-intensive workloads. Threads can migrate to such core(s) to save more energy during a memory-bound phase. A hardware thread migration technique may be used to migrate threads between compute-biased cores and memory-biased cores, without any software intervention, from both a user application and an operating system standpoint.

[0119] Because compute-bound and memory-bound workloads have very different system requirements, the model solves a classification problem. Given a set of power configuration parameters (e.g., number of cores, number of threads, clock frequency and voltage) and runtime statistics (e.g., various performance/energy counters) at a current time sample, the goal is to find the optimal power configuration to maximize performance and energy efficiency for the next time interval. In different embodiments, two types of prediction models, expert heuristic and machine learning, may be used.

[0120] Referring now to FIG. 16, shown is a block diagram of a power control logic in accordance with an embodiment of the present invention. As shown in FIG. 16, power control logic 1600, which may be formed of combinations of hardware, software, and/or firmware, may be implemented as part of a larger power control unit of a processor such as a PCU. Such PCU in turn itself may be implemented as one or more microcontrollers or other control logic. In the embodiment shown, power control logic 1600 includes a performance characteristics buffer 1610. As described herein, buffer 1610 may be configured to store incoming performance characteristics, e.g., obtained from performance/energy counters of the various cores and other logic of a processor and/or system having such processor.

[0121] In turn, the characteristics information is provided to a prediction logic 1620. In different implementations, different types of prediction logic may be present. As described herein, such logic may include one or more models based on expert heuristics and/or machine learning. From the incoming characteristics information regarding current processor operation, prediction logic 1620 may generate a power configuration definition. Such definition may be used to define appropriate processor operation controls for a next operation interval. In an embodiment, such power configuration definition information may include a number of cores to be active in the next operation interval, corresponding threads to be scheduled to such cores, and the voltage/frequency at which the selected cores are to operate.
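By way of illustration only, the power configuration definition of [0121] may be modeled as a small record. The following Python sketch is hypothetical and not part of any described embodiment; its field names and example values are assumptions introduced purely for readability.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass(frozen=True)
class PowerConfig:
    """Hypothetical power configuration definition for one operation interval."""
    num_active_cores: int              # how many cores are powered on
    thread_map: Dict[int, List[int]]   # core id -> thread ids scheduled on it
    voltage_mv: int                    # operating voltage for the active cores
    frequency_mhz: int                 # operating frequency for the active cores

# Example: four threads packed onto two active cores at a reduced V/F point.
weak_config = PowerConfig(
    num_active_cores=2,
    thread_map={0: [0, 1], 1: [2, 3]},
    voltage_mv=750,
    frequency_mhz=1200,
)
```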
[0122] As seen, this power configuration definition information is provided to a control logic 1630, which may be configured to enforce the definition in the next operation interval. As such, control logic 1630 may be configured to provide control information to various processor entities, including clock generators such as one or more phase locked loops, and one or more voltage regulators such as integrated voltage regulators or so forth. Still further, control information may be provided to the individual cores to indicate whether the given cores are to be powered on or off in the next operation interval. If a core is to be powered off, the core may take various actions, including saving its state to an appropriate storage to enable the core to enter into the requested low power state, gating one or more clock signals, and gating one or more power gates. For a core to become powered on in the next operation interval, such control may include ungating one or more clock signals, ungating one or more power gates, and restoring a state of the core from a given storage. Of course understand that in various embodiments, additional operations for such core low power state entry and exit may occur. Still further understand that in different implementations, a variety of different configurations of power control logic may be present.

[0123] For an expert heuristic model, experts identify the most relevant parameters of workloads and systems, create a prediction model to classify compute or memory bound workloads, and navigate toward the most energy efficient configurations for the identified workloads. For example, two intuitive heuristic models may be used. Due to design complexity, heuristics may only select a subset of the many possible parameters, called hot states, that are drawn from the most popular configurations in the design space exploration.

[0124] For a simple classification heuristic model (H1), two parameters may be used to make decisions, e.g., instructions per second (IPS) and memory bandwidth (memBW). IPS is an indicator of core utilization while memBW directly shows memory occupancy. Workloads with low IPS and high memBW are classified as memory-bound, while high IPS and low memBW workloads are classified as compute-bound. The thresholds of high and low are adjustable, e.g., by a user in some embodiments. The decision strategy may be as follows: enable a weak power configuration (fewer cores/threads and lower voltage/frequency) for memory-bound workloads and enable a strong power configuration (which may be a baseline configuration) for compute-bound workloads.
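As a concrete illustration of the H1 decision strategy just described, the following Python sketch applies the two-parameter rule. The numeric thresholds are placeholders only (the embodiments leave them adjustable), and the function names are assumptions.

```python
def classify_h1(ips: float, mem_bw: float,
                ips_hi: float = 2.0e9, bw_hi: float = 10.0e9) -> str:
    """H1-style classification from IPS and memory bandwidth.

    Thresholds are illustrative; the text above notes they may be
    user-adjustable in some embodiments.
    """
    if ips < ips_hi and mem_bw > bw_hi:
        return "memory-bound"
    if ips > ips_hi and mem_bw < bw_hi:
        return "compute-bound"
    return "mixed"

def h1_decision(workload_class: str) -> str:
    # Weak configuration (fewer cores/threads, lower V/F) for memory-bound
    # workloads; strong (baseline) configuration otherwise.
    return "weak" if workload_class == "memory-bound" else "strong"

print(h1_decision(classify_h1(ips=1.0e9, mem_bw=12.0e9)))  # -> "weak"
```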
[0125] A feedback classification heuristic model (H2) first classifies workloads similarly to H1. And, each classification has a preferred action; for example, memory-bound workloads will cause a weaker configuration to be enabled (first fewer cores/threads, then lower voltage/frequency), and compute-bound applications will cause a stronger configuration to be enabled. Mixed workloads have certain probabilities to go weaker or stronger. Then, an energy efficiency metric (EE) is calculated as EE = IPS²/PCL to estimate the current energy efficiency, where PCL is an estimated power consumption level derived from dominant power components such as leakage. Based on the positive or negative EE feedback from the last action, a new decision is made either to take the preferred action, to take the same action as the last action, or to do nothing.

[0126] A comprehensive set of runtime workloads and system statistics can be as large as hundreds or thousands of features, such as the occupancy of many queue structures, the utilization of each functional unit, and so forth. When all these combinations are taken into account, the design space grows explosively. Therefore, embodiments may use machine learning techniques for data processing and decision making.

[0127] In a machine learning model, runtime statistics may be included as attributes. A multi-dimensional record is a collection of average statistical values for all attributes during a sampling time period. The most energy efficient configuration for each time interval is assigned a label with the information of cores, threads, voltage, and frequency. The model predicts the next optimal power configuration.

[0128] During the machine learning training process, certain performance/energy characteristics may be identified from a large amount of data and used to make intelligent automatic decisions. The offline training process takes large-dimension workload/system statistics at a given time as inputs to predict the best power configuration. The offline training process may generate a set of coefficients (or weights) that can be programmed into the power controller (e.g., by storage in a non-volatile memory) for use in real time.

[0129] Thus during a design or configuration process, an offline data collection occurs. This offline data collection process may be performed, e.g., by a processor designer, during the design phase and/or after manufacture. In the process, a variety of different workloads may be executed (and/or simulated), such as representative benchmark workloads, to enable characterization of the workload on the processor (as determined by performance and energy parameter information). In some cases, a large number of benchmark workloads may be used to appropriately train a machine learning classifier.

[0130] In an example, during an offline data collection of a representative workload, multiple workload phases are identified, and during each workload phase, a data point may be generated for a particular power configuration. Then an exhaustive search over all possible configurations is performed to collect a set of representative workload data. The calculated performance and energy are used to preselect the most energy efficient configurations, while a performance drop constraint may be enforced to filter out configurations that sacrifice too much performance. Then, each data point is labeled with its best configuration name for a later supervised learning process.
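The labeling step of [0130] may be illustrated with a short sketch. The configuration names, the 5% performance-drop constraint, and the efficiency metric (performance per unit energy) below are assumptions chosen for the example.

```python
def label_phase(measurements, baseline_perf, max_perf_drop=0.05):
    """Label one workload phase with its most energy-efficient configuration.

    `measurements` maps a configuration name to a (performance, energy)
    tuple gathered during the exhaustive offline sweep. Configurations
    whose performance falls more than `max_perf_drop` below the baseline
    are filtered out before the most efficient survivor is selected.
    """
    allowed = {
        cfg: (perf, energy)
        for cfg, (perf, energy) in measurements.items()
        if perf >= baseline_perf * (1.0 - max_perf_drop)
    }
    # Energy efficiency taken here as performance per unit energy.
    return max(allowed, key=lambda cfg: allowed[cfg][0] / allowed[cfg][1])

phase = {"4c_2.0GHz": (1.00, 10.0), "2c_1.2GHz": (0.97, 4.5), "1c_0.8GHz": (0.80, 3.0)}
print(label_phase(phase, baseline_perf=1.00))  # -> "2c_1.2GHz"
```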
[0131] In different embodiments, a given set of relevant attributes may be considered. In one embodiment, these attributes include IPS, memBW, average roundtrip memory latency (memLat), memory instruction percentage (memInst), floating point instruction percentage (FPinst), ALU instruction percentage (ALUinst), pending outgoing memory requests queue occupancy (memQ), and last level cache miss rate (LLCMiss). Such attributes may best represent the workload behaviors and effectively help reduce the number of performance/energy counters.

[0132] In different embodiments, various offline supervised models may be used. In one example, a multi-class logistic regression model with a ridge estimator may be used to measure the relationship between more than two categorical dependent or independent variables; it exhibits good accuracy and a simple implementation. In another example, a multilayer perceptron may be used. This model is an artificial neural network classifier that uses backpropagation, which may be suitable for abundant data with non-linear behaviors. The nodes are all sigmoid (logistic) functions and the network has at least three layers. The input layer takes all the selected attributes, and the output layer produces the optimal power configurations. As another example, a decision tree model may be used, which maintains a flowchart-like tree structure in which leaves represent labels (all possible optimal configurations), and branches are conjunctions of attributes. At each node of the tree, an attribute is selected to effectively split data points into one class or the other.

[0133] Once the models are trained, they are implemented into a power controller for online prediction. At runtime, performance/energy counters corresponding to the selected attributes are collected, and the trained machine learning model predicts the next optimal power configuration. To improve accuracy, runtime feedback mechanisms can be used, and the machine learning decision to update a power configuration may be overridden to take no action. This is so, as it is possible for the machine learning model to switch back and forth between several power configurations very frequently, resulting in high switching overhead. A saturation counter for history tracking may be used to avoid such over-sensitive reaction. Also, the machine learning model may tend to choose high energy savings while sacrificing performance too much, especially for compute-bound workloads, where weaker system resources correlate to performance directly. An IPS history register per core may be used to detect a sudden performance drop due to a power configuration change.

[0134] The control logic is responsible for applying the optimal power control decision to the underlying system, which involves hardware controls (e.g., voltage and frequency scaling) and software controls (e.g., core and thread mapping). In different embodiments, various thread mapping mechanisms may be used.

[0135] In one embodiment, a software layer (user applications, runtime libraries, and/or operating systems) may be modified. The prediction logic provides an interface to communicate the optimal power control decision; software then adjusts the thread-to-core mapping via user-level threading (fork/join), dynamic runtime libraries such as POSIX threads, a task queue scheduler, or operating system thread migration.

[0136] An application programming interface (API) may be provided so that user applications or runtime libraries can directly access the power controller to retrieve the optimal power configuration decision. If it is better to change the power configuration, applications or runtimes adjust the thread-to-core mapping.
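A hypothetical use of such a power-query API is sketched below. The document does not define a concrete interface, so the class and method names are assumptions, and Python's thread pool stands in for an OpenMP or task queue runtime.

```python
from concurrent.futures import ThreadPoolExecutor

class PowerControllerStub:
    """Stand-in for the power controller; the method name
    query_optimal_config is an assumption, not an API defined above."""
    def query_optimal_config(self):
        return {"num_cores": 2, "threads_per_core": 2}

def run_parallel_section(work_items, controller):
    # Power-query call before parallelizing the next loop section.
    cfg = controller.query_optimal_config()
    n_workers = cfg["num_cores"] * cfg["threads_per_core"]
    # A real runtime would additionally pin the workers to the selected
    # cores via a thread-pinning interface.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(lambda x: x * x, work_items))

print(run_parallel_section(range(8), PowerControllerStub()))
```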
[0137] For example, if an application has a large parallel loop, a programmer or a compiler can split the loop into sections and insert power-query and thread management API calls to obtain the optimal configuration for each section and parallelize the next section accordingly. A runtime framework such as OpenMP or a task queue will free user applications from this programming overhead. For example, when a user application parallelizes a loop with OpenMP, the runtime can call the power-query API to obtain the optimal configuration for the loop and manage threads and cores appropriately via a thread pinning API. A task queue library can also apply similar approaches. A task queue scheduler may be used to migrate tasks automatically based on the optimal power decision.

[0138] Another mechanism is to augment an operating system to provide thread management for all user applications. The operating system periodically interacts with the power controller and obtains the optimal configuration. It then schedules software threads to appropriate hardware contexts. For example, assume that a user application is running eight threads on four cores, two threads per core, but the operating system is informed that a single core is the optimal configuration. Then, the operating system assigns all eight threads to only one core even though a total of four cores are available. It migrates six threads from the other three cores to the selected one core and turns off the unoccupied three cores to save energy.
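The operating system repacking example of [0138] can be expressed compactly. The following sketch is illustrative only; it omits the architectural state migration and affinity handling a real scheduler would perform.

```python
def pack_threads(threads, active_cores, optimal_core_count):
    """Repack software threads onto the optimal number of hardware cores.

    Mirrors the example above: eight threads spread over four cores are
    reassigned to a single core, and the vacated cores are powered off.
    """
    keep = active_cores[:optimal_core_count]
    mapping = {core: [] for core in keep}
    for i, t in enumerate(threads):
        mapping[keep[i % len(keep)]].append(t)
    power_off = active_cores[optimal_core_count:]
    return mapping, power_off

mapping, off = pack_threads([f"t{i}" for i in range(8)], [0, 1, 2, 3], 1)
print(mapping)  # {0: ['t0', 't1', ..., 't7']} -- all threads on one core
print(off)      # [1, 2, 3] -- cores to power down
```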
[0139] Embodiments may also use transparent thread migration. In some cases, a processor may include heterogeneous cores, including compute-biased cores and memory-optimized cores. Such a design is based on the observation that a single core architecture cannot optimize for compute-bounded and memory-bounded workloads simultaneously. For example, conventional multiprocessors are usually optimized for compute workloads, targeting many core counts running at high frequency. However, such multiprocessors may not be energy efficient during the phases that are mostly waiting for memory accesses. Therefore, one or more cores specialized for memory-intensive phases (a memory-biased or memory-optimized core) may be provided that run at low frequency to saturate memory resources, providing the same memory performance as many compute cores.

[0140] A thread migration mechanism may be used to switch execution between the two types of cores transparently to software, both user applications and operating systems. For example, assume that a system has one memory-optimized core and four compute-optimized cores. If each compute core is two-way SMT capable, the memory-optimized core is equipped with a total of eight hardware contexts. The memory-optimized core contexts are visible only to hardware and completely hidden from software. And, thread migration onto the memory-optimized core contexts is handled solely by hardware. A hardware controller clones the exact same thread context between the compute core and the memory-optimized core and then resumes execution on the new core silently. Even the operating system cannot detect the underlying thread migration. In addition, because the memory-optimized core is just another core on the same cache coherency domain, it does not give rise to any memory consistency issue at all, though it might suffer from data migration overhead from the context migration. Note that a memory-optimized core architecture provides new opportunities to optimize energy efficiency. Because only memory-intensive workloads are executed on it, its architecture can be specialized to the extreme to achieve order-of-magnitude higher efficiency than a general purpose core. In some cases, the memory-optimized core implements the same ISA as the compute cores.

[0141] Referring now to FIG. 17, shown is a block diagram of a processor including a hardware power control logic in accordance with another embodiment of the present invention. As illustrated in FIG. 17, processor 1700 is a multicore processor, which in a particular implementation may be a SoC having a variety of cores including at least some heterogeneous cores with different compute and memory capabilities. In this way, by appropriate control of active cores, a workload may be allocated amongst the cores to provide an efficient mix of power and performance. As seen, processor 1700 includes a plurality of cores 17100-1710n. In different embodiments, these cores may be a mix of in-order and out-of-order cores, as well as compute-biased cores and one or more memory-biased cores to provide for memory bounded operation at efficient and low core power.

[0142] As further illustrated in FIG. 17, cores 1710 couple to a cache memory 1720, which in an embodiment may be implemented as a shared cache memory such as a last level cache. Still further, processor 1700 may include an internal memory 1730, such as may be located on a separate die of a multi-die processor package.

[0143] During operation, performance/energy counters included within the various cores may be configured to provide characteristic information to a power controller 1750. In various embodiments, power controller 1750 may be implemented as a PCU that includes specialized hardware logic for performing model-based power control as described herein.

[0144] In the illustration of FIG. 17, power controller 1750 includes a characteristics buffer 1760 which may store the information incoming from the various cores. Although the scope of the present invention is not limited in this regard, such performance/energy characteristics may include instructions per cycle information, instruction mix information, load/store queue information, cache hit/miss information, memory latency information, and memory bandwidth information, among other such information.

[0145] As seen, buffer 1760 is coupled to a performance/energy model logic 1770, which may, based on the incoming characteristics information, determine a performance and energy prediction. In different embodiments, performance/energy model logic 1770 may include hardware to implement a given model such as an expert heuristics-based model, a machine learning-based model and/or combinations thereof. Still with reference to FIG. 17, note that performance/energy logic 1770 further receives input from a configuration storage 1780. In general, configuration storage 1780 includes information regarding possible operating characteristics, including a number of active cores (e.g., from 0 to n) and the voltage/frequency pairs for the given number of active cores. In embodiments in which performance/energy logic 1770 is a heuristic-based logic, configuration storage 1780 may further provide a heuristic storage to store one or more lookup tables having a plurality of entries each of which may be used to store workload characteristics information associated with a particular workload classification and corresponding power configuration. In embodiments in which performance/energy logic 1770 is a machine learning-based logic, configuration storage 1780 may further provide a trained model parameter storage (e.g., via a non-volatile storage) to store one or more tables having a plurality of entries each of which may be used to store trained model parameters obtained during an offline training process.
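For illustration, the heuristic storage of [0145] may be pictured as a small lookup table mapping a workload classification to a power configuration. All entry contents below are invented for the sketch and carry no numbers from the embodiments.

```python
# Illustrative heuristic storage: one lookup-table entry per workload
# classification, each carrying a corresponding power configuration.
HEURISTIC_TABLE = {
    "compute-bound": {"cores": 4, "freq_mhz": 2400, "volt_mv": 900},
    "typical":       {"cores": 4, "freq_mhz": 1800, "volt_mv": 820},
    "memory-bound":  {"cores": 2, "freq_mhz": 1200, "volt_mv": 750},
}

def lookup_config(workload_class: str) -> dict:
    """Return the stored power configuration for a classified workload."""
    return HEURISTIC_TABLE[workload_class]

print(lookup_config("memory-bound"))
```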
[0146] In some embodiments, a model for both machine learning and heuristic approaches may be implemented as an equation that includes input parameters (workload characteristics such as IPS, etc.) and a set of coefficients to provide a weight for each parameter. The equation may be used to compute a numerical value that represents an optimal configuration state for the processor for a next operation interval. This value may be used as an index into a table with different configuration parameters. In these cases, configuration storage 1780 provides a table of coefficients for each parameter. In general, the equation (or the format of the equation) is fixed; therefore, only the coefficients may change. In the heuristic approach, the coefficients are fixed once during manufacturing. In the machine learning approach, the coefficients are obtained during offline training and programmed into the storage. When operating, the coefficients can be updated online based on machine learning.
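A minimal sketch of the fixed-form equation of [0146] follows: a weighted sum of workload attributes is clamped and used as an index into a configuration table. The attribute names, coefficient values, and clamping are assumptions made for the example.

```python
def predict_config_index(attributes, coefficients, table_size):
    """Fixed-form scoring equation: a weighted sum of workload attributes
    whose (clamped) integer value indexes a table of configurations."""
    score = sum(coefficients[name] * value for name, value in attributes.items())
    return max(0, min(table_size - 1, int(score)))

attrs = {"ips": 1.2, "mem_bw": 0.4, "mem_lat": 0.8}   # normalized inputs
coeffs = {"ips": 2.0, "mem_bw": -1.5, "mem_lat": 1.0}  # trained or fixed weights
idx = predict_config_index(attrs, coeffs, table_size=8)
print(idx)  # -> 2, an index into the configuration-parameter table
```

Note that this formulation lets the heuristic and machine learning approaches share one datapath: only the coefficient table differs, which is consistent with the statement above that the equation format is fixed while the coefficients may change.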
[0147] Still with reference to FIG. 17, power controller 1750 further includes a power configuration selector 1790 which may receive a given performance/energy prediction for a next operation interval and determine an appropriate next power configuration to provide operation controls to cores 1710 for the next operation interval. In the embodiment shown in FIG. 17, power configuration selector 1790 includes a history logic 1792, an impact logic 1794, and an update logic 1795.

[0148] In general, history logic 1792 may be configured to analyze an incoming prediction and determine, based on short-term and/or long-term history, whether the prediction is appropriate and should be provided to control the cores for the next operation interval, or not. For example, a number of predictions of a particular configuration (e.g., for a reduced number of cores) may be consecutively made (e.g., 3 or more) before a configuration change to effect a reduced number of cores occurs. History logic 1792 may also be used to avoid over-sensitive control of power management by reference to historical states. In one embodiment, history logic 1792 may implement a lazy switch scheme via a counter (e.g., a 2-bit saturating counter) for each possible decision. This counter is incremented when a power gating decision is made, for example. Only when the same decision is made during a predetermined number of consecutive time intervals (e.g., 3), by reference to the counter, is a power configuration update executed. In other cases, history logic 1792 may implement a history table to record prior power configuration decisions and system states.

[0149] Impact logic 1794 may be configured to determine an impact that predictions and configuration updates are having on the power and/or performance of the processor and to constrain or prevent predictions from being provided as a control to the cores.

[0150] As further shown in FIG. 17, power configuration selector 1790 further includes an update logic 1795. Update logic 1795 may determine whether to enable a power configuration update based on near past decisions (e.g., with reference to entries in the history table) and far past decisions (via a long term accumulative function). Thus update logic 1795 may be configured to perform self-learning during a lifetime of processor operation (from an initial boot of the processor through multiple boot cycles (and possibly many years of operation)), by updating trained model parameters based on actual usage of the processor in the field. Understand that while shown at this high level in the illustration of FIG. 17, many variations and alternatives are possible.

[0151] Referring now to FIG. 18, shown is a flow diagram of a method for controlling power consumption of a processor in accordance with an embodiment of the present invention. In the embodiment shown in FIG. 18, method 1800 may be performed by appropriate combinations of hardware, software, and/or firmware, such as a hardware power control logic as described herein, which may be implemented as part of a power controller, itself implemented as one or more microcontrollers. In one embodiment, method 1800 may be performed by a model logic that implements a heuristic-based model. As seen, method 1800 begins by receiving workload characteristics information (block 1810). As described, such information may be received from performance/energy counters of various cores and other logic of a processor. In addition, configuration and power state information is also received (block 1820). This configuration information may correspond to a number of actively powered cores in a current operation interval and the power state thereof (e.g., a given power level at which the core is operating). In some embodiments, the power state information may further include performance state information, which may correspond to a particular voltage/frequency level at which the cores are operating. In some cases the performance state for a core may be at a guaranteed operating frequency (which is a maximum operating frequency at which a processor is guaranteed to operate), a turbo mode frequency (which is an opportunistic operating frequency higher than this guaranteed frequency), or an efficient frequency level, which is an operating frequency lower than the guaranteed frequency.

[0152] Still with reference to FIG. 18, next control passes to block 1830, where the workload may be classified. More specifically, the workload can be classified based on some or all of the received workload characteristics information and power state information. At a high level, this workload classification may be determined to be at one of a limited number of possible levels, including a memory bounded level, a compute bounded level, or a typical operating level, which may generally correspond to a workload that is operating between a memory bounded level and a compute bounded level. Control next passes to diamond 1835 to determine whether the workload is memory bounded. Although the scope of the present invention is not limited in this regard, in an embodiment the determination of workload memory boundedness may be based on an aggregate of latency information obtained from the various cores in operation, bandwidth information associated with the memory bandwidth, or so forth.
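The memory-boundedness test of diamond 1835 might be approximated as follows; both thresholds are invented for the sketch.

```python
def is_memory_bound(latencies_ns, mem_bw_util,
                    lat_thresh_ns=180.0, bw_thresh=0.7):
    """Illustrative test aggregating per-core latency samples with
    measured memory-bandwidth utilization (0.0 to 1.0)."""
    avg_latency = sum(latencies_ns) / len(latencies_ns)
    return avg_latency > lat_thresh_ns or mem_bw_util > bw_thresh

print(is_memory_bound([150.0, 210.0, 240.0], mem_bw_util=0.5))  # -> True
```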
[0153] If it is determined that the workload is not memory bounded, control passes directly to block 1850 where current settings may be maintained. As such, in the next operation interval, the same number of cores may be powered on and these cores may continue to execute the current threads at current frequency and voltage levels.

[0154] Instead, if it is determined that the workload is memory bounded, control passes from diamond 1835 to block 1840, where a configuration prediction may be generated. More specifically, this configuration prediction may be for a scenario with reduced core activity, in that one or more currently active cores can be powered down and/or compute-intensive cores may be powered down in favor of one or more memory-intensive cores, to enable improved handling of the memory bound condition. Control next passes to diamond 1845 to determine whether the configuration prediction is consistent with history information. Such determination can be based on history information indicative of the appropriateness of prior predictions. If the prediction is not consistent, control passes to block 1850 where the current settings may be maintained. Note that in some cases, in addition to this history analysis, a determination also may be made as to whether the predicted configuration would adversely affect performance. For example, if the configuration prediction, if effected, would incur a performance penalty greater than a threshold level (e.g., 5%), the new configuration may not be effected and control thus passes to block 1850.

[0155] If instead it is determined that the configuration prediction is consistent with history information (and/or is not predicted to adversely affect performance by more than a threshold level), control passes to block 1860, where threads to be executed in the next operation interval may be scheduled/rescheduled to the reduced number of cores. For example, threads currently executing on a core to be powered down may be rescheduled to cores that are to remain powered on. This is the case as, in various embodiments, the cores may be multi-threaded such that multiple threads can concurrently execute on a single core, e.g., via multiple logical processors, which provide appropriate hardware thread resources to enable concurrent execution. In other cases, threads may be rescheduled from certain active cores to other active cores to better accommodate and provide for efficient operation (such as moving from compute cores to memory-biased cores). Next control passes to block 1870 where the power state of the cores may be updated. That is, various control signals are sent to the cores, clock generators, voltage regulators and so forth to enable appropriate activation/deactivation of given cores. Thereafter, operation occurs in the next operation interval in which the threads execute on the active cores (block 1880). Understand that while shown at this high level in the embodiment of FIG. 18, many variations and alternatives are possible.
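Condensing blocks 1810-1880, a minimal sketch of the heuristic control flow might read as follows. The classifier, the consecutive-decision history check, and all constants are assumptions made for the example.

```python
from collections import deque

def classify(ch):
    # Crude stand-in classifier using an H1-style rule (block 1830).
    return "memory-bound" if ch["mem_bw"] > 0.7 and ch["ips"] < 1.0 else "typical"

class History:
    """Requires the same prediction on N consecutive intervals (diamond 1845)."""
    def __init__(self, n=3):
        self.recent = deque(maxlen=n)
    def consistent(self, prediction):
        self.recent.append(prediction)
        return len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1

def next_power_config(ch, current_cores, history):
    if classify(ch) != "memory-bound":       # diamond 1835
        return current_cores                 # block 1850: keep settings
    prediction = max(1, current_cores - 1)   # block 1840: fewer active cores
    if not history.consistent(prediction):   # diamond 1845
        return current_cores
    return prediction                        # blocks 1860-1880 then apply it

h = History()
for _ in range(3):
    cores = next_power_config({"mem_bw": 0.9, "ips": 0.5}, 4, h)
print(cores)  # -> 3 once the prediction has repeated for three intervals
```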
[0156] Referring now to FIG. 19, shown is a flow diagram of a method for controlling power consumption of a processor in accordance with another embodiment of the present invention. As seen in FIG. 19, method 1900 may be performed by appropriate combinations of hardware, software, and/or firmware, such as a hardware power control logic as described herein. In one embodiment, method 1900 may be performed by a model logic that implements a machine learning-based model. Method 1900 begins by receiving workload characteristics information, e.g., from performance/energy counters within the processor (block 1910). In addition, configuration and power state information is also received (block 1920). This configuration information may correspond to a number of actively powered cores in a current operation interval and the power state thereof.

[0157] Next, control passes to block 1930 where trained model parameters may be obtained from a configuration storage based on the workload characteristics and power state information. More specifically, the machine learning control logic may perform a classification of the workload based on the workload characteristics information and power state information to obtain trained model parameters. These parameters may correspond to a given type of classified workload, such as compute bounded, memory bounded, typical workload or so forth. Furthermore, given the higher computing capabilities of a machine learning classifier-based logic (as compared to a heuristic-based logic), more fine-grained analysis and classification of workload level can occur. As such, in addition to the compute bounded, memory bounded and typical classifications, a plurality of levels of the compute bounded classification, a plurality of levels of the memory bounded classification, and a plurality of levels of a normal operation classification also can be identified and classified.

[0158] Still with reference to FIG. 19, control passes to block 1940, where a configuration prediction can be generated from the trained model parameters. Note that in some cases, before a power configuration is updated, the machine learning logic may estimate the long term and short term performance/energy impact due to previous actions taken. In an embodiment, two reward functions are implemented in the power controller for short term and long term impact evaluation. For example, if the overall reward values (a weighted combination of short term and long term rewards) indicate the model is making too aggressive predictions (and adversely affecting performance), this power configuration update does not occur. Further, as described with regard to FIG. 20, self-learning may occur in which the trained model parameters can be updated, e.g., to be more conservative with regard to performance impact. Thereafter, control passes to block 1950, where threads may be scheduled (and rescheduled if needed) to the determined number of cores. In some embodiments, such thread scheduling may be performed in a hardware manner transparent to an OS or other software. In other cases, the OS itself may perform thread scheduling on the indicated number of cores.

[0159] Control next passes to block 1960, where the power state of the cores may be updated. For example, various control signals are sent to the cores, clock generators, voltage regulators and so forth to enable appropriate activation/deactivation of given cores. Thereafter, operation occurs in the next operation interval in which the threads execute on the active cores (block 1970). Understand that while shown at this high level in the embodiment of FIG. 19, many variations and alternatives are possible.
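The reward-based veto of [0158] may be sketched as a weighted combination of short term and long term rewards compared against a floor; the weights and the floor value are assumptions.

```python
def accept_update(short_reward, long_reward,
                  w_short=0.4, w_long=0.6, floor=0.0):
    """Veto over-aggressive configuration updates: only proceed when the
    weighted overall reward is at or above the floor."""
    overall = w_short * short_reward + w_long * long_reward
    return overall >= floor  # below the floor, keep the current configuration

print(accept_update(short_reward=-0.2, long_reward=0.1))  # -> False, veto
```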
[0160] Understand that a machine learning logic in accordance with an embodiment of the present invention provides the ability for machine-based self-learning, such that a baseline of trained model parameters, e.g., provided upon manufacture of a processor, can be updated based on actual usage of the processor in the field, e.g., by a particular end user that operates a system including the processor with generally similar workloads during the lifetime of the processor. Furthermore, as the workload and usage of the processor varies over its lifetime, these trained model parameters can be updated based on such self-learning, to enable optimized performance and power management during the processor lifetime.[0161] Referring now to FIG. 20, shown is a flow diagram of a method for updating trained model parameters in accordance with an embodiment of the present invention. As shown in FIG. 20, method 2000 may be performed by appropriate combinations of hardware, software, and/or firmware, such as a hardware power control logic as described herein that implements a machine learning-based model.[0162] As seen, method 2000 begins by estimating a performance/energy impact of prior configuration predictions for a workload under analysis (block 2010). As an example, this impact may be with regard to performance loss as a result of a reduced number of active cores (and/or reduced frequency of core operation). Next, control passes to diamond 2020 to determine whether this impact is greater than a first impact threshold. For example, in some embodiments this impact threshold may be a high impact threshold and may be set at a predetermined level, e.g., a performance impact measured as a percentage of performance loss. As an example, this threshold level may be less than approximately a 10% performance loss. If it is determined that the impact is greater than the first impact threshold, control passes to block 2030 where one or more trained model parameters may be updated. More specifically, for a particular workload type under analysis, one or more trained model parameters may be updated to limit the performance impact of the power management configurations identified in the trained model parameters. As such, dynamic machine or self-learning occurs over the processor lifetime, such that an appropriate balance of performance and power management can be realized.[0163] Still with reference to FIG. 20, if instead it is determined at diamond 2020 that the impact is not greater than this first impact threshold, control passes next to diamond 2040 to determine whether the impact is less than a second impact threshold. As an example, this second impact threshold may be a low impact threshold and may be set at a different predetermined level, e.g., less than approximately a 2-3% performance loss, in an embodiment. If it is determined that the impact is at or above this second threshold, no update or self-learning occurs, as appropriate trained model parameters for current execution already exist. Otherwise, if it is determined that the impact is less than the second impact threshold, control passes to block 2050, where one or more trained model parameters may be updated. More specifically, for a particular workload type under analysis, one or more trained model parameters may be updated to provide for further power savings control by way of the power management configurations identified in the trained model parameters. As such, dynamic machine or self-learning occurs over the processor lifetime, such that an appropriate balance of performance and power management can be realized. Understand while shown at this high level in the embodiment of FIG. 20, many variations and alternatives are possible.
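The two-threshold update of FIG. 20 can be condensed into a few lines. This is a sketch under stated assumptions: the threshold figures mirror the examples in the text (approximately 10% and 2-3%), while the single "aggressiveness" knob and its unit step stand in for whatever parameter adjustment a real implementation would make.

```c
#include <stdio.h>

/* Minimal sketch of the FIG. 20 two-threshold self-learning rule. */
typedef struct {
    int aggressiveness;  /* hypothetical knob: higher => fewer active cores */
} model_params_t;

static void update_params(model_params_t *p, double impact_pct)
{
    const double HIGH_IMPACT_PCT = 10.0;  /* first (high) impact threshold */
    const double LOW_IMPACT_PCT  = 2.5;   /* second (low) impact threshold */

    if (impact_pct > HIGH_IMPACT_PCT && p->aggressiveness > 0)
        p->aggressiveness--;  /* block 2030: rein in performance impact */
    else if (impact_pct < LOW_IMPACT_PCT)
        p->aggressiveness++;  /* block 2050: pursue further power savings */
    /* between the thresholds: parameters already appropriate; no update */
}

int main(void)
{
    model_params_t p = { .aggressiveness = 3 };
    update_params(&p, 12.0);  /* high impact: back off */
    update_params(&p, 1.0);   /* low impact: save more power */
    printf("aggressiveness: %d\n", p.aggressiveness);  /* prints 3 */
    return 0;
}
```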
[0164] Based on the particular model logic implemented (and self-learning performed during the processor lifetime), many memory bound and compute bound workloads can realize optimal performance and energy efficiency with fewer active compute resources and/or lower frequencies. By identifying and selecting an optimal configuration for each particular workload, embodiments may provide energy savings of approximately 4x, while maintaining performance loss within less than approximately 5%.[0165] The following examples pertain to further embodiments.[0166] In one example, a processor comprises: a plurality of cores to independently execute instructions, each of the plurality of cores including a plurality of counters to store performance information; and a power controller coupled to the plurality of cores. The power controller may include a logic to receive performance information from at least some of the plurality of counters, determine a number of cores to be active and a performance state for the number of cores for a next operation interval, based at least in part on the performance information and model information, and cause the number of cores to be active during the next operation interval, where the performance information is associated with execution of a workload on one or more of the plurality of cores.[0167] In an example, the processor further includes a configuration storage having a plurality of entries each to store a number of cores to be enabled and one or more pairs of voltage/frequency at which the number of cores are to operate.[0168] In an example, the logic is coupled to the configuration storage to access one or more of the plurality of entries and determine the number of cores to be active for the next operation interval based at least in part thereon.[0169] In an example, the logic is to classify a workload based at least in part on the performance information and determine the number of cores to be active for the next operation interval based on the workload classification.[0170] In an example, if the workload classification indicates a memory bound workload, the logic is to determine the number of cores to be active for the next operation interval to be less than a current number of active cores.[0171] In an example, if the workload classification indicates a memory bound workload, the logic is to cause one or more threads to be migrated from a first type of core to a second type of core for the next operation interval.[0172] In an example, the logic comprises a heuristic logic, and the model information is to be obtained from a heuristic storage of the processor that stores power configuration information associated with the workload.[0173] In an example, the logic comprises a machine learning logic, and the model information comprises training information to be stored in a storage of the processor during manufacture of the processor.[0174] In an example, the machine learning logic includes an update logic to update at least some of the training information based on a history of operation of the processor and one or more configuration predictions by the machine learning logic during a lifetime of the processor.
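Example [0167] suggests a natural layout for a configuration-storage entry. The sketch below is one possible encoding; the field widths, units, fixed pair count, and all table values are assumptions, as the text specifies only a core count plus one or more voltage/frequency pairs per entry.

```c
#include <stdint.h>
#include <stdio.h>

#define VF_PAIRS_PER_ENTRY 4  /* assumed fixed count */

typedef struct {
    uint16_t voltage_mv;  /* operating voltage, millivolts */
    uint16_t freq_mhz;    /* operating frequency, MHz */
} vf_pair_t;

typedef struct {
    uint8_t   cores_enabled;           /* number of cores to power on */
    vf_pair_t vf[VF_PAIRS_PER_ENTRY];  /* permitted operating points */
} config_entry_t;

/* One possible organization: a small table indexed by workload class. */
enum { CFG_COMPUTE_BOUND, CFG_MEMORY_BOUND };

static const config_entry_t config_table[] = {
    [CFG_COMPUTE_BOUND] = { 8, { {1100, 3200}, {1000, 2800},
                                 { 900, 2400}, { 800, 2000} } },
    [CFG_MEMORY_BOUND]  = { 4, { { 900, 2400}, { 800, 2000},
                                 { 700, 1600}, { 600, 1200} } },
};

int main(void)
{
    const config_entry_t *e = &config_table[CFG_MEMORY_BOUND];
    printf("memory bound: %u cores at %u MHz\n",
           (unsigned)e->cores_enabled, (unsigned)e->vf[0].freq_mhz);
    return 0;
}
```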
[0175] In an example, the logic includes a history logic to receive a prediction of the number of cores to be active in the next operation interval and to enable the logic to cause the number of cores to be active in the next operation interval based on a history of prior predictions.[0176] In an example, the history logic comprises a counter to maintain a count of a number of consecutive predictions for a first number of cores to be active in the next operation interval, where the history logic is to enable the logic to cause the first number of cores to be active in the next operation interval when the count exceeds a threshold, and otherwise to not enable the first number of cores to be active in the next operation interval.[0177] In an example, the logic is to maintain a current number of active cores for the next operation interval if a performance impact of execution of the workload on the determined number of cores would exceed a threshold level.[0178] Note that the above processor can be implemented using various means.[0179] In an example, the processor comprises a SoC incorporated in a user equipment touch-enabled device.[0180] In another example, a system comprises a display and a memory, and includes the processor of one or more of the above examples.[0181] In another example, a system comprises a processor including: a plurality of cores to independently execute instructions; and a power controller coupled to the plurality of cores to: receive workload characteristic information of a workload executed on a first number of active cores in a first operation interval, configuration information regarding the first number of active cores, and power state information of the first number of active cores; classify the workload based on the workload characteristic information, the configuration information, and the power state information; schedule one or more threads to a different number of active cores for a next operation interval based at least in part on the workload classification; and update a power state of one or more of the plurality of cores to enable the different number of active cores for the next operation interval. The system may further include a DRAM coupled to the processor.[0182] In an example, the power controller is to generate a power configuration prediction having a reduced number of active cores for the next operation interval if the workload is classified as a memory bounded workload.
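The consecutive-prediction counter of example [0176] might be sketched as follows; the streak threshold of 3 is an assumed value, as the text leaves the threshold unspecified.

```c
#include <stdio.h>

/* A newly predicted core count takes effect only after it repeats often
 * enough, per example [0176]. */
typedef struct {
    int last_prediction;  /* most recent predicted core count */
    int streak;           /* consecutive identical predictions */
} history_t;

static int history_allows(history_t *h, int predicted_cores)
{
    const int STREAK_THRESHOLD = 3;  /* assumed */

    if (predicted_cores == h->last_prediction) {
        h->streak++;
    } else {
        h->last_prediction = predicted_cores;
        h->streak = 1;
    }
    return h->streak > STREAK_THRESHOLD;  /* enable only on a long streak */
}

int main(void)
{
    history_t h = { .last_prediction = 0, .streak = 0 };
    for (int i = 0; i < 5; i++)
        printf("prediction 4 -> allowed: %d\n", history_allows(&h, 4));
    return 0;  /* allowed only on the 4th and 5th repetitions */
}
```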
[0183] In an example, the power controller is to determine whether the power configuration prediction is consistent with history information, and if so, schedule the one or more threads to the reduced number of active cores for the next operation interval, and otherwise maintain the first number of active cores for the next operation interval.[0184] In an example, the power controller is to obtain trained model parameter information from a storage of the processor based at least in part on the workload characteristic information.[0185] In an example, the power controller is to: generate a power configuration prediction from the trained model parameter information, the power configuration prediction including a number of cores to be active in the next operation interval, a number of threads to be active in the next operation interval, and a performance state of the number of cores; estimate a performance/energy impact of the power configuration prediction; update at least some of the trained model parameter information for a classified workload type to reduce a performance impact if the estimated performance/energy impact exceeds a first impact threshold; and update at least some of the trained model parameter information for the classified workload type to increase power savings if the estimated performance/energy impact is less than a second impact threshold.[0186] In another example, a method comprises: classifying, via a workload classifier, a workload executed on a multicore processor including a plurality of cores, and causing a reduced number of cores of the plurality of cores to be active in a next operation interval based at least in part on the workload classification; determining an impact of the reduced number of cores on a performance metric of the multicore processor; and if the impact is greater than a first threshold, updating one or more trained model parameters associated with the workload classifier for a workload type associated with the workload, where the updated trained model parameters are to enable a reduction of the impact on the performance metric.[0187] In an example, the method further comprises, if the impact is less than a second threshold, updating the one or more trained model parameters associated with the workload classifier for the workload type associated with the workload, where the updated trained model parameters are to enable a reduction in power consumption, where the second threshold is less than the first threshold.[0188] In an example, classifying the workload comprises obtaining trained model parameters from a storage of the multicore processor based at least in part on workload characteristic information obtained from one or more of the plurality of cores.[0189] In an example, the method further comprises: generating a power configuration prediction from the trained model parameters, the power configuration prediction to identify the reduced number of cores to be active in the next operation interval, a number of threads to be active in the next operation interval, and a performance state of the reduced number of cores; and determining whether to enable a power management controller to cause the reduced number of cores to be active in the next operation interval based at least in part on history information.[0190] In an example, updating the one or more trained model parameters comprises increasing a number of cores to be active for the workload type.[0191] In an example, the method further comprises causing one or more threads of the workload to
be migrated from one or more first cores to at least one second core, where the at least one second core comprises a memory-biased core and the one or more first cores comprise a compute-biased core.[0192] In another example, a computer readable medium including instructions is to perform the method of any of the above examples.[0193] In another example, a computer readable medium including data is to be used by at least one machine to fabricate at least one integrated circuit to perform the method of any one of the above examples.[0194] In another example, an apparatus comprises means for performing the method of any one of the above examples.[0195] Understand that various combinations of the above examples are possible.[0196] Embodiments may be used in many different types of systems. For example, in one embodiment a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that, in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein.[0197] Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. Embodiments also may be implemented in data and may be stored on a non-transitory storage medium, which, if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.[0198] While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention.
An interface input (FIG. 2: 220) has an input circuit (FIG. 10: 221) adapted to receive input signal levels (padloc) higher than a maximum signal level that the host circuitry's electronic components can reliably handle. The input circuit (FIG. 10: 221) shifts the level of the input signal (padloc) to a desired signal level. A keeper circuit (keeper: pull-up 1011; pull-down 1012) is coupled to the input circuit and maintains trigger levels of the shifted signal (lyl_dn_int) consistent with the input signal level (padloc).
1. An interface input, comprising: an input circuit adapted to receive an input signal level higher than a maximum signal level that electronic components of a host circuit can reliably handle, and to shift the input signal level to a desired interface signal level; and a keeper circuit coupled to the input circuit and operable to maintain a trigger level of the level-shifted signal consistent with the input signal level.

2. The interface input of claim 1, wherein the input circuit comprises: a transistor arranged in a transmission gate configuration, the transistor comprising one of the electronic components and having a node exposed to the signal level higher than a maximum signal level that the transistor can reliably handle and a node coupled to the keeper circuit.

3. The interface input of claim 2, wherein the keeper circuit comprises: a plurality of transistors arranged in a stacked configuration, the plurality of transistors comprising ones of the electronic components.

4. The interface input of claim 1, wherein the keeper circuit comprises: a pull-up keeper circuit portion operable to pull up an output signal of the interface input to a first predetermined signal level in response to the input signal received at the input circuit.

5. The interface input of claim 4, wherein the first predetermined signal level is a first signal level that is less than a second signal level, and wherein the second signal level comprises an input signal level higher than a maximum signal level that the electronic components of the interface input can reliably handle.

6. The interface input of claim 5, wherein the pull-up keeper circuit portion is operable to pull up the output signal to the first signal level both when the input signal received at the input circuit is at the first signal level and when the input signal received at the input circuit is at the second signal level.

7. The interface input of claim 4, wherein the pull-up keeper circuit portion comprises: a plurality of transistors arranged in a stacked configuration operable to promote a rapid rise of the output signal level to the first predetermined signal level, thereby maintaining the trigger level of the level-shifted signal consistent with the input signal level, wherein the plurality of transistors comprise ones of the electronic components.

8. The interface input of claim 4, wherein the keeper circuit further comprises: a pull-down keeper circuit portion operable to pull down the output signal of the interface input to a second predetermined signal level in response to the input signal received at the input circuit.

9. The interface input of claim 8, wherein the pull-down keeper circuit portion comprises: a plurality of transistors arranged in a stacked configuration operable to divide a rising edge of the output signal, thereby maintaining the trigger level of the level-shifted signal consistent with the input signal level, wherein the plurality of transistors comprise ones of the electronic components.

10. The interface input of claim 1, further comprising: a Schmitt trigger coupled to the keeper circuit and operable to provide an output signal of the interface input at a first predetermined signal level in response to the keeper circuit.

11. The interface input of claim 10, wherein the first predetermined signal level is a first signal level that is less than a second signal level, and wherein
the second signal level comprises an input signal level higher than a maximum signal level that the electronic components of the interface input can reliably handle, and wherein the output signal is output at the first signal level both when the input signal received at the input circuit is at the first signal level and when the input signal received at the input circuit is at the second signal level.

12. The interface input of claim 1, wherein the input signal level comprises a voltage level higher than a maximum voltage level that the electronic components can reliably handle.

13. The interface input of claim 12, wherein the interface input is adapted to receive input signal levels of both 1.8V or less and 2.6V or more, and to provide an output signal level of 1.8V or less.

14. An input circuit, comprising: an input node for receiving inputs at a plurality of different signal levels, the plurality of different signal levels including a signal level greater than a maximum signal level that transistors of the input circuit can reliably handle; a keeper circuit operable to maintain a trigger point of a voltage-shifted input signal, the keeper circuit including a plurality of transistors having reliability limits below the maximum signal level; and a transmission gate, placed between the input node and the keeper circuit, operable to control the input signal as provided to the keeper circuit to a value consistent with the reliability limits, the transmission gate including transistors with reliability limits below the maximum signal level.

15. The input circuit of claim 14, further comprising: a Schmitt trigger coupled to the keeper circuit and operable to provide an output signal of the input circuit at a first predetermined signal level in response to the keeper circuit.

16. The input circuit of claim 15, wherein the first predetermined signal level is a first signal level less than the maximum signal level, and wherein the output signal is output at the first signal level both when the input signal received at the input node is at the first signal level and when the input signal received at the input node is at the maximum signal level.

17. The input circuit of claim 15, further comprising: a transistor coupled to an output of the Schmitt trigger and operable to selectively enable an output of the input circuit.

18. The input circuit of claim 14, wherein the keeper circuit comprises: a pull-up keeper circuit portion operable to pull up an output signal of the input circuit to a first data level in response to the input signal received at the input node; and a pull-down keeper circuit portion operable to pull down the output signal of the input circuit to a second data level in response to the input signal received at the input node.

19. The input circuit of claim 18, wherein the pull-up keeper circuit portion comprises: a plurality of transistors arranged in a stacked configuration operable to facilitate a rapid rise of the output signal level to the first data level, thereby maintaining the trigger level consistent with the input signal level.

20. The input circuit of claim 18, wherein the pull-down keeper circuit portion comprises: a plurality of transistors arranged in a stacked configuration operable to divide a rising edge of the output signal, thereby maintaining the trigger level consistent with the input signal level.

21. A method, comprising: providing a signal
path to facilitate data communication using a plurality of different signal levels, wherein the different signal levels include a maximum signal level; placing a transmission gate at an input node of the signal path to isolate components of the signal path from the maximum signal level; and coupling a plurality of transistors to a terminal of the transmission gate other than the input node, the plurality of transistors arranged in a stacked configuration operable to provide a level-shifted output according to a signal received at the input node.

22. The method of claim 21, wherein the plurality of transistors have reliability limits smaller than the maximum signal level, and wherein the transmission gate provides, at the terminal other than the input node, a signal with a signal level within the reliability limits.

23. The method of claim 21, wherein the transmission gate includes a transistor having a reliability limit less than the maximum signal level, and the transmission gate is configured such that, when the signal received at the input node is at the maximum signal level, the transistor is prevented from experiencing a terminal-to-terminal signal level that exceeds the reliability limit.

24. The method of claim 21, further comprising: coupling a Schmitt trigger to the plurality of transistors, the Schmitt trigger operable to provide a level-shifted output of the signal path.
Input/output circuit compliant with high signal levels

Technical field

The present invention relates generally to input/output circuits, and more specifically, to input/output circuits compatible with high signal levels.

Background

The use of various electronic devices has become almost ubiquitous in modern society. For example, office workers and professionals usually use desktop and portable electronic devices every day at work. These people often use electronic devices such as personal computer systems, personal digital assistants (PDAs), cellular phones, pagers, and digital voice and/or image recorders regularly. These electronic devices are often used in combination with one or more peripheral devices such as external display devices, memory devices, printers, docking stations, network interfaces, and so on. However, in order to properly interface with a peripheral device, the electronic device should not only provide the proper physical connection and basic interface protocol, but must generally also accommodate the signal level (e.g., voltage level) native to the peripheral interface.

Different peripheral devices tend to utilize different signal levels at their associated peripheral interfaces. For example, a memory device provided by one manufacturer and/or operating according to one standard may utilize a peripheral interface signal level of about 1.8V, while a similar memory device provided by another manufacturer and/or operating according to another standard may utilize peripheral interface signal levels of about 2.6V or 3.0V. Although the foregoing example may not initially appear to be a large signal level difference, electronic components designed for the lower signal level (e.g., 1.8V) that are operated at the higher signal level (e.g., 2.6V or 3.0V) may experience degraded reliability (the ability of the component to operate for a long period of time without performance degradation).

The reliability of individual electronic components (e.g., transistors) may be compromised in many ways, for example, by electrical stress caused by the application of an electric field across the terminals of a transistor for a long time. As these electric fields become higher, the life of electronic components is shortened. For example, the reliability limits of metal-oxide-semiconductor (MOS) transistors depend on different breakdown phenomena, including time-dependent dielectric breakdown (TDDB), hot carrier injection (HCI), and negative bias temperature instability (NBTI). The reliability limits of 45nm MOS (1.8V) electronic components associated with each of the aforementioned phenomena are provided in the following table. It is easy to understand from this table that the operation of these electronic components using signal levels of 2.6V or 3.0V may raise reliability issues.

Various techniques have been used to try to accommodate peripheral devices with associated different signal levels. FIG. 1 shows an exemplary prior art electronic device 100 having multiple input/output circuits, each of which is configured to accommodate a specific signal level. For example, the input/output circuit 120 may include electronic components designed to accommodate a first signal level (e.g., 1.8V), and the input/output circuit 130 may include electronic components designed to accommodate a second signal level (e.g., 2.6V).
That is, the circuits of the output path 121 and the input path 122 can be adapted to operate reliably with peripheral devices that interface using 1.8V signals. The circuits of the output path 131 and the input path 132 can similarly be adapted to operate reliably with peripheral devices that interface using 2.6V signals. The host circuit 101 (e.g., a host circuit that provides the core operating functions of the device 100) may be adapted to interface with the input/output circuits 120 and 130 using corresponding signal levels.

The technique shown in FIG. 1 for accommodating peripheral devices with different signal levels has problems related to size and cost. Specifically, the illustrated embodiment provides two separate input/output circuits, and therefore requires additional physical area to accommodate the circuits. In addition, the described technique incurs the costs associated with the added components.

Another technique for accommodating peripheral devices with different signal levels is to utilize an input/output circuit adapted to a higher signal level (e.g., the input/output circuit 130 of FIG. 1, designed for 2.6V) with both peripheral devices designed to interface using the higher signal level and peripheral devices using a lower signal level (e.g., 1.8V). Operating an electronic device with an electric field lower than the electric field that the device is designed to use generally does not cause the aforementioned reliability problems. However, using circuits designed for higher signal levels is generally not energy efficient and also degrades performance. In particular, handling lower signal levels with electronic components designed to accommodate higher signal levels generally consumes more energy than using appropriately designed electronic components.

Today's electronic devices are becoming smaller and smaller, and power management is becoming critical. For example, to maximize battery life in portable devices, even relatively small power consumption savings may be important. Therefore, the use of input/output circuits designed for higher signal levels when handling lower signal levels generally does not present reliability issues, but can lead to undesirable power consumption.

Summary of the invention

The present application discloses an interface input having an input circuit adapted to receive an input signal level higher than a maximum signal level that electronic components of a host circuit can reliably handle. The input circuit shifts the level of the input signal to the desired signal level. A keeper circuit is coupled to the input circuit and maintains the trigger level of the shifted signal consistent with the input signal level.

The present application also discloses an input circuit having an input node for receiving signals at a plurality of different signal levels. The different signal levels include a signal level greater than the maximum signal level that transistors of the input circuit can reliably handle. The input circuit also has a keeper circuit that maintains the trigger point of the voltage-shifted input signal. The keeper circuit has a plurality of transistors, each having a reliability limit below the maximum signal level. The input circuit also has a transmission gate placed between the input node and the keeper circuit, which keeps the input signal provided to the keeper circuit consistent with the reliability limits.
The transmission gate has a transistor with a reliability limit lower than the maximum signal level.

The present application also discloses a method that includes providing a signal path to facilitate data communication using multiple different signal levels, the different signal levels including a maximum signal level. The method includes placing a transmission gate at an input node of the signal path to isolate components of the signal path from the maximum signal level. It also includes coupling a plurality of transistors to a terminal of the transmission gate other than the input node. The plurality of transistors are arranged in a stacked configuration to provide a level-shifted output according to the signal received at the input node.

The foregoing has outlined the features and technical advantages of the invention quite broadly so that the following detailed description of the invention can be better understood. Additional features and advantages of the invention that form the subject matter of the claims of the invention will be described below. Those skilled in the art should understand that the disclosed concepts and specific embodiments can readily be used as a basis for modifying or designing other structures to carry out the same purposes of the present invention. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features believed to be characteristic of the invention, both as to its organization and method of operation, together with other objects and advantages, will be better understood from the following description when considered in conjunction with the drawings. However, it should be clearly understood that each of the figures is provided for illustration and description purposes only, and is not intended to define the limits of the present invention.

Brief description of the drawings

For a more complete understanding of the present invention, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:

FIG. 1 shows a prior art electronic device with multiple input/output circuits, each of which is configured to accommodate a specific signal level;

FIG. 2 shows a high-level block diagram of an embodiment of an input/output circuit compliant with high signal levels;

FIG. 3 shows details regarding an embodiment of a pre-driver as may be used in the high signal level compliant input/output circuit of FIG. 2;

FIG. 4 shows details regarding an embodiment of a level shifter as may be used in the pre-driver of FIG. 3;

FIG. 5 shows details regarding an embodiment of a multi-stage buffer as may be used in the pre-driver of FIG. 3;

FIG. 6 shows details regarding an embodiment of a driver as may be used in the high signal level compliant input/output circuit of FIG. 2;

FIG. 7 shows details regarding an embodiment of a level detector as may be used in the high signal level compliant input/output circuit of FIG. 2;

FIG. 8 shows details regarding an embodiment of a mode controller as may be used in the high signal level compliant input/output circuit of FIG. 2;

FIG. 9 shows details regarding an embodiment of a bias generator as may be used in the mode controller of FIG. 8; and

FIG. 10 shows details regarding an embodiment of a level shift controller as may be used in the high signal level compliant input/output circuit of FIG. 2.
Detailed description

FIG. 2 shows a high-level block diagram of an embodiment of an input/output circuit compliant with high signal levels according to the concepts herein. The input/output circuit 200 of FIG. 2 provides an interface between a host circuit (not shown) of a host electronic device (for example, a personal computer system, personal digital assistant (PDA), cellular phone, pager, digital sound recorder, digital camera, digital video camera, personal entertainment player, game device, and the like) and peripheral devices (for example, a memory device, display, printer, electronic pointer, inverter, etc.). Specifically, the input/output circuit 200 is adapted to accommodate peripheral interface signals of both a high level (for example, 2.6V and/or 3.0V) and a low level (for example, 1.8V). When adapting to high signal levels, the input/output circuit 200 utilizes electronic components designed to be used with respect to low signal levels. Embodiments thereby provide efficiency with regard to size and power consumption. As will be better understood from the following discussion, when using electronic components designed for low signal levels to accommodate high signal levels, the input/output circuit 200 is adapted to avoid the reliability issues associated with the application of a relatively large electric field across the terminals of the electronic components.

The input/output circuit 200 shown in FIG. 2 includes an output path 210 for interfacing signals from the circuitry of the host device to the circuitry of the peripheral device and an input path 220 for interfacing signals from the circuitry of the peripheral device to the circuitry of the host device. Although the input/output circuit 200 of the illustrated embodiment includes both the output path 210 and the input path 220, embodiments may implement the concepts described herein in the input path circuitry alone or in the output path circuitry alone. Furthermore, the concepts described herein are applicable to circuits other than input circuits and output circuits, and thus examples consistent with the teachings herein can be provided in a variety of scenarios that accommodate signal levels higher than the signal levels at which specific electrical components are designed to operate.

The output path 210 and the input path 220 of the illustrated embodiment are each adapted to accommodate both high-level (e.g., 2.6V or 3.0V) and low-level (e.g., 1.8V) signals. Specifically, and as described in detail below, the input path 220 includes a level shift control 221 that includes electronic components designed for the low signal level and adapted to operate reliably with both the low level and high level signals provided by the peripheral device coupled thereto. Similarly, and as described in detail below, the output path 210 includes a pre-driver 211 coupled to the driver 212, each of the driver 212 and the pre-driver 211 including electronic components designed for the low signal level and adapted to operate reliably with both the low-level signals and the high-level signals provided by peripheral devices coupled thereto.
The mode control 214 of the illustrated embodiment is coupled to the pre-driver 211, and in some embodiments to the driver 212, to provide control of the circuits therein for low signal level operation and high signal level operation.

In operation according to certain embodiments, the input/output circuit 200 is adapted to interact with the circuitry of the host device using a predetermined low signal level, and to interact with the circuitry of the peripheral device using a signal level suitable for the specific peripheral device currently interfaced. In many configurations, the circuitry of the host system will perform power saving operations that disconnect one or more power supply outputs (e.g., the core voltage). To accommodate this power saving operation without causing ambiguous input/output circuit operating states, the mode control 214 of the embodiment includes internal control signal generation utilized during host circuit power saving operation. That is, when one or more outputs of the host circuit are unavailable due to power saving operations, the mode control 214 of the embodiment operates to internally generate appropriate control of the pre-driver 211 and/or driver 212 to keep this circuitry locked in the selected low or high signal level state. Therefore, when the host circuit returns to the operating state from the power saving operation, the input/output circuit 200 remains configured to continue to interface with the peripheral device.

The input/output circuit 200 illustrated in FIG. 2 is universal in that it is operable to automatically and autonomously configure itself to operate with respect to the appropriate signal level. That is, the input/output circuit 200 of the illustrated embodiment is adapted to automatically select low signal level operation or high signal level operation as appropriate. Therefore, the level detection 213 of the output path 210 is coupled to the peripheral device being provided with the interface to detect its signal level and provide a mode selection signal to the mode control 214. The mode control 214 may therefore provide control of the circuits of the pre-driver 211 and/or the driver 212 according to the mode indicated by the level detection 213 (e.g., low signal level or high signal level). The level shift control 221 of the input path 220 in the illustrated embodiment is operable to accommodate high signal level operation without a mode control signal.

The operation of the input/output circuit 200 of the illustrated embodiment having been described at a high level, individual functional blocks according to embodiments are described in detail below. It should be appreciated that the specific embodiments described herein are exemplary, and the concepts described may be implemented in embodiments other than, or in alternatives to, the illustrated embodiments.

Referring now to FIG. 3, details regarding an embodiment of the pre-driver 211 are shown. The pre-driver 211 of the illustrated embodiment accepts input of data signals from the host circuit destined for the connected peripheral device, level shifts the data signals from the signal level internal to the host device to the signal level suitable for the specific connected peripheral device, and provides an output to drive the driver 212 so as to provide the data output to the peripheral device at the appropriate signal level.
To provide the aforementioned operation, the pre-driver 211 of the illustrated embodiment includes level shifters 311 to 313 and buffers 331 to 335. The level shifters 311 to 313 operate to provide data signal level shifting from the level provided by the host circuit to the level suitable for the circuitry of the interfaced peripheral device, according to the mode selection signal provided by the mode control 214. The buffers 331 to 335 operate to provide data signal buffering to generate data signals suitable for appropriately driving the driver 212. Logic gates 321 and 322 are provided in the illustrated embodiment to facilitate controllably enabling and disabling the output of the pre-driver 211. Specifically, application of appropriate enable signals to the terminals of the logic gate 321 (here a NAND gate) and the logic gate 322 (here a NOR gate) operates to selectively enable/disable the output of the pre-driver 211.

When adapting to a signal level that is higher than the signal level at which the electronic components of the pre-driver 211 are designed to be used (e.g., 2.6V and 3.0V pad voltages), the pre-driver 211 uses a non-zero signal level (for example, a core voltage of 1.1V) as a bias supply voltage (for example, provided as a virtual ground). Accordingly, the level shifting of the pre-driver 211 of the illustrated embodiment is provided in multiple stages. Specifically, the level shifter 311 operates to level shift the data signal from the host circuit, provided at the signal level internal to the host device (for example, a core voltage of 1.1V), to the lowest accommodated peripheral device signal level (for example, shown here as a 1.8V attenuation voltage). The level shifter 312 disposed in the pdata path of the pre-driver 211 operates to level shift (if necessary) the data signal as output by the level shifter 311 to a level suitable for the interfaced peripheral device (for example, a 2.6V or 3.0V attenuation voltage). The level shifter 312 of the illustrated embodiment does not provide level shifting, and effectively acts as a delay device, when the interfaced peripheral device is operating with respect to the lowest accommodated peripheral device signal level (shown here as 1.8V).

In the 2.6/3.0V operation mode (as selectable by the mode signal received from the mode control 214), the input of the level shifter 312 of the illustrated embodiment toggles between 0V and 1.8V, and the level-shifted output switches back and forth between 1.1V and 2.6V or 3.0V. During the 1.8V operation mode (as selectable by the mode signal received from the mode control 214), the level shifter 312 of the illustrated embodiment does not perform level shifting, and the output level remains the same as the input level (between 0V and 1.8V). As will be better understood from the discussion of the embodiment of the level shifter circuit shown in FIG. 4 below, the level shifter therefore shifts its input signal to a level that is consistent with reliability for a given mode of operation.

In addition to operating to maintain good reliability of the electronic components therein, there is a need to provide good switching performance relative to the data path.
For example, the signals provided by the pre-driver 211 operate to control the electronic components of the driver 212 that pull up to the data high level (e.g., 1.8V, 2.6V, or 3.0V, using the pre-driver 211 output pdata) and to control the electronic components of the driver 212 that pull down to the data low level (for example, 0V, using the pre-driver 211 output ndata). Therefore, embodiments operate to terminate the drive signal at one of the pre-driver outputs (pdata or ndata) before initiating the drive signal at the other of the pre-driver outputs (ndata or pdata), thereby establishing "break-before-make" switching control of the driver 212. This switching control avoids ambiguity regarding the data output and avoids undesirable current in the driver 212.

The aforementioned switching performance is achieved by matching the signal propagation delays associated with the pdata path and the ndata path in the pre-driver 211 according to the illustrated embodiment. For example, although a level shift beyond that provided by the level shifter 311 is not required in the ndata path of the pre-driver 211, a level shifter 313 is provided in the ndata path to match the delay between the pdata path and the ndata path of the pre-driver 211. That is, the level shifter 313 of the illustrated embodiment operates not only to accept and output a signal at the lowest accommodated peripheral device signal level (here the 1.8V attenuation voltage) without level shifting the signal, but also to provide a propagation delay that can be used to match the total delay of the pdata path and the ndata path. Additionally or alternatively, additional elements such as additional inverters in the output chain of the ndata path (e.g., inverters 333 to 335 in the ndata path as compared to inverters 331 and 332 in the pdata path) can be used for the aforementioned delay matching. Delay matching ensures a good duty cycle of the final output signal. The delay can be programmed in each component of the ndata path based on the mode signal received from the mode control 214. It should be understood from the above that a low signal level (e.g., 1.8V) is sufficient to provide cut-off with respect to the driver 212, and therefore, regardless of the particular mode in which the output path 210 is operating, the ndata path of the illustrated embodiment does not operate at a higher signal level (for example, 2.6V or 3.0V).

According to an embodiment, the virtual ground signal provided to the pdata path of the pre-driver 211 is controlled by the mode control 214 based on whether the system is in the 1.8V, 2.6V, or 3.0V operation mode. In one embodiment, when the system is connected to a 1.8V peripheral device, a 0V ground is provided, and when the system is operated with a 2.6V or 3.0V peripheral device, a 1.1V ground is provided.
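The benefit of the 1.1V virtual ground can be made explicit with the voltages given above. In the 2.6V mode the pdata path swings between vddc and vddp, so the largest differential seen on that swing is bounded as follows (only the 2.6V case is worked here, and the comparison against a true 0V ground is ours):

```latex
\begin{aligned}
\text{2.6V mode, virtual ground at } V_{ddc}: \quad
  & V_{ddp} - V_{ddc} = 2.6\,\text{V} - 1.1\,\text{V} = 1.5\,\text{V} \\
\text{hypothetical 0V ground:} \quad
  & V_{ddp} - 0\,\text{V} = 2.6\,\text{V}
\end{aligned}
```

The 1.5V swing stays inside the 1.8V level the components are designed for, whereas a true 0V ground would expose terminals to the full 2.6V attenuation voltage.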
Referring now to FIG. 4, details regarding an embodiment of a level shifter as may be used to provide the level shifter 312 are shown. The level shifter 410 shown in FIG. 4 provides a timing-based level shifter configuration to accommodate signal levels that are higher than those at which its electronic components are designed to operate reliably. The configuration does not impair the reliability of the electronic components of the level shifter 410.

In operation, a digital level shifter such as the level shifter 410 converts a full swing digital input between ground and one power supply level to a full swing digital output swinging between ground and a different power supply level. Ideally, the level shifter circuit retains the phase information from the input signal to the output signal. The voltage level shifter utilized by the input/output circuit typically shifts the signal from the core voltage (e.g., 1.1V) to a single attenuation voltage (e.g., 1.8V, 2.6V, or 3.0V). Therefore, in the case of a core voltage of 1.1V and an attenuation voltage of 2.6V or 3.0V, the provided voltage level shift is from 1.1V to 2.6V or 3.0V, respectively. However, for the purpose of meeting the reliability limits of electronic components designed to operate with respect to 1.8V (e.g., 45nm 1.8V transistors), the terminals of these electronic components (e.g., the gates of the transistors) should not be allowed to switch back and forth between 0V and 2.6V or 3.0V. Thus, in operation according to the illustrated embodiment, the two-stage level shifting configuration of FIG. 3 causes the level shifters 311 and 313 to switch their outputs back and forth between 0V and 1.8V, and the level shifter 312 to switch its output back and forth between 0V and 1.8V (in the 1.8V mode) and between 1.1V and 2.6V or 3.0V (in the 2.6V or 3.0V mode). For example, in the 2.6V mode, the level shifter 410 level shifts the signal from 1.8V (shown as vdd_18) to 2.6V (shown as vddp) and from 0V (shown as vssx) to 1.1V (shown as vddc).

The virtual ground signal provided by the mode control 214 is used to control the mode of operation of the level shifter 410 of this illustrated embodiment. For example, in the 2.6V mode, the virtual ground is set to 1.1V, and in the 1.8V mode, the virtual ground is set to 0V. It should be appreciated that the high-level voltage (shown as vddp) used by the components of the level shifter 312 and other components of the input/output circuit 200 changes in each mode according to the attenuation voltage used by the interfaced peripheral device (e.g., 1.8V in the 1.8V mode or 2.6V in the 2.6V mode). For example, in the case where the connected peripheral device provides the attenuation voltage, this voltage changes because the peripheral device has been connected. In the case where the host circuit provides the attenuation voltage, this voltage changes as the host circuit is configured to interface with the peripheral device. For example, a circuit such as the level detection 213 can be utilized in combination with the host circuit to automatically and autonomously provide the host circuit with a selection of the appropriate attenuation voltage. Alternatively, the host circuit can be manually switched to provide an attenuation voltage suitable for a specific interfaced peripheral device.

In the 2.6V mode, when the input to the level shifter 410 is 1.8V, the transistors M2 and M1 (shown here as field effect transistors (FETs), more specifically NFETs) are turned on and the transistors M4 and M3 (also shown as NFETs) are turned off. In operation, the gate voltage of the transistor M1 is "high" for a certain time "d" (1.8V input to the level shifter 410), and then decreases, thereby turning off the transistor.
The delay "d" is provided by the programmable delay logic 411, which provides a selected delay that is long enough to pull the voltage at node output_n down below vddc (the 1.1V core voltage) but short enough to avoid pulling the voltage at node output_n down to ground (0V). Therefore, the voltage at the node output reaches 2.6V (the attenuation voltage vddp), and the voltage at the node output_n reaches 1.8V.

Contrary to the foregoing operation, when the input to the level shifter 410 is 0V, the transistors M4 and M3 are turned on (note that the inverter 430 is placed between the input to the level shifter 410 and the transistors M3 and M4) and the transistors M2 and M1 are turned off. The gate voltage of the transistor M3 is "high" for the time "d" (0V input to the level shifter 410), and then decreases, thereby turning off the transistor. The delay "d" is provided by the programmable delay logic 421 (e.g., a circuit corresponding to the circuit of the programmable delay logic 411), which provides a selected delay that is long enough to pull the voltage at the node output down below vddc (the 1.1V core voltage) but short enough to avoid pulling the voltage at the node output down to ground (0V). Therefore, the voltage at the node output_n reaches 2.6V (the attenuation voltage vddp), and the voltage at the node output reaches 1.8V.

The relative sizes of the components of the pull-down stacks and inverters control the level to which the voltages at the nodes output and output_n are pulled down. For example, the voltage to which the nodes output and output_n are pulled down can be controlled by appropriately sizing the electronic components of the inverters 412 and 422 and the transistors of the corresponding pull-down stacks (transistors M1 and M2 for the inverter 412 and transistors M3 and M4 for the inverter 422). The main function of the transistors M1 and M2 is to pull down sufficiently to write into the latches 412, 422; the transistors M3 and M4 perform the same function.

The aforementioned timing-based operation of the level shifter 410 avoids exposing the terminals of M1 and the inverter 412 (for example, the gate of a P-type FET (PFET)) to the full attenuation voltage (for example, vddp = 2.6V), such as would occur if output_n were pulled to 0V. This timing-based operation avoids reliability problems because no voltage greater than what the electronic components can reliably withstand appears across their terminals.

In the 1.8V mode, the level shifter 410 of the illustrated embodiment does not perform a level shift of the voltage level, but instead acts as a buffer. In this mode, when the virtual ground is 0V, the delay logic of the programmable delay logics 411 and 421 does not generate time-shifted pulses, but instead follows the input. Therefore, when the input to the level shifter 410 is 1.8V, both transistors M1 and M2 are turned on (both transistors M3 and M4 are turned off), and remain on as long as the input is "high". Similarly, when the input to the level shifter 410 is 0V, both transistors M3 and M4 are turned on (both transistors M1 and M2 are turned off), and remain on as long as the input is "low".
This continuous operation is permitted because there are no reliability constraints: both the input and output switch only between 0V and 1.8V.

The operation of the level shifter as may be used in the embodiment of the pre-driver 211 having been described, attention is directed again to FIG. 3. As mentioned above, the pre-driver 211 of the illustrated embodiment includes buffers 331 to 335 to provide buffering of the data signals in order to generate data signals suitable for appropriately driving the driver 212. The buffering according to the embodiment is performed by a multi-stage (tapered) buffer, as shown in FIG. 5, that switches back and forth between the virtual ground (for example, the 1.1V core voltage vddc) and the attenuation voltage (for example, 2.6V vddp). During the 1.8V mode, the multi-stage buffer switches back and forth between 0V and 1.8V. Each buffer in the chains (e.g., buffers 331 to 332 and buffers 333 to 335) provides sufficient buffering (e.g., including progressively larger transistors) to thereby gradually increase the drive of the level-shifted signal in order to fully drive the much larger electronic components of the driver 212.

Referring again to FIG. 2, it can be seen that according to the illustrated embodiment, the output of the pre-driver 211 is coupled to the input of the driver 212. As discussed above, the buffered, level-shifted signals output by the pre-driver 211 are provided to the driver 212 for driving the signal to the interfaced peripheral device at the appropriate signal level.

FIG. 6 shows details regarding an embodiment of the driver 212. The illustrated embodiment of the driver 212 uses a stacked device driver strategy. This stacked driver configuration facilitates the use of electronic components designed for lower signal levels to operate at higher signal levels without reliability issues, avoiding the HCI degradation phenomena discussed below. In addition, the stacked driver configuration promotes electrostatic discharge (ESD) protection, for example, by preventing snapback in the driver FETs.

In the stacked driver structure shown in FIG. 6, the pdata signal from the pre-driver 211 is provided to the transistor M17 (here a PFET), whose source is connected to vddp, and the transistor M18 (here also a PFET), whose drain is closer to the output, is controlled by the bias voltage pbias. During the pull-up, there is a short duration in which transistor M17 is not fully turned on and therefore transistor M18 will experience a higher voltage across its drain and source terminals, which may cause transient HCI problems. However, to avoid the aforementioned HCI problem, the drain of the transistor M18 is coupled to the output node via the resistor Rp. The use of the resistor Rp reduces the transient Vds overshoot of transistor M18, thereby keeping the voltage across its terminals within reliability limits.

Although the upper, data high portion of the exemplary circuit of the driver 212 for providing the signal output has been described above, it should be understood that the lower, data low portion of the driver 212 for providing the signal output operates similarly. Specifically, the ndata signal from the pre-driver 211 is supplied to the transistor M20 (here an NFET), whose source is connected to ground, and the transistor M19 (here also an NFET), whose drain is closer to the output, is controlled by the bias voltage nbias.
As discussed above, the pre-driver 211 and the driver 212 provide level shifting and output of the data signals provided from the host circuit to the interfaced peripheral circuit. As shown in FIG. 2, the mode control 214 and the level detection 213 of the illustrated embodiment are used in the operation of the output path 210 to facilitate the operation of the pre-driver 211 and the driver 212 as described herein. Details regarding an embodiment of the level detection 213 are shown in FIG. 7, and details regarding an embodiment of the mode control 214 are shown in FIG. 8.

Referring to FIG. 7, details regarding an embodiment of the level detection 213 are shown. The level detection 213 supports general-purpose operation of the input/output circuit 200 because the input/output circuit 200 is operable to use the level detection 213 to automatically and autonomously configure itself to operate at the appropriate signal level. As shown in FIG. 7, the level detection 213 is coupled to the peripheral device being interfaced in order to detect its signal level and to provide a signal controlling the operating mode of the input/output circuit 200 (e.g., 1.8V mode, 2.6V mode, or 3.0V mode). For example, the level detection 213 of embodiments automatically detects the power supply voltage of the interfaced peripheral device and causes the circuits of the input/output circuit 200 to be biased for the pad voltage accordingly. The level detection 213 can thus automatically detect the supply voltage of the connected peripheral device. With this level detection circuit it is possible to avoid external inputs or controls for mode selection, or the use of separate input/output circuits adapted to different signal levels in the absence of mode selection.

To support automatic detection of signal levels, the circuit of the level detection 213 is compliant with high signal levels (i.e., it is high-voltage tolerant). However, as discussed in further detail below, according to the illustrated embodiment this high-signal-level compliance is provided using electronic devices that are themselves designed for use at lower signal levels. Thus, although voltage levels in the range of 1.8V to 3.0V may be applied to transistors M5 to M7 (shown here as FETs), the embodiments of transistors M5 to M7 comprise 1.8V transistors.

In operation, the level detection 213 of the illustrated embodiment provides a digital signal level (mode) indicating the appropriate mode to each part of the input/output circuit 200, thereby enabling the input/output circuit 200 to work seamlessly irrespective of the signal level of the particular peripheral device to which it is connected.
To better understand the operation of the level detection 213 of the illustrated embodiment, assume that the connected peripheral device operates at a voltage level of 2.6V. The vddp provided to transistor M5 is therefore 2.6V. Assuming vdd_18 is 1.8V, transistor M5 is biased with a gate voltage of 1.8V, which ensures that the gate-to-source voltage (Vgs) of this device remains below a reliable voltage level even though transistor M5 is designed for operation at 1.8V; the device nevertheless conducts because Vgs minus the threshold voltage (Vth) of transistor M5 is greater than Vth. This ensures that no two terminals of transistor M5 exceed the maximum voltage level acceptable for reliability. In the foregoing example (vddp at 2.6V), transistor M5 is turned on and charges node 1 to vddp (2.6V). Transistor M5 is sized large enough that, when M5 is on and M6 and M7 are also on, the voltage at node 1 is vddp. In the case where the voltage level of the connected peripheral device is 1.8V (or a voltage compatible with the host circuit), M5 is turned off because vddp is 1.8V and the bias voltage of M5 is 1.8V; node 1 is therefore pulled down to 0V through M6 and M7. In either case, the latch 710 latches a value related to the value at node 1 (node 3), as described below.

In the example where vddp is 2.6V, transistor M6 is subjected to the drain voltage vddp (2.6V) at node 1. However, similar to transistor M5, the gate of transistor M6 is properly biased (here with vdd_18) to ensure reliable voltages across its terminals. Regardless of whether transistor M7 is on or off (in accordance with the reset state discussed below), transistor M6 ensures an acceptable voltage at node 2 because transistor M6 is always on with its gate biased at 1.8V. The input stack of the level detection 213 of the illustrated embodiment thus ensures that none of its transistors experiences voltages across its terminals that cause reliability problems.

As can be seen in FIG. 7, transistor M8 also has its drain coupled to node 1, which was charged to 2.6V in the previous example. Because the transistor M8 of the illustrated embodiment is an NFET, transistor M8 does not charge node 3 above vdd_18 (1.8V) minus the threshold voltage (Vth) of M8. This ensures acceptable voltages across the terminals of transistor M8. Furthermore, owing to the voltage drop at node 3 associated with transistor M8, none of the other electronic components of the level detection 213 is subjected to a voltage greater than vdd_18 (1.8V). As can be understood from the above, the circuit of the level detection 213 of the illustrated embodiment tolerates high voltages through its component arrangement and through proper biasing of the components.

The high/low stack of latch 710 provides latching of the mode level according to the source voltage of transistor M8. For example, when vddp is detected to be 2.6V or 3.0V, a high level is latched (1.8V in the illustrated embodiment), and when vddp is detected to be 1.8V, a low level is latched (0V in the illustrated embodiment). These values arise because transistor M8 limits node 3 to vdd_18 (1.8V) minus its threshold voltage (Vth).
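The detector's decision can be summarized behaviorally as: sense whether the pad supply exceeds what the 1.8V-biased input stack will pass, then latch a mode bit limited by the NFET follower. A minimal sketch under assumed threshold values; the names and the numeric threshold here are illustrative, not taken from the embodiment.

def detect_mode(vddp, vdd_18=1.8, vth=0.5):
    """Behavioral stand-in for the level detection 213.

    M5 (gate at vdd_18) conducts only when vddp is well above vdd_18, so
    node 1 charges to vddp in the 2.6V/3.0V cases and is pulled low in
    the 1.8V case; M8 then limits the latched node to vdd_18 - vth.
    """
    node1 = vddp if vddp > vdd_18 + vth else 0.0  # input stack behavior
    node3 = min(node1, vdd_18 - vth)              # M8 source-follower limit
    return "high_voltage_mode" if node3 > 0.0 else "1.8V_mode"

assert detect_mode(2.6) == "high_voltage_mode"
assert detect_mode(3.0) == "high_voltage_mode"
assert detect_mode(1.8) == "1.8V_mode"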
The buffers 721 to 723 of the illustrated embodiment provide buffering of the mode signal to generate mode control signals suitable for properly driving the components of the input/output circuit 200.

The level shifter 731, inverter delay 732, and NOR gate 733 of the illustrated embodiment provide mode reset control for the embodiment of the level detection 213. The level shifter 731 may comprise a level shifter circuit such as those described above with respect to the level shifters 311 to 313. The inverter delay 732 may comprise delay logic such as that described above with respect to the programmable delay logics 411 and 421.

In operation according to embodiments, a reset signal provided by the host circuit is level-shifted by the level shifter 731 to the signal voltage used by the input/output circuit 200 (vdd_1p8, i.e., 1.8V, in the foregoing example) to power the circuits of the level detection 213. The configuration shown in FIG. 7 accommodates a reset signal that transitions from high (1.1V) to low (0V) after all host circuit power supplies have fully ramped and stabilized, although other configurations may be used consistent with the concepts herein. The inverter delay 732 adds an amount of delay sufficient to allow detection of the appropriate mode, and then shuts off the circuits of the level detection 213 to save power. Further, according to the illustrated embodiment, the mode control signal output is gated via the NOR gate 733 using the delayed reset signal provided by the inverter delay 732, ensuring that the mode control signal output is forced to 0V (the 2.6V mode) until the reset signal goes low. This gating is provided according to embodiments to ensure that the voltages across the terminals of the electronic devices of the input/output circuit 200 remain within the reliability limits of those devices. Once the reset signal provided by the host circuit goes low, the mode control signal is latched by the latch 710.

Referring to FIG. 8, details regarding an embodiment of the mode control 214 are shown. According to embodiments, the mode control 214 provides the correct "ground" value to the circuits of the input/output circuit 200 (e.g., buffers 331 to 335, level shifters 312 and 313, latches 412 and 422, etc.) so that the voltages across the terminals of the electronic devices of the input/output circuit 200 remain within their reliability limits.

During the 1.8V mode (as indicated by the mode control signal provided by the level detection 213), the value of the virtual ground is switched to 0V (here vss) by the switching circuit 810 of the illustrated embodiment, because the signal voltage is low enough that reliability is not a concern. During the 2.6V or 3.0V mode (likewise as indicated by the mode control signal), however, the virtual ground of the illustrated embodiment is switched by the switching circuit 810 to the core voltage (here 1.1V), because the core voltage is high enough to avoid voltages across the terminals of electronic components that would exceed reliability limits.

The switching circuit 810 of embodiments may be provided in various configurations. For example, solid-state switching devices such as FETs or the like can be used; additionally or alternatively, a mechanical switching mechanism may be utilized if desired.
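The virtual-ground selection therefore reduces to a two-way multiplexer keyed by the detected mode. A sketch using the voltage assignments from this description (the function and constant names are illustrative assumptions):

VSS, VDDC = 0.0, 1.1  # true ground and core voltage from the examples above

def virtual_ground(mode):
    """Behavioral stand-in for the switching circuit 810.

    In the 1.8V mode the circuits can safely swing to true ground; in the
    2.6V/3.0V modes the low rail is raised to the core voltage so that no
    device sees more than a 1.8V-class voltage across its terminals.
    """
    return VSS if mode == "1.8V_mode" else VDDC

assert virtual_ground("high_voltage_mode") == 1.1  # swing is 1.1V to 2.6V
assert virtual_ground("1.8V_mode") == 0.0          # swing is 0V to 1.8V

Note that in the high-voltage mode the resulting 1.1V-to-2.6V swing spans only 1.5V, which is comfortably within a 1.8V device rating.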
The mode control 214 of the illustrated embodiment is adapted not only to provide a signal output consistent with the selected operating mode, but also to maintain the selection of a particular mode through host circuit power-saving modes (e.g., sleep or frozen I/O modes) in which one or more outputs of the host circuit (for example, a power supply voltage) are unavailable to the input/output circuit 200. To accommodate such power-saving operation without causing ambiguous input/output circuit operating states, the mode control 214 of the illustrated embodiment includes a bias voltage generation 820. The bias voltage generation 820 of embodiments operates to generate an appropriate "virtual ground" level during host circuit power-saving operation. That is, when one or more outputs of the host circuit are unavailable due to power-saving operation, the bias voltage generation 820 operates to internally generate proper control of the pre-driver 211 and/or driver 212 to keep this circuitry locked in the selected low or high signal-level state. Thus, when the host circuit returns from power-saving operation to the operating state, the input/output circuit 200 is configured to continue interfacing with the peripheral device.

Referring to FIG. 9, details regarding an embodiment of the bias voltage generation 820 are shown. In operation, a power supply voltage (e.g., the core voltage) provided by the host circuit collapses during the power-saving mode (as indicated by the freeze I/O (freezio) mode signal). The circuits of inverters 911 and 912 and NOR gate 921 cooperate to control the bias voltage generation 820 to provide the bias voltage during the frozen I/O mode.

The bias voltage generation according to the illustrated embodiment is provided by a voltage divider 930 that includes devices operable to pull the voltages at nodes vir_gnd_nfet_gate and vir_gnd_pfet_gate toward vddp (e.g., 2.6V) and vdd_18 (e.g., 1.8V), shown here as transistors M9 to M12 locked in the off state. Transistors M13 and M14 are turned on through the outputs of inverters 911 and 912 and NOR gate 921, thereby providing an output at the virtual ground that is set between the voltages of nodes vir_gnd_nfet_gate and vir_gnd_pfet_gate. According to embodiments, the virtual ground node is a relatively high-impedance node and therefore is not intended to act as a charge sink. Accordingly, all nodes that are to remain in a particular state during the frozen I/O mode are expected to settle to their steady-state values before the virtual ground bias of the bias voltage generation 820 is applied to them.

The bias voltage provided by the voltage divider 930 during a high signal-level mode (e.g., the 2.6V or 3.0V mode, where the frozen I/O signal provided by the host circuit is 1.1V in the illustrated embodiment) is approximately the core voltage (e.g., 1.1V). According to the illustrated embodiment, transistors M9 and M10 are PFETs arranged in a stacked configuration, and transistors M11 and M12 are likewise PFETs arranged in a stacked configuration. The voltage supplied to each of these stacks differs, however: vddp (for example, 2.6V) is supplied to the gate of transistor M9, while vdd_18 (for example, 1.8V) is supplied to the gate of transistor M11. With these transistors in the illustrated configuration (and the leakage associated with their off state), the voltage established at the gates of transistors M15 and M16 settles very close to 1.1V. If a noise event draws current from or into the virtual ground node, then once the voltage of the virtual ground node departs from its steady-state condition by more than a certain range, one of the FETs turns on; the bias then becomes a low-impedance bias and ensures that the node returns to the steady-state condition. The voltage thus provided at the virtual ground output is used to bias the other circuits of the input/output circuit 200 during the host circuit frozen I/O mode when the input/output circuit 200 is operating in the high signal-level mode.
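The off-but-leaking divider thus behaves like a dead-band keeper around the steady-state level. A behavioral sketch follows; the dead-band width is an arbitrary illustrative number, not a value from the embodiment.

STEADY = 1.1    # steady-state virtual-ground level in high-voltage mode
DEADBAND = 0.2  # assumed tolerance before a clamp FET turns on

def virtual_ground_keeper(v_node):
    """Behavioral stand-in for the divider 930 acting as a keeper.

    Within the dead band the node is high impedance and simply holds its
    value; if a noise event pushes it outside the band, one of the FETs
    (e.g., M15/M16) turns on and the now low-impedance bias restores the
    steady-state condition.
    """
    if abs(v_node - STEADY) <= DEADBAND:
        return v_node  # high impedance: node holds its value
    return STEADY      # low-impedance clamp restores steady state

assert virtual_ground_keeper(1.15) == 1.15   # small disturbance rides through
assert virtual_ground_keeper(1.6) == STEADY  # large disturbance is clamped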
In operation according to embodiments of the mode control 214, the bias voltage generation is activated only when the input/output circuit 200 is in a high signal-level mode (e.g., 2.6V or 3.0V). Where the input/output circuit 200 is in a low signal-level mode (e.g., 1.8V), as may be indicated by the mode control signal level from the level detection 213, the mode control 214 of embodiments operates to couple the virtual ground to vss (0V here), regardless of whether the host circuit is in the frozen I/O mode or in the operating mode.

Although the embodiments of the level detection 213 and the mode control 214 have been described above as providing general-purpose operation of the output path 210, with operation adjusted automatically and autonomously for high or low signal-level processing, embodiments of the input/output circuit 200 may instead utilize manual mode selection. For example, the switching circuit 810 of embodiments can be controlled manually according to the signal level of the interfaced peripheral device, if desired.

Having described details of the functional blocks of the output path 210 of embodiments, reference is now made to FIG. 10, which shows details of an embodiment of the input path 220. To provide a signal level suitable for the host circuit, the input path 220 of the illustrated embodiment includes a level shift control 221. Similar to the operation of the level detection 213, the level shift control preferably operates to accommodate input of both high-level and low-level signals without causing the voltages across the terminals of its electronic components to exceed reliability limits. Specifically, although both high signal levels (e.g., 2.6V and/or 3.0V) and low signal levels (e.g., 1.8V) may be presented at the data input node of the level shift control 221 labeled "padloc", the level shift control 221 is configured to automatically adapt to these signals and provide the desired signal level (e.g., 1.8V) at the data output node labeled "schm_out".

In the high-voltage-compliant configuration of FIG. 10, the always-on NFET transistor M21, arranged in a pass-gate configuration, ensures that the electronic components of the level shift control 221 are not subjected to high voltage levels. More specifically, transistor M21 operates to limit the node labeled lvl_dn_int to 1.8V minus Vt. The first-stage receiver (e.g., Schmitt trigger 1020) receives this level-limited signal and determines whether the peripheral device has transmitted a 0 or a 1. Because the first-stage receiver 1020 may be referenced to a voltage different from that of the input signal, it is important that it have the correct trip point.
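The pass-gate clamp and the receiver decision can be sketched together as follows; the threshold and trip-point values are illustrative assumptions, not values from the embodiment.

VDD_18, VT = 1.8, 0.45  # supply and an assumed NFET threshold voltage

def received_bit(pad_voltage, trip_point=0.9):
    """Behavioral stand-in for pass gate M21 plus first-stage receiver 1020.

    The always-on NFET pass gate limits the internal node lvl_dn_int to
    VDD_18 - VT no matter how high the pad swings (2.6V or 3.0V mode), so
    the 1.8V-class receiver never sees a high voltage; the receiver then
    slices the clamped level against its trip point.
    """
    lvl_dn_int = min(pad_voltage, VDD_18 - VT)   # pass-gate clamp
    return 1 if lvl_dn_int > trip_point else 0  # Schmitt-style decision

assert received_bit(3.0) == 1  # a high input in 3.0V mode still reads as 1
assert received_bit(0.0) == 0  # a low input reads as 0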
The pull-up keeper circuit 1011 (comprising transistors M22 and M23 in a stacked configuration, shown here as PFETs) and the pull-down keeper circuit 1012 (comprising transistors M24 and M25 in a stacked configuration, shown here as NFETs) ensure that the input trip points (Vih, Vil) are met and that the signal levels are provided with the proper reference for the input path. The weak PFET keeper configuration of the pull-up keeper circuit 1011 of the illustrated embodiment ensures that the input to the Schmitt trigger 1020 rises fully to vdd_18 (1.8V) and blocks any leakage; this ensures that this node rises quickly even though it is driven through the NFET transfer gate of transistor M21. The NFET pull-down keeper circuit 1012 divides the rising edge and provides a better trip point (Vil) on the rising edge of the signal. This configuration is particularly useful for achieving a good trip point in the high signal-level modes (e.g., 2.6V and/or 3.0V), because the input to the level shift control 221 is at a higher voltage while the first stage of the level shift control 221 is referenced to a lower voltage (e.g., 1.8V). The foregoing embodiment of the level shift control 221 therefore maintains the desired trip points whether operated at a high or a low signal level. In one embodiment, the core_ie_h signal is provided along with an enable signal to enable the NFET keeper when receiving a high-voltage signal; an enable signal is likewise provided to enable the PFET keeper when receiving a high-voltage signal (e.g., 2.6V or 3.0V).

The transistor M26 of the illustrated embodiment is provided to facilitate deactivation of the peripheral input path. Specifically, an appropriate signal level (e.g., 1.8V) may be applied at the node labeled "core_ie_h" to disable the output of the level shift control 221, and thus the input path 220.

Although various functional blocks have been described herein with reference to the illustrated embodiments, it should be understood that various circuits in accordance with the concepts described herein may be used in addition to, or instead of, the described circuits. For example, ESD circuitry may be provided with respect to the input/output circuit 200 to provide human body model (HBM) ESD protection at the data output of the output path 210 and charged device model (CDM) ESD protection at the data input of the input path 220.

In addition, circuit configurations different from those of the illustrated embodiments may be used in accordance with the concepts herein. For example, although the various illustrated embodiments show particular numbers of electronic components (e.g., FETs) arranged in stacked configurations to accommodate the described illustrative voltage levels, different numbers of such components may be used in these stacked configurations. For instance, the stacked driver structure shown in FIG. 6 may utilize a stack of three FETs in the pdata (pull-up) and/or ndata (pull-down) driver stacks (e.g., to accommodate signal levels higher than those discussed above, such as 4.0V).

As can be understood from the above, the input/output circuit 200 facilitates using electronic components designed for lower signal levels (e.g., 1.8V) to operate at higher signal levels (e.g., 2.6V or 3.0V).
Accordingly, not only can a single input/output interface be used with peripheral devices using different signal levels, but the input/output interface can use physically smaller, faster-switching electronic components (e.g., 45nm MOS, 1.8V components). Furthermore, the embodiments described herein provide a universal device that accommodates these different signal levels and is operable to automatically and autonomously configure itself to operate at the appropriate signal level.

Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Furthermore, the scope of the present application is not intended to be limited to the particular embodiments of the processes, machines, manufacture, compositions of matter, devices, methods, and steps described in this specification. As those skilled in the art will readily understand from this disclosure, processes, machines, manufacture, compositions of matter, devices, methods, or steps presently existing or later developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, devices, methods, or steps.
Some embodiments include methods of forming memory arrays. A stack of semiconductor material plates may be patterned to subdivide the plates into pieces. Electrically conductive tiers may be formed along sidewall edges of the pieces. The pieces may then be patterned into an array of wires, with the array having vertical columns and horizontal rows. Individual wires may have first ends joining to the electrically conductive tiers, may have second ends in opposing relation to the first ends, and may have intermediate regions between the first and second ends. Gate material may be formed along the intermediate regions. Memory cell structures may be formed at the second ends of the wires. A plurality of vertically-extending electrical interconnects may be connected to the wires through the memory cell structures, with individual vertically-extending electrical interconnects being along individual columns of the array. Some embodiments include memory arrays incorporated into integrated circuitry.
CLAIMS The invention claimed is: 1. A method of forming a memory array, comprising: forming a stack comprising vertically-spaced semiconductor material plates; etching through the plates to subdivide the plates into planar pieces; forming horizontally-extending electrically conductive tiers along and in electrical connection with sidewall edges of the planar pieces; patterning the planar pieces into an array of wires; the array comprising vertical columns and horizontal rows; the electrically conductive tiers interconnecting wires of individual rows of the array; individual wires having first ends joining to the electrically conductive tiers, having second ends in opposing relation to the first ends, and having intermediate regions between the first and second ends; forming at least one gate material along the intermediate regions of the wires; forming memory cell structures at the second ends of the wires; and forming a plurality of vertically-extending electrical interconnects connected to the wires through the memory cell structures; individual vertically-extending electrical interconnects being along individual columns of the array. 2. The method of claim 1 wherein insulative material sheets are provided between the plates; and wherein the insulative material of the sheets is patterned during the subdividing of the plates into the planar pieces, as well as during the patterning of the planar pieces into the array of wires. 3. The method of claim 1 wherein the forming of the stack comprises: forming alternating layers of first and second semiconductor materials, where the first semiconductor material is selectively removable relative to the second semiconductor material; and selectively removing the first semiconductor material relative to the second semiconductor material. 4. The method of claim 3 wherein the one of the first and second semiconductor materials consists of silicon, and wherein the other of the first and second semiconductor materials consists of silicon/germanium. 5. The method of claim 3 wherein the one of the first and second semiconductor materials is n-type doped, and wherein the other of the first and second semiconductor materials is p-type doped. 6. The method of claim 5 wherein layers of electrically insulative material are provided between the alternating layers of first and second semiconductor materials. 7. The method of claim 6 wherein the layers of insulative material consist of silicon dioxide. 8. The method of claim 1 wherein the forming of the electrically conductive tiers comprises: doping the semiconductor material of the sidewall edges of the planar pieces; and forming metal silicide runners from the doped semiconductor material. 9. The method of claim 1 wherein the forming of the electrically conductive tiers comprises: recessing the semiconductor material of the sidewall edges of the planar pieces; and forming electrically conductive lines within the recesses. 10. 
A method of forming a memory array, comprising: forming a construction comprising vertically-stacked semiconductor material plates; the plates being vertically spaced from one another by gaps; patterning the plates to subdivide the plates into a plurality of planar pieces having sidewall edges; the planar pieces being vertically stacked; providing insulative material spacers in the gaps; forming electrically conductive tiers along the sidewall edges of the planar pieces; the electrically conductive tiers being vertically spaced from one another; etching through the semiconductor material of the planar pieces, and through the insulative material of the spacers, to form lines that extend orthogonally to the electrically conductive tiers; some of the lines being semiconductor material lines, and others of the lines being insulative material lines; forming gate dielectric along the semiconductor material lines; forming a gate material spaced from the semiconductor material lines by the gate dielectric; forming openings passing through the semiconductor material lines to break each semiconductor material line into a pair of segments; each segment passing through the gate material, having a first end joined to an electrically conductive tier, and having a second end in opposing relation to the first end; the segments being arranged as an array that comprises vertical columns and horizontal rows; the electrically conductive tiers extending along the rows of the array of segments; forming memory cell structures at the second ends of the segments; and forming a plurality of vertically-extending electrical interconnects connected to the segments through the memory cell structures; individual vertically-extending electrical interconnects being along individual columns of the array. 11. The method of claim 10 wherein the memory cell structures include phase change material. 12. The method of claim 10 wherein the memory cell structures include magnetic material. 13. The method of claim 10 wherein the memory cell structures are antifuse structures; and further comprising programming some of the memory cell structures by blowing some of the antifuses. 14. The method of claim 10 wherein the semiconductor material plates are subdivided before the insulative material is provided in the gaps. 15. The method of claim 10 wherein the insulative material is provided in the gaps before the semiconductor material plates are subdivided. 16. The method of claim 10 wherein the semiconductor material is a second semiconductor material, and wherein the forming of the vertically-stacked plates comprises: forming alternating layers of first semiconductor material and the second semiconductor material, where the first semiconductor material is selectively removable relative to the second semiconductor material; and selectively removing the first semiconductor material relative to the second semiconductor material. 17. The method of claim 16 wherein one of the first and second materials comprises p-type doped semiconductor material, and wherein the other of the first and second semiconductor materials comprises n-type doped semiconductor material. 18. The method of claim 16 wherein one of the first and second materials comprises silicon and does not comprise germanium; and wherein the other of the first and second semiconductor materials comprises germanium and does not comprise silicon. 19. 
The method of claim 16 wherein one of the first and second materials comprises silicon and does not comprise germanium; and wherein the other of the first and second semiconductor materials comprises both silicon and germanium. 20. The method of claim 10 wherein: the openings are formed through the insulative material lines, as well as through the semiconductor material lines, and the forming of the openings breaks the insulative material lines into insulative material segments; the semiconductor material segments and insulative material segments together form vertically-extending stacks, with such vertically-extending stacks having a pair of opposing sidewalls; the gate dielectric is directly against the semiconductor material segments along the opposing sidewalls of the vertically-extending stacks; and the gate material is formed directly against the gate dielectric along the opposing sidewalls of the vertically-extending stacks. 21. An integrated memory array, comprising: a plurality of horizontally-extending electrically conductive lines supported by a semiconductor substrate, the lines being vertically spaced from one another and extending primarily along a first horizontal axis; a plurality of horizontally-extending semiconductor material wires joined to the lines and extending outwardly from the lines, the wires extending primarily along a second horizontal axis that is orthogonal to the first axis; the wires having first ends adjacent the electrically conductive lines, and having second ends in opposing relation to the first ends; the wires being arranged in a two-dimensional array; one of the dimensions of the two-dimensional array being rows along the first horizontal axis, and the other of the dimensions of the two-dimensional array being columns along a vertical axis orthogonal to the first and second horizontal axes; the horizontally-extending electrically conductive lines interconnecting wires along the rows of the array; gate dielectric along outer edges of the wires; gate material contacting the gate dielectric material along at least two sides of each individual wire, the gate material being comprised by a gate structure that extends primarily along the vertical dimension; memory cell structures at the second ends of the wires; and a plurality of vertically-extending electrical interconnects connected to the wires through the memory cell structures, the vertically-extending electrical interconnects being horizontally spaced from one another; individual vertically-extending electrical interconnects extending along individual columns of the array. 22. The integrated memory array of claim 21 wherein the memory cell structures comprise phase change material. 23. The integrated memory array of claim 21 wherein the memory cell structures comprise magnetic material. 24. The integrated memory array of claim 21 wherein the memory cell structures are antifuse structures. 25. The integrated memory array of claim 21 wherein the gate material contacts the gate dielectric along only two sides of the individual wires. 26. The integrated memory array of claim 21 wherein the wires are square along a cross-section orthogonal to the second horizontal axis. 27. The integrated memory array of claim 21 wherein the horizontally-extending electrically conductive lines comprise metal. 28. The integrated memory array of claim 21 wherein the horizontally-extending electrically conductive lines comprise metal silicide. 29. 
The integrated memory array of claim 21 wherein the semiconductor material of the wires comprises channel implants adjacent the gate material, and comprises source/drain implants at the first and second ends.
DESCRIPTION INTEGRATED MEMORY ARRAYS, AND METHODS OF FORMING MEMORY ARRAYS TECHNICAL FIELD Integrated memory arrays, and methods of forming memory arrays. BACKGROUND An integrated circuit is a miniature electronic circuit that has been manufactured across a semiconductor material. Memory storage is one of the types of functions that may be achieved by integrated circuitry. Memory storage commonly utilizes large arrays of identical components. A continuing goal in the fabrication of integrated memory is to increase the level of integration of memory components, and thus to increase the amount of memory that may be provided across a given amount of semiconductor real estate. This can enable large amounts of memory to be provided across small chips, which can be valuable in numerous applications, such as, for example, consumer electronics. It is becoming increasingly difficult to reduce the scale of existing memory arrays, and thus it would be desirable to develop new arrangements for memory arrays. It would further be desirable for such new arrangements to be amenable to fabrication with existing technologies. BRIEF DESCRIPTION OF THE DRAWINGS FIGS. 1 and 2 are a diagrammatic three-dimensional view, and a diagrammatic cross-sectional side view, respectively, of an example embodiment of an integrated memory array. FIG. 3 is a diagrammatic cross-sectional side view of a construction shown at a processing stage of an example embodiment method of forming a memory array. FIG. 4 is a diagrammatic cross-sectional side view of the construction of FIG. 3 shown at a processing stage subsequent to that of FIG. 3. FIG. 5 is a diagrammatic three-dimensional view of a portion of the construction of FIG. 4 (specifically, the portion labeled "5" in FIG. 4), shown at the processing stage of FIG. 4. FIGS. 6-15 are diagrammatic three-dimensional views of the portion of FIG. 5 shown at sequential processing stages of an example embodiment method of forming a memory array, with the processing stage of FIG. 6 following that of FIG. 5. FIG. 16 is a diagrammatic three-dimensional view of several of the structures of FIG. 15 that are hidden from view in the illustration of FIG. 15. FIGS. 17-19 are diagrammatic three-dimensional views of the portion of FIG. 5 shown at sequential processing stages of an example embodiment method of forming a memory array, with the processing stage of FIG. 17 following that of FIG. 15. FIG. 20 is a diagrammatic cross-sectional side view along the line 20-20 of FIG. 19. FIG. 21 is a diagrammatic three-dimensional view of the portion of FIG. 5 shown at a processing stage subsequent to that of FIG. 19. FIG. 22 is a diagrammatic cross-sectional side view along the line 22-22 of FIG. 21. FIG. 23 is a diagrammatic three-dimensional view of the portion of FIG. 5 shown at a processing stage subsequent to that of FIG. 21. FIG. 24 is a diagrammatic cross-sectional side view along the line 24-24 of FIG. 23. FIG. 25 is a diagrammatic three-dimensional view of the portion of FIG. 5 shown at a processing stage subsequent to that of FIG. 23. FIG. 26 is a diagrammatic cross-sectional side view along the line 26-26 of FIG. 25. FIG. 27 is a diagrammatic three-dimensional view of the portion of FIG. 5 shown at a processing stage subsequent to that of FIG. 25. FIG. 28 is a diagrammatic cross-sectional side view along the line 28-28 of FIG. 27. FIG. 29 is a diagrammatic three-dimensional view of various conductive structures of the integrated memory array formed at the processing stage of FIG. 27. FIG. 
30 is a diagrammatic cross-sectional side view of the construction of FIG. 28, shown at a processing stage subsequent to that of FIG. 28 in accordance with an example embodiment method for programming memory cells within a memory cell array. FIG. 31 is a diagrammatic view of a computer embodiment. FIG. 32 is a block diagram showing particular features of the motherboard of the FIG. 31 computer embodiment. FIG. 33 is a high level block diagram of an electronic system embodiment. FIG. 34 is a simplified block diagram of a memory device embodiment. DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS Some embodiments pertain to new vertical memory designs suitable for incorporation into integrated circuitry, and to methods of forming vertical memory. The vertical memory may enable higher levels of integration to be achieved than can be achieved with conventional planar memory, and may be suitable for fabrication with existing technologies so that it may be fabricated with relatively low cost. In some embodiments, the vertical memory utilizes field effect transistor (FET) switching devices gatedly connected with semiconductor material wires, and utilizes data storage structures formed at ends of the wires. The wires and data storage structures are together comprised by memory unit cells, and such memory unit cells may be vertically stacked to create a high density of the memory unit cells across a given region of semiconductor real estate. In some embodiments, individual memory unit cells may have feature sizes corresponding to less than or equal to 25 nanometers. Example embodiments of integrated memory arrays, and example methods of forming integrated memory arrays, are described with reference to FIGS. 1-30. FIGS. 1 and 2 show a portion of a construction 10 comprising an example memory array. The construction is shown in three-dimensional view in FIG. 1. The three primary axes utilized for the coordinate system of FIG. 1 are shown in the upper left-hand corner of the figure. The coordinate system has a first horizontal axis 3 corresponding to an "X" axis, a second horizontal axis 5 corresponding to a "Y" axis, and a vertical axis 7 corresponding to a "Z" axis. The three primary axes 3, 5 and 7 are orthogonal to one another. Construction 10 includes a plurality of vertically-spaced, horizontally-extending tiers 12, 14, 16 and 18. Such tiers comprise electrically conductive lines 20 and 22, with the electrically conductive lines extending along the horizontal direction of axis 5. In some embodiments, such lines may be referred to as extending "primarily" along the direction of axis 5 to indicate that there may be minor variation of the linearity of the lines along such axis. The electrically conductive lines 20 and 22 may comprise any suitable compositions or combinations of compositions. In some embodiments, line 20 may comprise, consist essentially of, or consist of one or more metals and/or one or more metal-containing compounds. For instance, line 20 may comprise, consist essentially of, or consist of metal silicide (for instance, tungsten silicide, tantalum silicide, titanium silicide, cobalt silicide, nickel silicide, etc.). In such embodiments, line 22 may comprise conductively-doped semiconductor material, such as, for example, conductively-doped silicon. 
Although the electrically conductive tiers 12, 14, 16 and 18 are shown comprising two adjacent lines 20 and 22 of different conductive materials, in other embodiments the tiers may comprise only a single line of conductive material, and in yet other embodiments the tiers may comprise more than two lines of conductive materials. Construction 10 also includes a plurality of wires 24-39 joined to the tiers 12, 14, 16 and 18, and extending horizontally along the direction of axis 3. In some embodiments, the wires may be referred to as extending "primarily" along the direction of axis 3 to indicate that there may be minor variation of the linearity of the wires along such axis. The wires 24-39 comprise semiconductor material, such as, for example, one or both of silicon and germanium. The wires have first ends 40 (only labeled for wire 24) joined to the tiers, and have second ends 42 (only labeled for wire 24) in opposing relation to the first ends. The wires 24-39 are arranged in a two-dimensional array, with one of the dimensions of such array being along horizontal axis 5, and the other of the dimensions of the array being along vertical axis 7. The two-dimensional array may be considered to comprise rows along horizontal axis 5, and to comprise columns along vertical axis 7. The tiers 12, 14, 16 and 18 interconnect wires along the rows of the array (for instance, tier 18 interconnects the wires 24-27 along a row of the array). FIG. 2 shows a cross-section along a plane orthogonal to axis 3 of FIG. 1 (specifically, along a plane parallel to axis 5 of FIG. 1), and shows that the wires 24-39 are square-shaped along such cross section. In other embodiments, the wires may have other shapes along the cross-section of FIG. 2, including, for example, circular, oval, elliptical, rectangular, etc.. Gate dielectric 46 (only some of which is labeled in FIG. 1, but all of which is labeled in FIG. 2) is along outer edges of the wires 24-39. In the shown embodiment, the wires have a square cross-sectional shape, and the gate dielectric is formed along opposing sidewalls of such square shape. Accordingly, the gate dielectric only partially surrounds the individual wires. In other embodiments, the gate dielectric may entirely surround the individual wires. The gate dielectric 46 may comprise any suitable composition or combination of compositions, and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide. The gate dielectric may be homogeneous, as shown, or may comprise multiple different materials. Electrically conductive gate material 48 is provided around the wires 24-39. In the shown embodiment, the gate material 48 forms a gate structure 50 that extends primarily in a vertical direction (i.e., primarily along the axis 7). The gate material 48 is shown contacting the gate dielectric 46 on two opposing sides of each of wires 24-39. In other embodiments, the gate dielectric 46 may entirely surround the individual wires, and the gate material 48 may also entirely surround the individual wires. Although the gate structure is shown comprising a single homogeneous material 48, in other embodiments the gate structure may comprise two or more different materials. The various materials of gate structure 50 may comprise any suitable composition or combination of compositions. 
In some embodiments, such materials may comprise one or more of various metals (for instance, titanium, tungsten, cobalt, nickel, etc.), metal-containing compositions (for instance, metal nitrides, metal silicides, etc.), and conductively-doped semiconductor materials (for instance, conductively-doped silicon, conductively-doped germanium, etc.). The wires 24-39 may be considered to have intermediate regions 44 (FIG. 2, and labeled only for wire 24) between the first and second ends 40 and 42. The intermediate regions are not labeled in FIG. 1, due to such regions being hidden by gate structure 50. Memory cell structures 52 (FIG. 1) are formed at the ends of wires 24-39. The memory cell structures may be alternatively referred to as data storage structures, and may be any structures suitable for storing data in a memory cell. In some embodiments, the memory cell structures 52 may correspond to one time programmable structures, resistance RAMs (i.e., memory that changes resistance upon switching; including phase change memory, oxide RAM, etc.), multi-time programmable devices, etc.. In some embodiments, the memory cell structures may be antifuse structures; such as, for example, structures of the types described in U.S. Patent No. 7,210,224, listing Jigish D. Trivedi as the inventor, and listing Micron Technology, Inc. as the assignee. In some embodiments, the memory cell structures may correspond to MRAM structures; such as, for example, structures of the types described in U.S. Patent No. 7,214,547, listing Joel A. Drewes as the inventor, and listing Micron Technology, Inc. as the assignee. In some embodiments, the memory cell structures may be phase change memory structures; such as, for example, structures of the types described in U.S. Patent Nos. 7,332,735 and 7, 1 1,984, listing Kristy A. Campbell and Jun Liu as the inventors, respectively, and listing Micron Technology, Inc. as the assignee. If the memory cell structures 52 correspond to antifuse structures, they may contain a thin layer of dielectric material between a pair of electrodes. In operation, sufficient voltage may be passed to break down the dielectric and thereby cause the electrodes to electrically contact one another. A programming state of a memory cell structure may be designated by whether the structure is a blown antifuse, or an antifuse which is not blown. The memory cell structures 52 are shown to be homogeneous, and in some embodiments may correspond to the thin dielectric of antifuse structures. In other embodiments, the memory cell structures may not be homogeneous, but may instead comprise a pair of electrically conductive electrodes having a thin layer of dielectric material therebetween. If memory cell structures 52 correspond to MRAM structures, then the memory cell structures may comprise a pair of magnetic materials, and a nonmagnetic material between the magnetic materials. In operation, the orientation of a magnetic moment in one of the magnetic materials may be compared relative to the orientation of a magnetic moment in the other of the magnetic materials to determine a programming state of the memory cell structure. If memory cell structures 52 correspond to phase change memory structures, then the memory cell structures may comprise phase change material, such as, for example, various chalcogenides. 
A plurality of cell strings are configured as vertically-extending electrical interconnects (specifically, vertically-extending bars) 54, 56, 58 and 60 (FIG. 1) that extend along columns of the wires (for instance, bar 54 extends along a column comprising wires 24, 28, 32 and 36), and that electrically connect to the wires through the memory cell structures 52. The bars 54, 56, 58 and 60 may comprise any suitable electrically conductive material or combination of materials, and may, for example, comprise one or more of various metals (for instance, titanium, tungsten, cobalt, nickel, etc.), metal-containing compositions (for instance, metal nitrides, metal silicides, etc.), and conductively-doped semiconductor materials (for instance, conductively-doped silicon, conductively-doped germanium, etc.). The bars 54, 56, 58 and 60 are shown in phantom view in FIG. 1 so that other structures are visible through the bars. The tiers 12, 14, 16 and 18 are shown electrically connected to circuitry 61-64, respectively; the gate structure 50 is shown electrically connected to circuitry 65; and the vertical bars 54, 56, 58 and 60 are shown electrically connected to circuitry 66-69, respectively. Most of the circuitry is illustrated with boxes, and it is to be understood that the circuitry can be any suitable circuitry. The circuitry may be provided in any suitable locations proximate the various structures of construction 10. For instance, at least some of the circuitry may be under the construction, at least some of the circuitry may be laterally adjacent the construction, and/or at least some of the circuitry may be over the construction. The circuitry corresponds to logic and wiring utilized to read and/or write from the memory array of construction 10. An example circuit is shown for circuitry 69. Such example circuit includes a transistor 70 having a gate 72 and source/drain regions 74 and 76. The gate is electrically connected to a row line 78, one of the source/drain regions is electrically connected to bar 60, and the other of the source/drain regions is connected to a bitline 80. The wires 24-39 may be doped so that such wires, in combination with gate structure 50, form a plurality of transistor devices. Specifically, the intermediate regions 44 of the wires may be doped to correspond to channel regions of the transistor devices, and the ends 40 and 42 of the wires may be doped to correspond to source/drain regions of the transistor devices. In operation, current passed through gate structure 50 may be used to gatedly couple the source/drain regions at the ends of the wires to one another through the channel regions in the intermediate portions of the wires. The various circuitry 61-69 may be utilized to uniquely address individual memory cell structures 52 when current is passed through gate structure 50. For instance, circuitry 61 electrically connects to a memory cell structure 52 at the end of wire 24, and circuitry 66 electrically connects to the same memory cell structure through vertical bar 54. Thus, the circuitries 61 and 66 may be together utilized to program such memory cell structure and/or to read the programmed state of such memory cell structure. If the memory cell structure is an antifuse device, the programming may comprise providing a sufficient voltage differential between circuitry 61 and circuitry 66 to blow the antifuse; and subsequent reading may comprise ascertaining if current flow through the memory structure corresponds to a blown or a not-blown antifuse device. 
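The unique-addressing scheme lends itself to a small behavioral model: a cell is selected by the pair (row tier, column interconnect), and the antifuse read is reduced here to a stored boolean. This is a minimal sketch; the class and method names are illustrative assumptions, not structures from the disclosure.

class AntifuseArray:
    """Hypothetical model of addressing the array of FIG. 1: each cell sits
    at the crossing of a horizontal tier (row) and a vertical bar (column),
    reached through its wire's access transistor when gate structure 50 is
    driven."""

    def __init__(self, rows, cols):
        self.blown = [[False] * cols for _ in range(rows)]  # all unprogrammed

    def program(self, row, col):
        # Blow the antifuse by applying a sufficient tier-to-bar voltage
        # differential (e.g., circuitry 61 versus circuitry 66).
        self.blown[row][col] = True

    def read(self, row, col):
        # Sense current between tier and bar; a blown antifuse conducts.
        return 1 if self.blown[row][col] else 0

array = AntifuseArray(rows=4, cols=4)  # e.g., tiers 12-18 by bars 54-60
array.program(0, 0)                    # program the cell at the end of wire 24
assert array.read(0, 0) == 1 and array.read(0, 1) == 0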
Although construction 10 is shown having gaps between the vertically-spaced tiers 12, 14, 16 and 18, between adjacent wires, and between adjacent vertical bars 54, 56, 58 and 60; any suitable dielectric materials may be provided in such gaps to electrically isolate the various electrical components from one another. Construction 10 may be formed to be integrated circuitry supported by a semiconductor substrate, and may be formed utilizing any suitable fabrication process. Example processes are described with reference to FIGS. 3-30. Referring to FIG. 3, a semiconductor construction 100 comprises alternating layers of first and second materials 102 and 104, respectively. The materials are supported by a substrate 101. Substrate 101 can comprise, consist essentially of, or consist of, for example, monocrystalline silicon lightly-doped with background p-type dopant, and may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" means any supporting structure, including, but not limited to, semiconductor substrates. The second material 104 is ultimately patterned into wires analogous to the wires 24-39 of FIG. 1. Accordingly, the second material 104 comprises semiconductor material, and in some embodiments may comprise, consist essentially of, or consist of one or both of silicon and germanium. In some embodiments, the first material 102 is selectively removable relative to the second material 104. In such embodiments, materials 102 and 104 may both correspond to semiconductor materials, but may differ from one another in composition and/or doping. For instance, one of the materials 102 and 104 may comprise silicon and not germanium; while the other comprises germanium and not silicon. As another example, one of the materials 102 and 104 may consist of silicon, while the other comprises, consists essentially of, or consists of a combination of silicon with germanium. As yet another example, both of materials 102 and 104 may correspond to doped silicon, but one of the materials may be p-type doped and the other may be n-type doped. In the shown embodiment, barrier material 106 is provided between the materials 102 and 104. The barrier material may be used to prevent dopant from dispersing between layers 102 and 104 in embodiments in which a difference between materials 102 and 104 is the dopant type and/or concentration. In other embodiments, the barrier material may be omitted. The material 106 may comprise any suitable composition, and in some embodiments may be an electrically insulative material. For instance, material 106 may comprise, consist essentially of, or consist of silicon dioxide. In some embodiments, the first material 102 is an electrically insulative material. For instance, the first material may comprise, consist essentially of, or consist of silicon dioxide. The barrier material 106 may be omitted in such embodiments, so that materials 102 and 104 are stacked directly against one another. In embodiments in which material 102 is an electrically insulative material, the material 102 may be considered to be in the form of electrically insulative sheets provided between vertically-stacked plates of material 104. 
The alternating materials 102 and 104 may be formed over substrate 101 with any suitable processing. For instance, the alternating materials may be formed by epitaxial growth from over a surface of substrate 101; and/or may be deposited over the surface of substrate 101 utilizing chemical vapor deposition (CVD) and/or atomic layer deposition (ALD). In embodiments in which barrier material 106 is provided, such barrier material may be formed utilizing any suitable processing; including for example, one or both of CVD and ALD. In the shown embodiment, materials 102 and 104 are formed within a trench that extends into substrate 101. In other embodiments, materials 102 and 104 may be formed across a non-trenched upper surface of substrate 101, rather than within a trench. Although substrate 101 is shown to be homogeneous, in some embodiments there may be circuitry formed across or within substrate 101 prior to forming the alternating materials 102 and 104. For instance, some of the circuitry 61-69 of FIG. 1 may be provided over or within substrate 101 prior to forming the alternating materials 102 and 104. Referring to FIG. 4, materials 102 and 106 (FIG. 3) are selectively removed relative to material 104 to leave a stack of vertically-spaced plates 108 of material 104. The plates are spaced from one another by gaps 103. The materials 102 and 106 may be removed by forming openings (not shown) extending through materials 102, 104 and 106, and then providing etchant within such openings; with the etchant being selective for materials 102 and 106 relative to material 104. Although material 106 is shown to have been removed, in other embodiments only material 102 may be removed; and accordingly materials 104 and 106 may remain at the processing stage of FIG. 4. The selective removal of material 102 relative to material 104 may comprise any suitable processing. In some embodiments, material 102 comprises germanium and material 104 consists of silicon; and the removal of material 102 utilizes one or more of hydrofluoric acid, nitric acid, acetic acid, hydrogen peroxide, ammonium hydroxide, ozone and HCl. In some embodiments, material 102 comprises p-type doped silicon, and material 104 comprises n-type doped silicon, and the selective removal of material 102 utilizes tetramethylammonium hydroxide. The shown embodiment has four vertically-spaced plates 108. The number of vertically-spaced plates may be selected to achieve a desired number of wires along a column of a memory array of the type shown in FIG. 1; and accordingly may be a number greater than four. An advantage of forming the alternating materials within the trench is that the sidewalls of the trench may assist in supporting the vertically-spaced plates 108. In the shown embodiment, the vertically-spaced plates 108 are supported only by the sidewalls of the trench that the plates have been formed in. In other embodiments, spacers (not shown) may be provided between the plates to support the plates. FIG. 5 shows a three-dimensional view of a portion of FIG. 4 corresponding to the vertically-spaced plates 108 in isolation from substrate 101. The three-dimensional view of FIG. 5 utilizes the same coordinate system discussed above with reference to FIG. 1, and accordingly coordinate axes 3, 5 and 7 are shown in the upper left-hand corner of FIG. 5. The remaining FIGS. 6-30 will be shown in isolation from substrate 101 in order to simplify the drawings, but it is to be understood that the various structures shown in FIGS. 
6-30 would be supported by the semiconductor substrate 101. In embodiments in which material 102 (FIG. 3) comprises an electrically insulative material, the processing of FIG. 4 may be omitted, so that the insulative material remains between the vertical plates at subsequent processing steps. Accordingly, in some embodiments, the structure of FIG. 5 will comprise sheets of insulative material 102 within the regions shown as gaps 103 in the figure.

Referring to FIG. 6, a patterned mask 110 is formed over the vertically-stacked plates 108. Mask 110 comprises a plurality of features 112 which are spaced from one another by gaps 114. The features 112 may be formed from any suitable material; including, for example, a hard mask material (for instance, metal nitride, silicon nitride, etc.). If the features 112 comprise a hard mask material, such material may be formed into the shown pattern by initially forming a uniform layer of the material across the upper surface of the top plate 108; then forming photolithographically-patterned photoresist over the hard mask material, transferring a pattern from the photoresist into the hard mask material, and subsequently removing the photoresist to leave the shown construction. In other embodiments, the photoresist may remain over the hard mask material at the processing stage of FIG. 6.

Referring to FIG. 7, gaps 114 are extended through plates 108 (FIG. 6) with a suitable etch; such as, for example, a reactive ion etch. Such etching subdivides the plates into a plurality of planar pieces 116. Spacers, lattices, or other supporting structures (not shown) may be provided between and under the plates at various locations, prior to the subdivision of the plates, to support the various planar pieces. In embodiments in which the material 102 of FIG. 3 is not removed (i.e., in the embodiments discussed above with reference to FIGS. 3-5 in which insulative material sheets of material 102 remain in the locations shown as gaps 103), the etching of FIG. 7 will be conducted through a stack comprising alternating materials 102 and 104. Such etching may be considered to subdivide the plates 108 (FIG. 6) into planar pieces 116, and to subdivide the insulative material 102 (FIG. 3) into insulative spacers between the planar pieces (the insulative spacers would be in the locations of gaps 103 in FIG. 7).

Referring to FIG. 8, mask 110 (FIG. 7) is removed, and replaced with a new mask 118. Mask 118 comprises a plurality of features 120 which are spaced from one another by gaps 122. Gaps 122 are wider than the gaps 114 (FIG. 6) that had been defined by the previous mask 110 (FIG. 6). Mask 118 may be formed of any suitable material or combination of materials; including, for example, one or both of a hard mask material and photoresist. After mask 118 is provided, dopant is implanted through gaps 122 to form implant regions 124 along sidewalls of the semiconductor material 104 of the planar pieces 116. In some embodiments, the dopant may be n-type. In such embodiments the implant regions 124 may comprise an "n" dopant level or an "n+" dopant level, and in either event will be conductively-doped regions. After the implant regions 124 are formed, the mask 118 may be removed to leave the construction shown in FIG. 9.

Referring to FIG. 10, insulative material 126 is formed between the planar pieces 116. The insulative material 126 may comprise any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.
Insulative material 126 may be formed with any suitable processing, including, for example, one or both of CVD and ALD. In embodiments in which material 102 (FIG. 3) is insulative material (such as silicon dioxide), and in which the processing of FIG. 4 is omitted so that material 102 remains between the planar pieces 116 at the processing stage of FIG. 8 (instead of the gaps 103), the insulative material between the planar pieces may be material 102 instead of material 126. The insulative material 126 forms spacers 128 between the planar pieces 116, and also forms a spacer 128 over the uppermost planar piece 116. There may also be insulative material along the bottom of the lowermost planar piece 116, although such is not shown in FIG. 10. The shown construction comprises stacks of alternating materials 104 and 126; or, alternatively considered, comprises stacks of alternating planar pieces 116 and spacers 128. The gaps 114 remain between the planar pieces 116 after formation of insulative material 126. If the formation of the insulative material fills or partially fills such gaps, additional masking and etching may be conducted to re-establish the gaps and form the construction of FIG. 10.

After insulative material 126 is formed, construction 100 is subjected to salicidation conditions to form silicide 130 along outer edges of the doped regions 124. The silicide 130 forms electrically conductive tiers 131 along the sidewall edges of semiconductor material 104, with such tiers being analogous to those described in FIG. 1 as tiers 12, 14, 16, and 18. The tiers 131 are linear, and extend primarily along the horizontal axis 5 of the three-dimensional coordinate system shown in the figures. The silicide 130 may comprise any suitable composition, and may, for example, comprise, consist essentially of, or consist of one or more of cobalt silicide, nickel silicide, titanium silicide, etc. The salicidation reaction is one of many methods that may be used to form conductive runners along the sidewall edges of the planar pieces 116. Another example method is to laterally recess such sidewall edges to form gaps over the underlying spacers 128, and to then fill such gaps with one or more electrically conductive materials (for instance, one or more of various metals, metal-containing compositions, and conductively-doped semiconductor materials).

Referring to FIG. 11, a patterned mask 132 (shown in dashed line) is formed over the stack of materials 104/126, and is used to pattern a fill within gaps 114 so that the gaps become filled with insulative material 134. Insulative material 134 may have any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide. The insulative material may be deposited within the gaps 114 and over the mask 132, and then chemical-mechanical polishing (CMP), or other suitable processing, may be used to remove the insulative material from over the mask. In subsequent processing, the mask may be removed to leave the construction of FIG. 12. Such construction has rails 135 of material 134 extending above the uppermost surfaces of the stacks of materials 104/126.

Referring to FIG. 13, masking material 136 is formed over the stacked materials 104/126 and patterned into a mask. The patterned mask has segments 138 extending along rails 135, and has segments 140 extending orthogonally to the segments 138. The segments 138 and 140 may be formed sequentially relative to one another in some embodiments.
The masking material 136 may be a hard mask material (for instance, metal nitride, silicon nitride, etc.). The material 136 may be formed in the shown pattern by initially forming a uniform layer of hard mask material across the stacked materials 104/126; then forming photolithographically-patterned photoresist over the hard mask material, transferring a pattern from the photoresist into the hard mask material, and subsequently removing the photoresist to leave the shown construction. In other embodiments, the photoresist may remain over the hard mask at the processing stage of FIG. 13.

Referring to FIG. 14, patterned material 136 is used as a mask during an etch into stacked materials 104/126. Such etch may be any suitable etch; such as, for example, a reactive ion etch. The etching through material 104 of the planar pieces 116 (FIG. 13) forms lines 142 of the semiconductor material 104, with such lines extending orthogonally to tiers 131; and specifically extending along the axis 3 of the three-dimensional coordinate system shown in the figures. The lines 142 will ultimately be patterned to form wires analogous to those described in FIG. 1 as wires 24-39.

Referring to FIG. 15, masking material 136 (FIG. 14) is removed, and the remaining structure is covered with an insulative material 144. Such insulative material may, for example, comprise, consist essentially of, or consist of silicon dioxide. In some embodiments, at least some of the masking material 136 may not be removed prior to forming insulative material 144. For instance, the segments 138 (FIG. 14) of the masking material that are along rails 135 (FIG. 14) may remain at the processing stage of FIG. 15 in some embodiments.

FIG. 16 shows the arrangement of the various conductive and semiconductive components at the processing stage of FIG. 15, in isolation from the insulative components of FIG. 15, to assist the reader in visualizing the layout of various structures that are hidden from view in the diagram of FIG. 15.

Referring to FIG. 17, masking material 146 (shown in phantom view) is formed over the insulative material 144. The masking material is patterned into a plurality of features 148 which are spaced from one another by gaps 150. Masking material 146 may comprise any suitable composition; including, for example, a hard mask composition.

Referring to FIG. 18, gaps 150 are extended through insulative material 144 with one or more suitable etches, and then masking material 146 (FIG. 17) is removed.

Referring to FIGS. 19 and 20, gate dielectric 46 (FIG. 20) and gate material 48 are formed within gaps 150 (FIG. 18) and over the stacked materials 104/126. The gate material may then be subjected to planarization, for example CMP, to form the shown planarized surface 151 extending across materials 48, 134 and 144. The gate dielectric 46 and gate material 48 can be identical to the gate dielectric and gate material discussed above with reference to FIGS. 1 and 2. Although the gate dielectric is shown to be homogeneous, in other embodiments (not shown) the gate dielectric may comprise two or more different materials. Also, although only one gate material is shown, in other embodiments (not shown) multiple gate materials may be utilized.

FIG. 20 shows that the lines formed from the alternating materials 104 and 126 (such lines extend in and out of the page relative to the cross-sectional view of FIG. 20) create vertically-extending stacks (with a pair of such stacks being shown in FIG. 20, and being labeled as stacks 145 and 147).
Each stack has a pair of opposing sidewalls (the opposing sidewalls of stack 145 are labeled 141 and 143). The gate dielectric 46 extends along and directly against the insulative material 126 and the semiconductor material 104 of such sidewalls; and the gate material 48 extends along the sidewalls, and is spaced from the sidewalls by the gate dielectric.

Referring to FIGS. 21 and 22, patterned masking material 152 is formed over planarized surface 151. The patterned masking material has openings 154-159 extending therethrough. The patterned masking material may comprise a hard mask composition, and may be patterned utilizing processing analogous to that discussed above with reference to FIG. 6 for patterning the material of mask 110. The patterned masking material is utilized during etching through materials 104, 126 and 144. Such etching extends openings 154-159 through materials 104, 126 and 144 as shown in FIG. 22. Once openings 154-159 penetrate through the various lines of semiconductor material 104, the lines are broken into segments; with each segment corresponding to a wire 160. The wires 160 are analogous to the wires 24-39 discussed above with reference to FIGS. 1 and 2. Each of the wires 160 has a first end joined to the tiers comprising silicide 130, and a second end in opposing relation to the first end. The second ends of the wires are along the openings 154-159. Some of the first ends of the wires 160 are labeled 161 in the cross-sectional view of FIG. 22, and some of the second ends of the wires 160 are labeled 163 in FIG. 22. The wires 160 also have intermediate regions between the first and second ends, with such intermediate regions extending through gate dielectric 46 and gate material 48; analogously to the description provided above with reference to FIGS. 1 and 2. Some of the intermediate regions are labeled 165 in FIG. 22.

Analogously to the wires 24-39 discussed above with reference to FIGS. 1 and 2, the wires 160 may have the intermediate regions 165 doped to be channel regions of transistor devices (for example, provided with a threshold voltage dopant), and may have the ends 161 and 163 heavily doped to be source/drain regions. In some embodiments, the doping of the intermediate regions may occur during the initial formation of the semiconductor material in the stack of FIG. 3, and the doping of ends 161 may occur with the heavy doping at the processing stage of FIG. 8. In such embodiments, the doping of ends 163 may occur at the processing stage of FIG. 22 by implanting dopant into openings 154-159 to dope the portions of the wires 160 adjacent such openings. Alternatively, the doping of the ends 163 of wires 160 may occur at other processing stages, such as, for example, by out-diffusion of dopant from structures that are subsequently formed adjacent to the ends 163.

Referring to FIGS. 23 and 24, memory cell material 170 is formed within openings 154-159, and along the second ends 163 of wires 160. The memory cell material may be any composition suitable to form memory cell structures. For instance, if the memory cell structures are to be antifuses, the memory cell material 170 may be dielectric that is to be formed between a first electrode corresponding to an end 163 of a wire 160, and a second electrode that will be provided on an opposing side of the dielectric from the first electrode. Although one memory cell material is shown, in some applications there may be multiple memory cell materials formed within the openings.
For instance, the memory cell materials may correspond to a stack containing a thin layer of dielectric material sandwiched between a pair of conductive materials, so that the entire stack is provided as antifuse structures against the ends 163 of wires 160. In some embodiments, the memory cell material 170 may comprise phase change material, and may be suitable for forming PCRAM-type memory structures. In some embodiments, memory cell materials may be provided to comprise a non-magnetic layer sandwiched between a pair of magnetic layers, and may be suitable for forming MRAM-type memory structures. The memory cell material 170 forms a uniform lining within openings 154-159. Such may be accomplished with any suitable methodology, including, for example, one or more of ALD, CVD and physical vapor deposition (PVD). Although the memory cell material 170 is shown forming a uniform lining along the sidewalls of openings 154-159, in other embodiments the memory cell material may be selectively formed only along the exposed ends 163 of the wires 160. Such selective placement of the memory cell material may utilize any suitable methodology, including, for example, selective ALD, electroless plating and/or electrolytic plating.

Referring to FIGS. 25 and 26, openings 154-159 (FIGS. 23 and 24) are filled with electrically conductive material 180. The electrically conductive material 180 may comprise any suitable composition, and in some embodiments may comprise one or more of various metals (for instance, titanium, tungsten, cobalt, nickel, etc.), metal-containing compositions (for instance, metal nitrides, metal silicides, etc.), and conductively-doped semiconductor materials (for instance, conductively-doped silicon, conductively-doped germanium, etc.). Although a single homogeneous material 180 is shown filling the openings, in other embodiments (not shown) the openings may be filled with multiple materials. The one or more materials utilized to fill the openings may be formed by any suitable method, including, for example, one or more of CVD, ALD and PVD.

Referring to FIGS. 27 and 28, materials 152, 170 and 180 (FIGS. 25 and 26) are etched back to about the level of surface 151. Such etchback may be accomplished with CMP. The memory cell material 170 forms a plurality of tubes that extend vertically along the ends of wires 160; and the conductive material 180 forms electrically conductive cores within such tubes. The material 170 forms memory cell structures analogous to the memory cell structures 52 discussed above with reference to FIGS. 1 and 2, and the cores formed from conductive material 180 are vertical interconnects analogous to the bars 54, 56, 58 and 60 discussed above with reference to FIGS. 1 and 2.

FIG. 29 shows the arrangement of the various primary components at the processing stage of FIGS. 27 and 28, in isolation from some of the insulative components of FIGS. 27 and 28, to assist the reader in visualizing the layout of various structures that are hidden from view in the diagram of FIG. 27. Some of the features illustrated in FIG. 29 are shown in phantom view so that other features may be seen behind them. The phantom view is not utilized to indicate importance, or lack thereof, of various features, or to indicate that certain features are optional. Only some of the various repeating structures of FIG. 29 are labeled, in order to simplify the drawing. The embodiment of FIG. 29 is analogous to that of FIG. 1. The wires 160 of FIG. 29 are analogous to the wires 24-39 (FIG.
1), and, like the wires 24-39, form two-dimensional arrays containing rows and columns. The conductive lines of material 130 form tiers analogous to the tiers 12, 14, 16 and 18 of FIG. 1, and, like the tiers 12, 14, 16 and 18, the tiers of FIG. 29 interconnect rows of wires. The conductive material 180 of FIG. 29 forms vertically-extending electrical interconnects, or cell strings (specifically, cylindrical rods), analogous to the bars 54, 56, 58 and 60 of FIG. 1, and, like such bars, the vertically-extending electrical interconnects of FIG. 29 are along columns of the arrays of wires. The memory cell material 170 of FIG. 29 forms memory cell structures analogous to the structures 52 of FIG. 1. However, in the embodiment of FIG. 1 the memory cell structures 52 are formed of materials that are only at the ends of the wires, whereas in the embodiment of FIG. 29 the memory cell material 170 extends the full length of the vertical interconnects of material 180. The embodiment of FIG. 29 may be more cost-efficient to manufacture, and may be suitable in applications in which there will not be cross-talk through the memory cell material 170. In other applications, such as when there could be cross-talk between adjacent memory cells if the memory cell material were continuous between the adjacent memory cells, the embodiment of FIG. 1 may be more appropriate.

FIG. 29 shows that in some embodiments the cell strings corresponding to the vertically-extending electrical interconnects (i.e., the rods formed of material 180) may be shared by memory cells on opposing sides of the cell strings. Such sharing may enable high levels of integration to be achieved. Circuitry analogous to the circuitry 61-70 of FIG. 1 is not shown in FIG. 29, but such circuitry would be present. Various components of such circuitry may be in any desired location relative to the construction of FIG. 29; and accordingly may be below, above, or laterally adjacent the construction of FIG. 29.

As discussed previously, the one or more memory cell materials may be provided to form various types of memory cell structures suitable for storage of data. In some applications, the memory cell material 170 may correspond to a thin layer of dielectric material utilized to form antifuses between the wires 160 and the rods formed of material 180. Data may be stored by either blowing an antifuse (to break down the dielectric and form a conductive contact) or not blowing an antifuse. FIG. 30 shows the construction 100 of FIG. 28 in an application in which the memory cell material 170 consists of the thin dielectric material utilized for antifuses. The construction is shown after programming has been conducted to form some regions 200 of blown antifuses, while leaving other regions 202 where the antifuses are not blown. The blown antifuses may correspond to one type of data bit, while the not-blown antifuses correspond to a different type of data bit; thus the arrangement of blown and not-blown antifuses may store information. Such information may be later accessed by using different combinations of current through various gates, tiers and vertical columns of construction 100 to uniquely address the various memory cells of the construction.
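As a rough illustration of the antifuse storage scheme just described, the following Python sketch models one bit per address; the AntifuseArray class, its program/read names, and the (tier, column) addressing are hypothetical and only illustrate blown versus not-blown encoding:

# Illustrative antifuse bit store keyed by (tier, column) address.
# The addressing scheme and API are assumptions for illustration.

class AntifuseArray:
    def __init__(self):
        self.blown = set()  # addresses whose antifuse has been blown

    def program(self, tier, column):
        """Blow the antifuse at (tier, column); one-time, irreversible."""
        self.blown.add((tier, column))

    def read(self, tier, column):
        """A blown antifuse conducts (bit 1); intact dielectric does not (bit 0)."""
        return 1 if (tier, column) in self.blown else 0

array = AntifuseArray()
array.program(tier=2, column=5)
print(array.read(2, 5), array.read(0, 0))  # 1 0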
The embodiments discussed above may be utilized in electronic systems, such as, for example, computers, cars, airplanes, clocks, cellular phones, etc. FIG. 31 illustrates an embodiment of a computer system 400. Computer system 400 includes a monitor 401 or other communication output device, a keyboard 402 or other communication input device, and a motherboard 404. Motherboard 404 may carry a microprocessor 406 or other data processing unit, and at least one memory device 408. Memory device 408 may comprise an array of memory cells, and such array may be coupled with addressing circuitry for accessing individual memory cells in the array. Further, the memory cell array may be coupled to a read circuit for reading data from the memory cells. The addressing and read circuitry may be utilized for conveying information between memory device 408 and processor 406. Such is illustrated in the block diagram of the motherboard 404 shown in FIG. 32. In such block diagram, the addressing circuitry is illustrated as 410 and the read circuitry is illustrated as 412. Processor device 406 may correspond to a processor module, and associated memory utilized with the module may comprise various structures of the types described with reference to FIGS. 1-30. Memory device 408 may correspond to a memory module, and may comprise various structures of the types described with reference to FIGS. 1-30.

FIG. 33 illustrates a simplified block diagram of a high-level organization of an electronic system 700. System 700 may correspond to, for example, a computer system, a process control system, or any other system that employs a processor and associated memory. Electronic system 700 has functional elements, including a processor 702, a control unit 704, a memory device unit 706 and an input/output (I/O) device 708 (it is to be understood that the system may have a plurality of processors, control units, memory device units and/or I/O devices in various embodiments). Generally, electronic system 700 will have a native set of instructions that specify operations to be performed on data by the processor 702 and other interactions between the processor 702, the memory device unit 706 and the I/O device 708. The control unit 704 coordinates all operations of the processor 702, the memory device 706 and the I/O device 708 by continuously cycling through a set of operations that cause instructions to be fetched from the memory device 706 and executed. The memory device 706 may include various structures of the types described with reference to FIGS. 1-30.

FIG. 34 is a simplified block diagram of an electronic system 800. The system 800 includes a memory device 802 that has an array of memory cells 804, address decoder 806, row access circuitry 808, column access circuitry 810, read/write control circuitry 812 for controlling operations, and input/output circuitry 814. The memory device 802 further includes power circuitry 816, and sensors 820, such as current sensors for determining whether a memory cell is in a low-threshold conducting state or in a high-threshold non-conducting state. The illustrated power circuitry 816 includes power supply circuitry 880, circuitry 882 for providing a reference voltage, circuitry 884 for providing a first interconnection line (for instance, a wordline) with pulses, circuitry 886 for providing a second interconnection line (for instance, another wordline) with pulses, and circuitry 888 for providing a third interconnection line (for instance, a bitline) with pulses. The system 800 also includes a processor 822, or memory controller for memory accessing. The memory device 802 receives control signals from the processor 822 over wiring or metallization lines.
The memory device 802 is used to store data which is accessed via I/O lines. At least one of the processor 822 or memory device 802 may include various structures of the types described with reference to FIGS. 1-30. The various electronic systems may be fabricated in single-package processing units, or even on a single semiconductor chip, in order to reduce the communication time between the processor and the memory device(s). The electronic systems may be used in memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.
In an example, a memory array may include a plurality of first dielectric materials and a plurality of stacks, where each respective first dielectric material and each respective stack alternate, and where each respective stack comprises a first conductive material and a storage material. A second conductive material may pass through the plurality of first dielectric materials and the plurality of stacks. Each respective stack may further include a second dielectric material between the first conductive material and the second conductive material.
What is claimed is:

1. A memory array, comprising: a plurality of first dielectric materials and a plurality of stacks, wherein each respective first dielectric material and each respective stack alternate, and wherein each respective stack comprises a first conductive material and a storage material; and a second conductive material passing through the plurality of first dielectric materials and the plurality of stacks; wherein each respective stack further comprises a second dielectric material between the first conductive material and the second conductive material.

2. The memory array of claim 1, wherein the storage material is on only one side of the first conductive material; and the second conductive material is perpendicular to the storage material.

3. The memory array of any one of claims 1-2, further comprising a third dielectric material between the second conductive material and the plurality of stacks and between the second conductive material and the plurality of first dielectric materials.

4. The memory array of any one of claims 1-2, wherein each respective stack further comprises a third dielectric material between the first conductive material and the storage material.

5. The memory array of any one of claims 1-2, wherein the first conductive material and the storage material are at different levels within each respective stack.

6. The memory array of any one of claims 1-2, wherein the storage material is a self-selecting storage material.

7. The memory array of any one of claims 1-2, wherein the storage material comprises a chalcogenide material.

8. A memory array, comprising: a stack of memory cells; and a first conductive material; wherein each respective memory cell comprises: a different portion of the first conductive material; a storage material; a second conductive material on the storage material; and a dielectric material on the storage material and between the second conductive material and the first conductive material.

9. The memory array of claim 8, wherein the memory cells are separated from each other by an additional dielectric material.

10. The memory array of claim 8, wherein the dielectric material is a first dielectric material, and each respective memory cell further comprises a different portion of a second dielectric material between the first conductive material and the first dielectric material and between the first conductive material and the storage material.

11. The memory array of claim 10, wherein the first dielectric material is in direct physical contact with the second conductive material and the second dielectric material; and the second dielectric material is perpendicular to the first dielectric material.

12. The memory array of any one of claims 8, 10, and 11, wherein each respective memory cell further comprises an additional dielectric material between the second conductive material and the storage material.

13. The memory array of any one of claims 8, 10, and 11, wherein the first conductive material is perpendicular to the second conductive material and the storage material of each respective memory cell.
14. A method of forming a memory array, comprising: forming a plurality of stacks and a plurality of first dielectric materials so that each respective stack and each respective first dielectric material alternate, wherein forming each respective stack comprises forming a storage material, a first conductive material on the storage material, and a second dielectric material on the storage material adjacent to the first conductive material; and forming a second conductive material through the plurality of stacks and the plurality of first dielectric materials so that the second dielectric material in each respective stack is between the first conductive material and the second conductive material.

15. The method of claim 14, further comprising forming a third dielectric material through the plurality of stacks and the plurality of first dielectric materials before forming the second conductive material, wherein forming the second conductive material comprises forming the second conductive material adjacent to the third dielectric material so that the third dielectric material is between the second conductive material and the plurality of stacks and between the second conductive material and the plurality of first dielectric materials.

16. The method of claim 14, wherein forming the second dielectric material comprises recessing the first conductive material before forming the second conductive material to form an opening in the first conductive material and forming the second dielectric material in the opening.

17. The method of claim 14, wherein forming each respective stack further comprises forming a third dielectric material between the storage material and the second conductive material and between the storage material and

18. The method of any one of claims 14-17, wherein forming the storage material comprises flat depositing the storage material.

19. The method of any one of claims 14-17, wherein forming the storage material comprises forming the storage material using physical vapor deposition.

20. The method of any one of claims 14-17, wherein forming the storage material comprises forming the storage material horizontally.

21. A method of forming a memory array, comprising: forming a plurality of stacks and a plurality of first dielectric materials so that each respective stack and each respective first dielectric material alternate, wherein each respective stack comprises a first conductive material and a storage material; forming a first opening through the plurality of stacks and the plurality of first dielectric materials; removing a portion of the first conductive material in each respective stack to form a second opening in each respective stack; forming a second dielectric material in the second opening in each respective stack; and forming a second conductive material in the first opening adjacent to the second dielectric material in each respective stack and the storage material in each respective stack.
THREE DIMENSIONAL MEMORY ARRAYS

TECHNICAL FIELD

[0001] The present disclosure relates generally to memory, and, more particularly, to three dimensional memory arrays.

BACKGROUND

[0002] Memories, such as memory devices, may typically be provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory, including random-access memory (RAM), read only memory (ROM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), resistance variable memory, and flash memory, among others. Types of resistance variable memory may include phase-change-material (PCM) memory, programmable-conductor memory, and resistive random-access memory (RRAM), among others.

[0003] Memory devices may be utilized as volatile and non-volatile memory for a wide range of electronic applications in need of high memory densities, high reliability, and low power consumption. Non-volatile memory may be used in, for example, personal computers, portable memory sticks, solid state drives (SSDs), digital cameras, cellular telephones, portable music players such as MP3 players, and movie players, among other electronic devices.

[0004] Resistance variable memory devices can include resistive memory cells that can store data based on the resistance state of a storage element (e.g., a resistive memory element having a variable resistance). As such, resistive memory cells may be programmed to store data corresponding to a target data state by varying the resistance level of the resistive memory element. Resistive memory cells may be programmed to a target data state (e.g., corresponding to a particular resistance state) by applying sources of an electrical field or energy, such as positive or negative electrical pulses (e.g., positive or negative voltage or current pulses) to the cells (e.g., to the resistive memory element of the cells) for a particular duration. A state of a resistive memory cell may be determined by sensing current through the cell responsive to an applied interrogation voltage. The sensed current, which varies based on the resistance level of the cell, can indicate the state of the cell.

[0005] One of a number of data states (e.g., resistance states) may be set for a resistive memory cell. For example, a single level memory cell (SLC) may be programmed to a targeted one of two different data states, which may be represented by the binary units 1 or 0 and can depend on whether the cell is programmed to a resistance above or below a particular level. As an additional example, some resistive memory cells may be programmed to a targeted one of more than two data states (e.g., 1111, 0111, 0011, 1011, 1001, 0001, 0101, 1101, 1100, 0100, 0000, 1000, 1010, 0010, 0110, and 1110). Such cells may be referred to as multi-state memory cells, multiunit cells, or multilevel cells (MLCs). MLCs can provide higher density memories without increasing the number of memory cells since each cell can represent more than one digit (e.g., more than one bit).
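The SLC/MLC distinction can be made concrete with a small sensing sketch. In the Python below, the threshold values and units are invented for illustration; the point is only that one threshold yields two states while several thresholds yield multilevel states:

# Illustrative mapping from sensed current to a data state.
# Threshold values and units are invented for illustration only.

import bisect

def sense_state(current_ua, thresholds_ua):
    """Return the index of the data state: 0..len(thresholds_ua).
    An SLC uses one threshold; an MLC uses several."""
    return bisect.bisect_left(thresholds_ua, current_ua)

# SLC: one threshold, two states (0 or 1).
print(sense_state(3.0, [5.0]))             # 0
print(sense_state(8.0, [5.0]))             # 1

# 2-bit MLC: three thresholds, four states (00, 01, 10, 11).
print(sense_state(8.0, [2.0, 6.0, 10.0]))  # 2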
BRIEF DESCRIPTION OF THE DRAWINGS

[0006] Figures 1A-1D illustrate cross-sectional views of processing steps associated with forming a three dimensional memory array, in accordance with an embodiment of the present disclosure.

[0007] Figures 1E-1G illustrate various views of a processing step associated with forming a three dimensional memory array, in accordance with an embodiment of the present disclosure.

[0008] Figure 2 illustrates a three dimensional memory array in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION

[0009] The present disclosure includes three dimensional memory arrays and methods of processing the same. A number of embodiments include a memory array that may include a plurality of first dielectric materials and a plurality of stacks, where each respective first dielectric material and each respective stack alternate, and where each respective stack comprises a first conductive material and a storage material. A second conductive material may pass through the plurality of first dielectric materials and the plurality of stacks. Each respective stack may further include a second dielectric material between the first conductive material and the second conductive material.

[0010] In examples of previous memory arrays, a storage material may be formed in a (e.g., vertical) opening passing through a stack of alternating (e.g., horizontal) first conductive materials and dielectric materials. A second conductor may be formed in the opening containing the storage material. Memory cells of an array may include different portions of the first conductors, different portions of the storage material, and different portions of the second conductor, such that the array may include (e.g., vertical) stacks of memory cells to form a three-dimensional array. Utilizing such stacks to form a three dimensional memory array may increase the number of memory cells in the array, which may provide increased density and/or increased storage capacity.

[0011] However, it may be difficult to form a uniform thickness of the storage material in the opening (e.g., using standard techniques, such as physical vapor deposition (PVD)). Non-uniformities in the thickness of the storage material may, for example, result in non-uniformities in the electrical properties of the storage material, and thus of the memory cells of the array.

[0012] Embodiments of the present disclosure provide benefits, such as allowing for three dimensional memory arrays with storage material having a more uniform thickness, and thus more uniform electrical properties, than storage material formed in openings in previous memory arrays. For example, embodiments may allow for the formation of the storage material (e.g., having a relatively uniform thickness) using standard techniques, such as PVD, while still achieving increased density and/or storage capacity.

[0013] In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific examples. In the drawings, like numerals describe substantially similar components throughout the several views. Other examples may be utilized and structural and electrical changes may be made without departing from the scope of the present disclosure.
The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined only by the appended claims and equivalents thereof.

[0014] As used herein, "a" or "an" may refer to one or more of something, and "a plurality of" can refer to more than one of such things. For example, a memory cell can refer to one or more memory cells, and a plurality of memory cells can refer to two or more memory cells.

[0015] The term semiconductor can refer to, for example, a layer of material, a wafer, or a substrate, and includes any base semiconductor structure. "Semiconductor" is to be understood as including silicon-on-sapphire (SOS) technology, silicon-on-insulator (SOI) technology, thin-film-transistor (TFT) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor structure, as well as other semiconductor structures. Furthermore, when reference is made to a semiconductor in the following description, previous process steps may have been utilized to form regions/junctions in the base semiconductor structure, and the term semiconductor can include the underlying layers containing such regions/junctions.

[0016] The term "vertical" may be defined, for example, as a direction that is perpendicular to a base structure, such as a surface of an integrated circuit die. It should be recognized the term vertical accounts for variations from "exactly" vertical due to routine manufacturing, measuring, and/or assembly variations and that one of ordinary skill in the art would know what is meant by the term vertical. The term "horizontal" may be defined, for example, as a direction that is parallel to the base structure. It should be recognized the term horizontal accounts for variations from "exactly" horizontal due to routine manufacturing, measuring, and/or assembly variations and that one of ordinary skill in the art would know what is meant by the term horizontal. It should be recognized the terms perpendicular and parallel respectively account for variations from "exactly" perpendicular and "exactly" parallel due to routine manufacturing, measuring, and/or assembly variations and that one of ordinary skill in the art would know what is meant by the terms perpendicular and parallel.

[0017] To meet the demand for higher capacity memories, designers continue to strive to increase memory density, such as the number of memory cells in a given area of a base structure (e.g., a base semiconductor, such as a semiconductor substrate, a silicon substrate, etc.), such as a die (e.g., a chip). One way to increase memory density is to form stacked memory arrays (e.g., often referred to as three dimensional memory arrays). For example, a stacked memory array may include memory cells stacked in a direction perpendicular to the base structure to increase the number of memory cells. There has been substantial interest in three-dimensional cross-point memory. In some examples, three-dimensional cross-point memory cells may utilize a resistive material, such as a phase-change material (e.g., chalcogenide), as a multi-state material suitable for storing memory bits.

[0018] Figures 1A-1E are cross-sectional views of a portion of a stacked memory array 100 (e.g., three dimensional memory array), during various stages of processing (e.g., fabrication), in accordance with a number of embodiments of the present disclosure.
In Figure 1A, a dielectric material (e.g., a dielectric 102) may be formed over wiring (e.g., metallization levels) of an apparatus, such as a memory device. The wiring may be over decoder circuitry that may be formed on and/or in a semiconductor (not shown in Figure 1A). Dielectric 102 may be over and may electrically isolate memory array 100 from the wiring, decoder, and semiconductor. For example, dielectric 102 may be over and may electrically isolate memory array 100 from complementary-metal-oxide-semiconductor (CMOS) and metallization levels. In some examples, dielectric 102 may act as an etch-stop. Herein a dielectric material may be referred to as a dielectric.

[0019] A (e.g., horizontal) dielectric 104 may be formed (e.g., flat deposited) adjacent to (e.g., over), such as in direct physical contact with, dielectric 102. Dielectrics 102 and 104 may be oxide, such as silicon oxide, aluminum oxide, hafnium oxide, etc., or nitride, such as silicon nitride.

[0020] Herein when a first element is adjacent to a second element, the first element may be over (e.g., above), below, or lateral to the second element and may be in direct physical contact with the second element with no intervening elements or may be separated from the second element by one or more intervening elements. When a first element is over a second element, the first element may be in direct physical contact with the second element or may be separated from the second element by one or more intervening elements.

[0021] A (e.g., horizontal) storage material 106 may be formed (e.g., flat deposited) over (e.g., on) dielectric 104, as shown in Figure 1A. In some examples, storage material 106 may be formed using PVD, chemical vapor deposition (CVD), or atomic layer deposition (ALD). Storage material 106 may be about ten (10) nanometers thick, for example. Flat depositing storage material 106 (e.g., horizontally) may, for example, mitigate (e.g., eliminate) the (e.g., unacceptable) non-uniformities in the thickness of the storage material that may otherwise occur when a storage material is formed (e.g., vertically) in an opening.

[0022] Storage material 106 may include a chalcogenide material, such as a chalcogenide alloy and/or glass, that may be a self-selecting storage material (e.g., that can serve as both a select device and a storage element). Storage material 106 (e.g., the chalcogenide material) may be responsive to an applied voltage, such as a program pulse, applied thereto. For an applied voltage that is less than a threshold voltage, storage material 106 may remain in an "off" state (e.g., an electrically nonconductive state). Alternatively, responsive to an applied voltage that is greater than the threshold voltage, storage material 106 may enter an "on" state (e.g., an electrically conductive state). Further, the threshold voltage of storage material 106 in a given polarity may change based on the polarity (e.g., positive or negative) of the applied voltage. For example, the threshold voltage may change based on whether the program pulse is positive or negative.
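The threshold behavior described in this paragraph can be summarized as a behavioral sketch. The Python below is not a physical model of the chalcogenide; the voltage values and the polarity-dependent threshold pair are assumptions chosen for illustration:

# Behavioral sketch of a self-selecting storage element: "off" below its
# threshold voltage, "on" above it, with a polarity-dependent threshold.
# Threshold values are assumptions for illustration.

def cell_response(applied_v, vth_positive=1.2, vth_negative=1.6):
    """Return 'on' if |applied_v| exceeds the threshold for its polarity."""
    vth = vth_positive if applied_v >= 0 else vth_negative
    return "on" if abs(applied_v) > vth else "off"

print(cell_response(1.0))   # off: below the positive-polarity threshold
print(cell_response(1.5))   # on: above the positive-polarity threshold
print(cell_response(-1.5))  # off: below the (larger) negative-polarity threshold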
[0023] Examples of a chalcogenide material suitable for storage material 106 may include indium(In)-antimony(Sb)-tellurium(Te) (IST) materials, such as In2Sb2Te5, In1Sb2Te4, In1Sb4Te7, etc., and germanium(Ge)-antimony(Sb)-tellurium(Te) (GST) materials, such as Ge8Sb5Te8, Ge2Sb2Te5, Ge1Sb4Te7, Ge4Sb4Te7, etc., among other chalcogenide materials, including, for instance, alloys that do not change phase during the operation (e.g., selenium-based chalcogenide alloys). Further, the chalcogenide material may include minor concentrations of other dopant materials. The hyphenated chemical composition notation, as used herein, indicates the elements included in a particular mixture or compound, and is intended to represent all stoichiometries involving the indicated elements.

[0024] As shown in Figure 1A, a (e.g., horizontal) dielectric 108, such as aluminum oxide, hafnium oxide, etc., may be formed (e.g., flat deposited) over storage material 106, such as by CVD or ALD. In some examples, dielectric 108 may be about 0.1 nanometer to about one (1) nanometer thick.

[0025] A (e.g., horizontal) conductive material (e.g., a conductor 110), such as an electrode, may be formed (e.g., flat deposited) over dielectric 108, and a (e.g., horizontal) dielectric 114, such as an oxide or nitride, may be formed (e.g., flat deposited) over conductor 110. For example, a dielectric 108 may act as a barrier, such as a diffusion barrier, between a conductor 110 and storage material 106. Herein a conductive material may be referred to as a conductor.

[0026] In some examples, memory array 100 may include a stack of alternating (e.g., horizontal) stacks (e.g., tiers) 116 and dielectrics 114 between dielectric 104 and a (e.g., horizontal) dielectric 120. For example, each respective stack 116 and each respective dielectric 114 may alternate, where each respective stack 116 may include, for example, storage material 106, dielectric 108 over storage material 106, and conductor 110 over dielectric 108. Dielectric 120 may be over an uppermost stack 116. Dielectric 108 may be flat deposited over storage material 106, and conductor 110 may be flat deposited over dielectric 108 to form a stack 116, for example.

[0027] In an embodiment, storage material 106 may be formed over dielectric 104 or dielectric 114, as shown in Figure 1A. For example, a stack 116 may be at each of a plurality of different levels in memory array 100. The stacks 116 may be separated from each other by a dielectric 114, as shown in Figure 1A.

[0028] In some examples, the order of the formation of the storage material 106 and the conductor 110 may be inverted. For example, conductor 110 may be formed either over dielectric 104 or a dielectric 114, dielectric 108 may be formed over conductor 110, and storage material 106 may be formed over dielectric 108, and thus a dielectric 114 or dielectric 120 may be formed over storage material 106. As such, a stack 116 may, for example, include a conductor 110, a dielectric 108 over conductor 110, and storage material 106 over dielectric 108.
For example, forming a stack 116 may include forming storage material 106, a dielectric 108, and a conductor 110 respectively at different levels within the stack 116, and thus at different levels within the array 100.

[0029] As shown in Figure 1B, openings 124 may be formed through dielectric 120, through alternating stacks 116 and dielectrics 114, and through dielectric 104, stopping on or in dielectric 102. For example, dielectric 120 may be patterned to form openings 124 through dielectric 120, through alternating stacks 116 and dielectrics 114, and through dielectric 104. For example, a mask (not shown), such as imaging resist (e.g., photo-resist), may be formed over dielectric 120 and patterned to expose regions of dielectric 120. The exposed regions of dielectric 120 and portions of alternating stacks 116 and dielectrics 114 and portions of dielectric 104 under the exposed regions of dielectric 120 may be subsequently removed, such as by dry or wet etching, to form openings 124 that may terminate on or in dielectric 102.

[0030] Openings 124 may expose portions of dielectric 120, portions of dielectrics 114, portions of stacks 116 (e.g., portions of storage materials 106, dielectrics 108, and conductors 110), and portions of dielectric 104. For example, the exposed portions of dielectric 120, dielectrics 114, stacks 116, and dielectric 104 may be coplanar and contiguous and may form sides (e.g., sidewalls) 128 of openings 124. In an example, an exposed portion of a dielectric 120, a dielectric 114, a storage material 106, a dielectric 108, a conductor 110, and a dielectric 104 may form a bounding surface, such as a side, of the portion of the opening 124 passing through that dielectric 120, dielectric 114, storage material 106, dielectric 108, conductor 110, and dielectric 104. In some examples, openings 124 may have circular, square, rectangular, polygonal, or oval cross-sections.

[0031] As shown in Figure 1C, a portion of the conductor 110 in each of the respective stacks 116, and thus of each of the respective conductors 110, may be removed so that an exposed portion 130 of the conductor 110 in each of the stacks 116 may be recessed relative to the exposed portion of the storage material 106 and the exposed portion of dielectric 108 in each respective stack 116. For example, the portion 130 of a respective conductor 110 may be recessed relative to the side 128 of an opening 124, and thus the exposed portions of dielectrics 104, 114, and 120.

[0032] Recessing the portion 130 of a respective conductor 110 may form an opening (e.g., a recess) 134 that may extend from the side 128, and thus an exposed portion of a storage material 106, an exposed portion of a dielectric 108, an exposed portion of a dielectric 114, and an exposed portion of a dielectric 120, to the portion 130 of the conductor 110. For example, the openings 134 may be formed in the sides 128 of openings 124. The depth d of an opening 134 from a side 128 to a portion 130 illustrated in Figure 1C may be about 10 to about 30 nanometers, for example. Note that the portion 130 of a conductor 110 may form a bounding surface, such as a side, of a respective opening 134. In some examples, openings 134 may be formed using an isotropic etch selective to conductors 110.
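Because the recess depth d ultimately sets the effective memory cell length (as noted later in this description), a simple etch-time budget can be estimated from an assumed lateral etch rate. The numbers in the Python sketch below are invented example values, not process data from the disclosure:

# Back-of-the-envelope etch-time budget for the lateral recess of Figure 1C.
# The lateral etch rate is an invented example value, not process data.

target_depth_nm = 20.0        # within the ~10-30 nm range described above
lateral_etch_rate_nm_s = 0.5  # assumed selective isotropic etch rate

etch_time_s = target_depth_nm / lateral_etch_rate_nm_s
print(f"etch time of {etch_time_s:.0f} s for a {target_depth_nm:.0f} nm recess")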
[0033] As shown in Figure 1D, a dielectric 138, such as an oxide or a nitride, may be formed in each of the openings 134 adjacent to (e.g., in direct physical contact with) a respective portion 130 of each respective conductor 110. For example, a dielectric 138 may replace the removed portion of a respective conductor 110. In some examples, dielectric 138 may be formed in openings 124 and may be subsequently removed, such as by etching, until an exposed portion of the dielectric 138 in an opening 124 is coplanar (e.g., flush) with the side 128 of the opening 124, and thus the exposed portions of storage materials 106, dielectrics 108, dielectric 104, dielectrics 114, and dielectric 120.

[0034] In some examples, a dielectric, such as a dielectric similar to (e.g., the same as) a dielectric 108, may be formed in an opening 134 adjacent to a portion 130 of a conductor 110 (not shown). A dielectric 138 may then be formed in the opening 134 adjacent to the dielectric so that the dielectric is between the portion 130 of the conductor 110 and the dielectric 138.

[0035] The exposed portions of dielectrics 138, such as an exposed portion 144 of a dielectric 138, storage materials 106, such as an exposed portion 148 of a storage material 106, dielectrics 108, dielectric 104, dielectrics 114, and dielectric 120 may be coplanar and contiguous and may form the sides 128 of openings 124. For example, a side 128 may be a surface comprising coplanar and contiguous portions of dielectrics 138, storage materials 106, dielectrics 108, dielectric 104, dielectrics 114, and dielectric 120. Note that an exposed portion 144 of a dielectric 138 may form a bounding surface of a portion of the opening 124 passing through that dielectric 138.

[0036] A dielectric 138 in a stack (e.g., each stack) 116 may extend from a portion 130 of the conductor 110 of that stack to the exposed portion of the dielectric 108 and the exposed portion 148 of the storage material 106 of that stack 116. For example, a dielectric 138 (e.g., each dielectric 138) may extend from a portion 130 of a respective conductor 110 to the exposed portions of storage materials 106, dielectrics 108, dielectric 104, dielectrics 114, and dielectric 120.

[0037] A (e.g., vertical) dielectric 150, such as a dielectric liner, may be formed in openings 124 adjacent to the sides 128 of those openings, as shown in Figure 1E. For example, openings 124 may be lined with dielectric 150. Dielectric 150 may be formed adjacent to the exposed portions of dielectric 104, dielectrics 108, dielectrics 114, dielectric 120, dielectrics 138, such as the exposed portion 144 of a respective dielectric 138, and storage materials 106, such as the exposed portion 148 of a respective storage material 106. In some examples, dielectric 150 may be similar to (e.g., the same as) dielectric 108, as described above.

[0038] Figure 1F illustrates a cross-sectional view taken along the line 1F-1F in Figure 1E, and Figure 1G illustrates a cross-sectional view taken along the line 1G-1G in Figure 1E. Figures 1E and 1F show, for example, a dielectric 150 adjacent to (e.g., in direct physical contact with) a previously exposed portion 144 (e.g., exposed in Figure 1D) of a respective dielectric 138. Figures 1E and 1F further show a dielectric 138 adjacent to a portion 130 of a conductor 110 and between the portion 130 and dielectric 150. Figure 1G and Figure 1E show, for example, a dielectric 150 adjacent to a previously exposed portion 148 (e.g., exposed in Figure 1D) of a storage material 106.

[0039] A (e.g., vertical) conductor 152 (e.g., an electrode), such as a conductive pillar, may be formed in the openings containing (e.g., lined with) dielectric 150.
For example, a conductor 152 may be formed adjacent to dielectric 150, as shown in Figures 1E-1G. In some examples, only a dielectric 150 and a conductor 152, or only a conductor 152, may be formed in an opening 124. Openings 124 might not, for example, include (e.g., might be devoid of any) storage and/or switching materials, such as chalcogenide materials. For example, there might not be any storage and/or switching materials between side 128 and conductor 152. A conductor 152 may completely fill an opening 124 lined with a dielectric 150, for example. As previously described, it may be difficult to form storage and/or switching materials in an opening, such as an opening 124, without having non-uniformities in the thicknesses of the storage and/or switching materials.

[0040] Dielectric 150 and conductor 152 may, for example, be perpendicular to stacks 116, and thus to the conductor 110, dielectric 108, dielectric 138, and storage material 106 of each respective stack 116, to dielectrics 104, 114, and 120, and to a base structure. For example, dielectric 150 and/or conductor 152 may pass through the stack of alternating dielectrics 114 and stacks 116. Conductor 152 may be adjacent to dielectric 150 such that dielectric 150 is between conductor 152 and the alternating dielectrics 114 and stacks 116. In some examples, the dielectric 138 in each respective stack 116 may be between a conductor 110 of each respective stack 116 and conductor 152.

[0041] In an embodiment, a dielectric 150 may be (e.g., formed) completely around a conductor 152, as shown in Figures 1F and 1G. A dielectric 138 may be completely around a dielectric 150, and thus conductor 152, and a portion of a conductor 110 may be completely around dielectric 138. For example, a conductor 152, a dielectric 150, a dielectric 138, and a portion of a conductor 110 may be concentric, as shown in Figure 1F. A portion of a storage material 106 may be completely around a dielectric 150, and thus a conductor 152, as shown in Figure 1G. For example, a conductor 152, a dielectric 150, and a portion of a storage material 106 may be concentric, as shown in Figure 1G.

[0042] In some examples, conductors 110 and/or conductors 152 may comprise, consist of, or consist essentially of conductively doped polysilicon and/or may comprise, consist of, or consist essentially of metal, such as a refractory metal, or a metal-containing material, such as a refractory metal silicide, or a metal nitride, e.g., a refractory metal nitride, as well as any other conductive material. The metals of chromium (Cr), cobalt (Co), hafnium (Hf), molybdenum (Mo), niobium (Nb), tantalum (Ta), titanium (Ti), tungsten (W), vanadium (V) and zirconium (Zr) are generally recognized as refractory metals.

[0043] A portion of a dielectric 108 may be completely around a dielectric 150, and thus a conductor 152, in a manner similar to that shown for storage material 106 in Figure 1G. For example, a conductor 152, a dielectric 150, and a portion of a dielectric 108 may be concentric.

[0044] A portion of a dielectric 114 may be completely around a dielectric 150, and thus a conductor 152, in a manner similar to that shown for storage material 106 in Figure 1G. For example, a conductor 152, a dielectric 150, and a portion of a dielectric 114 may be concentric.

[0045] In some examples, a stack 116 (e.g., each of stacks 116) may include a portion of a memory cell 156.
For example, each respective memory cell 156 may include a portion of a respective storage material 106, a portion of a respective conductor 110 (e.g., on the portion of the respective storage material 106), a portion of a respective dielectric 138 (e.g., on the portion of the respective storage material 106), a different portion of a dielectric 150, and a different portion of a conductor 152, as shown in Figures 1E-1G. A memory cell (e.g., each memory cell) 156 may, for example, be annular in shape, as shown in Figures 1F and 1G. In some examples, a portion of a respective dielectric 108 may be between the portion of the respective storage material 106 and the portion of a respective conductor 110 and between the portion of the respective storage material 106 and the portion of a respective dielectric 138, as shown in Figure 1E. In an example, the portion of a respective dielectric 138 may be between the portion of a respective conductor 110 and the different portion of the dielectric 150, and thus the different portion of the conductor 152.

[0046] A memory cell 156 may be in a respective tier (e.g., a deck) of memory cells, where different tiers of memory cells 156 may be at different (e.g., vertical) levels within memory array 100 to form a stack of memory cells 156. For example, a memory cell (e.g., each memory cell) 156 may correspond to a respective stack 116. A respective memory cell 156 may, for example, include a portion of a respective conductor 110 and a portion of a respective dielectric 138 at a level in a respective stack 116, and thus memory array 100, a portion of a respective dielectric 108 at another level in the respective stack 116, and a portion of a respective storage material 106 at yet another level in the respective stack 116. Each respective memory cell 156 and each respective dielectric 114 may alternate so that the memory cells 156 are separated from each other by a dielectric 114. Although Figures 1A-1E show four stacks 116 and four tiers of memory cells 156, memory array 100 is not so limited and may include any number of stacks 116 and tiers of memory cells 156.

[0047] In some examples, a conductor 110 may be a signal line (e.g., plane), such as an access line (e.g., a word line), and a conductor 152 may be a signal line (e.g., an access line), such as a data line (e.g., a bit line). In some examples, the storage material 106, and thus a respective memory cell 156, may be self-selecting. For example, the storage material 106 may act as both a switch, such as a diode, and a storage element.

[0048] The length of a dielectric 138 in each respective stack 116 may define an effective length of a respective memory cell 156. For example, the length of a dielectric 138, and thus the effective length of each respective memory cell 156, may be about 10 to about 30 nanometers. In some examples, the effective length of each respective memory cell 156 may be about the depth d of an opening 124, shown in Figure 1C.

[0049] In an example, a relatively low voltage (e.g., a negative voltage) may be applied to a conductor 152, and a relatively high voltage (e.g., a positive voltage) may be applied to a conductor 110 to produce a voltage differential across a storage material 106, and thus the memory cell 156 that includes that storage material 106. The voltage differential may act to produce a conductive (e.g., a current) path from the conductor 110 to the conductor 152 that may include a dielectric 108, the storage material 106, and a dielectric 150.
For example, the current may flow from conductor 110 through dielectric 108, the storage material 106, and the dielectric 150 to the conductor 152. For example, dielectrics 108 and dielectric 150 may be sufficiently thin to pass current. In some examples, such a voltage differential may act to program a threshold voltage, and thus a state, in the respective storage material 106, and thus the respective memory cell 156. The polarity of the voltage differential may be reversed, in some examples, to program a different threshold voltage, and thus a different state, in the respective storage material 106, and thus the respective memory cell 156.

[0050] Figure 2 illustrates a three-dimensional memory array 200 in accordance with an embodiment of the present disclosure. Array 200 may be, for example, array 100 previously described in connection with Figures 1E-1G. For example, array 200 may be processed according to the processing steps previously described herein (e.g., in connection with Figures 1A-1G).

[0051] As shown in Figure 2, access lines, which may be referred to as word lines (WLs), may be located on a plurality of levels. For example, word lines may be located on N levels. Insulation material (not shown in Figure 2 for clarity and so as not to obscure embodiments of the present disclosure) can separate the levels of word lines. As such, the levels of word lines separated by insulation material can form a stack of WL/insulation materials. In some examples, each word line may include (e.g., may be) a respective conductor 110, shown in Figures 1E and 1F. In some examples, each respective word line may be in a respective stack, such as a stack 116 previously described in connection with Figures 1A-1E, that may include the word line and a storage material, such as storage material 106 previously described in connection with Figures 1A-1E, at a different level than the word line.

[0052] Further, data lines, which may be referred to as bit lines (BLs), may be, for example, arranged perpendicular to the word lines and located at a level above the N levels of word lines (e.g., at the N+1 level). In some examples, each bit line may include a conductor (e.g., a vertical conductor), such as a conductor 152 shown in Figures 1E-1G.

[0053] For example, array 200 may include a plurality of conductive lines 202 (e.g., access lines), which may be referred to herein as word lines, and a plurality of conductive lines 224 (e.g., data lines), which may be referred to herein as bit lines. Word lines 202 may be arranged into a number of levels. Word lines 202 are shown arranged into four levels in Figure 2. However, the quantity of levels into which the word lines 202 may be arranged is not limited to this quantity, and word lines 202 may be arranged into more, or fewer, levels. Word lines 202 may be arranged parallel to one another within a particular level. For example, word lines 202 in each of the multiple levels may be located at a same relative location within each level so as to be aligned with word lines 202 directly above and/or below. Storage material (e.g., storage material 106 previously described in connection with Figures 1A-1G) may be located between the word lines at the different levels to form stacks (e.g., the stacks 116 previously described in connection with Figures 1A-1E) that may include a respective word line and the respective storage material 106.
Insulation material (e.g., a dielectric 114 previously described in connection with Figures 1A-1E) may be located between the levels at which stacks are located.

[0054] As shown in Figure 2, bit lines 224 may be arranged parallel to one another at a level different than the levels at which word lines 202 are located (e.g., above the levels at which word lines 202 are located). For example, the bit lines may be located at the top of the memory array 200, as illustrated in Figure 2. As an additional example, the bit lines may be located at the bottom of array 200 (e.g., such that conductors 152 may be coupled to (e.g., contact) the bit lines at the bottom of openings 124). The bit lines 224 may be further arranged perpendicular (e.g., orthogonal) to word lines 202 so as to have overlappings (e.g., crossings at different levels) therebetween. However, embodiments of the present disclosure are not limited to a strictly parallel/orthogonal configuration.

[0055] The indices shown for each word line 202 in Figure 2 indicate the position (e.g., ordering) of the word lines within a group of word lines. For example, word line WL2,0 is shown located at position 2 at the bottom of the group of word lines, and word line WL2,3 is shown located at position 2 at the top of the group of word lines. The quantity of levels into which the word lines 202 may be arranged, and the quantity of word lines 202 at each level, may be more, or fewer, than the quantities shown in Figure 2.

[0056] At each overlapping of a bit line 224 and a group of word lines 202, a conductor 152 of the bit line 224 may be oriented substantially perpendicular to the bit line 224 and the word lines 202, so as to intersect a portion of each word line 202 in the group of word lines.

[0057] For example, the conductor 152 of the bit line 224 may be arranged to extend vertically from the bit line 224 to intersect a portion of the respective word lines 202 therebelow, as shown in Figure 2. For instance, as one example, the conductor 152 can pass through a stack 116, including a word line 202 and a storage material 106, so as to be surrounded entirely by the word line 202 and the storage material 106. In some examples, a stack 116 may include a portion of a memory cell 220. For example, a memory cell 220 may include a portion of a word line 202, a portion of storage material 106 at a different level than the portion of the word line 202, and a portion of a conductor 152.

[0058] Memory cells 220 are shown in Figure 2 arranged in a three-dimensional architecture near the locations where a conductor 152 of a bit line 224 and the stacks 116 are in proximity to one another at different levels. For example, a memory cell 220 may be located where a conductor 152 passes through a portion of a stack 116.

[0059] The memory cells 220, for example, may be arranged in multiple levels, each level having memory cells at intersections of conductors, such as conductors 152, and stacks 116 that include a portion of a word line 202 and a portion of a storage material 106. The levels of memory cells 220 may be formed at different levels from one another, thereby being vertically stacked. Accordingly, memory array 200 may be a three-dimensional memory array that may include memory cells 220 having a common bit line 224, but separate word lines 202. Although four levels of word lines 202 (and four corresponding levels of memory cells 220) are shown in Figure 2, embodiments of the present disclosure are not so limited and can include more, or fewer, levels of word lines 202 (and corresponding levels of memory cells 220).
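As a rough illustration of this addressing structure, the following toy model (a hypothetical sketch, not taken from this disclosure) indexes a cell by its word-line level, its word-line position within that level, and the bit-line conductor, reflecting the description above in which a memory cell is formed where a vertical conductor passes through a stack containing a word line. The dimensions are arbitrary assumptions:

```python
# Hypothetical toy model of 3D cross-point addressing (illustrative only).
N_LEVELS, N_WORDLINES, N_BITLINES = 4, 2, 8  # arbitrary assumed dimensions

# One state value per cell, indexed [level][word_line][bit_line].
cells = [[[0] * N_BITLINES for _ in range(N_WORDLINES)]
         for _ in range(N_LEVELS)]

def write_cell(level, wl, bl, state):
    """Model programming: a voltage differential between the selected word
    line and the vertical conductor sets the cell's threshold state."""
    cells[level][wl][bl] = state

def read_cell(level, wl, bl):
    """Model sensing the self-selecting cell at one word-line/bit-line crossing."""
    return cells[level][wl][bl]

write_cell(2, 0, 5, 1)   # program the cell at level 2, word line 0, bit line 5
assert read_cell(2, 0, 5) == 1
```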
[0060] Although specific examples have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results may be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. The scope of one or more examples of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
PROBLEM TO BE SOLVED: To provide a block-request streaming system that improves the user experience and bandwidth efficiency of such systems, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or the like).
SOLUTION: The ingestion system ingests content and prepares it as files or data elements to be served by the file server. The system includes controlling the sequence, timing, and construction of block requests; time-based indexing; variable block sizing; optimal block partitioning; control of random access point placement, including across multiple presentation versions; dynamically updating presentation data; and/or efficiently presenting live content and time shifting.
SELECTED DRAWING: Figure 1
1. A method of generating blocks of media data to be electronically transmitted to a client device upon request, the method comprising: obtaining data representing media of a presentation, wherein the presentation is a time-based presentation; storing the data representing the media of the presentation as a plurality of blocks; prior to transmitting the data to the client device, identifying a correspondence between a plurality of time ranges of the presentation and a plurality of positions within the blocks; and generating stored correspondence data representing at least a portion of the correspondence before transmitting the blocks, such that the client device can determine, from the stored correspondence data and a desired time range of the presentation to be played out, which portions of which of the plurality of blocks to request.

2. The method of claim 1, wherein the stored correspondence data is stored as part of a file that also includes the corresponding media data.

3. The method of claim 1, wherein the stored correspondence data is a map formatted as XML metadata, and wherein the time ranges relate to the beginning of the presentation or to the beginning of a media block.

4. The method of claim 1, wherein the blocking of the media data and the generating of the stored correspondence data are performed by a media ingestion system and the results are stored on a general-purpose server that responds at least to file requests.

5. The method of claim 4, wherein the file requests are HTTP requests.

6. The method of claim 1, wherein the media blocks have variable durations, and wherein the stored correspondence data enables the client device to determine correspondences between time ranges and data positions that may vary according to the variable durations of the media blocks.

7. The method of claim 1, wherein a group of pictures (GoP) is divided into two or more media blocks.

8. A method of determining requests to be made to a media server at a client device capable of presenting a media presentation over a presentation period, the method comprising: determining, at the client device, a desired period of the media presentation, wherein the desired period is less than the entire presentation period; reading, at the client device, stored correspondence data mapping time ranges of the media presentation to data ranges within blocks representing the media; determining, at the client device and from the stored correspondence data, at least one data range to request from the media server; making the determined requests; and presenting the presentation using the client device.

9. A method for use in a communication system in which a client device requests media data from a media ingestion system, the method comprising: encoding, in the media ingestion system, a first version of a media presentation, wherein the first version is a data rate version that is made available, and wherein random access points are selected within the first version to optimize the compression efficiency of the encoding, subject to a constraint on the temporal spacing between random access points; encoding a second version of the media presentation, wherein the second version is a different data rate version that is made available; and selecting random access points of the second version to temporally match the random access points in the first version.

10. A method for use in a communication system in which a client device requests media data from a media ingestion system, the method comprising: encoding, in the media ingestion system, a first version of a media presentation, wherein the first version is a data rate version that is made available; selecting random access points within the first version; encoding a second version of the media presentation, wherein the second version is a different data rate version that is made available; and selecting random access points within the second version that are temporally independent of the random access points in the first version.

11. The method of claim 10, wherein the encoding of the first version of the media presentation is performed independently of the encoding of the second version of the media presentation.

12. A method for use in a communication system in which a client device requests media data from a media ingestion system, the method comprising: receiving, on the client side, a first version of a media presentation, the first version being a data rate version that is made available and having a first set of random access points; and receiving, on the client side, a second version of the media presentation, the second version being a data rate version that is made available and having a second set of random access points that are temporally independent of the first set of random access points.

13. The method of claim 12, further comprising: presenting the media by consuming the first version of the media presentation; and switching to consuming the second version of the media presentation without interrupting the presentation, wherein the switching occurs between a first random access point from the first set of random access points and a second random access point from the second set of random access points.

14. A method for use in a communication system in which a client device receives media data from a media ingestion system, the method comprising providing, in the media ingestion system, a minimum startup delay value for the client, wherein the startup delay is the delay between when the client device first starts to receive the media data from the media ingestion system and when the client device can start to consume the media data without interruption, the delay being a function of the transmission bandwidth available between the media ingestion system and the client.

15. A method for use in a communication system in which a client device receives media data from a media ingestion system, the method comprising: receiving, at the client device, a minimum startup delay value from the media ingestion system, wherein the startup delay is a function of the transmission bandwidth available to the client; receiving media data from the media ingestion system; and delaying the start of consumption of the media data by the startup delay.

16. A method for use in a communication system in which a client device receives a media data stream from a media ingestion system, the method comprising: providing, in the media ingestion system, a media presentation description (MPD) file associated with the media data stream; dynamically updating the MPD while the media data stream is being transmitted to a client; and inserting a signal in the media data stream to indicate that the MPD is being updated.

17. A method for use in a communication system in which a client device receives a media data stream from a media ingestion system, the method comprising: detecting, on the client side, a signal in the media data stream indicating that the MPD associated with the media data stream has been updated; and sending a request for the updated MPD to the media ingestion system.

18. A method for use in a communication system in which a client device receives a media data stream from a media ingestion system, the method comprising: providing, in the media ingestion system, a media presentation description (MPD) file associated with the media data stream; and including, in the MPD, an indicator signaling that the MPD is valid for a time interval.

19. A method for use in a communication system in which a client device receives a media data stream from a media ingestion system, the method comprising: receiving, at the client device, a media presentation description (MPD) file associated with the media data stream; extracting from the MPD an indicator signaling that the MPD is valid over a time interval; and determining whether or not the MPD is valid based on a comparison of the time interval and the current presentation time.
Enhanced Block-Request Streaming System with Signaling or Block Generation

CROSS-REFERENCES TO RELATED APPLICATIONS
This application is a continuation-in-part of a U.S. application by Michael G. Luby, et al., a nonprovisional patent application claiming the benefit of the following provisional applications, each entitled "Enhanced Block-Request Streaming System": U.S. Provisional Patent Application No. 61/244,767 (filing date: September 22, 2009), U.S. Provisional Patent Application No. 61/257,719 (filing date: November 3, 2009), U.S. Provisional Patent Application No. 61/258,088 (filing date: November 4, 2009), U.S. Provisional Patent Application No. 61/285,779 (filing date: December 11, 2009), and U.S. Provisional Patent Application No. 61/296,725 (filing date: January 20, 2010). This application also claims the benefit of U.S. Provisional Patent Application No. 61/372,399, by Ying Chen, et al., entitled "HTTP Streaming Extensions". Each provisional application cited above is hereby incorporated by reference for all purposes.

This disclosure also incorporates by reference, for all purposes, the following commonly assigned applications/patents, as if fully set forth in this document: U.S. Pat. No. 6,307,487 to Luby (hereinafter "Luby I"); U.S. Pat. No. 7,068,729 to Shokrollahi, et al. (hereinafter "Shokrollahi I"); U.S. Patent Application No. 11/423,391, by Luby, et al. (hereinafter "Luby II"), entitled "Forward Error Correction (FEC) Coding and Streaming" (filing date: June 9, 2006); U.S. Patent Application No. 12/103,605, by Luby, et al. (hereinafter "Luby III"), entitled "Dynamic Stream Interleaving and Sub-Stream Based Delivery" (filing date: April 15, 2008); U.S. Patent Application No. 12/705,202, by Pakzad, et al. (hereinafter "Pakzad"), entitled "Block Partitioning for a Data Stream" (filing date: February 12, 2010); and U.S. Patent Application No. 12/859,161, by Luby, et al. (hereinafter "Luby IV"), entitled "Methods and Apparatus Employing FEC Codes with Permanent Inactivation of Symbols for Encoding and Decoding Processes" (filing date: August 18, 2010).

The present invention relates to improved media streaming systems and methods. More specifically, it relates to systems and methods that adapt to network and buffer conditions in order to optimize the presentation of streamed media, and that allow for efficient concurrent, or timely distributed, delivery of streamed media data.

Streaming media delivery may become increasingly important as it becomes more common for high-quality audio and video to be delivered over packet-based networks, such as the Internet, cellular and wireless networks, power-line networks, and other types of networks. The quality with which delivered streaming media can be presented can depend on a number of factors, including the resolution (or other attributes) of the original content, the encoding quality of the original content, the capabilities of the receiving devices to decode and present the media, and the timeliness and quality of the signal received at the receivers. The transport and timeliness of the signal received at the receivers can be especially important for creating a good perceived streaming media experience.
A good transport can provide fidelity of the stream received at the receiver with respect to what a sender sends, while timeliness can represent how quickly a receiver can start playing out the content after an initial request for that content.

A media delivery system can be characterized as a system having media sources, media destinations, and channels (in time and/or space) separating the sources and destinations. Typically, a source includes a transmitter with access to media in an electronically manageable form, and a receiver with the ability to electronically control receipt of the media (or an approximation thereof) and provide it to a media consumer (e.g., a user having a display device coupled in some way to the receiver, a storage device or element, another channel, etc.).

While many variations are possible, in one common example a media delivery system has one or more servers that have access to media content in electronic form, and one or more client systems or devices make requests for the media to the servers; the server conveys the media using a transmitter that is part of the server, transmitting to a receiver at the client so that the received media can be consumed by the client in some way. In a simple example, there is one server and one client for a given request and response, but that need not be the case.

Traditionally, media delivery systems may be characterized as either a "download" model or a "streaming" model. The "download" model can be characterized by timing independence between the delivery of the media data and the playout of the media to the user or receiving device.

As an example, the media is downloaded far enough in advance of when it is needed or will be used, so that when it is used, as much of it as is needed is already available at the recipient. Delivery in the download context is often performed using a file transport protocol, such as HTTP, FTP, or File Delivery over Unidirectional Transport (FLUTE), and the delivery rate might be determined by an underlying flow and/or congestion control protocol, such as TCP/IP. The operation of the flow or congestion control protocol may be independent of the playout of the media to the user or destination device, which may take place concurrently with the download or at some other time.

The "streaming" mode can be characterized by a tight coupling between the timing of the delivery of the media data and the playout of the media to the user or receiving device. Delivery in this context is often performed using a streaming protocol, such as the Real-Time Streaming Protocol (RTSP) for control and the Real-time Transport Protocol (RTP) for the media data. The delivery rate might be determined by a streaming server, often matched to the playout rate of the data.

Some disadvantages of the "download" model may be that, due to the timing independence of the delivery and playout, either media data is not available when it is needed for playout (for example, because the available bandwidth is less than the media data rate), causing playout to stop momentarily ("stalling"), which results in a poor user experience, or media data is required to be downloaded very far in advance of playout (for example, because the available bandwidth is greater than the media data rate), consuming storage resources on the receiving device, which may be scarce, and consuming valuable delivery resources that are wasted if the content is not eventually played out or otherwise used.

An advantage of the "download" model is that the technology needed to perform such downloads, for example HTTP, is very mature, widely deployed, and applicable across a wide range of applications.
Download servers and solutions for the massively scalable delivery of file downloads (e.g., HTTP web servers and content delivery networks) are readily available, making deployment of services based on this technology simple and low in cost.

Some disadvantages of the "streaming" model are that, generally, the delivery rate of the media data over the connection from the server to the client is not adapted to the available bandwidth, and that specialized streaming servers or a more complex network architecture are generally required. Although streaming systems exist that support variation of the delivery data rate according to available bandwidth (e.g., Adobe Flash Adaptive Streaming), these are generally not as efficient as download transport flow control protocols, such as TCP, at utilizing all of the available bandwidth.

Recently, new media delivery systems based on a combination of the "streaming" and "download" models have been developed and deployed. An example of such a model is referred to herein as the "block-request streaming" model, in which a media client requests blocks of media data from the serving infrastructure using a download protocol, such as HTTP. One concern in such systems is the ability to start playing out a stream, for example decoding and rendering received audio and video streams using a personal computer and displaying the video on a computer screen while playing the audio through built-in speakers, or, as another example, decoding and rendering received audio and video streams using a set-top box and displaying the video on a television display while playing the audio through a stereo system.

Another concern is being able to decode the source blocks at a rate that keeps pace with the source streaming rate, while minimizing decoding latency and limiting the use of available CPU resources. Yet another concern is providing a robust and scalable streaming delivery solution that allows components of the system to fail without adversely affecting the quality of the streams delivered to receivers. Other problems can arise based on rapidly changing information about a presentation as it is being delivered. It is therefore desirable to have improved processes and apparatus.

A block-request streaming system provides improvements in the user experience and bandwidth efficiency of such systems, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or the like), wherein the ingestion system ingests content and prepares it as files or data elements to be served by the file server, which might or might not include a cache. A client device can be adapted to take advantage of the ingestion process, and can also include improvements that are useful for better presentation independently of the ingestion process.

Certain embodiments include novel improvements to the methods used in a block-request streaming client and in a block-request ingestion system to determine the sequence, timing, and construction of block requests, including the provision of time-based indexing. In some embodiments, novel improvements are provided to methods for constructing blocks and files, including variable block sizing and optimal block partitioning. In some embodiments, novel improvements are provided to methods for placing random access points, including placement of random access points across multiple presentation versions.
In some embodiments, novel improvements are provided to methods for dynamically updating presentation data, including signaling within metadata. In some embodiments, novel improvements are provided to methods for efficiently presenting live content and for time shifting.

A better understanding of the nature and advantages of the present invention may be gained with reference to the following detailed description and the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts elements of a block-request streaming system according to embodiments of the present invention.
FIG. 2 illustrates the block-request streaming system of FIG. 1, showing greater detail in the elements of a client system that is coupled to a block serving infrastructure to receive data processed by a content ingestion system.
FIG. 3 illustrates an example hardware/software implementation of an ingestion system.
FIG. 4 illustrates a hardware/software implementation of a client system.
FIG. 5 illustrates possible structures of the content store shown in FIG. 1, including segments and media presentation descriptor ("MPD") files, and a breakdown of segments, timing, and other structure within an MPD file.
FIG. 6 illustrates details of an exemplary source segment, as might be stored in the content store illustrated in FIGS. 1 and 5.
FIGS. 7(a) and 7(b) illustrate simple and hierarchical indexing within files.
FIG. 8(a) illustrates variable block sizing with aligned seek points across a plurality of versions of a media stream, and FIG. 8(b) illustrates variable block sizing with non-aligned seek points across a plurality of versions of a media stream.
FIG. 9(a) illustrates a metadata table, and FIG. 9(b) illustrates the transmission of blocks and the metadata table from a server to a client.
FIG. 10 illustrates blocks that are independent of RAP boundaries.
FIG. 11 illustrates continuous and discontinuous timing across segments.
FIG. 12 illustrates an aspect of scalable blocks.
FIG. 13 depicts a graphical display of the evolution over time of certain variables within a block-request streaming system.
FIG. 14 depicts another graphical display of the evolution over time of certain variables within a block-request streaming system.
FIG. 15 depicts a cell grid of states as a function of threshold values.
FIG. 16 is a flowchart of a process that might be performed at a receiver that can request single blocks and multiple blocks per request.
FIG. 17 is a flowchart of a flexible pipeline process.
FIG. 18 illustrates an example of a set of candidate requests, their priorities, and the connections on which they might be issued, at one point in time.
FIG. 19 illustrates an example of a set of candidate requests, their priorities, and the connections on which they might be issued, as evolved from one point in time to another.
FIG. 20 is a flowchart of consistent caching server proxy selection based on a file identifier.
FIG. 21 illustrates a syntax definition for a suitable expression language.
FIG. 22 illustrates an example of a suitable hash function.
FIG. 23 illustrates examples of file identifier construction rules.
FIG. 24 illustrates bandwidth fluctuations of TCP connections.
FIG. 25 illustrates multiple HTTP requests for source and repair data.
FIG. 26 shows an example of channel zapping time with and without FEC.
FIG. 27 illustrates details of a repair segment generator that generates repair segments from source segments and control parameters, as part of the ingestion system shown in FIG. 1.
FIG. 28 illustrates relationships between source blocks and repair blocks.
FIG. 29 illustrates a procedure for live services at different times at a client.

In the figures, like items are referenced with like numbers, and sub-indices are provided in parentheses to indicate multiple instances of like or identical items. Unless otherwise indicated, the final sub-index (e.g., "N" or "M") is not intended to be limited to any particular value, and the number of instances of one item can differ from the number of instances of another item, even when the same number is illustrated and the sub-index is reused.

As described herein, one ultimate goal of a streaming system is to move media from its storage location (or the location where it is being generated) to a location where it is being consumed, i.e., presented to a user or otherwise "used up" by a human or an electronic consumer. Ideally, the streaming system can provide uninterrupted playback (or, more generally, uninterrupted "consumption") at the receiving end, and can begin playing a stream or a collection of streams shortly after a user has requested the stream or streams. For efficiency reasons, it is also desirable that each stream be halted once the user indicates that the stream is no longer needed, for example when the user switches from one stream to another, or when the presentation of a stream, e.g., a "subtitle" stream, is no longer followed. If a media component, such as video, continues to be presented but a different stream is selected to present it, it is often preferable to occupy limited bandwidth with the new stream and stop the old stream.

A block-request streaming system according to the embodiments described herein provides many benefits. It should be understood that a viable system need not include all of the features described herein, as some applications can provide a suitably satisfying experience with less than all of the features described herein.

HTTP Streaming
HTTP streaming is a specific type of streaming. With HTTP streaming, the sources can be standard web servers and content delivery networks (CDNs), and standard HTTP can be used. This technique can involve stream segmentation and the use of multiple streams, all within the context of standardized HTTP requests. Media, such as video, can be encoded at multiple bit rates to form different versions, or representations. The terms "version" and "representation" are used synonymously in this document. Each version or representation can be broken into smaller pieces, perhaps each on the order of a few seconds, to form segments. Each segment can then be stored on a web server or CDN as a separate file.

On the client side, requests can then be made, using HTTP, for individual segments that are seamlessly spliced together by the client. The client can switch to different data rates based on the available bandwidth. The client can also request multiple representations, each presenting a different media component, and can present the media in these representations jointly and synchronously. Triggers for switching can include, for example, buffer occupancy and network measurements.
In steady-state operation, the client can pace its requests to the server so as to maintain a target buffer occupancy.

Advantages of HTTP streaming can include bit-rate adaptation, fast startup and seek, and minimal unnecessary delivery. These advantages come from controlling the delivery to be only a short time ahead of the playout, making maximum use of the available bandwidth (through variable-bit-rate media), and optimizing stream segmentation and intelligent client procedures.

A media presentation description may be provided to an HTTP streaming client such that the client can use a collection of files (for example, in formats specified by 3GPP, referred to herein as 3gp segments) to provide a streaming service to the user. A media presentation description, and possibly updates of this media presentation description, describe a media presentation, which is a structured collection of segments, each containing media components, such that the client can present the included media in a synchronized manner and can provide advanced features, such as seeking, switching bit rates, and joint presentation of media components in different representations. The client can use the media presentation description information in different ways for the provisioning of the service. In particular, from the media presentation description, the HTTP streaming client can determine which segments in the collection can be accessed so that the data is useful to the client capability and to the user within the streaming service.

In some implementations, the media presentation description can be static, although segments can be created dynamically. The media presentation description can be as compact as possible to minimize access and download times for the service. Other dedicated server connectivity, e.g., regular or frequent timing synchronization between client and server, can be minimized.

The media presentation may be constructed to permit access by terminals with different capabilities, such as access to different access network types, different current network conditions, display sizes, access bit rates, and codec support. The client can then extract the appropriate information to provide the streaming service to the user.

The media presentation description can also permit deployment flexibility and compactness, according to the requirements.

In the simplest case, each alternative representation may be a single 3GP file, i.e., a file conforming to the definition in 3GPP TS 26.244, or any other file that conforms to the ISO base media file format defined in ISO/IEC 14496-12 or derived specifications (such as the 3GP file format described in 3GPP Technical Specification 26.244). When referring to 3GP files in the remainder of this document, it is to be understood that all described features can be mapped to the more general ISO base media file format as defined in ISO/IEC 14496-12 or any derived specifications. The client may then request an initial portion of the file to learn the media metadata (which is typically stored in the movie header box, also referred to as the "moov" box) together with movie fragment times and byte offsets. The client may then issue partial HTTP requests to obtain movie fragments as required.

In some implementations, it may be desirable to split each representation into several segments.
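As a rough illustration of the partial requests just described, the following sketch uses Python's standard library to issue HTTP byte-range requests. The URL and byte offsets are hypothetical stand-ins for values that would, in practice, be learned from the "moov" metadata and the indexing:

```python
import urllib.request

def fetch_range(url, start, end):
    """Request bytes start..end (inclusive) of a resource via an HTTP Range header."""
    req = urllib.request.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # body of a 206 Partial Content response

# Hypothetical example: first fetch an initial portion of the file to read
# the metadata, then fetch one movie fragment at its advertised byte offset.
header = fetch_range("http://example.com/rep500k/presentation.3gp", 0, 4095)
fragment = fetch_range("http://example.com/rep500k/presentation.3gp", 40960, 81919)
```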
Where the segment format is based on the 3GP file format, segments contain non-overlapping time slices of the movie fragments, a division referred to as "splitting in time." Each of these segments can contain multiple movie fragments, and each may itself be a valid 3GP file. In other embodiments, a representation is split into an initial segment containing the metadata (typically the movie header "moov" box) and a set of media segments, each containing media data, such that the concatenation of the initial segment and any media segment forms a valid 3GP file, and the concatenation of the initial segment and all media segments of one representation also forms a valid 3GP file. The overall presentation can be formed by playing out each segment in turn and mapping the local timestamps within the file to the global presentation time according to the start time of each representation.

Throughout this description, references to a "segment" should be understood to include any data object that is fully or partially constructed or read from a storage medium or otherwise obtained as a result of a file download protocol request, including, for example, an HTTP request. For example, in the case of HTTP, the data object might be stored in an actual file residing on a disk or other storage medium connected to, or forming part of, an HTTP server, or the data object might be constructed by a CGI script, or some other dynamically executed program, that is executed in response to the HTTP request. The terms "file" and "segment" are used synonymously in this document unless otherwise specified. In the case of HTTP, a segment can be considered as the entity body of an HTTP request response.

The terms "presentation" and "content item" are used synonymously in this document. In many examples, the presentation is an audio, video, or other media presentation that has a defined "playout" time, but other variations are possible.

The terms "block" and "fragment" are used synonymously in this document unless otherwise specified, and generally refer to the smallest aggregation of data that is indexed. Based on the available indexing, a client can request different portions of a fragment in different HTTP requests, or can request one or more consecutive fragments or portions of fragments in one HTTP request. Where segments based on the ISO base media file format or on the 3GP file format are used, a fragment typically refers to a movie fragment, defined as the combination of a movie fragment header ("moof") box and a media data ("mdat") box.

It is assumed herein that the network carrying the data is packet-based, in order to simplify the descriptions, with the recognition that, after reading this disclosure, one skilled in the art can apply the embodiments of the invention described herein to other types of transmission networks, such as continuous bitstream networks.

It is also assumed herein that the FEC codes provide protection against long and variable data delivery times, in order to simplify the descriptions, with the recognition that, after reading this disclosure, one skilled in the art can apply embodiments of the invention to other types of data transmission issues, such as bit-flip corruption of the data.
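Because fragments are defined above as "moof"/"mdat" box pairs, a short sketch may make the box structure concrete. The following walker assumes only the standard ISO/IEC 14496-12 box header layout (a 32-bit big-endian size and a four-character type, with a 64-bit size following when the size field equals 1); pairing each "moof" with the following "mdat" yields the fragments described above:

```python
import struct

def walk_boxes(data):
    """Yield (box_type, offset, size) for each top-level ISO BMFF box."""
    offset, end = 0, len(data)
    while offset + 8 <= end:
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size == 1:        # 64-bit "largesize" follows the type field
            size, = struct.unpack_from(">Q", data, offset + 8)
        elif size == 0:      # box extends to the end of the file
            size = end - offset
        if size < 8:         # malformed box; stop rather than loop forever
            break
        yield box_type.decode("ascii", errors="replace"), offset, size
        offset += size

def fragments(data):
    """Pair each 'moof' box with the 'mdat' box that follows it; per the
    text above, each such pair constitutes one movie fragment."""
    pending_moof = None
    for box_type, offset, size in walk_boxes(data):
        if box_type == "moof":
            pending_moof = (offset, size)
        elif box_type == "mdat" and pending_moof is not None:
            yield {"moof": pending_moof, "mdat": (offset, size)}
            pending_moof = None
```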
For example, without FEC, if the last portion of a requested fragment arrives much later than, or with much higher variance in arrival time than, the earlier portions of that fragment, then the content zapping time can be large and variable; whereas, using FEC and parallel requests, only a majority of the data requested for a fragment need arrive before the fragment can be recovered, thereby reducing the content zapping time and the variability of the content zapping time. In this description, it may be assumed that the data to be encoded (i.e., the source data) has been broken into equal-length "symbols," which can be of any length (down to a single bit), although symbols could instead be of different lengths for different parts of the data; for example, different symbol sizes might be used for different blocks of data.

In this description, to simplify the descriptions herein, it is assumed that FEC is applied to one "block" or "fragment" of data at a time, i.e., a "block" is a "source block" for FEC encoding and decoding purposes. A client device can use the segment indexing described herein to help determine the source block structure of a segment. One skilled in the art can apply embodiments of the invention to other types of source block structures; for example, a source block can be a portion of a fragment, or can encompass one or more fragments or portions of fragments.

The FEC codes considered for use with block-request streaming are typically systematic FEC codes, i.e., the source symbols of the source block can be included as part of the encoding of the source block, and thus the source symbols are transmitted. As one skilled in the art will recognize, the embodiments described herein apply equally well to FEC codes that are not systematic. A systematic FEC encoder generates, from a source block of source symbols, some number of repair symbols, and the combination of at least some of the source symbols and the repair symbols are the encoded symbols that are sent over the channel representing the source block. Some FEC codes can be useful for efficiently generating as many repair symbols as needed, such as "additive codes" or "fountain codes"; examples of these codes include "chain reaction codes" and "multi-stage chain reaction codes." Other FEC codes, such as Reed-Solomon codes, can practically generate only a limited number of repair symbols for each source block.

In many of these examples, it is assumed that a client is coupled to a media server or a plurality of media servers, and that the client requests streaming media over a channel or a plurality of channels from the media server or the plurality of media servers. However, more involved arrangements are also possible.

Benefits of Block-Request Streaming
In block-request streaming, the media client maintains a coupling between the timing of these block requests and the timing of the media playout to the user. This model can retain the advantages of the "download" model described above, while avoiding some of the disadvantages that stem from the usual decoupling of the media playout from the data delivery. The block-request streaming model makes use of the rate and congestion control mechanisms available in transport protocols, such as TCP, to ensure that the maximum available bandwidth is used for the media data.
In addition, the division of the media presentation into blocks allows each block of encoded media data to be selected from among a set of multiple available encodings.

This selection can be based on matching the media data rate to the available bandwidth, even when the available bandwidth is changing over time, matching the media resolution or decoding complexity to the client capabilities or configuration, or matching user preferences, such as languages. The selection can also include the downloading and presentation of auxiliary components, such as accessibility components, closed captioning, subtitles, sign-language video, and the like. Examples of existing systems that use the block-request streaming model include Move Networks®, Microsoft Smooth Streaming, and the Apple iPhone® Streaming Protocol.

Commonly, each block of media data can be stored on a server as an individual file, and a protocol, such as HTTP, is then used, together with HTTP server software executing on the server, to request the files as units. Typically, the client is provided with metadata files, which can be, for example, in Extensible Markup Language (XML), playlist text, or binary format, that describe features of the media presentation, such as the available encodings (e.g., required bandwidth, resolutions, encoding parameters, media type, language), typically referred to as "representations" in this document, and the manner in which those encodings are divided into blocks. For example, the metadata can include a Uniform Resource Locator (URL) for each block. The URLs themselves might provide a scheme, such as being prepended with the string "http://" to indicate that the protocol to be used to access the documented resource is HTTP. Another example is "ftp://", indicating that the protocol to be used is FTP.

In other systems, for example, the media blocks might be constructed "on the fly" by the server in response to a request from the client that indicates the portion of the media presentation, in time, that is requested. For example, in the case of HTTP with the scheme "http://", the execution of a request for this URL provides a request response that contains some specific data in the entity body of the request response. The implementation within the network of how this request response is generated can vary widely, depending on the implementation of the server handling such requests.

Typically, each block may be independently decodable. For example, in the case of video media, each block might begin with a "seek point." In some coding schemes, seek points are referred to as "random access points," or "RAPs," although not all RAPs might be designated as seek points. Similarly, in other coding schemes, seek points begin at an "independent data refresh" frame, or "IDR," in the case of H.264 video encoding, although not all IDRs might be designated as seek points.
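As a rough illustration, the sketch below shows a hypothetical metadata structure of the kind described above (illustrative only; it is not the actual format used by any of the named systems), with two representations, each divided into blocks addressed by URL:

```python
# Hypothetical presentation metadata (illustrative only; real systems use
# XML, playlist text, or binary formats, as noted above).
presentation_metadata = {
    "representations": [
        {
            "bandwidth_bps": 500_000,
            "resolution": "640x360",
            "block_urls": ["http://example.com/rep500k/block%04d.3gs" % i
                           for i in range(1, 5)],
        },
        {
            "bandwidth_bps": 2_000_000,
            "resolution": "1280x720",
            "block_urls": ["http://example.com/rep2m/block%04d.3gs" % i
                           for i in range(1, 5)],
        },
    ],
}

def block_url(metadata, representation_index, block_index):
    """Look up the URL for one block of one representation."""
    rep = metadata["representations"][representation_index]
    return rep["block_urls"][block_index]
```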
Generally, a seek point is a position in the video (or other) media from which a decoder can begin decoding without requiring any data about earlier frames or data or samples, as would be required, for example, if a frame or sample to be decoded had been encoded not in a stand-alone manner but as the difference between the current frame and the previous frame.

A concern in such systems is the ability to start playing out a stream, for example decoding and rendering received audio and video streams using a personal computer and displaying the video on a computer screen while playing the audio through built-in speakers, or, as another example, decoding and rendering received audio and video streams using a set-top box and displaying the video on a television display while playing the audio through a stereo system. A primary concern is minimizing the delay between when a user decides to view new content delivered as a stream and takes an action that expresses that decision, e.g., the user clicks on a link within a browser window or on the play button of a remote control device, and when the content starts being displayed on the user's screen, hereinafter referred to as the "content zapping time." Each of these concerns can be addressed by elements of the enhanced system described herein.

An example of content zapping is when a user is watching first content delivered via a first stream and then decides to watch second content delivered via a second stream and initiates an action to begin watching the second content. The second stream might be sent from the same set of servers as the first stream or from a different set. Another example of content zapping is when a user is visiting a website and decides to start watching first content delivered via a first stream by clicking on a link within the browser window. In a similar manner, a user might decide to start playing the content not from the beginning, but from some point within the stream. The user indicates to their client device to seek to that time position, and the user might expect the content at the selected time to be rendered nearly instantaneously. Minimizing content zapping time is important for video watching, to allow users a high-quality, fast content-surfing experience when searching and sampling a wide range of available content.

Recently, it has become common practice to consider using forward error correction (FEC) codes to protect streaming media during transmission. When sent over a packet network, examples of which include the Internet and wireless networks such as those standardized by groups such as 3GPP, 3GPP2, and DVB, the source stream is placed into packets as it is generated or made available, and the packets can be used to carry the source or content stream in the order in which it was generated or made available to receivers.

In a typical application of FEC codes to these kinds of scenarios, an encoder can use an FEC code in the creation of repair packets, which are then sent in addition to the original source packets containing the source stream. The repair packets have the property that, when source packet loss occurs, received repair packets can be used to recover the data contained in the lost source packets.
Repair packets can be used to recover the content of lost source packets when the loss is a total packet loss ("erasure"), and, whether fully or partially received, repair packets can also be used to recover from partial packet loss. In this manner, fully or partially received repair packets can be used to recover fully or partially lost source packets.

In yet other examples, other types of corruption can occur to the transmitted data, e.g., values of bits can be flipped, and thus repair packets can be used to correct such corruption and to provide as accurate a recovery of the source packets as possible. In other examples, the source stream is not necessarily sent in discrete packets, but instead might be sent, for example, as a continuous bitstream.

There are many examples of FEC codes that can be used to provide protection of a source stream. Reed-Solomon codes are well-known codes for error and erasure correction in communication systems. For erasure correction over, for example, packet data networks, a well-known efficient implementation of Reed-Solomon codes uses Cauchy or Vandermonde matrices, as described in L. Rizzo, "Effective Erasure Codes for Reliable Computer Communication Protocols," Computer Communication Review 27(2):24-36 (April 1997) (hereinafter "Rizzo"), and in Bloemer, et al., "An XOR-Based Erasure-Resilient Coding Scheme," Technical Report TR-95-48, International Computer Science Institute, Berkeley, California (1995) (hereinafter "XOR-Reed-Solomon").

Other examples of FEC codes include LDPC codes, chain reaction codes, such as those described in Luby I, and multi-stage chain reaction codes, such as those in Shokrollahi I.

Examples of the FEC decoding process for variants of Reed-Solomon codes are described in Rizzo and XOR-Reed-Solomon. In those examples, decoding can be applied after sufficient source and repair data packets have been received. The decoding process can be computationally intensive and, depending on the CPU resources available, can take a considerable amount of time to complete relative to the length of time spanned by the media in the block. The receiver can take into account this length of time required for decoding when calculating the delay required between the start of the reception of the media stream and the playout of the media. This delay due to decoding is perceived by the user as a delay between that user's request for a particular media stream and the start of playback. It is thus desirable to minimize this delay.

In many applications, packets can be further sub-divided into symbols on which the FEC process is applied. A packet can contain one or more symbols (or less than one symbol, but usually symbols are not split across groups of packets unless the error conditions among the groups of packets are known to be highly correlated). A symbol can have any size, but often the size of a symbol is at most equal to the size of the packet. Source symbols are the symbols that encode the data that is to be transmitted.
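As a rough illustration of systematic erasure coding over one source block, the following is a minimal single-parity sketch in the spirit of the XOR-based schemes cited above (not the actual Reed-Solomon construction; the symbol size is an arbitrary assumption). One XOR repair symbol suffices to recover any single erased source symbol:

```python
from functools import reduce

SYMBOL_SIZE = 4  # bytes per symbol; real systems often use packet-sized symbols

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_symbols(source_block: bytes):
    """Break a source block into equal-length source symbols (zero-padded)."""
    n = -(-len(source_block) // SYMBOL_SIZE) * SYMBOL_SIZE  # round up
    padded = source_block.ljust(n, b"\0")
    return [padded[i:i + SYMBOL_SIZE] for i in range(0, n, SYMBOL_SIZE)]

def xor_repair(symbols):
    """Generate one repair symbol: the bitwise XOR of all source symbols."""
    return reduce(_xor, symbols)

def recover(received, repair):
    """Recover a single erased source symbol (marked None): XOR-ing the
    repair symbol with all surviving symbols yields the missing one."""
    missing = [i for i, s in enumerate(received) if s is None]
    assert len(missing) == 1, "single parity corrects exactly one erasure"
    survivors = [s for s in received if s is not None]
    received[missing[0]] = reduce(_xor, survivors + [repair])
    return received

symbols = make_symbols(b"example source block")
repair = xor_repair(symbols)
symbols[2] = None  # simulate one erased symbol (e.g., a lost packet)
assert recover(symbols, repair) == make_symbols(b"example source block")
```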
Repair symbols are symbols that are generated from the source symbols, directly or indirectly, in addition to the source symbols (i.e., the data to be transmitted could be recovered entirely if all of the source symbols were available and none of the repair symbols were used).

Some FEC codes can be block-based, in the sense that the encoding operations depend on the symbols that are in a block and can be independent of symbols not in that block. With block-based encoding, an FEC encoder can generate repair symbols for a block from the source symbols in that block and then move on to the next block, without needing to refer to any source symbols other than those for the current block being encoded. In transmission, a source block comprising source symbols can be represented by an encoded block comprising encoded symbols (which can be some source symbols, some repair symbols, or both). With the presence of repair symbols, not all of the source symbols are required in every encoded block.

For some FEC codes, notably Reed-Solomon codes, the encoding and decoding time can grow impractically as the number of encoded symbols per source block grows. Thus, in practice, there is often a practical upper bound on the total number of encoded symbols that can be generated per source block (255 is an approximate practical limit for some applications), especially in the typical case where the Reed-Solomon encoding or decoding process is performed by custom hardware; for example, the MPE-FEC processes that use Reed-Solomon codes, included as part of the DVB-H standard to protect streams against packet loss, are implemented in dedicated hardware within mobile phones that is limited to 255 Reed-Solomon total encoded symbols per source block. Since symbols are often required to be placed into separate packet payloads, this places a practical upper bound on the maximum length of the source block being encoded. For example, if a packet payload is limited to 1024 bytes or less and each packet carries one encoded symbol, then an encoded source block can be at most 255 kilobytes, which is, of course, also an upper bound on the size of the source block itself.

Concerns such as being able to decode source blocks fast enough to keep pace with the source streaming rate, minimizing the decoding latency introduced by FEC decoding, and using only a small fraction of the available CPU of the receiving device at any point during the FEC decoding process are addressed by the elements described herein. Also addressed is the need to provide a robust and scalable streaming delivery solution that allows components of the system to fail without adversely affecting the quality of the streams delivered to receivers.

A block-request streaming system needs to support changes to the structure or metadata of a presentation, e.g., changes in the number of available media encodings or in the parameters of the media encodings, such as bit rate, resolution, aspect ratio, audio or video codecs, or codec parameters, or changes in other metadata, such as URLs, associated with the content files.
The elements described herein address concerns such as being able to decode source blocks at a rate that keeps pace with the source streaming rate, minimizing the decoding latency introduced by FEC decoding, and using only a small fraction of the available CPU of the receiving device at any point during the FEC decoding. There is also a need to provide a robust and scalable streaming delivery solution that allows components of the system to fail without adversely affecting the quality of the streams delivered to receivers.

A block-request streaming system may need to change the structure or metadata of a presentation, for example the number of available media encodings or parameters of the media encodings, such as bit rate, resolution, aspect ratio, audio or video codecs or codec parameters, or other metadata associated with the content files, such as URLs. Such changes may be required for a number of reasons, including the splicing together of content from different sources, such as advertising segments or different segments of a larger presentation, or the modification of URLs or other parameters that becomes necessary as a result of changes to the serving infrastructure, for example due to configuration changes, equipment failures, or recovery from equipment failures.

There are methods in which a presentation is controlled by a continuously updated playlist file. Since this file is continuously updated, at least some of the changes described above can be made within these updates. A disadvantage of this conventional method is that client devices must continuously retrieve, also referred to as "polling", the playlist file, which places load on the serving infrastructure, and that this file cannot be cached for longer than the update interval, which makes the task of the serving infrastructure much more difficult. This is addressed by the elements described herein, so that updates of the kind described above are provided without the need for continuous polling of the metadata file by clients.

Another problem, especially in live services and typically known from broadcast distribution, is that users may be unable to view content broadcast before the time the user joined the program. Typically, local personal recording is impossible either because it consumes unnecessary local storage, because the client was not tuned to the program, or because it is prohibited by content protection rules. Network recording and time-shift viewing are preferred, but require individual connections from the user to the server as well as delivery protocols and infrastructure separate from those of the live service, resulting in duplicated infrastructure and significant server costs. This is also addressed by the elements described herein.

System Overview
One embodiment of the invention is described with reference to FIG. 1, which shows a simplified diagram of a block-request streaming system embodying the invention. In FIG. 1, a block-request streaming system 100 is illustrated, comprising a block serving infrastructure ("BSI") 101, which in turn comprises an ingestion system 103 for ingesting content 102, preparing that content, and packaging it for service by an HTTP streaming server 104 by storing it into a content store 110 that is accessible to both the ingestion system 103 and the HTTP streaming server 104. As shown, the system 100 can also include an HTTP cache 106. In operation, a client 108, such as an HTTP streaming client, sends requests 112 to the HTTP streaming server 104 and receives responses 114 from the HTTP streaming server 104 or the HTTP cache 106. In each case, the elements shown in FIG. 1 can be implemented, at least in part, in software comprising program code that is executed on a processor or other electronic device.

The content might comprise movies, audio, 2D planar video, 3D video, other types of video, images, timed text, timed metadata, and the like. Some content might involve data that is to be presented or consumed in a timed manner, such as data for presenting auxiliary information (station identification, advertising, stock quotes, Flash (registered trademark) sequences, etc.) together with other media being played out. Hybrid presentations that combine other media and/or go beyond merely audio and video might also be used.
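As a concrete illustration of the request/response exchange between the client 108 and the HTTP streaming server 104, the following minimal sketch issues an HTTP GET for one segment and reads the response body; the host name and segment path are hypothetical placeholders rather than values defined by the system:

    import http.client

    # Hypothetical server and segment URL, for illustration only.
    conn = http.client.HTTPConnection("streaming.example.com")
    conn.request("GET", "/presentation1/rep_500kbps/segment0001.3gs")  # request 112
    response = conn.getresponse()                                      # response 114
    segment_data = response.read()          # media data to be buffered and decoded
    print(response.status, len(segment_data), "bytes received")
    conn.close()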
As illustrated in FIG. 2, the media blocks may be stored within a block serving infrastructure 101(1), which could be, for example, an HTTP server, a content delivery network device, an HTTP proxy, an FTP proxy or server, or some other media server or system. The block serving infrastructure 101(1) is connected to a network 122, which could be, for example, an Internet Protocol ("IP") network such as the Internet. A block-request streaming system client is shown having six functional components, namely a block selector 123 that selects blocks or partial blocks to be requested from among the plurality of available blocks indicated by the metadata; a block requestor 124 that receives request instructions from the block selector 123, performs the operations necessary to send a request for the specified block, portion of a block, or multiple blocks to the block serving infrastructure 101(1) over the network 122, and receives the data comprising the block in return; a block buffer 125; a buffer monitor 126; a media decoder 127; and one or more media converters 128 that facilitate media consumption.

Block data received by the block requestor 124 is passed for temporary storage to the block buffer 125, which stores the media data. Alternatively, the received block data can be stored directly into the block buffer 125 as illustrated in FIG. 1. The media decoder 127 is provided with media data by the block buffer 125 and performs such transformations on this data as are necessary to provide suitable input to the media converters 128, which render the media in a form suitable for the user or other consumption. Examples of media converters include visual display devices such as those found in mobile phones, computer systems, or televisions, and can also include audio rendering devices such as speakers or headphones.

An example of a media decoder would be a function that transforms data in the format described in the H.264 video coding standard into analog or digital representations of video frames, such as a YUV-format pixel map with an associated presentation timestamp for each frame or sample.

The buffer monitor 126 receives information concerning the contents of the block buffer 125 and, based on this information and possibly other information, provides input to the block selector 123, which is used to determine the selection of blocks to request, as described herein.
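The division of labor among these components can be sketched as follows. This is a hypothetical skeleton intended only to show how the block selector 123, block requestor 124, block buffer 125, and buffer monitor 126 of FIG. 2 might interact; the class and method names are invented for illustration:

    from collections import deque

    class BlockRequestStreamer:
        """Hypothetical skeleton mirroring the client components of FIG. 2."""

        def __init__(self, fetch_block, available_blocks):
            self.fetch_block = fetch_block          # block requestor 124: block_id -> bytes
            self.available_blocks = list(available_blocks)  # blocks named by the metadata
            self.block_buffer = deque()             # block buffer 125

        def buffered_bytes(self):
            # Buffer monitor 126: summarizes the buffer state for the selector.
            return sum(len(b) for b in self.block_buffer)

        def select_next_block(self):
            # Block selector 123: a real selector would weigh buffer level,
            # bandwidth estimates, and media quality; here, first-come order.
            return self.available_blocks.pop(0) if self.available_blocks else None

        def step(self):
            block_id = self.select_next_block()
            if block_id is None:
                return False
            data = self.fetch_block(block_id)       # request sent over network 122
            self.block_buffer.append(data)          # later drained by media decoder 127
            return True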
Since including media in a block that depends on subsequent blocks is a rare case, it is assumed in the remainder of this disclosure that the media in one block does not depend on subsequent blocks, although those skilled in the art will recognize that this variant can easily be added to the embodiments described below.

The receiver may have controls such as "pause", "fast forward", "rewind", etc., which may result in the blocks being consumed by the playout at a different rate; nevertheless, if the receiver can obtain and decode each consecutive sequence of blocks within an aggregate time equal to or less than their aggregate playout time, excluding the last block in the sequence, then the receiver can present the media to the user without stalling. In some descriptions herein, a particular position in the media stream is referred to as a particular "time" in the media, corresponding to the time that would have elapsed between the beginning of the media playout and the time at which that particular position in the video stream is reached. Time or position within a media stream is a conventional concept. For example, where a video stream comprises 24 frames per second, the first frame could be said to have a position or time of t = 0.0 seconds and the 241st frame could be said to have a position or time of t = 10.0 seconds. Naturally, in a frame-based video stream, position or time need not be continuous, as each of the bits in the stream from the first bit of the 241st frame up to just before the first bit of the 242nd frame might all have the same time value.

Adopting the above terminology, a block-request streaming system (BRSS) comprises one or more clients that make requests to one or more content servers (for example, HTTP servers, FTP servers, etc.). An ingestion system comprises one or more ingestion processors, where an ingestion processor receives content (in real time or not), processes the content for use by the BRSS, and stores it in storage accessible to the content servers, possibly together with metadata generated by the ingestion processor.

The BRSS can also contain content caches that coordinate with the content servers. The content servers and content caches can be conventional HTTP servers and HTTP caches that receive requests for files or segments in the form of HTTP requests that include a URL, and that may also include a byte range in order to request less than the entirety of the file or segment indicated by the URL. The clients can include a conventional HTTP client that makes requests to the HTTP servers and handles the responses to those requests, with the HTTP client driven by a novel client system that formulates requests, passes them to the HTTP client, obtains the responses from the HTTP client, and processes them (or stores, transforms, etc.) in order to provide them to a presentation player for playout by the client device. Typically, the client system does not know in advance which media is going to be needed (as the needs depend on input by the user, changes in input by the user, etc.), so the system is said to be a "streaming" system in that the media is "consumed" as soon as it is received, or shortly thereafter. As a result, response delays and bandwidth constraints can cause delays in a presentation, for example causing a presentation to pause as the stream catches up to the point the user has reached in consuming the presentation.

In order to provide a presentation that is perceived to be of good quality, a number of details can be implemented in the BRSS, either at the client end, at the ingestion end, or both.
In some cases, the details that are implemented are done in consideration of, and in order to deal with, the client-server interface at the network. In some embodiments, both the client system and the ingestion system are aware of the enhancement, whereas in other embodiments only one side is aware of the enhancement. In some such cases, the entire system benefits from the enhancement even though one side is not aware of it, while in others the benefit only accrues if both sides are aware of it, but even when one side is not aware, the system still operates without failing.

As illustrated in FIG. 3, the ingestion system may be implemented as a combination of hardware and software components, according to various embodiments. The ingestion system may comprise a set of instructions that can be executed to cause the system to perform any one or more of the methodologies discussed herein. The system may be realized as a specific machine in the form of a computer. The system may be a server computer, a personal computer (PC), or any system capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that system. Further, while only a single system is illustrated, the term "system" shall also be taken to include any collection of systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The ingestion system may include an ingestion processor 302 (e.g., a central processing unit (CPU)), a memory 304 that may store program code during execution, and disk storage 306, all of which communicate with each other via a bus 300. The system may further include a video display unit 308 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)). The system may also include an alphanumeric input device 310 (e.g., a keyboard) and a network interface device 312 for receiving the content source and delivering the content store.

The disk storage unit 306 may include a machine-readable medium (e.g., a removable storage medium) on which may be stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions may also reside, completely or at least partially, within the memory 304 and/or within the ingestion processor 302 during execution thereof by the system, with the memory 304 and the ingestion processor 302 also constituting machine-readable media.

As illustrated in FIG. 4, a client system may be implemented as a combination of hardware and software components, according to various embodiments. The client system may comprise a set of instructions that can be executed to cause the system to perform any one or more of the methodologies discussed herein. The system may be realized as a specific machine in the form of a computer. The system may be a server computer, a personal computer (PC), or any system capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that system. Further, while only a single system is illustrated, the term "system" shall also be taken to include any collection of systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The client system may include a client processor 402 (e.g., a central processing unit (CPU)), a memory 404 that may store program code during execution, and disk storage 406, all of which communicate with each other via a bus 400.
The system may further include a video display unit 408 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)). The system may also include an alphanumeric input device 410 (e.g., a keyboard) and a network interface device 412 for sending requests and receiving responses.

The disk storage unit 406 may include a machine-readable medium (e.g., a removable storage medium) on which may be stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions may also reside, completely or at least partially, within the memory 404 and/or within the client processor 402 during execution thereof by the system, with the memory 404 and the client processor 402 also constituting machine-readable media.

Use of the 3GPP File Format
The 3GPP file format, or any other file format based on the ISO base media file format, such as the MP4 file format or the 3GPP2 file format, can be used as the container format for HTTP streaming, with the following features. A segment index can be included in each segment to signal time offsets and byte ranges, so that the client can download the appropriate pieces of files or media segments as needed. The global presentation timing of the entire media presentation and the local timing within each 3GP file or media segment can be accurately aligned. Tracks within one 3GP file or media segment can be accurately aligned. Tracks across representations can also be aligned by assigning each of them to the global timeline, so that switching across representations can be seamless and the joint presentation of media components in different representations can be synchronized.

The file format can include a profile for adaptive streaming with the following properties. All movie data can be contained in movie fragments, and the "moov" box may not contain any sample information. Audio and video sample data can be interleaved, with requirements similar to those of the progressive download profile as specified in TS 26.244. The "moov" box can be placed at the beginning of the file, followed by fragment offset data, also referred to as a segment index, containing offset information in time and byte ranges for each fragment, or at least a subset of the fragments, in the containing segment.

It is also possible for the Media Presentation Description to reference files that follow the existing progressive download profile. In this case, the client can use the Media Presentation Description simply to select the appropriate alternative version out of multiple available versions. Clients can also use HTTP partial GET requests with files compliant with the progressive download profile to request subsets of each alternative version and thereby implement a less efficient form of adaptive streaming. In this case, the different representations containing the media in the progressive download profile can still adhere to a common global timeline to enable seamless switching across representations.

Overview of Enhanced Methods
In the following sections, methods for an enhanced block-request streaming system are described. It should be understood that some of these enhancements can be used with or without others of these enhancements, depending on the needs of the application. In general operation, a receiver makes requests to a server or other transmitter for specific blocks or portions of blocks of data.
A file, also referred to herein as a segment, may contain multiple blocks and is associated with one representation of a media presentation.

Preferably, indexing information, also referred to as "segment indexing" or a "segment map", is generated that provides a mapping from the playout or decoding times to the byte offsets of the corresponding blocks or fragments within a segment. This segment indexing can be included within the segment, typically at the beginning of the segment (at least some of the segment map is at the beginning), and is often small. The segment index can also be provided in a separate index segment or file. Especially in the case where the segment index is contained in the segment, the receiver can quickly download some or all of this segment map and subsequently use it to determine the mapping between time offsets and the corresponding byte positions of the fragments associated with those time offsets within the file.

The receiver can use the byte offset to request data from the fragments associated with particular time offsets, without having to download all of the data associated with other fragments not associated with the time offsets of interest. In this way, the segment map or segment indexing can greatly improve the ability of the receiver to directly access the portions of the segment relevant to the current time offsets of interest, with resulting benefits including improved content zapping times, the ability to quickly switch from one representation to another as network conditions change, and reduced waste of network resources downloading media that is never played out at the receiver.

When switching from one representation (referred to herein as the "switch-from" representation) to another representation (referred to herein as the "switch-to" representation) is being considered, the segment index can also be used to identify the start time of a random access point in the switch-to representation and thereby the amount of data to be requested in the switch-from representation, so that the switch can be performed seamlessly, in the sense that the media in the switch-from representation is downloaded up to a presentation time such that the playout of the switch-to representation can start seamlessly from the random access point.

These blocks represent segments of the video media or other media that the requesting receiver needs in order to generate the output for the user of the receiver. The receiver of the media can be a client device, for example where the receiver receives content from a server that transmits the content. Examples include set-top boxes, computers, game consoles, specially equipped televisions, handheld devices, specially equipped mobile phones, or other client receivers.
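The use of such a segment map can be illustrated with a short sketch. Assuming a hypothetical in-memory representation of the index as a list of (start time, byte offset) entries per fragment (the structure and values are invented for illustration), a client could resolve a time offset of interest into a byte range for a partial request as follows:

    from bisect import bisect_right

    # Hypothetical segment map: (start_time_seconds, byte_offset) per fragment,
    # with a final sentinel entry marking the end of the last fragment.
    segment_map = [(0.0, 0), (0.485, 50245), (1.123, 98310), (1.623, 157034)]

    def byte_range_for_time(seg_map, t):
        """Return the (first_byte, last_byte) range of the fragment containing time t."""
        starts = [entry[0] for entry in seg_map]
        i = bisect_right(starts, t) - 1          # last fragment starting at or before t
        if i < 0 or i >= len(seg_map) - 1:
            raise ValueError("time outside the indexed range")
        return seg_map[i][1], seg_map[i + 1][1] - 1

    # A request for media at t = 0.6 s maps to bytes 50245-98309 of the segment.
    print(byte_range_for_time(segment_map, 0.6))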
Numerous advanced buffer management methods are described herein. For example, a buffer management method enables clients to request the blocks of the highest media quality that can be received in time to be played out with continuity. A variable block size feature improves compression efficiency. The ability to have multiple connections for transmitting blocks to the requesting device, while limiting the frequency of the requests, provides improved transmission performance. Partially received blocks of data can be used to continue the media presentation. A connection can be reused for multiple blocks without having to commit the connection at the outset to a particular set of blocks. Consistency in the selection of servers from among multiple possible servers by multiple clients is improved, which reduces the frequency of duplicated content on nearby servers and improves the probability that a server contains an entire file. Clients can request media blocks based on metadata (such as the available media encodings) that is embedded in the URLs of the files containing the media blocks. A system can provide for the calculation and minimization of the amount of buffering time required before playout of the content can begin without incurring subsequent pauses in the media playout. Available bandwidth can be shared among multiple media blocks and adjusted as the playout time of each block approaches, so that, if necessary, a greater share of the available bandwidth can be allocated to the block with the nearest playout time.

HTTP streaming can employ metadata. Presentation-level metadata includes, for example, the stream duration, the available encodings (bit rates, codecs, spatial resolutions, frame rates, languages, media types), pointers to the stream metadata for each encoding, and content protection (digital rights management (DRM)) information. The stream metadata can be the URLs of the segment files.

Segment metadata can include byte-range versus time information for requests within a segment, and the identification of random access points (RAPs) or other seek points; some or all of this information can be part of the segment indexing or segment map.

A stream can comprise multiple encodings of the same content. Each encoding can then be divided into segments, where each segment corresponds to a storage unit or file. In the case of HTTP, a segment is typically a resource that can be referenced by a URL, and a request for such a URL results in the segment being returned as the entity body of the request response message. A segment can comprise multiple groups of pictures (GoPs). Each GoP can further comprise multiple fragments, where the segment indexing provides time/byte-offset information for each fragment, i.e., the unit of indexing is a fragment.

Fragments, or portions of fragments, can be requested over parallel TCP connections in order to increase throughput. This can mitigate the problems that arise when connections are lost due to congestion on shared or bottleneck links, thereby increasing the overall speed and reliability of the delivery, which can substantially improve the speed and reliability of the content zapping time. Bandwidth can be secured at the expense of latency by over-requesting, although care should be taken to avoid making requests too far into the future, which could increase the risk of starvation.

Multiple requests for segments on the same server can be pipelined (making the next request before the current request has completed) to avoid repeated TCP start-up delays. Requests for consecutive segments can be aggregated into a single request.

Some CDNs prefer large files and may trigger a background fetch of an entire file from the origin server upon first seeing a range request. Most CDNs will, however, serve range requests from cache if the data is available. It can therefore be advantageous for some portion of the client requests to be for whole segment files. These requests can later be canceled if necessary.
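For example, the byte-range requests discussed above can be issued with the standard HTTP Range header. The following sketch requests a single fragment's byte range from a segment file; the URL is a hypothetical placeholder, and the byte range would in practice come from the segment index:

    import http.client

    conn = http.client.HTTPConnection("streaming.example.com")
    conn.request("GET", "/rep_500kbps/segment0001.3gs",
                 headers={"Range": "bytes=50245-98309"})
    response = conn.getresponse()
    assert response.status == 206      # 206 Partial Content
    fragment = response.read()         # only the requested fragment's bytes
    conn.close()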
A valid switch point can be, for example, a seek point, in particular a RAP, in the switch-to stream. Different implementations are possible, for example a fixed GoP structure or the alignment of RAPs across streams (based on the beginning of the media or based on the GoPs).

In one embodiment, segments and GoPs can be aligned across streams of different rates. In this embodiment, GoPs can be of variable size and can contain multiple fragments, but fragments are not aligned between the streams of different rates.

In some embodiments, file redundancy can be employed to advantage. In these embodiments, an erasure code is applied to each fragment to generate redundant versions of the data. Preferably, the source formatting is not changed by the use of FEC; for example, additional repair segments containing the FEC repair data are generated, as a dependent representation of the original representation, in an additional step in the ingestion system. The client, which is able to reconstruct a fragment using only the source data for that fragment, may request from the servers only the source data for the fragments within the segment. If the servers are unavailable, or if the connection to the servers is slow, which can be determined either before or after the request for the source data, additional repair data can be requested for the fragment from the repair segment; this decreases the time needed to reliably deliver sufficient data to recover the fragment, possibly using FEC decoding to recover the source data of the fragment from a combination of the received source and repair data. Furthermore, additional repair data can be requested to allow recovery of a fragment that has become urgent, i.e., whose playout time is imminent; this increases the share of the data on that link devoted to that fragment, but is more efficient than closing other connections on the link to free up bandwidth. This can also mitigate the risk of starvation arising from the use of parallel connections.

The fragment format could be a stored stream of real-time transport protocol (RTP) packets, with audio/video synchronization achieved through the real-time transport control protocol (RTCP).

The segment format could also be a stored stream of MPEG-2 TS packets, with audio/video synchronization achieved by the internal timing of the MPEG-2 TS.

Use of Signaling and/or Block Creation to Make Streaming More Efficient
Several features can be used, or left unused, in a block-request streaming system to provide enhanced performance. Performance can relate to the ability to play out a presentation without stalling, to obtain the media data within bandwidth constraints, and/or to do so within limited processor resources at the client, server, and/or ingestion system. Some of these features will now be described.

Indexing Within Segments
In order to formulate partial GET requests for movie fragments, the client may be informed of the byte offsets and the decode or presentation start times of all media components contained in the fragments within the file or segment, and also of which fragments begin with or contain a RAP (and are therefore suitable for use as switch points between alternative representations); this information is often referred to as the segment indexing or segment map. The decode or presentation start time can be expressed directly or can be expressed as a delta relative to a reference time.

This time and byte-offset indexing information can require at least 8 bytes of data per movie fragment. As an example, for a two-hour movie contained within a single file with 500 ms movie fragments, this would total about 112 kilobytes of data.
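The arithmetic behind this estimate can be verified directly; the sketch below simply multiplies the fragment count of such a movie by the 8 bytes per fragment assumed above:

    duration_s = 2 * 60 * 60          # two-hour movie
    fragment_s = 0.5                  # 500 ms movie fragments
    bytes_per_fragment = 8            # minimal time + byte-offset entry

    fragments = duration_s / fragment_s              # 14,400 fragments
    index_bytes = fragments * bytes_per_fragment     # 115,200 bytes
    print(fragments, index_bytes / 1024)             # 14400.0, ~112.5 kilobytes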
Downloading all of this data at the start of the presentation could add significantly to the start-up delay. However, the time and byte-offset data can be encoded hierarchically, so that the client can quickly find a small chunk of time and offset data relevant to the point in the presentation at which it wishes to start. The information can also be distributed within a segment, such that some refinement of the segment index can be arranged in interleaved form with the media data.

Note that if the representation is segmented in time into multiple segments, the use of this hierarchical encoding might not be necessary, as the complete time and offset data for each segment may already be quite small. For example, if the segments in the example above are one minute long instead of two hours, the time/byte-offset indexing information is around 1 kilobyte of data, which can typically fit within a single TCP/IP packet.

Different options are possible for adding fragment time and byte-offset data to a 3GPP file.

First, the Movie Fragment Random Access Box ("MFRA") could be used for this purpose. The MFRA provides a table that may assist readers in finding random access points in a file using movie fragments. In support of this function, the MFRA incidentally contains the byte offsets of the MFRA boxes containing the random access points. The MFRA may be placed at or near the end of the file, but this is not necessarily the case. The beginning of the Movie Fragment Random Access Box can be located by scanning from the end of the file for the Movie Fragment Random Access Offset Box and using the size information contained in it. Placing the MFRA at the end for HTTP streaming, however, typically requires at least three to four HTTP requests to access the desired data: at least one to request the MFRA from the end of the file, one to obtain the MFRA, and finally one to obtain the desired fragment in the file. Therefore, placement at the beginning may be desirable, as the MFRA can then be downloaded together with the first media data in a single request. Also, using the MFRA for HTTP streaming may be inefficient, since none of the information in the "MFRA" is needed apart from the time and moof_offset, and specifying offsets instead of lengths may require more bits.

Second, the Item Location Box ("ILOC") could be used. The "ILOC" provides a directory of metadata resources in this or other files, by locating the file containing a given metadata resource, its offset within that file, and its length. For example, a system might integrate all externally referenced metadata resources into one file, readjusting file offsets and file references accordingly. However, the "ILOC" is intended for giving the location of metadata, so it may be difficult for it to coexist with real metadata.

Last, and perhaps most suitable, is the specification of a new box, referred to as the Time Index Box ("TIDX"), dedicated to providing exact fragment times or durations and byte offsets in an efficient manner. This is described in more detail in the next section. An alternative box with the same functionality may be the Segment Index Box ("SIDX"). Here, unless otherwise indicated, the two are interchangeable, as both boxes provide the ability to provide exact fragment times or durations and byte offsets in an efficient manner. The differences between the TIDX and the SIDX are provided below.
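Whichever box carries the index, it is located by walking the top-level box structure of the file. In the ISO base media file format, each box begins with a 32-bit size and a 32-bit four-character type, which permits a simple scan such as the following sketch (a simplified reader that assumes 32-bit box sizes and ignores the 'largesize' and size-zero cases):

    import struct

    def scan_top_level_boxes(data):
        """Yield (box_type, offset, size) for each top-level box in a buffer."""
        offset = 0
        while offset + 8 <= len(data):
            size, box_type = struct.unpack_from(">I4s", data, offset)
            yield box_type.decode("ascii"), offset, size
            offset += size

    # A client could download the first few kilobytes of a segment and look
    # for the index box among the leading boxes:
    # for btype, off, size in scan_top_level_boxes(head_bytes):
    #     if btype in ("tidx", "sidx"):
    #         ...parse the index...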
Since both the TIDX box and the SIDX box implement a segment index, the two boxes should be understood to be interchangeable wherever a segment index is called for.

Segment Indexing
A segment has an identified start time and an identified number of bytes. Multiple fragments can be concatenated into a single segment, and clients can issue requests that identify the specific byte ranges within the segment corresponding to the required fragment or subset of fragments. For example, when HTTP is used as the request protocol, the HTTP Range header can be used for this purpose. This approach requires that the client has access to a "segment index" of the segment that specifies the positions within the segment of the different fragments. This "segment index" can be provided as part of the metadata. This approach has the result that far fewer files need to be created and managed, compared with the approach in which every block is kept in a separate file. The management of the creation, transfer, and storage of very large numbers of files (which could extend to many thousands for a presentation of, say, one hour) can be complex and error-prone, so a reduction in the number of files represents an advantage.

If the client only knows the desired start time of a smaller portion of a segment, it could request the whole file and then read through it to determine the appropriate playout start location. To improve bandwidth utilization, segments can include an index file as metadata, where the index file maps the byte ranges of the individual blocks to the time ranges that the blocks correspond to; this is referred to as segment indexing or a segment map. This metadata can be formatted as XML data, or it can be binary, for example following the atom and box structure of the 3GPP file format. The indexing can be simple, where the time and byte ranges of each block are absolute relative to the start of the file, or it can be hierarchical, where some blocks are grouped into parent blocks (and those into grandparent blocks, and so on) and the time and byte ranges for a given block are expressed relative to the time and/or byte range of the block's parent block.

Example Indexing Map Structure
In one embodiment, the original source data for one representation of a media stream can be contained in one or more media files, referred to herein as "media segments", where each media segment contains the media data for a continuous time segment of the media, e.g., 5 minutes of media playback.

FIG. 6 shows an example of the overall structure of a media segment. Within each segment, either at the beginning or distributed throughout the source segment, there can also be indexing information comprising a time/byte-offset segment map. The time/byte-offset segment map in one embodiment can be a list of time/byte-offset pairs (T(0), B(0)), (T(1), B(1)), ..., (T(i-1), B(i-1)), (T(i), B(i)), ..., (T(n), B(n)),
where T(i-1) represents the start time within the segment for playback of the i-th fragment of media, relative to the initial start time of the media among all media segments, T(i) represents the end time of the i-th fragment (and thus the start time of the next fragment), the byte offset B(i-1) is the corresponding byte index of the beginning of the data within this source segment where the i-th fragment of media starts, relative to the beginning of the source segment, and B(i) is the corresponding end byte index of the i-th fragment (and thus the index of the first byte of the next fragment). T(i) and B(i) may be provided for each component in the segment in absolute terms, or they may be expressed relative to another media component that serves as a reference media component.

In this embodiment, the number of fragments within the source segment is n, where n can vary from segment to segment.

In another embodiment, the time offset in the segment index for each fragment can be determined from the absolute start time of the first fragment and the durations of the fragments. In this case, the segment index can document the start time of the first fragment and the durations of all of the fragments included in the segment. The segment index may also document only a subset of the fragments. In that case, the segment index documents the duration of a subsegment, defined as one or more consecutive fragments ending either at the end of the containing segment or at the beginning of the next subsegment.

For each fragment, there can also be a value indicating whether or not the fragment starts at, or contains, a seek point, i.e., a point at which no media after that point depends on any media before that point, so that the media from that fragment onward can be played out independently of the preceding fragments. Seek points are, in general, points in the media from which playout can begin independently of all previous media. FIG. 6 also shows a simple example of a possible segment indexing for a source segment. In that example, the time offset values are in units of milliseconds, so the first fragment of this source segment begins 20 seconds from the beginning of the media, and the first fragment has a playout time of 485 milliseconds. The byte offset of the start of the first fragment is 0, and the byte offset of the end of the first fragment/start of the second fragment is 50,245, so the size of the first fragment is 50,245 bytes. If a fragment or subsegment does not start with a random access point, but a random access point is contained within the fragment or subsegment, then the decoding time or presentation time difference between the start time and the actual RAP time can be given. This makes it possible, when switching to this media segment, for the client to know exactly the time until which the switch-from representation has to be presented.

In addition to, or instead of, simple or hierarchical indexing, daisy-chained indexing and/or hybrid indexing could be used.

Since the sample durations of the different tracks might not be the same (for example, video samples might be displayed for 33 ms while audio samples might last 80 ms), the different tracks in a movie fragment may not begin and end at exactly the same time; that is, the audio may begin slightly before or slightly after the video, with the opposite being true of the preceding fragment, as compensation.
To avoid ambiguity, the timestamps specified in the time and byte-offset data can be specified relative to a particular track, and this can be the same track for each representation. Usually this will be the video track. This allows the client to identify exactly the next video frame when it is switching representations.

Care can be taken during presentation to maintain a strict relationship between the track timescales and the presentation time, to ensure smooth playout and the maintenance of audio/video synchronization despite the above issue.

FIG. 7 illustrates some examples, such as a simple index 700 and a hierarchical index 702.

Two specific examples of a box that contains a segment map are provided below, one referred to as the Time Index Box ('TIDX') and one referred to as the Segment Index Box ('SIDX'). The definitions follow the box structure according to the ISO base media file format. Other designs for such boxes, defining similar syntax with the same semantics and functionality, should be apparent to the reader.

Time Index Box
Definition
Box Type: 'tidx'
Container: File
Mandatory: No
Quantity: Any number

A Time Index Box can be used to provide a set of time and byte-offset index pairs that associate certain regions of the file with certain time intervals of the presentation. The Time Index Box can include a targettype field that indicates the type of the referenced data. For example, a Time Index Box with targettype "moof" provides an index to the media fragments contained in the file, in terms of both time and byte offsets. A Time Index Box with a targettype of Time Index Box can be used to construct a hierarchical time index, allowing users of the file to quickly navigate to the required portion of the index.

The segment index may, for example, contain the following syntax:

    aligned(8) class TimeIndexBox extends FullBox('frai') {
        unsigned int(32) targettype;
        unsigned int(32) time_reference_track_ID;
        unsigned int(32) number_of_elements;
        unsigned int(64) first_element_offset;
        unsigned int(64) first_element_time;
        for (i = 1; i <= number_of_elements; i++) {
            bit(1) random_access_flag;
            unsigned int(31) length;
            unsigned int(32) deltaT;
        }
    }

Semantics
targettype: the type of the box data referenced by this Time Index Box. This can be either a Movie Fragment Header ("moof") or a Time Index Box ("tidx").
time_reference_track_ID: indicates the track with respect to which the time offsets in this index are specified.
number_of_elements: the number of elements indexed by this Time Index Box.
first_element_offset: the byte offset from the start of the file of the first indexed element.
first_element_time: the start time of the first indexed element, using the timescale specified in the Media Header box of the track identified by time_reference_track_ID.
random_access_flag: one if the start time of the element is a random access point, and zero otherwise.
length: the length of the indexed element, in bytes.
deltaT: the difference, in terms of the timescale specified in the Media Header box of the track identified by time_reference_track_ID, between the start time of this element and the start time of the next element.
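The packing of the one-bit random_access_flag together with the 31-bit length into a single 32-bit word can be unpacked as in the following sketch, which reads the per-element loop of a TIDX-style index from a byte buffer (a simplified reader written against the syntax above, not a complete box parser):

    import struct

    def parse_tidx_elements(payload, number_of_elements):
        """Parse the per-element loop of the Time Index Box syntax above.

        payload: bytes positioned at the first element entry.
        Returns a list of (random_access_flag, length, deltaT) tuples.
        """
        elements = []
        offset = 0
        for _ in range(number_of_elements):
            word, deltaT = struct.unpack_from(">II", payload, offset)
            random_access_flag = word >> 31        # top bit: bit(1)
            length = word & 0x7FFFFFFF             # low 31 bits: unsigned int(31)
            elements.append((random_access_flag, length, deltaT))
            offset += 8                            # 4 bytes flag+length, 4 bytes deltaT
        return elements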
Segment Index Box
The Segment Index Box ('sidx') provides a compact index of the movie fragments and other Segment Index Boxes within a segment. There are two loop structures in the Segment Index Box. The first loop documents the first sample of the subsegment, that is, the sample in the first movie fragment referenced by the second loop. The second loop provides the index of the subsegment. The container for the 'sidx' box is the file or the segment directly.

Syntax

    aligned(8) class SegmentIndexBox extends FullBox('sidx', version, 0) {
        unsigned int(32) reference_track_ID;
        unsigned int(16) track_count;
        unsigned int(16) reference_count;
        for (i = 1; i <= track_count; i++) {
            unsigned int(32) track_ID;
            if (version == 0) {
                unsigned int(32) decoding_time;
            } else {
                unsigned int(64) decoding_time;
            }
        }
        for (i = 1; i <= reference_count; i++) {
            bit(1) reference_type;
            unsigned int(31) reference_offset;
            unsigned int(32) subsegment_duration;
            bit(1) contains_RAP;
            unsigned int(31) RAP_delta_time;
        }
    }

Semantics
reference_track_ID: provides the track_ID of the reference track.
track_count: the number of tracks indexed in the following loop (one or more).
reference_count: the number of elements indexed by the second loop (one or more).
track_ID: the ID of a track for which a track fragment is included in the first movie fragment identified by this index; exactly one track_ID in this loop is equal to the reference_track_ID.
decoding_time: the decoding time of the first sample in the track identified by track_ID in the movie fragment referenced by the first item in the second loop, expressed in the timescale of the track (as documented in the timescale field of the Media Header box of the track).
reference_type: when set to 0, indicates that the reference is to a movie fragment ('moof') box; when set to 1, indicates that the reference is to a Segment Index ('sidx') box.
reference_offset: the distance, in bytes, from the first byte following the containing Segment Index Box to the first byte of the referenced box.
subsegment_duration: when the reference is to a Segment Index Box, this field carries the sum of the subsegment_duration fields in the second loop of that box; when the reference is to a movie fragment, this field carries the sum of the sample durations of the samples in the reference track within the indicated movie fragment and the subsequent movie fragments, up to either the first movie fragment documented by the next entry in the loop or the end of the subsegment, whichever is earlier. The duration is expressed in the timescale of the track (as documented in the timescale field of the Media Header box of the track).
contains_RAP: when the reference is to a movie fragment, this bit is 1 if the track fragment within that movie fragment for the track with track_ID equal to reference_track_ID contains at least one random access point; otherwise this bit is set to 0. When the reference is to a segment index, this bit is set to 1 only if any of the references in that segment index have this bit set to 1, and 0 otherwise.
RAP_delta_time: if contains_RAP is 1, provides the presentation (composition) time of a random access point (RAP); reserved with the value 0 if contains_RAP is 0. The time is expressed as the difference between the decoding time of the first sample of the subsegment documented by this entry and the presentation (composition) time of the random access point, in the track with track_ID equal to reference_track_ID.

Differences Between TIDX and SIDX
The SIDX and the TIDX provide the same functionality with respect to indexing. The first loop of the SIDX additionally provides global timing for the first movie fragment, but the global timing can equally be contained in the movie fragment itself, either in absolute terms or relative to the reference track.

The second loop of the SIDX implements the functionality of the TIDX. Specifically, the SIDX permits a mixture of reference targets for each index, indicated by the reference_type, whereas the TIDX references either only TIDX or only MOOF.
The number_of_elements in the TIDX corresponds to the reference_count in the SIDX; the time_reference_track_ID in the TIDX corresponds to the reference_track_ID in the SIDX; the first_element_offset in the TIDX corresponds to the reference_offset in the first entry of the second loop; the first_element_time in the TIDX corresponds to the decoding_time of the reference track in the first loop; and the random_access_flag in the TIDX corresponds to the contains_RAP in the SIDX, with the additional freedom in the SIDX that the RAP need not necessarily be placed at the start of the fragment, which therefore requires the RAP_delta_time. The length in the TIDX corresponds to the reference_offset in the SIDX, and finally, the deltaT in the TIDX corresponds to the subsegment_duration in the SIDX. Hence, the functionality of the two boxes is equivalent.

Variable Block Sizing and Sub-GoP Blocks
For video media, the relationship between the video encoding structure and the block structure used for requests can be important. For example, if each block begins with a seek point, such as a random access point ("RAP"), and each block represents an equal period of video time, then the placement of at least some seek points in the video media is fixed and seek points occur at regular intervals within the video encoding. As is well known to those of skill in the art of video encoding, compression efficiency can be improved if the seek points are placed according to the relationships between video frames, in particular if they are placed at frames that have little in common with the preceding frames. This requirement that blocks represent equal amounts of time thus places a restriction on the video encoding, such that the compression may be sub-optimal.

It is desirable to allow the positions of the seek points within a video presentation to be chosen by the video encoding system, rather than requiring seek points at fixed positions. Allowing the video encoding system to choose the seek points results in improved video compression, and thus a higher quality of video media can be provided using a given available bandwidth, resulting in an improved user experience. Current block-request streaming systems can require that all blocks be of the same duration (in video time) and that each block begin with a seek point, and this is a disadvantage of existing systems.

A novel block-request streaming system that provides advantages over the above is now described. In one embodiment, the video encoding process for the first version of the video component can be configured to choose the placement of seek points so as to optimize compression efficiency, subject to the requirement that there be a maximum on the duration between seek points. This latter requirement does restrict the choice of seek points by the encoding process and therefore reduces the compression efficiency. However, provided that the maximum on the duration between seek points is not too small (for example, greater than about one second), the reduction in compression efficiency is small compared to that incurred if regular fixed positions are required for the seek points.
Furthermore, if the maximum on the duration between seek points is a few seconds, then the reduction in compression efficiency compared to completely free placement of seek points is generally very small.

In many embodiments, including this one, some RAPs may not be seek points; for example, there may be a frame that is a RAP between two consecutive seek points that is not chosen as a seek point, because the RAP is too close in time to the surrounding seek points, or because the amount of media data between the seek point preceding or following the RAP and the RAP is too small.

The positions of the seek points within all other versions of the media presentation can be constrained to be the same as those in the first (for example, the highest media data rate) version. This does reduce the compression efficiency of these other versions, compared to allowing the encoder free choice of seek points.

The use of seek points typically requires a frame to be independently decodable, which generally results in low compression efficiency for that frame. Frames that are not required to be independently decodable can be encoded with reference to data in other frames, which generally improves the compression efficiency for that frame by an amount that depends on the amount of commonality between the frame to be encoded and the reference frames. Efficient choice of seek point placement preferentially chooses, as seek point frames, frames that have low commonality with the preceding frames, thereby minimizing the penalty in compression efficiency incurred by encoding the frame in an independently decodable form.

However, the levels of commonality between a frame and the potential reference frames are highly correlated across the different representations of the content, because the original content is the same. As a result, constraining the seek points in the other variants to be at the same positions as the seek points in the first variant does not make a large difference in compression efficiency.

The seek point structure is preferably used to determine the block structure. Preferably, each seek point determines the start of a block, and there may be one or more blocks encompassing the data between two consecutive seek points. Since the duration between seek points is not fixed for encoding with good compression, not all blocks are required to have the same playout duration. In some implementations, blocks are aligned between the versions of the content; that is, if there is a block spanning a particular group of frames in one version of the content, then there is a block spanning the same group of frames in another version of the content. The blocks of a given version of the content do not overlap, and every frame of the content is contained within exactly one block of each version.

An enabling feature that allows the efficient use of variable durations between seek points, and thus variable-duration GoPs, is the segment indexing or segment map that can be included in a segment or otherwise provided to the client, i.e., metadata associated with the segment in this representation, comprising the start time and duration of each block of the presentation. The client can use this segment indexing data to determine the block at which to start the presentation when the user has requested that the presentation start at a particular point within a segment.
If such metadata is not provided, the presentation can begin only at the start of the content, or at a random or approximate point close to the desired point (for example, by dividing the requested start point (in time) by the average block duration to give the index of the starting block).

In one embodiment, each block can be provided as a separate file. In another embodiment, multiple consecutive blocks can be aggregated into a single file to form a segment. In this second embodiment, metadata for each version can be provided comprising the start time and duration of each block and the byte offset within the file at which the block begins. This metadata can be provided in response to an initial protocol request, i.e., it can be available separately from the segment or file, or it can be contained within the same file or segment as the blocks themselves, for example at the beginning of the file. As will be clear to those of skill in the art, this metadata can be encoded in a compressed form, such as gzip or delta encoding, or in binary form, in order to reduce the network resources required to transport the metadata to the client.

FIG. 6 shows an example of segment indexing in which the blocks are of variable size and in which the scope of a block is a partial GoP, i.e., a partial amount of the media data between one RAP and the next RAP. In this example, the seek points are indicated by a RAP indicator, where a RAP indicator value of 1 indicates that the block starts with, or contains, a RAP or seek point, and a RAP indicator value of 0 indicates that the block contains no RAP or seek point. In this example, the first three blocks, i.e., bytes 0 through 157,033, comprise the first GoP, which has a presentation duration of 1.623 seconds, with presentation times running from 20 seconds into the content through 21.623 seconds. In this example, the first of the three blocks comprises 0.485 seconds of presentation time and comprises the first 50,245 bytes of the media data in the segment. In this example, blocks 4, 5, and 6 comprise the second GoP, blocks 7 and 8 comprise the third GoP, and blocks 9, 10, and 11 comprise the fourth GoP. Note that there may be other RAPs in the media data that are not designated as seek points and are therefore not signaled as RAPs in the segment map.

Referring again to FIG. 6, if the client or receiver wants to access the content starting at a time offset of approximately 22 seconds into the media presentation, the client could first use other information, such as the MPD described in more detail later, to determine that the relevant media data is within this segment. The client can download the first portion of the segment, for example using an HTTP byte-range request, to obtain the segment indexing, which in this case is just a few bytes. Using the segment indexing, the client can determine that the first block it should download is the first block whose time offset is at most 22 seconds and that starts with a RAP, i.e., a seek point. In this example, although block 5 has a time offset that is less than 22 seconds, i.e., its time offset is 21.965 seconds, the segment indexing indicates that block 5 does not begin with a RAP; so, based on the segment indexing, the client instead selects block 4 for download, since its start time is at most 22 seconds, i.e., its time offset is 21.623 seconds, and it begins with a RAP. This selection logic is illustrated in the sketch below.
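The following sketch reproduces that selection rule over a hypothetical in-memory form of the FIG. 6 segment index; only the entries for blocks 4 and 5 carry the exact values quoted above, and the remaining entries are illustrative placeholders:

    # (start_time_s, byte_offset, rap) per block; block 4 and block 5 use the
    # values from FIG. 6, while the other entries are illustrative only.
    segment_index = [
        (20.000, 0,      1),   # block 1
        (20.485, 50245,  0),   # block 2
        (21.123, 98310,  0),   # block 3 (first GoP ends at byte 157,033)
        (21.623, 157034, 1),   # block 4: starts with a RAP
        (21.965, 199112, 0),   # block 5: no RAP, playout cannot start here
    ]

    def first_block_for(index, target_time):
        """Latest block starting at or before target_time that begins with a RAP."""
        candidates = [b for b in index if b[0] <= target_time and b[2] == 1]
        if not candidates:
            raise ValueError("no seek point at or before the requested time")
        return max(candidates, key=lambda b: b[0])

    start_time, byte_offset, _ = first_block_for(segment_index, 22.0)
    print(start_time, byte_offset)   # 21.623 157034 -> the HTTP range starts here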
Thus, based on the segment indexing, the client makes an HTTP range request beginning at byte offset 157,034.

If segment indexing were not available, the client might have to download all of the preceding 157,034 bytes of data before downloading this data, leading to a much longer start-up time, or channel zapping time, and to the wasteful downloading of data that is not useful. Alternatively, if segment indexing were not available, the client might approximate where the desired data begins within the segment, but the approximation might be poor, in which case it may miss the appropriate time, requiring it to go backwards, which again increases the start-up delay.

Generally, each block encompasses a portion of the media data that, together with the preceding blocks, can be played out by a media player. Thus, the blocking structure, and the signaling of the segment-indexing blocking structure to the client, whether contained within the segment or provided to the client through other means, can significantly improve the ability of the client to provide fast channel zapping and seamless playout in the face of network variation and disruption. The support for variable-duration blocks, and for blocks that encompass only portions of a GoP, as enabled by segment indexing, can significantly improve the streaming experience. For example, referring again to FIG. 6 and the example above in which the client wants to begin playout at approximately 22 seconds into the presentation, the client can request the data within block 4, through one or more requests, and then feed it to the media player as soon as it is available, to begin playback. Thus, in this example, the playout begins as soon as the 42,011 bytes of block 4 are received at the client, thereby enabling a fast channel zapping time. If, instead, the client needed to request the entire GoP before playout could begin, the channel zapping time would be longer, as this would be 144,211 bytes of data.

In other embodiments, RAPs or seek points may also occur in the middle of a block, and there may be data in the segment indexing that indicates where within the block or fragment that RAP or seek point is. In other embodiments, the time offset may be the decode time of the first frame within the block, instead of the presentation time of the first frame within the block.

FIGS. 8(a) and 8(b) show an example of variable block sizing with seek point structures across multiple versions or representations; FIG. 8(a) shows variable block sizing with aligned seek points across a plurality of versions of a media stream, while FIG. 8(b) shows variable block sizing with non-aligned seek points across a plurality of versions of a media stream.

Time is shown across the top in seconds, and the blocks and seek points of the two segments for the two representations are shown from left to right in terms of their timing with respect to this timeline; thus, the length of each block shown is proportional to its playout time and not to the number of bytes in the block. In this example, the segment indexing for both segments of the two representations would have the same time offsets for the seek points, but potentially differing numbers of blocks or fragments between the seek points, and different byte offsets for the blocks due to the different amounts of media data in each block.
In this example, if the client wants to switch from representation 1 to representation 2 at a presentation time of approximately 23 seconds, then the client could request blocks up through block 1.2 in the segment for representation 1 and start requesting the segment for representation 2 from block 2.2 onward, so that the switch occurs at the presentation time coinciding with seek point 1.2 in representation 1, which is at the same point in time as seek point 2.2 in representation 2.

As should be clear from the above, the described block-request streaming system does not constrain the video encoding to place seek points at specific positions within the content, which mitigates one of the problems of existing systems.

In the embodiments described above, it is arranged that the seek points for the various representations of the same content presentation are aligned. However, in many cases it is preferable to relax this alignment requirement. For example, encoding tools that do not have the capability to generate seek-point-aligned representations are sometimes used to generate the representations. As another example, the content presentation may be encoded into the different representations independently, with no seek point alignment between the different representations. As another example, a representation may contain more seek points because it has a lower rate and needs to be switched to more commonly, or because it carries seek points to support trick modes such as fast forward, rewind, or fast seeking. Thus, it is desirable to provide methods that enable a block-request streaming system to efficiently and seamlessly cope with non-aligned seek points across the various representations of a content presentation.

In this embodiment, the positions of the seek points across the representations may not be aligned. Blocks are constructed such that a new block starts at each seek point, and therefore there may not be alignment between the blocks of the different versions of the presentation. An example of such a non-aligned seek point structure between different representations is shown in FIG. 8(b). Time is shown across the top in seconds, and the blocks and seek points of the two segments for the two representations are shown from left to right in terms of their timing with respect to this timeline; thus, the length of each block shown is proportional to its playout time and not to the number of bytes in the block. In this example, the segment indexing for both segments of the two representations would have potentially different time offsets for the seek points, as well as potentially differing numbers of blocks or fragments between the seek points, and different byte offsets for the blocks due to the different amounts of media data in each block.
In this example, if the client wishes to switch from representation 1 to representation 2 at a presentation time of about 25 seconds, the client can request blocks up to block 1.3 within the segment for representation 1 and then request the segment for representation 2 starting from block 2.3, so that the switch occurs at the presentation time coinciding with seek point 2.3 in representation 2. That seek point falls during the playout of block 1.3, and therefore a portion of the media of block 1.3 is not played out (although the media data for the frames of block 1.3 that are not played out may still have to be loaded into the receiver buffer in order to decode the other frames of block 1.3 that are played out).

In this embodiment, the operation of block selector 123 can be modified such that, whenever it is necessary to select a block from a representation different from the previously selected version, the latest block whose first frame is not later than the frame following the last frame of the last selected block is chosen.

This last-described embodiment can eliminate the requirement of constraining the positions of the seek points within versions other than the first version, thereby improving the compression efficiency for those versions, with the result that a higher-quality presentation is obtained for a given available bandwidth, which improves the user experience. A further consideration is that video encoding tools that perform the seek point alignment function across multiple encodings (versions) of the content may not be widely available; accordingly, an advantage of this last-described embodiment is that currently available video encoding tools can be used. Another advantage is that the encoding of the different versions of the content can proceed in parallel, with no need for coordination between the encoding processes for the different versions. Another advantage is that additional versions of the content can be encoded at a later time and added to the presentation, without having to provide the encoding tools with lists of specific seek point positions.

In general, where pictures are encoded as groups of pictures (GoPs), the first picture of the sequence can be a seek point, but that need not always be the case.
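The block selector rule just described can be sketched as follows. This is a minimal illustration, assuming a hypothetical Block record that carries the presentation time of the first frame of each block; the example values mirror the switch at about 25 seconds in FIG. 8(b).

from collections import namedtuple

Block = namedtuple("Block", ["index", "first_frame_time"])  # hypothetical record

def first_block_after_switch(new_rep_blocks, last_selected_end_time):
    # Choose the latest block of the newly selected representation whose first
    # frame is not later than the frame following the last frame of the last
    # selected block, so that playout continues without a gap.
    candidates = [b for b in new_rep_blocks
                  if b.first_frame_time <= last_selected_end_time]
    return max(candidates, key=lambda b: b.first_frame_time)

# Example: switching at about 25 seconds, as in FIG. 8(b)
rep2 = [Block("2.1", 0.0), Block("2.2", 11.5), Block("2.3", 24.2), Block("2.4", 32.0)]
print(first_block_after_switch(rep2, 25.0).index)  # -> "2.3"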
Optimal Block Partitioning

One concern in a block-request streaming system is the interaction between the structure of the encoded media, for example video media, and the block structure used for the block requests. As those skilled in the art of video encoding will be aware, the number of bits required for the encoded representation of each video frame frequently varies, sometimes substantially, from frame to frame. As a result, the relationship between the amount of data received and the duration of the media encoded by that data may not be linear. Furthermore, the division of the media data into blocks within the block-request streaming system adds a further dimension of complexity. In particular, in some systems the media data of a block cannot be played out until the entire block has been received; for example, this property may arise from the arrangement of the media data within the block or from the use of erasure codes within the block, which create dependencies among the media samples of the block.

As a result of these complex interactions between block size and block duration, and of the possible need to receive an entire block before beginning playout, it is common for client systems to adopt a conservative approach in which media data is buffered before playout begins. Such buffering results in a long channel zapping time and hence a poor user experience.

Pakzad describes "block partitioning methods", which are novel and efficient methods for determining how to partition a data stream into contiguous blocks based on the underlying structure of the data stream, and further describes several advantages of these methods in the context of a streaming system. A further embodiment of the invention that applies Pakzad's block partitioning methods to a block-request streaming system is now described. The method may comprise arranging the media data to be presented in approximate presentation-time order, such that the playout time of any given element of the media data (for example a video frame or audio sample) differs from that of any adjacent media data element by less than a provided threshold. The media data so ordered can be regarded as a data stream in Pakzad's terms, and any of Pakzad's methods applied to this data stream identifies block boundaries within it. The data between any pair of adjacent block boundaries is regarded as a "block" in the terminology of this disclosure, and the methods are applied to provide for the presentation of the media data within the block-request streaming system. As will be clear to those skilled in the art upon reading this disclosure, the several advantages of the methods disclosed in Pakzad can thereby be realized with respect to the block-request streaming system.

As explained in Pakzad, the determination of the block structure of a segment, including blocks that encompass a partial GoP or parts of more than one GoP, can affect the ability of the client to achieve fast channel zapping times. In Pakzad, methods are provided that, given a target startup time, provide a block structure and a target download rate that ensure that, if the client begins downloading the representation at any seek point and begins playout after the target startup time, then playout continues seamlessly as long as, at every point in time, the amount of data the client has downloaded is at least the target download rate multiplied by the time elapsed since the download started. It is advantageous for the client to have access to the target startup time and the target download rate, as this gives the client a means of determining the earliest point at which it can begin playing out the presentation, with playout continuing as long as the download satisfies the stated condition. The method described below therefore provides a means of including the target startup time and the target download rate within the media presentation description, so that they can be used for the purposes just described.
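The download-rate condition described above can be stated concretely. The sketch below, using assumed names, checks a measured download trace against a target download rate: if the cumulative bytes downloaded never fall below the target rate times the elapsed time, then playout starting at the announced target startup time can proceed seamlessly.

def seamless_start_ok(download_trace, target_rate):
    # download_trace: list of (elapsed_seconds_since_download_start, cumulative_bytes)
    # Condition: at every point in time, the amount downloaded is at least
    # the target download rate times the time since the download started.
    return all(cum_bytes >= target_rate * t for t, cum_bytes in download_trace)

# Example: a 500 kB/s target rate checked against a measured trace
trace = [(1.0, 600_000), (2.0, 1_150_000), (3.0, 1_600_000)]
target_startup_time = 2.0  # seconds, as announced in the media presentation description
if seamless_start_ok(trace, 500_000):
    print("playout may begin %g s after download start" % target_startup_time)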
In many instances, the MPD is described as a file, but non-file structure can be used as well.As illustrated therein, the content store 110 possesses a plurality of source segments 510, an MPD 500 and a repair segment 512. The MPD may comprise a period record 501, which may comprise an expression record 502 that includes segment information 503, for example a reference to initialization segment 504 and media segment 505.9 (a) shows a metadata table example 900. FIG. 9 (b) shows how the HTTP streaming client 902 obtains the metadata table 900 and the media block 904 through the connection to the HTTP streaming server 906 Here is an example.In the method described herein, a "media presentation description" is provided which comprises information on representations of media presentations available to the client. Expressions can be alternatives in the sense that the client selects one out of different alternatives or they can be used by the client to select some of the representations, perhaps also from each alternate set, And they can be complementary in the sense that they are presented together. Expressions can advantageously be assigned to groups and clients are programmed or organized so that they are each alternative to one another with respect to the representation within one group, while on the other hand from the different groups two The above expressions are presented together. In other words, if there are more than one representation in the group, the client selects one representation from that group to select a representation from the next group to form a presentation, and so on For example.The information describing the representation advantageously includes details of the applied media codec including the codec profile and level required to decode the representation, video frame rate, video resolution, data rate, , For example. Clients receiving media presentation descriptions can use this information to pre-determine whether the presentation is suitable for decoding or presentation. This represents one advantage, because if the information to be distinguished is contained only in the binary data of the representation, binary data from the whole representation is requested in order to find information on the appropriateness of it And it is necessary to parse and extract the pertinent information. Extraction of these multiple requests and data parsing annexes requires a certain amount of time, resulting in a long rise time and thus a bad user experience.Further, the media presentation description may comprise information limiting the client's request based on the time. For example, for a live service, the client can be restricted to requesting a presentation part that is close to the "current broadcasting time". This is one advantage for live broadcasts because it may be desirable to erase data from the serving infrastructure for content broadcast more than the threshold provided before the current broadcast time. This would be desirable for reuse of storage resources within the serving infrastructure. This will also be desirable depending on the type of service being offered. 
Further, the media presentation description may comprise information restricting the client's requests based on the time of day. For example, for a live service the client may be restricted to requesting parts of the presentation that are close to the "current broadcast time". This is an advantage for live broadcasts, because it may be desirable to purge data from the serving infrastructure for content that was broadcast more than a provided threshold before the current broadcast time. This may be desirable for the reuse of storage resources within the serving infrastructure, and may also be desirable depending on the type of service being offered. For example, in some cases a presentation may be made available live only, because of a certain subscription model of the receiving client devices, while other media presentations may be made available both live and on-demand, and still other presentations may be available live only to a first class of client devices, on-demand only to a second class of client devices, and in a combination of live and on-demand to a third class of client devices. The methods described in the media presentation data model (described below) allow the client to be informed of such policies, so that the client can avoid making requests for, and can adjust what it offers to the user with respect to, data that may not be available in the serving infrastructure. As an alternative, the client may, for example, present a notification to the user that this data is not available.

In a further embodiment of the invention, the media segments may conform to the ISO base media file format described in ISO/IEC 14496-12 or to derived specifications (such as the 3GP file format described in 3GPP Technical Specification 26.244). The section on the usage of the 3GPP file format (above) describes novel enhancements to the ISO base media file format that permit efficient use of the data structures of this file format within a block-request streaming system. As described in that reference, information can be provided within the file that allows fast and efficient mapping between time segments and byte ranges of the media presentation within the file. The media data itself can be structured according to the movie fragment construction defined in ISO/IEC 14496-12. This information providing the time and byte offsets can be structured hierarchically or as a single block of information, and can be provided at the beginning of the file. If this information is provided with the efficient encoding described in the section on the usage of the 3GPP file format, then, in the case where the file download protocol used by the block-request streaming system is HTTP, the client can retrieve this information quickly using HTTP partial GET requests, resulting in a short startup, seek, or stream-switch time and hence an improved user experience.

The representations within a media presentation are typically synchronized on a global timeline, to ensure seamless switching across representations that are alternatives and to ensure the synchronous presentation of two or more representations. Accordingly, the sample timing of the media contained in the representations within an adaptive HTTP streaming media presentation can be related to a continuous global timeline spanning multiple segments.

A block of encoded media containing media of multiple types, such as audio and video, may have different presentation end times for the different media types. In a block-request streaming system, such media blocks may be played out in a manner such that each media type plays continuously, with media samples of one type from one block possibly being played out before some media samples of another type from the preceding block; this is referred to herein as "continuous block splicing". As an alternative, a media block may be played out in a manner such that the earliest sample of any type in one block is played after the last sample of any type in the preceding block; this is referred to herein as "discontinuous block splicing". Continuous block splicing may be appropriate when the two blocks contain media from the same content item and from the same representation.
Typically, within one representation, continuous block splicing can be applied when splicing two blocks. This is advantageous because the existing encoding can be applied and segmentation can be performed without having to align the media tracks at the block boundaries. This is illustrated in FIG. 10, in which video stream 1000 comprises block 1202 and other blocks, and has RAPs, such as RAP 1204.

Media Presentation Description

A media presentation can be viewed as a structured collection of files on an HTTP streaming server. The HTTP streaming client can download sufficient information to present the streaming service to the user. Alternative representations can consist of one or more 3GP files, or parts of 3GP files, conforming to the 3GPP file format or to a suitably defined set of data structures that can readily be converted from or to 3GP files.

A media presentation can be described by a media presentation description. The media presentation description (MPD) provides metadata that the client can use to construct appropriate file requests, for example HTTP GET requests, to access the data at appropriate times and to provide the streaming service to the user. The media presentation description provides sufficient information for the HTTP streaming client to select the appropriate 3GPP files and pieces of files. The units that are signaled to the client as being accessible are referred to as segments.

Among other things, a media presentation description can include elements and attributes such as the following.

MediaPresentationDescription element: an element encapsulating the metadata used by the HTTP streaming client to provide the streaming service to the end user. The MediaPresentationDescription element can contain one or more of the following attributes and elements.

Version: version number of the protocol, to ensure extensibility.

PresentationIdentifier: information by which the presentation can be uniquely identified among other presentations. May also contain private fields or names.

UpdateFrequency: the update frequency of the media presentation description, i.e., how often the client may reload the actual media presentation description. If absent, the media presentation may be static. Updating the media presentation implies that the media presentation description cannot be cached indefinitely.

MediaPresentationDescriptionURL: URI for updating the media presentation description.

Stream: describes the type of the stream or media presentation: video, audio, or text. A video stream type can contain audio and can contain text.

Service: describes the service type with additional attributes. Service types can be live and on-demand. This can be used to inform the client that seeking and access beyond some current time are not permitted.

MaximumClientPreBufferTime: the maximum amount of time the client may pre-buffer the media stream. This timing distinguishes streaming from progressive download if the client is restricted from downloading beyond this maximum pre-buffer time. The value may be absent, indicating that no restriction on pre-buffering applies.

SafetyGuardIntervalLiveService: information on the maximum turnaround time of the live service at the server. Provides the client with an indication of what information is already accessible at the current time.
This information may be necessary if the client and server are expected to operate on UTC time and no tight time synchronization is provided.

TimeShiftBufferDepth: information on how far back the client may move in a live service relative to the current time. By extension of this depth, time-shifted viewing and catch-up services can be permitted without specific changes in the service provisioning.

LocalCachingPermitted: this flag indicates whether the HTTP client may cache the downloaded data locally after it has been played.

LivePresentationInterval: contains the time interval during which the presentation is available, by specifying StartTime and EndTime. StartTime indicates the start time of the service and EndTime indicates the end time of the service. If EndTime is not specified, the end time is unknown at the current time, and UpdateFrequency can be used to ensure that the client gains access to the end time before the actual end time of the service.

OnDemandAvailabilityInterval: a presentation interval indicating the availability of the service on the network. Multiple presentation intervals can be provided. The HTTP client cannot access the service outside the specified time windows. By the provisioning of OnDemand intervals, additional time-shift viewing can be specified. This attribute can also be present for a live service; if it is present for a live service, the server ensures that the client can access the service as an on-demand service during all of the provided availability intervals. The LivePresentationInterval must therefore not overlap any OnDemandAvailabilityInterval.

MPDFileInfoDynamic: describes the default dynamic construction of files in the media presentation. Further details are provided below. A default specification at the MPD level can avoid unnecessary repetition when the same rules are used for several or all alternative representations.

MPDCodecDescription: describes the main default codecs in the media presentation. Further details are provided below. A default specification at the MPD level can avoid unnecessary repetition when the same codecs are used for several or all alternative representations.

MPDMoveBoxHeaderSizeDoesNotChange: a flag indicating whether the size of the MoveBoxHeader varies among the individual files within the entire media presentation. This flag can be used to optimize downloads and may only be present for certain segment formats, especially those in which the segments contain the moov header.

FileURIPattern: a pattern used by the client to generate request messages for the files of the media presentation. The various attributes permit the generation of a unique URI for each of the files of the media presentation. The base URI can be an HTTP URI.

AlternativeRepresentation: describes a list of representations.

AlternativeRepresentation element: an XML element that encapsulates all the metadata for one representation. The AlternativeRepresentation element can contain the following attributes and elements.

RepresentationID: a unique ID for this specific alternative representation within the media presentation.

FilesInfoStatic: provides an explicit list of the start times and URIs of all files of one alternative representation.
The static provisioning of the list of files has the advantage of giving an exact timing description of the media presentation, but it may be less compact, especially if the alternative representation contains many files. In addition, the file names may take arbitrary names.

FilesInfoDynamic: provides an implicit way to construct the list of start times and URIs of one alternative representation. The dynamic provisioning of the list of files can have the advantage of a more compact representation. If only the sequence of start times is provided, the timing advantages also hold here, with the file names being constructed dynamically based on the FileURIPattern. If only the duration of each segment is provided, the representation is compact and may be suited for use within a live service, but the generation of the files may be governed by global timing.

APMoveBoxHeaderSizeDoesNotChange: a flag indicating whether the size of the MoveBox header varies among the individual files within the alternative representation. This flag can be used to optimize downloads and may only be present for certain segment formats, especially those in which the segments contain the moov header.

APCodecDescription: describes the main codecs of the files in the alternative representation.

MediaDescription element: an element that can encapsulate all the metadata for the media contained in this representation. Specifically, it can contain information about the tracks in this alternative representation, including recommended groupings of tracks, if applicable. The MediaDescription attribute contains the following attributes.

TrackDescription: an XML attribute that encapsulates all the metadata for the media contained in this representation. The TrackDescription attribute contains the following attributes.

TrackID: a unique ID for the track within the alternative representation. This can be used in case the track is part of a grouping description.

Bitrate: the bitrate of the track.

TrackCodecDescription: an XML attribute containing all the attributes of the codec used on this track. The TrackCodecDescription attribute contains the following attributes.

MediaName: an attribute defining the media type. The media types include "audio", "video", "text", "application", and "message".

Codec: the CodecType, including profile and level.

LanguageTag: the language tag, if applicable.

MaxWidth, MaxHeight: for video, the height and width of the contained video, in pixels.

SamplingRate: for audio, the sampling rate.

GroupDescription: an attribute that provides the client with a recommendation for appropriate groupings, based on different parameters.

GroupType: a type based on which the client can decide how to group tracks.

The information in the media presentation description is advantageously used by an HTTP streaming client to perform requests for files/segments, or parts thereof, at appropriate times, selecting the segments from representations that match its capabilities, for example with respect to access bandwidth, display capabilities, and codec capabilities, as well as the preferences of the user, such as language.
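As a sketch of the dynamic file-list construction described above (FilesInfoDynamic together with a FileURIPattern), the following builds start times and URIs; the pattern string and field names are hypothetical stand-ins for the attributes described above.

def start_times_from_durations(durations, period_start=0.0):
    # With only per-segment durations provided, start times follow cumulatively.
    times, t = [], period_start
    for d in durations:
        times.append(t)
        t += d
    return times

def segment_list(pattern, rep_id, durations):
    # pattern is a hypothetical FileURIPattern, for example
    # "http://example.com/{rep}/seg{index}.3gp"
    times = start_times_from_durations(durations)
    return [(t, pattern.format(rep=rep_id, index=i + 1))
            for i, t in enumerate(times)]

for t, uri in segment_list("http://example.com/{rep}/seg{index}.3gp",
                           "rep1", [30.0, 30.0, 10.5]):
    print("%7.1f  %s" % (t, uri))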
In addition, because the media presentation description describes representations that are time-aligned and mapped to a global timeline, the client can also use the information in the MPD during an ongoing media presentation to initiate the appropriate actions for switching between representations, for presenting representations jointly, or for seeking within the media presentation.

Signaling of Segment Start Times

A representation can be divided into multiple segments with respect to time. An inter-track timing issue exists between the last fragment of one segment and the next fragment of the following segment. In addition, a further timing issue exists in the case where segments of constant duration are used.

Using the same duration for every segment can have the advantage that the MPD is compact and static. However, every segment may still need to begin at a random access point. Thus, either the video encoding can be constrained to provide random access points at these specific points, or the actual segment durations may not be precisely as specified in the MPD. It may be desirable that the streaming system not impose unnecessary restrictions on the video encoding process, so the second option may be preferred.

Specifically, if the file duration is specified in the MPD to be d seconds, then the n-th file may begin with the random access point at or immediately following time (n-1)d.

In this approach, each file can include information about the exact start time of the segment in terms of global presentation time. Three possible ways of signaling this include:

(1) First, restricting the start time of each segment to the exact timing as specified in the MPD. However, the media encoder may then not have any flexibility in the placement of IDR frames and may require special encoding for file streaming.

(2) Second, adding the exact start time to the MPD for each segment. For the on-demand case, the compactness of the MPD may be reduced; for the live case, this may require a regular update of the MPD, which may reduce scalability.

(3) Third, adding to the segment the global time, or the exact start time relative to the announced start time of the representation or the announced start time of the segment in the MPD, in the sense that the segment contains this information. This can be added to a new box dedicated to adaptive streaming. This box can also include the information as provided by the "TIDX" or "SIDX" box. A consequence of this third approach is that, when seeking to a specific position near the beginning of one of the segments, the client may, based on the MPD, choose the segment following the one that actually contains the required seek point. A simple response in this case is to move the seek point forward to the beginning of the retrieved segment (i.e., to the next random access point after the seek point). Since random access points are normally provided at least every few seconds (and there is often little coding gain from making them less frequent), in the worst case the seek point may be moved a few seconds later than specified. Alternatively, the client could, upon retrieving the header information of the segment, determine that the requested seek point is in fact in the previous segment and request that segment instead. This may occasionally increase the time needed to execute the seek operation.
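A minimal sketch of the seek behavior just described under approach (3): the nominal duration d maps the seek time to a candidate segment, and the exact start time carried within the segment can reveal that the seek point actually lies in the previous segment. The function exact_start_of is a hypothetical accessor for the start time signaled within a fetched segment.

def resolve_seek(seek_time, d, exact_start_of):
    # With nominal duration d, the n-th file begins at the random access point
    # at or immediately after (n-1)*d, so the first candidate is:
    n = int(seek_time // d) + 1  # 1-based file index
    # If the exact start time inside the fetched segment exceeds the requested
    # seek time, the seek point lies in the previous segment.
    if n > 1 and exact_start_of(n) > seek_time:
        n -= 1
    return n

# Example with d = 10 s and hypothetical exact start times per segment
exact_starts = {1: 0.0, 2: 10.4, 3: 20.7, 4: 30.1}
print(resolve_seek(20.5, 10.0, exact_starts.get))  # -> 2 (not 3)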
Accessible Segment List

Each media presentation comprises a set of representations, each providing some different version of an encoding of the original media content. The representations themselves advantageously include information about their distinguishing parameters as compared with those of the other representations, as well as lists of accessible segments, given explicitly or implicitly.

Segments can be distinguished into timeless segments, which contain only metadata, and media segments, which primarily contain media data. The media presentation description ("MPD") advantageously identifies each of the segments, implicitly or explicitly, and assigns them different attributes. Attributes advantageously assigned to every segment comprise the time period during which the segment is accessible and the resource and protocols through which the segment is accessible. In addition, media segments are advantageously assigned attributes such as the start time of the segment within the media presentation and the duration of the segment within the media presentation.

Where the media presentation is of the "on-demand" type, as advantageously indicated by an attribute in the media presentation description such as OnDemandAvailabilityInterval, the media presentation description typically describes all of the segments and also provides an indication of when the segments are accessible and when they are not. The start times of segments are advantageously expressed relative to the beginning of the media presentation, so that two clients that begin playing the same media presentation at different times can use the same media presentation description and the same media segments. This advantageously improves the ability to cache the segments.

Where the media presentation is of the "live" type, as advantageously indicated by an attribute in the media presentation description such as the attribute Service, the segments comprising the media presentation beyond the actual time of day are generally not yet generated, or at least not yet accessible, even though they may be fully described in the MPD. However, based on the indication that the media presentation service is of the "live" type, the client can produce a list of the accessible segments, together with their timing attributes, for its internal time NOW in wall-clock time, using the information contained in the MPD and the download time of the MPD. The server advantageously operates in the sense that it makes resources accessible such that a reference client, operating on an instance of the MPD at the wall-clock time NOW, can access them.

Specifically, the reference client produces a list of the accessible segments, together with their timing attributes, for a client-internal wall-clock time NOW, based on the information contained in the MPD and the download time of the MPD. As time progresses, the client uses the same MPD to create new lists of accessible segments that can be used to continuously play out the media presentation. The server can therefore announce segments in the MPD before those segments are actually accessible. This is advantageous because it reduces frequent updating and downloading of the MPD.

Suppose that a list of segments, each with a start time tS, is described either explicitly, by a playlist in elements such as FileInfoStatic, or implicitly, by elements such as FileInfoDynamic. An advantageous method for generating the segment list using FileInfoDynamic is described below.
Based on this construction rule, the client has access to a list of URIs for each representation r, referred to here as FileURI(r, i), and to a start time tS(r, i) for each segment with index i.

The use of the information in the MPD to produce the accessible time window of segments can be performed using the following rules.

For a service of the "on-demand" type, as advantageously indicated by an attribute such as Service: if the current wall-clock time at the client, NOW, falls within any availability range, advantageously expressed by an MPD element such as OnDemandAvailabilityInterval, then all of the described segments of this on-demand presentation are accessible. If the current wall-clock time NOW at the client falls outside every availability range, then none of the described segments of this on-demand presentation are accessible.

For a service of the "live" type, as advantageously indicated by an attribute such as Service, the start time tS(r, i) advantageously expresses the availability time in wall-clock time. The availability start time can be derived as a combination of the live event time and some turnaround time at the server for capture, encoding, and publishing. The time for this process can be specified in the MPD, for example, using a safety guard interval tG, specified for example as SafetyGuardIntervalLiveService in the MPD. This provides the minimum difference between the UTC time at the HTTP streaming server and the time at which the data becomes available. In another embodiment, the MPD explicitly specifies the availability time of the segment within the MPD, without providing the turnaround time as the difference between the live event time and the turnaround time. In the following description, it is assumed that any global time is specified as an availability time. Those skilled in the art of live media broadcasting can derive this information from the appropriate information in the media presentation description after reading this description.

If the current wall-clock time NOW at the client falls outside every range of the live presentation interval, advantageously expressed by an MPD element such as LivePresentationInterval, then none of the described segments of this live presentation are accessible. If the current wall-clock time NOW at the client falls within the live presentation interval, then at least certain of the described segments of this live presentation may be accessible.

The restriction of the accessible segments is governed by the following values:

The wall-clock time NOW (as available to the client).

The permitted time-shift buffer depth tTSB, specified in the media presentation description, for example as TimeShiftBufferDepth.

A client at relative event time t1 may be permitted to request only segments with start times tS(r, i) in the interval from (NOW - tTSB) to NOW or, where the segment duration d is taken into account such that the end time of the segment is also included, in the interval from (NOW - tTSB - d) to NOW.
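The time-shift window rule above can be sketched directly. This is a minimal illustration with assumed units (seconds) and per-segment durations:

def accessible_segment_indices(start_times, durations, now, t_tsb):
    # A segment with start time tS and duration d is requestable only when its
    # whole span lies within the time-shift window, i.e.
    #   now - t_tsb - d <= tS <= now
    return [i for i, (ts, d) in enumerate(zip(start_times, durations))
            if now - t_tsb - d <= ts <= now]

# Example: 10 s segments, a 60 s time-shift buffer, wall-clock NOW = 130 s
starts = [i * 10.0 for i in range(20)]  # segments announced in the MPD
print(accessible_segment_indices(starts, [10.0] * 20, now=130.0, t_tsb=60.0))
# -> the indices of segments starting between 60 s and 130 s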
Updating the MPD

In some embodiments, the server does not know in advance the file or segment locators and the start times of the segments, for example because the location of the server changes, or because the media presentation includes some advertisement, possibly from a different server, or because the duration of the media presentation is unknown, or because the server wishes to obscure the locators of subsequent segments.

In such embodiments, the server might describe only segments that are already accessible, or that become accessible shortly after this instance of the MPD has been published.

Further, in some embodiments, the client advantageously consumes media close to the most recent media described in the MPD, so that the user experiences the media program as close as possible to the generation of the media content. As soon as the client anticipates that it will reach the end of the media segments described in the MPD, it advantageously requests a new instance of the MPD, anticipating that the server has published a new MPD describing new media segments, so that continuous playout can proceed. The server advantageously generates new instances of the MPD and updates the MPD such that clients can rely on this procedure for continuous updates. The server can align its MPD update procedure, together with its segment generation and publishing, with the procedures of a reference client that acts as a common client would act.

If a new instance of the MPD describes only a short advance in time, the clients need to request new instances of the MPD frequently. This can result in scalability problems due to unnecessarily frequent requests and unnecessary uplink and downlink traffic.

It is therefore appropriate, on the one hand, to describe segments as far as possible into the future, without necessarily making them accessible yet, and, on the other hand, to permit unforeseen updates in the MPD in order to express new server locations, allow the insertion of new content such as advertisements, or provide changes in the codec parameters.

Further, in some embodiments, the duration of the media segments may be small, for example within a range of several seconds. The duration of media segments is advantageously flexible, so that it can be adjusted to suitable segment sizes that can be optimized for delivery or caching properties, to compensate for end-to-end delay in live services or for other aspects of the storage or delivery of segments, or for other reasons. Especially in cases where the segments are small compared with the duration of the media presentation, a significant number of media segment resources and start times then have to be described in the media presentation description. As a result, the size of the media presentation description can be large, which can adversely affect the download time of the media presentation description and therefore affect the startup delay of the media presentation, as well as the bandwidth usage on the access link. It is therefore advantageous to permit the description of the list of media segments not only through the use of playlists but also through the use of templates or URL construction rules. Templates and URL construction rules are used synonymously in this description.

In addition, templates can advantageously be used to describe segment locators for live cases beyond the current time. In that case, updates of the MPD are per se unnecessary, because the locators and the segment list are described by the templates. However, unforeseen events may still happen that require changes in the description of the representations or of the segments. Changes in an adaptive HTTP streaming media presentation description may be needed, for example, when content from several different sources is spliced together, such as when advertisements are inserted. The content from the different sources may differ in various ways.
Another reason is that, during live presentations, it may become necessary to change the URLs used for the content files in order to provide failover from one live origin server to another.

In some embodiments, it is advantageous that, when the MPD is updated, the updates are carried out such that the updated MPD is compatible with the previous MPD in the following sense: for any time up to the validity time of the previous MPD, the reference client, and therefore any implemented client, generates from the updated MPD a functionally identical list of accessible segments to the one it would have generated from the previous instance of the MPD. This requirement ensures (a) that clients can immediately begin using the new MPD without synchronization with the old MPD, since it is compatible with the old MPD before the update time, and (b) that the update time does not need to be synchronized with the time at which the actual change to the MPD takes place. In other words, updates of the MPD can be advertised in advance, and the server can replace the old instance of the MPD as soon as new information becomes available, without having to maintain different versions of the MPD.

Two possibilities exist for the media timing across an MPD update, for a set of representations or for all representations: (a) the existing global timeline continues across the MPD update (referred to herein as a "continuous MPD update"), or (b) the current timeline ends and a new timeline begins with the segment following the change (referred to herein as a "discontinuous MPD update").

The difference between these alternatives becomes evident when considering that the tracks of a media fragment, and therefore of a segment, generally do not start and end at the same time, owing to the differing sample granularities across the tracks. During normal presentation, samples of one track of a fragment may be rendered before some samples of another track of the preceding fragment; that is, there is no overlap within a single track, but there may be some overlap between fragments across tracks.

The difference between (a) and (b) is whether such overlap can be enabled across an MPD update. When the MPD update is due to the splicing of completely separate content, such overlap is generally difficult to achieve, because the new content needs fresh encoding in order to be spliced with the previous content. It is therefore advantageous to provide the capability of discontinuously updating the media presentation, by restarting the timeline for certain segments and possibly also defining a new set of representations after the update. Furthermore, if the content has been independently encoded and segmented, the need to adjust the timestamps to fit within the global timeline of the previous piece of content is also avoided.

For updates that are of lesser consequence, for example where only new media segments are added to the list of described media segments, or where the locations of the URLs change, overlap and continuous updates can be permitted.

In the case of a discontinuous MPD update, the timeline of the last segment of the previous representation ends at the latest presentation end time of any sample in the segment.
The timeline of the next representation (more precisely, the first presentation time of the first media segment of the new part of the media presentation, also referred to as the new period) typically and advantageously begins at this very same instant as the end of the presentation of the last period, so that seamless and continuous playout is ensured.

Two examples are illustrated in FIG.

It is preferable and advantageous to restrict MPD updates to segment boundaries, that is, changes or updates take effect only at segment boundaries. The rationale for restricting changes or updates to segment boundaries is as follows. First, changes to the binary metadata for each representation, typically the movie header, can then occur at least at segment boundaries. Second, the media presentation description can contain pointers (URLs) to the segments. In a sense, the MPD is an "umbrella" data structure that groups together all the segment files associated with the media presentation. To maintain this containment relationship, each segment can be referenced by a single MPD, and when the MPD is updated, it is advantageously updated only at a segment boundary.

Segment boundaries are not generally required to be aligned across representations; however, for the case of content spliced from different sources, and for discontinuous MPD updates in general, it makes sense to align the segment boundaries (specifically, such that the last segment of each representation ends at the same video frame and contains no audio samples with a presentation start time later than the presentation time of that frame). A discontinuous update can then start a new set of representations at a common instant, referred to as a period. The start time of the validity of this new set of representations is provided, for example, by a period start time. The relative start time of each representation is reset to zero, and the start time of the period places the set of representations of this new period on the global media presentation timeline.

For continuous MPD updates, the segment boundaries are not required to be aligned. Each segment of each alternative representation may be governed by a single media presentation description, and thus the update requests for a new instance of the media presentation description, generally triggered by the anticipation that no additional media segments are described in the operating MPD, may take place at different times depending on the set of consumed representations, including the set of representations anticipated to be consumed.

To support updates of MPD elements and attributes in a more general case, any elements, not only sets of representations, can be associated with a validity time. Thus, if certain elements of the MPD need to be updated, for example because the number of representations is changed or the URL construction rules are changed, these elements can each be updated individually at specified times, by providing multiple copies of the element with disjoint validity times.

Validity is advantageously associated with global media time, such that a described element associated with a validity time is valid during a period of the global timeline of the media presentation.

As described above, in one embodiment, validity times are added only to complete sets of representations. Each complete set then forms a period. The validity time then forms the start time of the period. In other words, in the specific case of using a validity element, a complete set of representations can be valid for a period of time, indicated by a global validity time for the set of representations.
The validity time of a set of representations is referred to as a period. At the start of a new period, the validity of the previous set of representations expires and the new set of representations is valid. Note again that the validity times of periods are preferably disjoint.

As mentioned above, changes to the media presentation description take place at segment boundaries, so that, for each representation, the change of an element actually takes place at the next segment boundary. The client can then form a valid MPD, including a list of segments, for each instant within the presentation time of the media.

Discontinuous block splicing may be appropriate in cases where the blocks contain media data from different representations, or from different pieces of content, for example from a segment of content and an advertisement. In a block-request streaming system, it may be required that changes to the presentation metadata take place only at block boundaries. This can be advantageous for implementation reasons, because updating media decoder parameters within a block may be more complex than updating them only between blocks. In this case, it can advantageously be specified that validity intervals may be interpreted approximately, such that an element is regarded as valid from the first block boundary that is not earlier than the start of the stated validity interval until the first block boundary that is not earlier than the end of the stated validity interval; a sketch of this interpretation follows below.

An exemplary embodiment of the above-described novel enhancements of a block-request streaming system is described in a later section entitled Change Media Presentation.
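The approximate interpretation of validity intervals just described can be expressed as a small sketch; the block boundary values here are assumed for illustration.

def effective_interval(start, end, block_boundaries):
    # The element is treated as valid from the first block boundary not
    # earlier than `start` to the first block boundary not earlier than `end`.
    eff_start = min(b for b in block_boundaries if b >= start)
    eff_end = min(b for b in block_boundaries if b >= end)
    return eff_start, eff_end

boundaries = [0.0, 4.8, 9.9, 15.1, 20.2]
print(effective_interval(5.0, 14.0, boundaries))  # -> (9.9, 15.1)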
Segment Duration Signaling

Discontinuous updates effectively divide the presentation into a series of disjoint intervals, referred to as periods. Each period has its own timeline for the media sample timing. The media timing of the representations within a period can advantageously be indicated by specifying a separate, compact list of segment durations for each period or for each representation within a period.

An attribute associated with elements in the MPD, for example an attribute referred to as a period start time, can specify the validity time of certain elements within the media presentation time. This attribute can be added to any elements of the MPD (attributes to which validity can be assigned can be changed into elements).

For discontinuous MPD updates, the segments of all representations may end at the discontinuity. This generally implies at least that the last segment before the discontinuity has a different duration from the preceding ones. Signaling the segment duration may involve either indicating that all segments have the same duration or indicating a separate duration for every segment. It may be desirable to have a compact representation for a list of segment durations that is efficient in the case where many of them have the same duration.

The durations of the segments in one representation or in a set of representations can advantageously be carried with a single string that specifies all segment durations from the start of the discontinuous update, i.e., the start of the period, until the last media segment described in the MPD.

In one embodiment, the format of this element is a text string conforming to a production that contains a list of segment duration entries, where each entry contains a duration attribute dur and an optional multiplier attribute mult, indicating that this representation contains <mult> segments of the duration <dur> of the first entry, then <mult> segments of the duration <dur> of the second entry, and so on.

Each duration entry specifies the duration of one or more segments. If the <dur> value is followed by the "*" character and a number, this number specifies the number of consecutive segments with this duration, in seconds. If the multiplier sign "*" is absent, the number of segments is one. If "*" is present with no following digits, then all subsequent segments have the specified duration and no further entries may follow in the list. For example, the string "30*" means that all segments have a duration of 30 seconds. The string "30*12 10.5" indicates twelve segments of 30 seconds duration, followed by one of 10.5 seconds duration.

If the segment durations are specified separately for each alternative representation, the sum of the segment durations within each interval may be the same for every representation. In the case of video tracks, the interval may end at the same frame in each alternative representation.

Those skilled in the art, upon reading this disclosure, will find similar and equivalent ways to express segment durations in a compact manner.

In another embodiment, the duration of a segment is signaled to be constant for all segments in the representation except for the last one, by a single duration attribute <duration>. The duration of the last segment before a discontinuous update may be shorter, as long as the start point of the next discontinuous update or the start of the new period is provided, which then implies the duration of the last segment extending up to the start of the next period.
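A parsing sketch for the compact duration-list syntax above follows. It assumes the entries are whitespace-separated with no internal spaces (e.g. "30*12 10.5"), which is one plausible reading of the production described; the return value pairs the explicit durations with an optional trailing duration that repeats indefinitely.

def parse_segment_durations(s):
    # "30*12 10.5" -> twelve 30 s segments then one 10.5 s segment
    # "30*"        -> every segment is 30 s (tail repeats indefinitely)
    durations, tail = [], None
    for entry in s.split():
        if entry.endswith("*"):       # "<dur>*": all remaining segments
            tail = float(entry[:-1])
            break
        if "*" in entry:              # "<dur>*<mult>": repeated segments
            dur, mult = entry.split("*")
            durations += [float(dur)] * int(mult)
        else:                         # "<dur>": a single segment
            durations.append(float(entry))
    return durations, tail

print(parse_segment_durations("30*12 10.5"))  # ([30.0]*12 + [10.5], None)
print(parse_segment_durations("30*"))         # ([], 30.0)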
Changes and updates to the binary representation metadata, such as the movie header "moov", can be accomplished in a number of different ways: (a) there can be one moov box for all representations, in a separate file referenced in the MPD; (b) there can be one moov box for each alternative representation, in a separate file referenced within each alternative representation; (c) each segment can contain a moov box and can therefore be self-contained; or (d) there can be one moov box for all representations in one 3GP file, together with the MPD.

In the cases of (a) and (b), it can be advantageous to combine this with the validity concept described above, in the sense that more than one 'moov' box can be referenced within the MPD as long as their validities are disjoint. For example, following the definition of a period boundary, the validity of the 'moov' box of the old period can expire with the start of the new period.

For option (a), a validity element can be assigned to the single moov box reference. Multiple presentation headers may be allowed, but only one can be valid at a time. In another embodiment, the validity time of the entire set of representations in a period, or of the entire period as defined above, can be used as the validity time for this representation metadata, typically provided as the moov header.

For option (b), a validity element can be assigned to the moov box reference of each representation. Multiple representation headers may be allowed, but only one can be valid at a time. In another embodiment, the validity time of the entire representation, or of the entire period as defined above, can be used as the validity time for this representation metadata, typically provided as the moov header.

For option (c), no signaling in the MPD may be added, but additional signaling can be added within the media stream to indicate whether the moov box will change for any upcoming segment. This is explained further below in the context of the signaling of updates within segment metadata.

Signaling of Updates within Segment Metadata

To avoid frequent updates of the media presentation description merely to gain knowledge of potential updates, it is advantageous to signal any such updates together with the media segments. An additional element or elements can be provided within the media segments themselves that can indicate that updated metadata, such as a media presentation description, is available and must be accessed within a certain amount of time in order to continue the successful creation of accessible segment lists. Moreover, such elements can provide a file identifier, such as a URL, for the updated metadata file, or information from which such a file identifier can be constructed. The updated metadata file can include metadata equal to that provided in the original metadata file for the presentation, modified to indicate validity intervals, together with additional metadata that is likewise accompanied by validity intervals. Such an indication can be provided in the media segments of all available representations of the media presentation. A client accessing the block-request streaming system, upon detecting such an indication within a media block, can use the file download protocol or other means to retrieve the updated metadata file. The client is thereby provided with information about changes in the media presentation description and about the times at which they occur or have occurred. Advantageously, each client requests the updated media presentation description only once when such a change occurs, rather than "polling" for and receiving the file many times to check for possible updates or changes.

Examples of changes include the addition or removal of representations, changes to one or more representations (such as a change in bit rate, resolution, aspect ratio, included tracks, or codec parameters), and changes to the URL construction rules, for example a different origin server for advertisements. Some changes may affect only the initialization segment, such as the movie header ("moov") atom associated with a representation, whereas other changes may affect the media presentation description (MPD).

In the case of on-demand content, these changes and their timing can be known in advance and can be signaled in the media presentation description.

For live content, the changes cannot be known until the point at which they occur. One solution is to dynamically update the media presentation description made available at a specific URL and to require clients to request this MPD regularly in order to detect changes. That solution has drawbacks in terms of scalability (origin server load and cache efficiency). In a scenario with very many viewers, caches may receive many requests for the MPD after the previous version has expired from the cache and before the new version has been received, and all of these may be forwarded to the origin server. The origin server would need to constantly process these requests from the caches for each updated version of the MPD.
Furthermore, the updates cannot easily be time-aligned with the changes in the media presentation.

Because one of the advantages of HTTP streaming is the ability to utilize standard web infrastructure and services for scalability, a preferred solution may involve only "static" (i.e., cacheable) files, and not rely on clients "polling" files to check whether they have changed.

Solutions are described and proposed to resolve the updating of metadata, including the media presentation description and binary representation metadata such as "moov" atoms, in an adaptive HTTP streaming media presentation.

For live content, the points at which the MPD or the "moov" atom may change cannot be known when the MPD is constructed. Since frequent "polling" of the MPD to check for updates should generally be avoided, for bandwidth and scalability reasons, updates of the MPD are advantageously indicated "in band" within the segment files themselves. That is, each media segment can have the option of indicating an update. Depending on the segment formats (a)-(c) above, different updates can be signaled.

In general, the following indication can advantageously be provided in a signal within the segment: an indicator that the MPD may need to be updated before requesting any segment within this representation having a start time greater than the start time of the current segment. Such an update can be announced in advance, indicating that the update needs to take place only at some later segment. This MPD update can also be used to update binary representation metadata, such as movie headers, in case the locators of the media segments change. A further signal can indicate that, with the completion of this segment, no further segments later in time should be requested.

If segments are formatted according to segment format (c), i.e., each media segment contains self-initializing metadata such as a movie header, then a further signal can be added indicating that the subsequent segment contains an updated movie header (moov). This advantageously permits the movie headers to be included within the segments, while the client needs to request a movie header only in the case of seeking or random access, when switching representations, or when the previous segment indicates a movie header update. In other cases, the client can issue a byte-range request for the segment that excludes the movie header from the download, thereby advantageously saving bandwidth.

In yet another embodiment, if an MPD update indication is signaled, the signal may also contain a locator, such as a URL, for the updated media presentation description. The updated MPD can describe the presentation both before and after the update, using validity attributes, such as the new and old periods in the case of a discontinuous update. This can advantageously be used to permit time-shifted viewing, as explained in more detail below, but it also advantageously permits the MPD update to be signaled at any point in time before the changes it contains take effect. The client can immediately download the new MPD and apply it to the ongoing presentation.

In a specific implementation, the signaling of changes to the media presentation description, of changes to the moov headers, or of the end of the presentation can be contained in a streaming information box that is formatted following the rules of the segment format using the box structure of the ISO base media file format.
This box can provide a specific signal for any of the different kinds of updates.

Streaming Information Box

Definition

Box Type: 'sinf'
Container: None
Mandatory: No
Quantity: Zero or one

The streaming information box contains information about the streaming presentation of which the file is a part.

Syntax

aligned(8) class StreamingInformationBox extends FullBox('sinf') {
    unsigned int(32) streaming_information_flags;
    // the following are optional fields
    string mpd_location;
}

Semantics

streaming_information_flags contains the logical OR of zero or more of the following:

0x00000001 Movie header update follows
0x00000002 Presentation description update
0x00000004 End of presentation

mpd_location is present if and only if the presentation description update flag is set, and provides a Uniform Resource Locator for the new media presentation description.

Use Case for MPD Updates for Live Services

Suppose a service provider wishes to offer a live football event using the enhanced block-request streaming described herein. Perhaps millions of users may want to access the presentation of the event. The live event is sporadically interrupted by breaks when a timeout is called, or by other pauses in the action, during which advertisements may be added. Typically, there is little or no advance notice of the exact timing of these breaks.

The service provider may need to provide redundant infrastructure (e.g. encoders or servers) to enable a seamless switchover in case any of the components fail during the live event.

Suppose a user, Anna, accesses the service on a bus using her mobile device, and the service is immediately available. Next to her sits another user, Paul, who watches the event on his laptop. A goal is scored, and both celebrate this event at the same time. Paul tells Anna that the first goal in the game was even more exciting, and Anna uses the service to watch the event as it was 30 minutes earlier. After having seen the goal, she returns to the live event.

To address this use case, the service provider must be able to update the MPD, to signal to clients that an updated MPD is available, and to enable clients to access the streaming service such that they can present it in near real time.

MPD updates can be performed asynchronously with respect to the delivery of segments, as described elsewhere herein. The server can provide the receiver with a guarantee that the MPD will not be updated for a certain period of time, during which the client can rely on the current MPD without explicit signaling, since the MPD cannot be updated before a certain minimum update period.

Completely synchronized playout is hardly achieved, since clients may operate on different MPD update instances and may therefore drift with respect to one another. Using MPD updates, the server can communicate changes, and clients can be alerted to changes, even during an ongoing presentation. In-band signaling on a per-segment basis can be used to indicate an update of the MPD, so updates can be restricted to segment boundaries, but this should be acceptable in most applications.

An MPD element can be added that provides the publication time in actual elapsed time of the MPD, as well as an optional MPD update box that is added at the beginning of segments in order to signal that an update of the MPD is required. As with MPDs, the updating can be performed hierarchically.

The MPD "issue time" provides a unique identifier for the MPD and indicates when the MPD was issued. It also provides an anchor for the update procedures.
The MPD update box may be found after the "styp" box, and may be defined by: Box Type: 'mupe'; Container: None; Mandatory: No; Quantity: Zero or one. The MPD update box contains information about the media presentation of which the segment is a part.

Example syntax is as follows:

aligned(8) class MPDUpdateBox extends FullBox('mupe') {
    unsigned int(3) mpd_information_flags;
    unsigned int(1) new_location_flag;
    unsigned int(28) latest_mpd_update_time;
    // the following are optional fields
    string mpd_location;
}

The semantics of the various attributes of the MPDUpdateBox class can be as follows:

mpd_information_flags contains the logical OR of zero or more of the following:

0x00 Media presentation description update now
0x01 Media presentation description update ahead
0x02 End of presentation
0x03-0x07 Reserved

new_location_flag: if set to 1, the new media presentation description is available at a new location specified in mpd_location.

latest_mpd_update_time: specifies the time (in ms) by which the MPD update is necessary, relative to the MPD issue time of the latest MPD. The client can choose to update the MPD at any time between now and then.

mpd_location: present if and only if new_location_flag is set, in which case mpd_location provides a Uniform Resource Locator for the new media presentation description.

If the bandwidth used by the updates is an issue, the server can offer MPDs for certain device capabilities, so that only those parts are updated.
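Purely as an illustration, the following Python sketch shows how a client might act on a parsed MPD update box of the kind defined above; the box and client objects and their methods are hypothetical, not taken from any standard or library.

# Values of mpd_information_flags as defined above.
MPD_UPDATE_NOW = 0x00    # media presentation description update now
MPD_UPDATE_AHEAD = 0x01  # media presentation description update ahead
PRESENTATION_END = 0x02  # end of presentation

def handle_mpd_update_box(box, client):
    # 'box' exposes the parsed fields of the 'mupe' box; 'client' is a
    # hypothetical client object managing MPD state.
    if box.mpd_information_flags == PRESENTATION_END:
        client.schedule_end_of_presentation()
        return
    # Fetch the updated MPD from a new location if one is signaled,
    # otherwise from the current MPD location.
    url = box.mpd_location if box.new_location_flag else client.mpd_url
    # The update must be fetched no later than latest_mpd_update_time
    # (in ms, relative to the issue time of the latest MPD); the client
    # may fetch at any time between now and that deadline.
    client.schedule_mpd_fetch(url, deadline_ms=box.latest_mpd_update_time)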
Time-Shifted Viewing and Network PVR

When time-shifted viewing is supported, it may happen that two or more MPDs or movie headers are valid within the lifetime of a session. In this case, by updating the MPD when necessary, but adding a validity mechanism or period concept, a valid MPD can exist for the entire time window. This means that the server can ensure that an MPD and a movie header are announced for any period that falls within the valid time window for time-shifted viewing. It is the client's responsibility to ensure that its available MPD and the metadata for its current presentation time are valid. Migrating a live session into a network PVR session using only small MPD updates can also be supported.

Special Media Segments

One challenge when the ISO/IEC 14496-12 file format is used within a block-request streaming system is that, as described above, it may be advantageous to store the media data for a single version of the presentation in multiple files, arranged in consecutive time segments. Furthermore, it may be advantageous to arrange for each file to begin with a random access point. Further, it may be advantageous to choose the positions of the seek points during the video encoding process and to segment the presentation into multiple files, each beginning with a seek point, based on that choice of seek points made during the encoding process, where each random access point may or may not be placed at the beginning of a file, but where each file begins with a random access point. In one embodiment with the properties described above, the presentation metadata, or the media presentation description, may contain the exact duration of each file, where duration is taken to mean, for example, the difference between the start time of the video media of one file and the start time of the video media of the next file.

Based on this information in the presentation metadata, the client can construct a mapping between the global timeline of the media presentation and the local timeline of the media within each file.

In another embodiment, the size of the presentation metadata can advantageously be reduced by instead specifying that every file or segment has the same duration. However, in this case, and where the media files are constructed according to the method above, the duration of each file may not be exactly equal to the duration specified in the media presentation description, because a random access point may not exist at the point which is exactly the specified duration from the beginning of the file.

A further embodiment of the invention, which provides for correct operation of the block-request streaming system despite the above discrepancy, is now described. In this method, an element may be provided within each file that specifies the mapping of the local timeline of the media within the file (by which is meant the timeline starting from timestamp zero against which the decoding and composition timestamps of the media samples in the file are specified according to ISO/IEC 14496-12) to the global presentation timeline. This mapping information may comprise a single timestamp in global presentation time that corresponds to timestamp zero in the local file timeline. Alternatively, the mapping information may comprise an offset value that specifies the difference between the global presentation time corresponding to timestamp zero in the local file timeline and the global presentation time corresponding to the beginning of the file according to the information provided in the presentation metadata.

Examples of such boxes may be, for example, the Track Fragment Decode Time ('tfdt') box or the Track Fragment Adjustment ('tfad') box together with the Track Fragment Media Adjustment ('tfma') box.
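As a minimal sketch of this mapping (the argument names are illustrative and do not correspond to the fields of any particular box):

def to_global_time(local_ts, file_start_global, mapping_offset, timescale):
    # local_ts          : decode/composition timestamp within the file, in ticks
    # file_start_global : start time of the file per the presentation metadata, in seconds
    # mapping_offset    : difference between the global time at local timestamp zero
    #                     and the file start time, in seconds
    # timescale         : ticks per second for the track
    return file_start_global + mapping_offset + local_ts / timescale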
Example Client Including Segment List Generation

An example client is now described. It can be used as a reference client for the server to ensure proper generation and updating of the MPD.

The HTTP streaming client is driven by the information provided in the MPD. The client is assumed to have access to the MPD it received at time T, i.e. the time at which it was able to successfully receive the MPD. Determining successful reception can include the client obtaining an updated MPD, or the client verifying that the MPD has not been updated since the previous successful reception.

An example of client operation is introduced. In order to provide a continuous streaming service to the user, the client first parses the MPD and creates a list of accessible segments for each representation for the client-local time at the current system time, taking into account the segment list generation procedure detailed below, possibly using playlists or URL construction rules. The client then selects one or more representations based on the information in the representation attributes and other information, e.g. the available bandwidth and the client's capabilities. Depending on grouping, representations may be presented standalone or jointly with other representations.

For each representation, the client obtains the binary metadata, e.g. the "moov" header for the representation, if present, and the media segments of the selected representations. The client accesses the media content by requesting segments or byte ranges of segments, possibly using the segment list.

The client may buffer media before starting the presentation and, once the presentation has started, the client continues to consume the media content by continuously requesting segments or portions of segments, taking into account the MPD update procedures.

The client may switch representations taking into account updated MPD information and/or updated information from its environment, e.g. a change in the available bandwidth. With media segment requests containing random access points, the client can switch to a different representation. When moving forward, i.e. as the current system time (referred to as the "NOW time", representing the time relative to the presentation) advances, the client consumes the accessible segments. With each advance of the NOW time, the client possibly expands the list of accessible segments for each representation according to the procedures specified herein.

If the end of the media presentation has not yet been reached, and if the current playback time comes within a threshold of the end of the media described in the MPD for any representation being consumed or to be consumed, such that the client expects to run out of media, then the client may request an update of the MPD with a new fetch time, reception time T. Once received, the client takes the possibly updated MPD and the new time T into account when generating the accessible segment lists. FIG. 29 illustrates the procedures for live services at different times at the client.

Generating the Accessible Segment List

Suppose the HTTP streaming client has access to an MPD and wishes to create an accessible segment list for an actual elapsed time NOW. The client is synchronized to a global time reference with some accuracy, but advantageously no direct synchronization with the HTTP streaming server is required.

The accessible segment list for each representation is preferably defined as a list of pairs of a segment start time and a segment locator, where the segment start time may, without loss of generality, be defined relative to the start of the representation. The start of the representation may be aligned with the start of a period, where this concept is applied. Otherwise, the start of the representation may be the start of the media presentation.

The client uses the URL construction rules and timing, for example as further defined herein. Once a list of the described segments is obtained, this list is further restricted to the accessible ones, which may be a subset of the segments of the complete media presentation. The construction is governed by the current value of the clock at the client, the NOW time. Generally, segments are available only for a set of availability times, for any NOW time within that set; for NOW times outside this window, no segments are available. In addition, for live services, a certain time, the check time, provides information on how far into the future the media is described.

The check time is defined on the media time axis documented in the MPD; when the client's playback time reaches the check time, it advantageously requests a new MPD.

The segment list is then further restricted by the check time together with the MPD attribute TimeShiftBufferDepth, such that the only available media segments are those for which the sum of the start time of the media segment and the presentation start time falls within the interval between NOW minus timeShiftBufferDepth minus the duration of the last described segment, and the smaller of the check time and NOW.
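The restriction just described lends itself to a compact sketch. The following Python function is illustrative only (the names are hypothetical; all times are in seconds on the same clock as NOW):

def accessible_segments(segments, now, presentation_start,
                        time_shift_buffer_depth, check_time):
    # 'segments' is a list of (start_time, duration, locator) tuples with
    # start times relative to the start of the representation.
    if not segments:
        return []
    last_duration = segments[-1][1]
    window_start = now - time_shift_buffer_depth - last_duration
    window_end = min(check_time, now)
    return [(start, dur, loc) for (start, dur, loc) in segments
            if window_start <= start + presentation_start <= window_end]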
Scalable Blocks

Occasionally, the available bandwidth drops so low that the block or blocks currently being received at a receiver are unlikely to be received completely in time to be played out without pausing the presentation. The receiver can detect such situations in advance. For example, the receiver might determine that it is receiving blocks encoding 5 units of media per 6 units of time, and has a buffer of 4 units of media, so that the receiver can expect to have to stall or pause the presentation after roughly 24 units of time. With sufficient notice, the receiver can react to such a situation, for example by abandoning the stream of current blocks and instead starting to request a block or blocks from a different representation of the content, e.g. one that uses less bandwidth per unit of playout time. For example, if the receiver switched to a representation in which the blocks encoded at least 20% more video time for blocks of the same size, the receiver might be able to eliminate the need to stall until the bandwidth situation improved.

However, it would be wasteful to have the receiver discard entirely the data already received from the abandoned representation. In one embodiment of the block-streaming system described herein, the data within each block can be encoded and arranged in such a way that certain prefixes of the data within the block can be used to continue the presentation without the remainder of the block having been received. For example, the well-known techniques of scalable video coding may be used. Examples of such video coding methods include H.264 Scalable Video Coding (SVC) and the temporal scalability of H.264 Advanced Video Coding (AVC). Advantageously, this method allows the presentation to continue based on the portion of a block that has already been received, even when the reception of a block or blocks is abandoned, e.g. due to a change in the available bandwidth. Another advantage is that a single data file can be used as the source for multiple different representations of the content. This is possible, for example, by using HTTP partial GET requests that select the subset of a block corresponding to the required representation.

One refinement detailed herein is an extended segment map, the scalable segment map. The scalable segment map contains the locations of the different layers within the segment, so that the client can access the parts of the segments accordingly and extract the corresponding layers. In another embodiment, the media data within a segment is ordered such that the quality of the segment rises as data is downloaded incrementally from the beginning of the segment. In yet another embodiment, the gradual increase in quality is applied for each block or fragment contained in the segment, so that fragment requests can be made to address the scalable approach.
FIG. 12 is a diagram illustrating an aspect of scalable blocks. In that figure, a transmitter 1200 outputs metadata 1202, scalable layer 1 (1204), scalable layer 2 (1206), and scalable layer 3 (1208), the latter being delayed. The receiver 1210 can then use the metadata 1202, scalable layer 1 (1204), and scalable layer 2 (1206) to present a media presentation 1212.

Independent Scalability Layers

As described above, it is undesirable for a block-request streaming system to have to stall when the receiver cannot receive the requested blocks of a specific representation of the media data in time for their playout, as this often creates a poor user experience. Stalls can be avoided, reduced or mitigated by restricting the data rate of the selected representations to be much lower than the available bandwidth, so that it becomes very unlikely that any given portion of the presentation will not be received in time; however, this strategy has the disadvantage that the media quality is necessarily significantly lower than could in principle be supported by the available bandwidth. A presentation of lower quality than is possible can also be interpreted as a poor user experience. Thus, the designer of a block-request streaming system faces a choice in the design of the client procedures, client programming or hardware configuration: either to request a version of the content with a data rate well below the available bandwidth, in which case the user may suffer poor media quality, or to request a version of the content with a data rate close to the available bandwidth, in which case the user may suffer a high probability of pauses during the presentation as the available bandwidth changes.

To handle such situations, the block-streaming systems described herein may be configured to handle multiple scalability layers independently, so that the receiver can make layered requests and the transmitter can respond to layered requests.

In such embodiments, the encoded media data for each block can be partitioned into multiple disjoint parts, referred to herein as "layers", such that a combination of layers comprises the entirety of the media data for a block, and such that a client that has received certain subsets of the layers can perform decoding and present a representation of the content. In this approach, the ordering of the data in the stream is such that contiguous ranges are of progressively increasing quality, and the metadata reflects this.

One example of a technique that can be used to generate layers with the above property is the scalable video coding technique described in ITU-T standard H.264/SVC. Another example of a technique that can be used to generate layers with the above property is the technique of temporal scalability layers provided in ITU-T standard H.264/AVC. In these embodiments, metadata can be provided in the MPD or within the segment itself that allows the construction of requests for individual layers of any given block and/or combinations of layers and/or a given layer of multiple blocks and/or combinations of layers of multiple blocks. For example, the layers comprising a block may be stored within a single file, and metadata can be provided specifying the byte ranges within the file corresponding to the individual layers. A file download protocol that allows byte ranges to be specified, e.g. HTTP 1.1, can be used to request an individual layer or multiple layers.
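By way of example, a client holding such byte-range metadata could retrieve a prefix of the layers of a segment with a single HTTP partial GET. The sketch below uses Python's standard urllib; the per-layer byte ranges are assumed to have been obtained from a scalable segment map as described above:

import urllib.request

def fetch_layers(segment_url, layer_ranges, num_layers):
    # 'layer_ranges' is a hypothetical list of (first_byte, last_byte)
    # pairs, one per layer, in download order; a prefix of contiguous
    # layers collapses into a single byte range.
    first = layer_ranges[0][0]
    last = layer_ranges[num_layers - 1][1]
    req = urllib.request.Request(
        segment_url, headers={"Range": "bytes=%d-%d" % (first, last)})
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # media data for layers 0..num_layers-1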
Furthermore, as will be clear to those of skill in the art upon reviewing this disclosure, the techniques described above concerning the construction, request and download of blocks of variable size or variable combinations of blocks can be applied in this context as well.

Combinations

Several embodiments that may advantageously be employed by a block-request streaming client in order to achieve an improvement in the user experience and/or a reduction in the required capacity of the serving infrastructure compared to existing techniques, through the use of media data partitioned into layers as described above, are now described.

In a first embodiment, the known techniques of a block-request streaming system may be applied with the modification that different versions of the content are, in some cases, replaced by different combinations of the layers. That is to say, where an existing system might provide two distinct representations of the content, the enhanced system described here may provide two layers, where one representation of the content in the existing system is similar in bit rate, quality and possibly other metrics to the first layer in the enhanced system, and where the second representation of the content in the existing system is similar in bit rate, quality and possibly other metrics to the combination of the two layers in the enhanced system. As a result, the storage capacity required within the enhanced system is reduced compared to that required in the existing system. Furthermore, whereas clients of the existing system may issue requests for blocks of one representation or the other, clients of the enhanced system may issue requests for either the first layer or both layers of a block. As a result, the user experience in the two systems is similar. Furthermore, improved caching is provided, since common segments are used even for different qualities, and these are therefore cached with higher likelihood.

In a second embodiment, a client in an enhanced block-request streaming system employing the layer method now described may maintain a separate data buffer for each of the several layers of the media encoding. As will be clear to those of skill in the art of data management within client devices, these "separate" buffers may be implemented by the allocation of physically or logically separate memory regions for the separate buffers, or by other techniques in which the buffered data is stored in one or multiple memory regions and the separation of the data from the different layers is achieved logically through the use of data structures that contain references to the storage locations of the data from the separate layers; thus, in the following, the term "separate buffers" includes any method by which the data of the distinct layers can be separately identified. The client issues requests for the individual layers of each block based on the occupancy of each buffer; for example, the layers may be arranged in a priority order such that a request for data from one layer may not be issued if the buffer occupancy of any layer lower in the priority order is below a threshold for that lower layer. In this method, receiving data from layers lower in the priority order is preferred, so that if the available bandwidth falls below that required to also receive layers higher in the priority order, then only the lower layers are requested. Furthermore, the thresholds associated with the different layers may be different, such that, for example, the lower layers have higher thresholds.
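A minimal sketch of this gating rule, with hypothetical names, is as follows:

def may_request_layer(layer, buffer_occupancy, thresholds):
    # 'buffer_occupancy' and 'thresholds' are lists indexed by layer in
    # priority order (index 0 = most important); the values may be in
    # seconds, bytes or blocks, as described below.
    for lower in range(layer):
        if buffer_occupancy[lower] < thresholds[lower]:
            # A more important layer is running low; fill it first.
            return False
    return True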
Should the available bandwidth change such that the data for a higher layer cannot be received before the playout time of a block, then the data for the lower layers will necessarily already have been received, and so the presentation can continue with the lower layers alone. The thresholds for buffer occupancy can be defined in terms of bytes of data, playout duration of the data contained in the buffer, number of blocks, or any other suitable measure.

In a third embodiment, the methods of the first and second embodiments may be combined such that multiple media representations are provided, each comprising a subset of the layers (as in the first embodiment), and such that the second embodiment is applied to the subset of the layers within a representation.

In a fourth embodiment, the methods of the first, second and/or third embodiments may be combined with embodiments in which multiple independent representations of the content are provided such that, for example, at least one of the independent representations comprises multiple layers to which the techniques of the first, second and/or third embodiments are applied.

Extended Buffer Manager

In combination with the buffer monitor 126 (see FIG. 2), an extended buffer manager may be used to optimize the client-side buffer. Block-request streaming systems want media playout to begin quickly and to continue smoothly, while simultaneously providing the maximum media quality to the user or the destination device. This may require that the client request blocks that have the maximum media quality, but that also allow playout to begin quickly and to subsequently be received in time to be played out without forcing a pause of the presentation.

In embodiments using the extended buffer manager, the manager decides which blocks of media data to request and when to make those requests. The extended buffer manager may, for example, be provided with a set of metadata for the content to be presented, this metadata including a list of the representations available for the content and metadata for each representation. The metadata for a representation may comprise information about the data rate of the representation and other parameters, such as video, audio or other codecs and their codec parameters, video resolution, decoding complexity, audio language, and any other parameters that may affect the choice of representation at the client.

The metadata for a representation may also comprise identifiers for the blocks into which the representation has been segmented, these identifiers providing the information needed by the client to request a block. For example, where the request protocol is HTTP, the identifier might be an HTTP URL, possibly together with additional information identifying a byte range or time span within the file identified by the URL, this byte range or time span identifying the specific block within the file identified by the URL.

In a specific implementation, the extended buffer manager can determine when a receiver makes a request for a new block, and may itself handle the sending of the requests. In a novel aspect, the extended buffer manager makes requests for new blocks according to a balancing ratio value that balances between using too much bandwidth and running out of media during a streaming playout.

The information received by the buffer monitor 126 from the block buffer 125 can include indications of each event in which media data is received, how much has been received, when playout of media data has started or stopped, and the speed of the media playout.
Based on this information, the buffer monitor 126 can compute a variable representing the current buffer size, Bcurrent. In these examples, Bcurrent represents the amount of media contained in the buffer or buffers of the client or other device, and may be measured in units of time, such that Bcurrent represents the amount of time it would take to play out all of the media represented by the blocks or partial blocks stored in the buffer or buffers if no additional blocks or partial blocks were received. Thus, Bcurrent represents the "playout duration", at the normal playout speed, of the media data available at the client but not yet played.

As time passes, the value of Bcurrent decreases as media is played out, and may increase each time new data for a block is received. Note that, for the purposes of this explanation, a block is assumed to be received when all of the data of that block is available at the block requestor 124, but other measures may be used instead, for example to take into account the reception of partial blocks. In practice, the reception of a block may take place over a period of time.

FIG. 13 illustrates the variation of the value of Bcurrent over time, as media is played out and blocks are received. As shown in FIG. 13, the value of Bcurrent is zero for times before t0, indicating that no data has been received. At t0, the first block is received and the value of Bcurrent increases until it equals the playout duration of the received block. At this point, playout has not yet started, and so the value of Bcurrent remains constant until time t1, at which point a second block arrives and Bcurrent increases by the size of this second block. At this point, playout begins and the value of Bcurrent starts to decrease linearly until time t2, at which point a third block arrives.

The progression of Bcurrent continues in this "sawtooth" manner, increasing stepwise each time a block is received (at times t2, t3, t4, t5 and t6) and decreasing smoothly as data is played out in between. Note that, in this example, playout proceeds at the normal playout speed of the content, and so the slope of the curve between block receptions is exactly -1, meaning that one second of media data is played for each second of real time that elapses. With frame-based media played out at a given number of frames per second, e.g. 24 frames per second, the slope of -1 will instead be approximated by small step functions indicating the playout of each individual frame of data, e.g. steps of -1/24 of a second as each frame is played out.

FIG. 14 shows another example of the evolution of Bcurrent over time. In that example, the first block arrives at t0 and playout starts immediately. Block arrival and playout continue until time t3, at which point the value of Bcurrent reaches zero. When that happens, no further media data is available for playout, forcing a pause of the media presentation. At time t4, a fourth block is received and playout can resume. This example therefore shows a case in which the reception of the fourth block was later than desired, so that playout was paused, resulting in a poor user experience. Thus, a goal of the extended buffer manager and the other features is to reduce the probability of this event while simultaneously maintaining a high media quality.

The buffer monitor 126 can also compute another metric, Bratio(t), which is the ratio of the media received over a given period of time to the length of that period. More specifically, Bratio(t) is equal to Treceived/(Tnow - t), where Treceived is the amount of media (measured by its playout time) received in the period from t, some time earlier than the current time, up to the current time, Tnow.
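The bookkeeping for these two metrics can be sketched as follows (illustrative only; all times in seconds):

import time

class BufferMonitorSketch:
    def __init__(self):
        self.receptions = []  # (wall_clock_time, playout_duration) per block
        self.played = 0.0     # total media played out so far

    def on_block_received(self, playout_duration):
        self.receptions.append((time.time(), playout_duration))

    def b_current(self):
        # Media buffered but not yet played, in playout seconds.
        return sum(d for (_, d) in self.receptions) - self.played

    def b_ratio(self, t):
        # Playout time of the media received since wall-clock time t,
        # divided by the wall-clock time elapsed since t.
        now = time.time()
        if now <= t:
            return 0.0
        received = sum(d for (w, d) in self.receptions if w >= t)
        return received / (now - t)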
Bratio(t) can be used to measure the rate of change of Bcurrent. Bratio(t) = 0 is the case in which no data has been received since time t; assuming media is being played out, Bcurrent will have decreased by (Tnow - t) since that time. Bratio(t) = 1 is the case in which the same amount of media is received as is played out over the time (Tnow - t); Bcurrent will have the same value at time Tnow as at time t. Bratio(t) > 1 is the case in which more data is received than is needed for playout over the time (Tnow - t); Bcurrent will have increased from time t to time Tnow.

The buffer monitor 126 further computes a value, State, which may take one of a discrete number of values. The buffer monitor 126 is further provided with a function, NewState(Bcurrent, Bratio), which, given the current value of Bcurrent and values of Bratio for t < Tnow, provides a new State value as its output. Whenever Bcurrent and Bratio cause this function to return a value different from the current value of State, the new value is assigned to State and this new State value is indicated to the block selector 123.

The function NewState may be evaluated with reference to the space of all possible values of the pair (Bcurrent, Bratio(Tnow - Tx)), where Tx may be a fixed (configured) value, or may be derived from Bcurrent, for example by a configuration table that maps from values of Bcurrent to values of Tx, or may depend on the previous value of State. The buffer monitor 126 is provided with one or more partitionings of this space, where each partitioning comprises a set of disjoint regions, each region being annotated with a State value. The evaluation of the function NewState then comprises the operation of identifying the partitioning and the region within it in which the pair (Bcurrent, Bratio(Tnow - Tx)) lies; the return value is then the annotation associated with that region. In a simple case, only one partitioning is provided. In more complex cases, the partitioning may depend on the pair (Bcurrent, Bratio(Tnow - Tx)) at previous evaluation times of the function NewState, or on other factors.

In one particular embodiment, the partitioning described above may be based on a configuration table containing a number of threshold values for Bcurrent and a number of threshold values for Bratio. Specifically, let the threshold values for Bcurrent be Bthresh(0) = 0, Bthresh(1), ..., Bthresh(n1), Bthresh(n1+1) = ∞, where n1 is the number of non-zero threshold values for Bcurrent, and let the threshold values for Bratio be Br-thresh(0) = 0, Br-thresh(1), ..., Br-thresh(n2), Br-thresh(n2+1) = ∞, where n2 is the number of threshold values for Bratio. These threshold values define a partitioning comprising a grid of (n1+1) × (n2+1) cells, where the ith cell of the jth row corresponds to the region in which Bthresh(i-1) ≤ Bcurrent < Bthresh(i) and Br-thresh(j-1) ≤ Bratio < Br-thresh(j). Each cell of the grid described above is annotated with a State value, for example by being associated with particular values stored in memory, and the function NewState then returns the State value annotated to the cell indicated by the values of Bcurrent and Bratio(Tnow - Tx).
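The grid lookup can be sketched as follows (hypothetical structures; the hysteresis refinement described next is omitted):

import bisect

def new_state(b_current, b_ratio, b_thresh, br_thresh, grid):
    # b_thresh  : ascending finite non-zero thresholds for Bcurrent
    # br_thresh : ascending finite thresholds for Bratio
    # grid      : grid[j][i] is the State annotation ("Low", "Stable"
    #             or "Full") for Bcurrent column i and Bratio row j.
    i = bisect.bisect_right(b_thresh, b_current)  # column index
    j = bisect.bisect_right(br_thresh, b_ratio)   # row index
    return grid[j][i]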
In a further embodiment, a hysteresis value may be associated with each threshold value. In this enhanced method, the evaluation of the function NewState may be based on a temporary partitioning constructed using a set of temporarily modified threshold values, as follows. For each Bcurrent threshold value that is less than the Bcurrent range corresponding to the cell chosen at the last evaluation of NewState, the threshold value is reduced by subtracting the hysteresis value associated with that threshold. For each Bcurrent threshold value that is greater than the Bcurrent range corresponding to the cell chosen at the last evaluation of NewState, the threshold value is increased by adding the hysteresis value associated with that threshold. For each Bratio threshold value that is less than the Bratio range corresponding to the cell chosen at the last evaluation of NewState, the threshold value is reduced by subtracting the hysteresis value associated with that threshold. For each Bratio threshold value that is greater than the Bratio range corresponding to the cell chosen at the last evaluation of NewState, the threshold value is increased by adding the hysteresis value associated with that threshold. The modified threshold values are used to evaluate the value of NewState, and then the threshold values are returned to their original values.

Other ways of defining partitionings of the space will become apparent to those of skill in the art upon reading this disclosure. For example, a partitioning may be defined by using inequalities based on linear combinations of Bratio and Bcurrent, e.g. linear inequality thresholds of the form α1·Bratio + α2·Bcurrent ≤ α0, for given real values α0, α1 and α2, to define half-spaces within the overall space, with each disjoint region defined as the intersection of a number of such half-spaces.

The above description is illustrative of the basic process. As will be clear to those of skill in the art of real-time programming upon reading this disclosure, efficient implementations are possible. For example, each time new information is provided to the buffer monitor 126, it is possible to compute the future time at which NewState would transition to a new value if, for example, no further data for blocks were received. A timer is then set for this time and, in the absence of further input, the expiry of this timer causes the new State value to be sent to the block selector 123. As a result, computations need be performed only when new information is provided to the buffer monitor 126 or when a timer expires, rather than continuously.

Suitable values of State could be "Low", "Stable" and "Full". An example of a suitable set of threshold values and the resulting grid of cells is shown in FIG. 15.

In FIG. 15, the Bcurrent thresholds are shown on the horizontal axis in units of milliseconds, with the hysteresis values shown below them as "+/- value". The Bratio thresholds are shown on the vertical axis in units of per mille (i.e. multiplied by 1000), with the hysteresis values shown below them as "+/- value". The State values are annotated in the grid cells as "L", "S" and "F", representing "Low", "Stable" and "Full", respectively.

The block selector 123 receives a notification from the block requestor 124 each time there is an opportunity to request a new block.
As described above, the block selector 123 is provided with information about the plurality of available blocks and metadata for those blocks, including, for example, information about the media data rate of each block.

The information about the media data rate of a block may comprise the actual media data rate of the specific block (i.e. the block size in bytes divided by the playout time in seconds), the average media data rate of the representation to which the block belongs, a measure of the available bandwidth required, on a sustained basis, to play out the representation to which the block belongs without pauses, or a combination of the above.

The block selector 123 selects blocks based on the State value most recently indicated by the buffer monitor 126. When this State value is "Stable", the block selector 123 selects a block from the same representation as the previously selected block. The block selected is the first block (in playout order) containing media data for a period in the presentation for which media data has not previously been requested.

When the State value is "Low", the block selector 123 selects a block from a representation with a lower media data rate than that of the previously selected block. A number of factors may influence the exact choice of representation in this case. For example, the block selector 123 may be provided with an indication of the aggregate rate of incoming data and may choose a representation with a media data rate that is less than that value.

When the State value is "Full", the block selector 123 selects a block from a representation with a higher media data rate than that of the previously selected block. A number of factors may influence the exact choice of representation in this case. For example, the block selector 123 may be provided with an indication of the aggregate rate of incoming data and may choose a representation with a media data rate that is not greater than that value.
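A sketch of this selection rule, assuming a hypothetical list of representation objects ordered by ascending media data rate, might look as follows:

def select_representation(state, representations, current, incoming_rate):
    # 'representations' is ordered by ascending media data rate (a
    # hypothetical 'rate' attribute); 'current' is the index of the
    # representation of the previously selected block; 'incoming_rate'
    # is the observed aggregate rate of incoming data.
    if state == "Low":
        # Step down to a cheaper representation below the incoming rate.
        for idx in range(current - 1, -1, -1):
            if representations[idx].rate < incoming_rate:
                return idx
        return 0
    if state == "Full":
        # Step up, but not beyond what the incoming rate can sustain.
        best = current
        for idx in range(current + 1, len(representations)):
            if representations[idx].rate <= incoming_rate:
                best = idx
        return best
    return current  # "Stable": stay with the current representation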
A number of additional factors may further influence the operation of the block selector 123. In particular, the frequency with which the media data rate of the selected blocks is increased may be limited, even if the buffer monitor 126 continues to indicate the "Full" state. Furthermore, it is possible that the block selector 123 receives a "Full" state indication but that no blocks of a higher media data rate are available (for example, because the most recently selected block was already of the highest available media data rate). In this case, the block selector 123 may delay the selection of the next block by a time chosen such that the overall amount of media data buffered in the block buffer 125 is bounded above.

Additional factors may influence the set of blocks considered during the selection process. For example, the available blocks may be restricted to those from representations whose encoding resolution falls within a specific range provided to the block selector 123.

The block selector 123 may also receive input from other components that monitor other aspects of the system, such as the availability of computational resources for media decoding. If such resources become scarce, the block selector 123 may choose blocks whose decoding is indicated within the metadata to be of lower computational complexity (for example, representations with a lower resolution or frame rate are generally of lower decoding complexity).

The embodiment described above brings a substantial advantage in that the use of the value Bratio in the evaluation of the function NewState within the buffer monitor 126 allows a faster increase in quality at the beginning of a presentation, compared to a method that considers only Bcurrent. Without considering Bratio, a large amount of buffered data might accumulate before the system could select blocks with a higher media data rate, and thus higher quality. However, when the Bratio value is large, this indicates that the available bandwidth is much higher than the media data rate of the previously received blocks and that, even with relatively little buffered data (i.e. a low value of Bcurrent), it remains safe to request blocks of a higher media data rate, and thus higher quality. Equally, if the Bratio value is low (e.g. < 1), this indicates that the available bandwidth has fallen below the media data rate of the previously requested blocks, and thus, even if Bcurrent is high, the system will switch to a lower media data rate, and thus lower quality, for example to avoid reaching the point Bcurrent = 0 at which the playout of the media stalls. This improved behavior can be especially important in environments in which the network conditions, and thus the delivery rates, can vary quickly and dynamically, for example when users stream to mobile devices.

Another advantage is conferred by the use of configuration data to specify the partitioning of the space of (Bcurrent, Bratio) values. Such configuration data may be provided to the buffer monitor 126 as part of the presentation metadata or by other dynamic means. Since the behavior of user network connections may be highly variable between users and, for a single user, over time, it may be difficult to predict partitionings that will work well for all users in practical deployments. The possibility of providing such configuration information to users dynamically allows good configuration settings to be developed over time according to accumulated experience.

Variable Request Sizing

A high frequency of requests may be needed if each request is for a single block and each block encodes a short media segment. If the media blocks are short, the video playout moves rapidly from block to block, which gives the receiver more frequent opportunities to adjust or change its selected data rate by changing representations, improving the probability that playout can continue without stalling. However, a disadvantage of a high frequency of requests is that they may not be sustainable on certain networks in which the bandwidth available from clients toward servers is constrained, for example in wireless WAN networks such as 3G and 4G wireless WANs, where the capacity of the data link from the client to the network may be limited, or may become limited for short or long periods of time due to changes in radio conditions.

A high frequency of requests also implies a high load on the serving infrastructure, which incurs associated costs in terms of capacity requirements. Thus, it would be desirable to have some of the benefits of a high frequency of requests without all of the drawbacks.

In some embodiments of a block-streaming system, the flexibility of high-frequency requests is combined with less frequent requests.
In such embodiments, blocks may be constructed as described above and may also be aggregated into segments that collectively contain multiple blocks, as also described above. At the beginning of the presentation, the processes described above, in which each request references a single block, or in which multiple concurrent requests are made to request portions of a block, are applied in order to ensure a fast channel zapping time and thus a good user experience at the start of the presentation. Subsequently, when certain conditions, described below, are met, the client may issue requests that encompass multiple blocks in a single request. This is possible because the blocks have been aggregated into larger files or segments and can be requested using byte or time ranges. Consecutive byte or time ranges can be aggregated into a single larger byte or time range, so that a single request can be made for multiple blocks, and even discontiguous blocks can be requested in one request.

One basic configuration that can be driven by the decision of whether to request a single block (or a partial block) or multiple consecutive blocks is to base that decision on whether the requested blocks are likely to be played out. For example, if it is likely that a change to another representation will soon be needed, then it is better for the client to make requests for single blocks, i.e. for small amounts of media data. One reason for this is that, if a request for multiple blocks is made while a switch to another representation may be imminent, the switch may take place before the last few blocks of the request have been played out. Thus, the download of these last few blocks might delay the delivery of the media data of the representation to which the switch is made, which could cause the playout of the media to stall.

However, requests for single blocks do result in a higher frequency of requests. On the other hand, if it is unlikely that a change to another representation will soon be needed, then it may be preferable to make requests for multiple blocks, since all of these blocks are likely to be played out; this results in a lower frequency of requests, which can substantially reduce the request overhead, especially when it is typical that no representation change is imminent.

In conventional block-aggregation systems, the amount requested in each request is not dynamically adjusted; i.e., typically, each request covers an entire file, or each request covers approximately the same amount of a file of a representation (sometimes measured in time, sometimes in bytes). Thus, if all requests are smaller than this, then the request overhead is high, whereas if all requests are larger, then this increases the chances of media stall events and/or of providing lower-quality media playout, if lower-quality representations are chosen to avoid having to change representations quickly in response to changing network conditions.

An example of a condition which, when satisfied, can cause subsequent requests to reference multiple blocks is a threshold on the buffer size, Bcurrent. If Bcurrent is below the threshold, then each request issued references a single block. If Bcurrent is greater than or equal to the threshold, then each request issued references multiple blocks. If a request is issued that references multiple blocks, the number of blocks requested in each single request can be determined in one of several possible ways.
For example, the number can be constant, e.g. two. Alternatively, the number of blocks requested in a single request can depend on the buffer state and, in particular, on Bcurrent. For example, a number of thresholds can be set, with the number of blocks requested in a single request derived from the highest of the thresholds that is less than Bcurrent.

Another example of a condition which, when satisfied, can cause requests to reference multiple blocks is the variable State value described above. For example, when State is "Stable" or "Full", requests may be issued for multiple blocks, whereas when State is "Low", all requests may be for a single block.

Another embodiment is shown in FIG. In this embodiment, when the next request is to be made (determined in step 1300), the current State value and Bcurrent are used to determine the size of the next request. If the current State value is "Low", or if the current State value is "Full" and the current representation is not the best available (determined in step 1310, answer "Yes"), then the next request is chosen to be short, e.g. for just the next block (the block is determined and the request made in step 1320). The rationale for this is that these are the conditions under which a change of representation is likely to occur quite soon. If the current State value is "Stable", or if the current State value is "Full" and the current representation is the best available (determined in step 1310, answer "No"), then the duration of the consecutive blocks requested in the next request is chosen to be proportional to a fraction α of Bcurrent, for some fixed α < 1 (the blocks are determined in step 1330, and the request is made in step 1340); for example, for α = 0.4, if Bcurrent = 5 seconds then the next request may cover about 2 seconds of blocks, and if Bcurrent = 10 seconds then the next request may cover about 4 seconds of blocks. One rationale for this is that, under these conditions, it is unlikely that a switch to a new representation will be made within an amount of time that is proportional to Bcurrent.
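The sizing rule of this embodiment can be sketched as follows (the names and default values are illustrative only):

def next_request_duration(state, b_current, current_is_best,
                          alpha=0.4, single_block_duration=2.0):
    # Returns the playout duration (in seconds) that the next request
    # should cover: a single block when a representation switch looks
    # likely, otherwise a run of consecutive blocks covering roughly
    # alpha * Bcurrent, with alpha < 1.
    switch_likely = (state == "Low" or
                     (state == "Full" and not current_is_best))
    if switch_likely:
        return single_block_duration
    return alpha * b_current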
Flexible Pipelining

A block-streaming system may use a file request protocol with a particular underlying transport protocol, e.g. TCP/IP. At the beginning of a TCP/IP or other transport protocol connection, it may take considerable time to achieve utilization of the full available bandwidth. This can result in a "connection startup penalty" each time a new connection is started. For example, in the case of TCP/IP, the connection startup penalty occurs due both to the time taken for the initial TCP handshake to establish the connection and to the time taken for the congestion control protocol to achieve full utilization of the available bandwidth.

In this case, it may be desirable to issue multiple requests on a single connection in order to reduce the frequency with which the startup penalty is incurred. However, some file transport protocols, e.g. HTTP, do not provide a mechanism for canceling a request other than closing the transport-layer connection entirely, thereby incurring a connection startup penalty when a new connection is established in place of the old one. An issued request may need to be canceled if it is determined that the available bandwidth has changed and a different media data rate should be requested instead, i.e. if there is a decision to switch to a different representation. Another reason for canceling an issued request can be that the user has requested the termination of the media presentation and the initiation of a new presentation (perhaps of the same content item at a different point in the presentation, or perhaps of a new content item).

As is known, the connection startup penalty can be avoided by keeping the connection open and reusing the same connection for subsequent requests, and, as is also known, the connection can be kept fully utilized if multiple requests are issued on the same connection at the same time (a technique known as "pipelining" in the context of HTTP). However, a disadvantage of issuing multiple requests at the same time, or more generally in such a way that multiple requests are issued on a connection before earlier requests have completed, is that the connection is then committed to carrying the responses to those requests; therefore, if it becomes desirable to change which requests should be issued, the connection may have to be closed if it is necessary to cancel requests that have already been issued but are no longer wanted.

The probability that an issued request must be canceled may depend in part on the duration of the time interval between the issuing of the request and the playout time of the requested block, in the sense that, when this time interval is large, the probability that an issued request must be canceled is also high (because the available bandwidth is likely to change during the interval).

As is known, some file download protocols have the property that a single underlying transport-layer connection can advantageously be used for multiple download requests. For example, HTTP has this property, since the reuse of a single connection for multiple requests avoids the "connection startup penalty" described above for all requests other than the first one. However, a disadvantage of this approach is that the connection is committed to transporting the requested data of each issued request, and so, if a request or requests must be canceled, either the connection may be closed, incurring a connection startup penalty when a replacement connection is established, or the client may wait to receive data that it no longer needs, incurring a delay in the reception of subsequent data.

Embodiments that retain the advantages of connection reuse without incurring this disadvantage, and that additionally improve the frequency with which connections can be reused, are now described.

The embodiments of the block-streaming systems described herein are configured to reuse a connection for multiple requests without having to commit the connection to a particular set of requests at the outset. Essentially, a new request is issued on an existing connection when the requests already issued on the connection have not yet completed but are close to completion. One reason not to wait until the existing requests complete is that, if the previous requests have completed, the connection speed may degrade, i.e. the underlying TCP session may go into an idle state, or the TCP cwnd variable may be substantially reduced, thereby substantially reducing the initial download speed of the new request issued on that connection.
One reason to wait until close to completion before issuing an additional request is that, if a new request is issued long before the previous requests complete, the newly issued request may not even begin for some substantial period of time, and it may be that, during this period before the newly issued request begins, the decision to make the new request becomes invalid, e.g. due to a decision to switch representations. Thus, embodiments of clients implementing this technique issue a new request on a connection as late as possible without reducing the download capabilities of the connection.

The method comprises monitoring the number of bytes received on a connection in response to the most recent request issued on that connection, and applying a test to this number. This can be done by configuring the receiver (or the transmitter, if applicable) to perform the monitoring and the testing.

If the test passes, a further request can be issued on the connection. One example of a suitable test is whether the number of bytes received is greater than a fixed fraction of the size of the requested data. For example, this fraction could be 80%. Another example of a suitable test is based on the following calculation, as illustrated in FIG. In the calculation, let R be an estimate of the data rate of the connection, let T be an estimate of the round-trip time ("RTT"), and let X be a numeric factor, which may be a constant set, for example, to a value between 0.5 and 2, where the estimates of R and T are updated regularly (updated in step 1410). Let S be the size of the data requested in the most recent request, and let B be the number of bytes of the requested data that have been received (computed in step 1420).

One suitable test is to have the receiver (or, if applicable, the transmitter) execute a routine that evaluates the inequality (S - B) < X·R·T (tested in step 1430), where the test "passes" if the inequality is true. If the test passes, a check may be made as to whether another request is ready to be issued on the connection (step 1440) and, if "Yes", that request is issued on the connection (step 1450). If "No", the process returns to step 1410 to continue the updating and testing. If the result of the test in step 1430 is "No", the process likewise returns to step 1410 to continue the updating and testing.

The inequality test of step 1430 (implemented, for example, by suitably programmed elements) causes each subsequent request to be issued when the amount of data remaining to be received equals X times the amount of data that can be received at the current estimated reception rate within one RTT. A number of methods for estimating the data rate R in step 1410 are known in the art. For example, the data rate can be estimated as Dt/t, where Dt is the number of bits received in the preceding t seconds, and where t can be, e.g., 1 second or 0.5 seconds or some other interval. Another method is an exponentially weighted average of the incoming data rate, or a first-order infinite impulse response (IIR) filter. A number of methods for estimating the RTT, T, in step 1410 are known in the art.

The test in step 1430 can be applied to the aggregate of all active connections on an interface, as explained in more detail below.
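The test of step 1430, together with one of the rate estimators mentioned above, can be sketched as follows (the weighting factor is illustrative):

def ready_for_next_request(s, b, r, t, x=1.0):
    # Issue the next request on the connection when the outstanding
    # data, S - B, is less than X times what one round-trip time of
    # downloading at the estimated rate R would deliver.
    return (s - b) < x * r * t

def updated_rate_estimate(prev_rate, bytes_in_interval, interval, w=0.2):
    # Exponentially weighted average of the incoming data rate, one of
    # the estimation methods mentioned above (rates in bits/second).
    sample = 8.0 * bytes_in_interval / interval
    return (1.0 - w) * prev_rate + w * sample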
The method further comprises constructing a candidate request list that associates each candidate request with a suitable set of servers to which the request may be issued, and ordering the candidate request list by priority. Some entries on the candidate request list may have the same priority. The servers on the list of suitable servers associated with each candidate request are identified by host name. Each host name corresponds to a set of Internet protocol addresses that can be obtained from the Domain Name System, as is well known. Thus, each possible request on the candidate request list is associated with a set of Internet protocol addresses, specifically the union of the sets of Internet protocol addresses associated with the host names of the servers associated with that candidate request. Whenever the test described in step 1430 succeeds for a connection and no new request has yet been issued on that connection, the highest-priority request on the candidate request list whose associated address set contains the Internet protocol address of the connection's destination is selected, and this request is issued on the connection. The request is also removed from the candidate request list.

Candidate requests can be removed (canceled) from the candidate request list, new requests can be added to the candidate list with a priority higher than that of existing requests on the list, and existing requests on the candidate list can have their priorities changed. The dynamic nature of which requests are on the candidate request list, and of their priorities, can change which request is issued next, depending on when a test of the kind described in step 1430 is passed.

For example, if the answer to the test described in step 1430 is "yes" at some time t, the next request issued might be request A, whereas if the answer does not become "yes" until some later time t' > t, the next request issued might instead be request B, because request A was deleted from the candidate request list between times t and t', or because request B was added to the candidate request list between times t and t' with a priority higher than that of request A, or because request B was already on the candidate list at time t with a priority lower than that of request A, but request B's priority was raised above that of request A between times t and t'.

FIG. 18 shows an example of a candidate request list. In this example there are three connections, and six requests on the candidate list, labeled A, B, C, D, E and F. Each request on the candidate list can be issued on a subset of the connections, as indicated: for example, request A can be issued on connection 1, whereas request F can be issued on connection 2 or connection 3. The priority of each request is also labeled in FIG. 18, with a lower priority value indicating a higher-priority request. Thus requests A and B, with priority 0, are the highest-priority requests, whereas request F, with priority value 3, has the lowest priority among the requests on the candidate list.

If, at this time t, connection 1 passes the test described in step 1430, either request A or request B is issued on connection 1.
If instead connection 3 passes the test described in step 1430 at this time t, then request D is issued on connection 3, since request D is the highest-priority request that can be issued on connection 3.

Suppose now that, for all connections, the answer to the test described in step 1430 remains "no" from time t until some later time t', and that between t and t' request A has its priority changed from 0 to 5, request B is deleted from the candidate list, and a new request G with priority 0 is added to the candidate list. At time t', the new candidate list is then as shown in FIG. 19.

If at time t' connection 1 passes the test described in step 1430, request C, with priority 4, is now the highest-priority request on the candidate list that can be issued on connection 1, and so request C is issued on connection 1.

Compare this with the situation in which request A had instead been issued on connection 1 at time t (request A being, as shown in FIG. 18, one of the two highest-priority choices for connection 1 at time t). Since the answer to the test described in step 1430 remained "no" for all connections from time t until time t', connection 1 was still delivering data for requests issued before time t at least until time t', and so request A would not have started before time t'. Request C, issued at time t', starts at the same time at which request A would have started, and since by that time request C has a higher priority than request A, issuing request C at time t' is a better decision than issuing request A at time t would have been.

As another alternative, if a test of the kind described in step 1430 is applied to a set of active connections, a connection can be chosen whose destination Internet protocol address is associated with the first request on the candidate request list, or with another request of the same priority as the first.

Several methods are possible for constructing the candidate request list. For example, the candidate list can contain n requests representing the requests for the next n portions of data, in playout order, of the current representation of the presentation, where the request for the earliest portion of data has the highest priority and the request for the latest portion of data has the lowest priority. In some cases n can be one. The value of n can depend on the buffer size Bcurrent, or on another state variable or measure of the client's buffer occupancy. For example, a number of threshold values can be set for Bcurrent, with a value of n associated with each threshold, and the value of n is then taken to be the value associated with the largest threshold that is smaller than Bcurrent.
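As an illustration of the selection rule of FIGS. 18 and 19, a minimal sketch in C, assuming each candidate request carries a priority (lower value meaning higher priority) and a bitmask of the connections on which it may be issued; the structure and names are hypothetical.

#include <stddef.h>

struct candidate {
    int priority;            /* lower value = higher priority        */
    unsigned conn_mask;      /* bit c set: issuable on connection c  */
};

/* Return the index of the highest-priority candidate that may be
 * issued on connection 'conn', or -1 if none is eligible (compare
 * the examples of FIG. 18 and FIG. 19). */
int select_candidate(const struct candidate *list, size_t n, int conn)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (!(list[i].conn_mask & (1u << conn)))
            continue;        /* not issuable on this connection */
        if (best < 0 || list[i].priority < list[best].priority)
            best = (int)i;
    }
    return best;
}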
The embodiment described above ensures a flexible assignment of requests to connections, and gives priority to reusing existing connections even when the highest-priority request is not suitable for a given existing connection (because the connection's destination IP address is not among those assigned to the host names associated with that request). The dependence of n on Bcurrent, or on another state variable or measure of the client's buffer occupancy, ensures that such "out of priority" requests are not issued when the client urgently needs to issue and complete the requests for the data that is next in playout order. These methods can be advantageously combined with cooperative HTTP and FEC.

Consistent Server Selection

As is well known, files to be downloaded using a file download protocol are commonly identified by an identifier comprising a host name and a file name. For example, this is the case for the HTTP protocol, in which the identifier is a Uniform Resource Identifier (URI). A host name can correspond to multiple hosts, identified by Internet protocol addresses. For example, this is a common way of spreading the load of requests from many clients across multiple physical machines, and the approach is commonly adopted by content delivery networks (CDNs). In this case, a request issued on a connection to any of the physical hosts is expected to succeed. Several methods are known by which a client can select among the Internet protocol addresses associated with a host name. For example, these addresses are typically provided to the client through the Domain Name System and are provided in priority order, and a client can then select the highest-priority (first) Internet protocol address. In general, however, there is no coordination between clients as to how this selection is made, with the result that different clients may request the same file from different servers. This can cause the same file to be stored in the caches of multiple nearby servers, which reduces the efficiency of the caching infrastructure.

This can be addressed by a system that advantageously increases the probability that two clients requesting the same block will request it from the same server. The novel method described here comprises selecting from among the available Internet protocol addresses in a manner determined by the identifier of the file to be requested, and in such a way that different clients presented with the same or a similar set of Internet protocol addresses, and with the same file identifier, make the same selection.

A first embodiment of the method is described with reference to FIG. 17. As indicated in step 1710, the client first obtains a set of Internet protocol addresses IP1, IP2, ..., IPn. If, as determined in step 1720, there is a file to be requested, the client determines the Internet protocol address from which to request that file in steps 1730 through 1770. Given the set of Internet protocol addresses and the identifier of the file to be requested, the method comprises ordering the Internet protocol addresses in a manner determined by the file identifier. For example, for each Internet protocol address, a byte string is constructed comprising the concatenation of that address and the file identifier, as shown in step 1730. A hash function is applied to this byte string, as shown in step 1740, and the resulting hash values are arranged according to a fixed ordering, as indicated in step 1750, for example in increasing numerical order, which induces an ordering on the Internet protocol addresses. The same hash function can be used by all clients, thereby ensuring that the hash function produces the same result for a given input at every client.
The hash function can be statically configured into all clients of the set of clients; or all clients of the set can obtain a partial or complete description of the hash function when they obtain the list of Internet protocol addresses; or all clients of the set can obtain a partial or complete description of the hash function when they obtain the file identifier; or the hash function can be determined by other means. As shown in steps 1760 and 1770, the Internet protocol address that is first in this ordering is selected, and this address is used to establish a connection and to request all or a portion of the file.

The above method can be applied when a new connection is to be established to request a file. It can also be applied when several established connections are available and one of them is to be selected for issuing a new request. Furthermore, when an established connection is available and a request is to be selected from among a set of candidate requests of equal priority, an ordering over the candidate requests can be determined by the same hash-value method described above, and the candidate request appearing first in this ordering is selected. These methods can also be combined: a hash is computed for each combination of connection and request, these hash values are arranged according to a fixed ordering, and the combination that occurs first in the induced ordering over the combinations of request and connection is selected from among the set of connections and equal-priority requests.
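A minimal sketch of steps 1730 through 1760 in C; the FNV-1a hash used here is only a stand-in, since the method merely requires that all clients use the same hash function, whichever function that is. The address whose hash value is numerically smallest is the one that appears first in the ascending ordering of step 1750.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Stand-in hash (FNV-1a, 64-bit); any hash shared by all clients
 * would serve. */
static uint64_t fnv1a(const char *s)
{
    uint64_t h = 1469598103934665603ull;
    for (; *s; s++) { h ^= (unsigned char)*s; h *= 1099511628211ull; }
    return h;
}

/* Steps 1730-1760: hash each (address || file identifier) byte string
 * and select the address whose hash value sorts first in ascending
 * numerical order. */
const char *select_address(const char *const addrs[], size_t n,
                           const char *file_id)
{
    const char *best = NULL;
    uint64_t best_h = 0;
    char buf[512];
    for (size_t i = 0; i < n; i++) {
        snprintf(buf, sizeof buf, "%s%s", addrs[i], file_id);
        uint64_t h = fnv1a(buf);
        if (!best || h < best_h) { best = addrs[i]; best_h = h; }
    }
    return best;
}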
This method is advantageous for the following reasons. A typical approach taken by a block serving infrastructure, such as that shown in FIG. 1 (BSI 101) or FIG. 2 (BSIs 101), and in particular a technique commonly adopted by CDNs, is to provide multiple caching proxy servers. A caching proxy server that cannot serve a given request from its cache typically forwards the request to another server, receives the response from that server, typically containing the requested file, and forwards the response to the client. The caching proxy server can also store (cache) the requested file so that it can respond immediately to subsequent requests for that file. The common approach described above has the property that the set of files stored at a given caching proxy server is determined predominantly by the set of requests that the caching proxy server receives.

The method described above then has the following advantages. If the same list of Internet protocol addresses is provided to all clients of a set of clients, these clients will use the same Internet protocol address for all requests issued for the same file. If there are two different lists of Internet protocol addresses and each client is provided with one of the two, the clients will use at most two different Internet protocol addresses for all requests issued for the same file. In general, if the lists of Internet protocol addresses provided to the clients are similar, the clients will use only a small set of the provided Internet protocol addresses for all requests issued for the same file. Since nearby clients tend to be offered similar lists of Internet protocol addresses, neighboring clients are likely to issue requests for a given file to only a small fraction of the caching proxy servers available to them. Thus, only a small fraction of the caching proxy servers will come to cache the file, which advantageously minimizes the amount of caching resources used to cache the file.

Preferably, the hash function has the properties that only a very small fraction of distinct inputs are mapped to the same output and that distinct inputs are mapped to essentially random outputs; this ensures that, for a given set of Internet protocol addresses, each address in the set appears first in the sorted list generated in step 1750 for an approximately equal proportion of files. On the other hand, it is important that the hash function be deterministic, in the sense that, for a given input, its output is the same at all clients.

Another advantage of the method described above is as follows. Suppose the same list of Internet protocol addresses is provided to all clients of the set. Owing to the properties of the hash function just described, the requests for different files from these clients will be spread evenly across the set of Internet protocol addresses, which in turn means that the requests are spread evenly across the caching proxy servers. Thus the caching resources used to store the files are distributed evenly across the caching proxy servers, and the file requests are distributed evenly across them as well. The method therefore provides both storage balancing and load balancing across the caching infrastructure.

Several variations of the approach described above are known to those of skill in the art, and in many cases these variations retain the property that the set of files stored at a given proxy is determined, at least in part, by the set of requests that the caching proxy server receives. In the common case in which a given host name resolves to multiple physical caching proxy servers, it commonly happens that all of these servers eventually store copies of the files that are frequently requested. Such duplication may be undesirable, because the storage resources on the caching proxy servers are limited, and files may as a result occasionally be deleted (purged) from a cache. The novel method described here ensures that requests for a given file are directed to the caching proxy servers in a way that reduces this duplication, thereby reducing the need to delete files from the caches and thereby increasing the likelihood that a given file is present in (i.e., has not been purged from) the proxy cache. When a file is present in the proxy cache, the response sent to the client is faster, which has the advantage of reducing the probability that the requested file arrives late, causing the media playout to pause and hence a bad user experience. Furthermore, when a file is not present in a proxy cache, the request may be sent to another server, placing additional load on both the serving infrastructure and the network connections between the servers. In many cases, the server to which the request is sent may be at a distant location, and transporting the file from that server back to the caching proxy server may incur transmission costs.
The novel method described here therefore also results in a reduction of these transmission costs.

Probabilistic Complete File Requests

One particular concern where the HTTP protocol is used with range requests is the behavior of the cache servers commonly used to provide scalability in the serving infrastructure. Although HTTP cache servers commonly support the HTTP Range header, the exact behavior of different HTTP cache servers varies between implementations. Most cache server implementations serve range requests from the cache when the file is available in the cache. A common implementation of an HTTP cache server always forwards downstream HTTP requests containing a Range header to an upstream node (a cache server or the origin server), unless the cache server has a copy of the file. In some implementations, the upstream response to a range request is the entire file; the entire file is cached, and the response to the downstream range request is extracted from this file and sent. In at least one implementation, however, the upstream response to a range request is just the data bytes of the range itself, and these data bytes are not cached but merely sent as the response to the downstream range request. As a consequence, the use of Range headers by clients may have the result that the file itself is never brought into the caches, and the desired scalability properties of the network are lost.

The operation of caching proxy servers has been described above, as has a method of requesting a block from a file that is an aggregation of blocks, for example by using the HTTP Range request header; such requests are referred to below as "partial requests". A further embodiment is now described that is advantageous where the block serving infrastructure 101 does not provide full support for the HTTP Range header. Commonly, servers within a block serving infrastructure, for example a content delivery network, support partial requests but may not store the responses to partial requests in local storage (cache). Such a server may fulfill a partial request by forwarding the request to another server, unless the entire file is stored in local storage, in which case the response can be sent without forwarding the request to another server.

A block-request streaming system that makes use of block aggregation may perform poorly if the block serving infrastructure exhibits this behavior, because all requests, being partial requests, are forwarded to other servers and no request is served by the caching proxy servers, defeating the purpose of providing caching proxy servers in the first place. During the block-request streaming process described above, a client may at some point request a block that is at the beginning of a file.

With the novel method described here, whenever a certain condition is met, such a request can be converted from a request for the first block of the file into a request for the entire file. When a request for an entire file is received by a caching proxy server, the proxy server typically stores the response. Use of these requests therefore causes the file to be brought into the caches of the local caching proxy servers, so that subsequent requests, whether for full files or partial requests, can be served directly by those caching proxy servers. The condition can be such that it is satisfied for at least a provided fraction of the requests within a set of requests associated with a given file, for example the set of requests generated by the set of clients viewing the content item of interest.

An example of a suitable condition is that a randomly chosen number exceeds a provided threshold. The threshold can be set so that the conversion of a single-block request into an entire-file request occurs, on average, for a provided fraction of those requests, for example once in every ten requests (in which case, where the random number is chosen from the interval [0, 1], the threshold can be 0.9). Another example of a suitable condition is that a hash function, computed over some information associated with the block and some information associated with the client, takes one of a provided set of values. This method has the advantage that, for frequently requested files, the file is brought into the cache of the local proxy server, while the operation of the block-request streaming system does not diverge significantly from the standard operation in which each request is for a single block. In many cases where the conversion of a single-block request into an entire-file request occurs, the client procedure would otherwise go on to request the other blocks of the file. If this is the case, such requests can be suppressed, since the blocks in question will be received in any event as a result of the request for the entire file.
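As a hypothetical illustration of the random-threshold condition, with the threshold 0.9 giving a conversion of roughly one request in ten:

#include <stdlib.h>
#include <stdbool.h>

/* Decide whether a request for the first block of a file should be
 * converted into a request for the whole file: draw a random number
 * from [0, 1) and convert when it exceeds the provided threshold
 * (e.g. 0.9 converts, on average, one request in ten). */
bool convert_to_whole_file_request(double threshold)
{
    double r = (double)rand() / ((double)RAND_MAX + 1.0);
    return r > threshold;
}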
URL Construction and Segment List Generation and Seeking

Segment list generation addresses the problem of how the client can generate a segment list from the MPD at the client's local time NOW, either relative to the start of the media presentation for an on-demand case, or, for a particular representation, starting at some start time, starttime, expressed in wall-clock time. The segment list may comprise a locator, for example the URL of optional initial representation metadata, together with a list of media segments. Each media segment may be assigned a start time, a duration and a locator. The start time typically expresses an approximation of the media time of the media contained in the segment, but not necessarily a sample-accurate time. The start time is used by the HTTP streaming client to issue its download requests at the appropriate times. The segment list, including the start times, can be generated in different ways: the URLs can be provided as a playlist, or URL construction rules can advantageously be used for a compact representation of the segment list.

Segment list generation based on URL construction can be carried out, for example, when the MPD signals it by a specific attribute or element, for example FileDynamicInfo or an equivalent signal. A generic way of generating a segment list from URL construction rules is provided in the "URL Constructor Overview" section below. Playlist-based construction can be signaled, for example, by a different signal. Seeking within the segment list to reach the exact media time is also advantageously implemented in this context.

URL Constructor Overview

As described above, in one embodiment of the invention a metadata file may be provided that contains URL construction rules that allow client devices to construct the file identifiers for the blocks of a presentation.
We now describe a further novel enhancement of the block-request streaming system that provides for changes to the metadata file, including changes to the URL construction rules, changes to the number of available encodings, changes to the metadata associated with the available encodings, such as bit rate, aspect ratio, resolution, audio or video codec or codec parameters, and changes to other parameters.

In this novel enhancement, additional data may be provided, associated with each element of the metadata file, indicating a time interval within the overall presentation. Within this time interval the element is considered valid, and outside the time interval the element can be ignored. Furthermore, the syntax of the metadata can be enhanced so that elements previously permitted to appear only once, or at most once, may appear multiple times. An additional restriction can then be applied, specifying that for such elements the specified time intervals must be disjoint from one another. Considering, at any given moment, only those elements whose time interval contains that moment results in a metadata file that is consistent with the original metadata syntax. We call these time intervals validity intervals. This method thus provides for signaling changes of the kinds described above within a single metadata file. Advantageously, such a method can be used to provide a media presentation in which changes of the described kinds occur at specified points within the media presentation.

URL Constructor

As described herein, one common feature of block-request streaming systems is the need to provide the client with "metadata" that identifies the available media encodings and provides the information the client needs to request blocks of those encodings. In the case of HTTP, for example, this information may comprise the URLs of the files containing the media blocks. A playlist file may be provided that lists the URLs of the blocks of a given encoding. Multiple playlist files are provided, one for each encoding, together with a master playlist-of-playlists that lists the playlists corresponding to the different encodings. A disadvantage of this system is that the metadata can become quite large, and therefore takes some time to request before the client can start the stream. A further disadvantage of this system is apparent in the case of live content, when the files corresponding to the media data blocks are generated "on the fly" from a media stream that is being captured in real time (live), for example a live sports event or news program. In this case, the playlist file may be updated each time a new block becomes available (for example, every few seconds), and client devices may repeatedly fetch the playlist file to determine whether new blocks are available and to obtain their URLs. This can place a significant load on the serving infrastructure, and in particular it means that the metadata files cannot be cached for longer than the update interval, which is equal to the block duration and commonly on the order of a few seconds.

One important aspect of a block-request streaming system is the method used to inform clients of the file identifiers, for example the URLs, that should be used, together with the file download protocol, to request blocks. For example, one method is to provide, for each representation of the presentation, a playlist file that lists the URLs of the files containing the blocks of media data.
One disadvantage of this method is that at least some of the playlist file itself must be downloaded before playout can begin, increasing the channel zapping time and thereby producing a bad user experience. For a long media presentation with several or many representations, the list of file URLs may be large, and the playlist file therefore grows, which may further increase the channel zapping time.

Another disadvantage of this method arises in the case of live content. In this case, the complete list of URLs cannot be made available in advance, and the playlist file is updated periodically as new blocks become available, with clients periodically requesting the playlist file in order to receive the updated version. Because this file is updated frequently, it cannot be stored for long in the caching proxy servers. This means that many of the requests for this file will be forwarded to other servers and, ultimately, to the server that generates the file. In the case of a popular media presentation, this results in a high load on this server and on the network, which in turn results in slow response times and hence long channel zapping times and a bad user experience. In the worst case, the server becomes overloaded, with the result that some users are unable to view the presentation.

In the design of a block-request streaming system, it is desirable to avoid placing restrictions on the form of the file identifiers that can be used. The reason is that a number of considerations may motivate the use of identifiers of a particular form. For example, where the block serving infrastructure is a content delivery network, there may be file naming or storage conventions related to the desire to distribute storage or serving load across the network, or other requirements of the system design that cannot be predicted in advance, leading to particular formats of file identifier.

A further embodiment is now described that mitigates the above disadvantages while preserving the flexibility to choose appropriate file identification conventions. In this method, metadata comprising a file identifier construction rule may be provided for each representation of the media presentation. The file identifier construction rule may comprise, for example, a text string. To determine the file identifier for a given block of the presentation, a method of interpreting the file identifier construction rule may be provided, this method comprising determining input parameters and evaluating the file identifier construction rule with those input parameters. The input parameters can include, for example, an index of the file to be identified, where the first file has index 0, the second has index 1, the third has index 2, and so on. For example, if every file spans the same duration (or approximately the same duration), then the index of the file associated with any given time within the presentation can easily be determined. Alternatively, the time within the presentation spanned by each file can be provided in the presentation or representation metadata.

In one embodiment, the file identifier construction rule may comprise a text string that can contain certain special identifiers corresponding to the input parameters.
The method of evaluating the file identifier construction rule then comprises determining the positions of the special identifiers within the text string and replacing each special identifier with a string representation of the value of the corresponding input parameter.

In other embodiments, the file identifier construction rule may comprise a text string conforming to an expression language. An expression language comprises a definition of a syntax to which expressions in the language must conform, together with a set of rules for evaluating strings that conform to that syntax.

An embodiment will now be described with reference to FIG. 21. An example of a syntax definition for a suitable expression language, defined in Augmented Backus-Naur Form, is shown in FIG. 21. An example of the rules for evaluating a string conforming to the <expression> production of FIG. 21 comprises recursively transforming the string conforming to the <expression> production into a string conforming to the <literal> production, as follows. An <expression> conforming to the <literal> production is unchanged. An <expression> conforming to the <variable> production is replaced by the value of the variable whose name is given by the <token> string of the <variable> production. An <expression> conforming to the <function> production is evaluated by evaluating each of its arguments according to these rules and applying a transformation to those arguments that depends on the <token> element of the <function> production, as described below. An <expression> conforming to the last alternative of the <expression> production is evaluated by evaluating the two <expression> elements and applying an operation to those arguments that depends on the <operator> element of that last alternative.

In the method described above, it is assumed that evaluation takes place in a context in which a number of variables may be defined. A variable is a (name, value) pair, where "name" is a string conforming to the <token> production and "value" is a string conforming to the <literal> production. Some variables may be defined outside the evaluation process before evaluation begins; other variables may be defined within the evaluation process itself. All variables are "global", in the sense that only one variable exists with each possible "name".

An example of a function is the "printf" function. This function accepts one or more arguments, the first of which may conform to the <string> production (hereinafter a "string"). The printf function evaluates to a transformed version of its first argument. The transformation applied is the same as that of the C standard library "printf" function, with the additional arguments of the <function> production supplying the additional arguments expected by the C standard library printf function.

Another example of a function is the "hash" function. This function accepts two arguments, the first of which may be a string and the second of which may conform to the <number> production (hereinafter a "number"). The "hash" function applies a hash algorithm to its first argument and returns a result that is a nonnegative integer less than the second argument. An example of a suitable hash function is given in the C function shown in FIG. 22, whose inputs are the input string (excluding the enclosing quotation marks) and the numeric input value.
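FIG. 22 itself is not reproduced here; a minimal stand-in in C with the same interface (an input string and a numeric input value, returning a nonnegative integer less than that value) might look as follows.

#include <stdint.h>

/* Stand-in for the "hash" function of the expression language: maps
 * the input string (without its enclosing quotation marks) to a
 * nonnegative integer smaller than 'mod'. Any well-mixing string hash
 * would do; FNV-1a (32-bit) is used here for illustration. */
unsigned hash_fn(const char *s, unsigned mod)
{
    uint32_t h = 2166136261u;
    for (; *s; s++) { h ^= (unsigned char)*s; h *= 16777619u; }
    return (unsigned)(h % mod);
}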
Other examples of hash functions are well known to those of skill in the art.

Another example of a function is the "Subst" function, which takes one, two or three string arguments. When one argument is supplied, the result of the "Subst" function is its first argument. When two arguments are supplied, the result of the "Subst" function is computed by deleting every occurrence of the second argument (excluding its enclosing quotation marks) within the first argument, and returning the first argument so modified. When three arguments are supplied, the result of the "Subst" function is computed by replacing every occurrence of the second argument (excluding its enclosing quotation marks) within the first argument with the third argument (excluding its enclosing quotation marks), and returning the first argument so modified.

Some examples of operators are the addition, subtraction, division, multiplication and remainder operators, identified by the <operator> productions '+', '-', '/', '*' and '%', respectively. These operators require that the <expression> productions on either side of the <operator> production evaluate to numbers. Evaluation of these operators comprises applying the appropriate arithmetic operation (addition, subtraction, division, multiplication and remainder, respectively) to the two numbers in the usual way, and returning the result in a form conforming to the <number> production.

Another example of an operator is the assignment operator, identified by the <operator> production '='. This operator requires that its left argument evaluate to a string whose content conforms to the <token> production, where the content of a string is defined as the character string within the enclosing quotation marks. The assignment operator causes the variable whose name equals the content of the left argument to be assigned a value equal to the result of evaluating the right argument. This value is also the result of the operator expression.

Another example of an operator is the sequence operator, identified by the <operator> production ';'. The result of evaluating this operator is its right argument. Note that, as with all operators, both arguments are evaluated, the left argument first.

In one embodiment of the invention, the identifier of a file can be obtained by evaluating a file identifier construction rule according to the rules above, with a specific set of input variables that identify the requested file. An example of an input variable is a variable with name "index" and value equal to the numeric index of the file within the presentation. Another example of an input variable is a variable with name "bitrate" and value equal to the average bit rate of the requested version of the presentation. FIG. 23 shows an example of a file identifier construction rule, in which the input variables are "id", giving an identifier for the desired representation of the presentation, and "seq", giving a sequence number for the file.
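The actual rule of FIG. 23 is not reproduced here; purely as a hypothetical illustration, a rule equivalent to printf("%s_%05d.3gp", id, seq) would be evaluated, for the input variables id and seq, along the lines of:

#include <stdio.h>

/* Hypothetical illustration only: evaluating a construction rule of
 * the form printf("...", id, seq) for input variables id and seq.
 * The real rule string of FIG. 23 may differ. */
void build_identifier(char *out, size_t outlen, const char *id, int seq)
{
    snprintf(out, outlen, "%s_%05d.3gp", id, seq);
}

so that, for example, id = "rep500" and seq = 12 would yield "rep500_00012.3gp".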
Numerous variations of the methods described above are possible, as will be apparent to those of skill in the art upon reading this disclosure. For example, not all of the functions and operators described above need be provided, and additional functions or operators may be provided.

URL Construction Rules and Timing

This section provides basic URL construction rules for assigning the file or segment URIs, together with the start times of the segments, within a media presentation and its representations. For this section, the availability of the media presentation description at the client is assumed.

Suppose the HTTP streaming client is playing out media that is being downloaded within a media presentation. The actual presentation time of the HTTP client may be defined relative to the start of the presentation. At initialization, a presentation time of t = 0 may be assumed. At any time t, the HTTP client may download any data with playback time tP (also measured relative to the start of the presentation) at most MaximumClientPreBufferTime ahead of the actual presentation time t, as well as any data that is required in response to user interaction, for example a seek or fast forward. In some embodiments, MaximumClientPreBufferTime may even be left unspecified, in the sense that the client may download data ahead of the current playback time without restriction.

The HTTP client may avoid downloading unnecessary data; for example, it may typically refrain from downloading segments of representations that are not expected to be played out.

The basic process in providing the streaming service can be the downloading of data by generating appropriate requests for entire files/segments, or for subsets of files/segments, for example using HTTP GET requests or HTTP partial GET requests. Although this description deals with accessing the data for a specific playback time tP, in general the client may download data for a larger range of playback times in order to avoid inefficient requests. The HTTP client may minimize the number and frequency of HTTP requests while providing the streaming service.

To access the media data at playback time tP, or at least close to playback time tP, in a particular representation, the client determines the URL of the file containing this playback time, and further determines the byte range within that file at which this playback time is accessed.

The media presentation description may assign a representation id, r, to each representation, for example by means of a RepresentationID attribute. In other words, the content of the MPD, whether written by the capturing system or read by the client, is interpreted such that this assignment exists. To download data for a specific playback time tP of a particular representation with id r, the client constructs an appropriate URL for the file.

The media presentation description may assign to each file or segment of each representation r the following attributes: (a) a sequence number i of the file within the representation r, with i = 1, 2, ..., Nr; (b) the relative start time, with respect to the presentation time, of the file with representation id r and file index i, defined as ts(r, i); and (c) the file URL of the file with representation id r and file index i, denoted FileURL(r, i).

In one embodiment, the start times of the files and the file URLs may be provided explicitly for a representation. In another embodiment, a list of file URLs may be provided explicitly, in which each file URL is implicitly assigned the index i according to its position in the list, and the start time of segment i is derived as the sum of the durations of segments 1 through i - 1.
The duration of each segment may be provided according to any of the rules described above. For example, the start time can readily be derived from a single duration element or attribute together with the position/index of the file URL within the representation, and one of basic skill in mathematics can derive similar methods for other cases.

Where dynamic URL construction rules are provided in the MPD, the start time of each file and each file URL can be constructed dynamically by using a construction rule, the index of the requested file, and potentially some additional parameters. The information can be provided, for example, in MPD attributes and elements such as FileURLPattern and FileInfoDynamic. FileURLPattern provides information on how to construct the URLs from the file index sequence number i and the representation id r. The FileURLFormat is constructed as:

FileURLFormat = sprintf("%s%s%s%s%s.%s", BaseURL, BaseFileName, RepresentationIDFormat, SeparatorFormat, FileSequenceIDFormat, FileExtension);

and FileURL(r, i) is then constructed from FileURLFormat by substituting the representation id r and the file index i.

The relative start time ts(r, i) of each file/segment may be derived from information contained in the MPD describing the durations of the segments of the representation r, for example a FileInfoDynamic attribute. The MPD may also contain a sequence of FileInfoDynamic attributes that is global for all representations in the media presentation, or at least for all representations within a period, in the manner described above. If media data for a specific playback time tP in representation r is requested, the corresponding index i(r, tP) is derived as the index for which the playback time tP lies in the interval between ts(r, i(r, tP)) and ts(r, i(r, tP) + 1). Segment access may be further restricted by the cases described above, for example where a segment is not accessible.

Accessing the exact playback time tP, once the index and the URL of the corresponding segment have been obtained, depends on the actual segment format. In this example, assume without loss of generality that the media segments have a local timeline starting at 0. To access and present the data at playback time tP, the client can download the data corresponding to this local time from the file/segment that is accessible through the URL FileURL(r, i), with i = i(r, tP).

In general, the client can download the entire file and then access the playback time tP. However, since 3GP files provide a structure for mapping local timing to byte ranges, it is not necessary to download the entire 3GP file: as long as sufficient random access information is available, downloading only specific byte ranges can suffice to play the media from playback time tP. Furthermore, sufficient information on the structure, that is, on the mapping between byte ranges and the local timing of the media segment, can be provided in the initial portion of the segment, for example using a segment index. By having access to the initial, for example 1200, bytes of the segment, the client can have enough information to access directly the byte range required for playback time tP.

In a further example, suppose that the segment index, possibly specified as the 'tidx' box below, can be used to identify the byte offsets of the required fragment or fragments. Partial GET requests can then be formed for the required fragment or fragments. Other alternatives also exist; for example, the client can issue a standard request for the file and cancel this request once the first 'tidx' box has been received.
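A sketch in C of the two derivations just described, assuming segments of (approximately) equal duration dur so that the index is directly computable, and treating the format fields as plain strings; the exact format string is whatever FileURLFormat specifies in the MPD.

#include <stdio.h>

/* Build FileURL(r, i) from the FileURLPattern fields, here assumed to
 * reduce to BaseURL BaseFileName r Separator i . FileExtension. */
void file_url(char *out, size_t outlen,
              const char *base_url, const char *base_name,
              const char *rep_id, const char *sep,
              int index, const char *ext)
{
    snprintf(out, outlen, "%s%s%s%s%05d.%s",
             base_url, base_name, rep_id, sep, index, ext);
}

/* Derive the file index i(r, tP) when every segment of representation
 * r has (approximately) the same duration 'dur' in seconds and the
 * indices start at 1. */
int file_index(double tP, double dur)
{
    return (int)(tP / dur) + 1;
}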
Seeking

A client may attempt to seek to a specific presentation time tp within a representation. Based on the MPD, the client has access to the media segment start time and media segment URL of every segment of the representation. The client can obtain the segment index segment_index of the segment most likely to contain media samples for presentation time tp as the maximum index i for which the start time tS(r, i) is smaller than or equal to the presentation time tp, that is, segment_index = max{i | tS(r, i) ≤ tp}. The segment URL is obtained as FileURL(r, i) with i = segment_index.

Note that the timing information in the MPD may be approximate, owing to issues related to the placement of random access points, the alignment of the media tracks, and media timing drift. As a result, the segment identified by the above procedure may begin at a time slightly after tp, and the media data for presentation time tp may lie in the preceding media segment. In the case of a seek, the seek time can be updated to equal the time of the first sample of the retrieved file, or the preceding file can be retrieved instead. Note, however, that during continuous playout, including cases where there is a switch between alternative representations/versions, the media data for the time between tp and the start of the retrieved segment is available in any case.

For accurate seeking to a presentation time tp, the HTTP streaming client needs access to a random access point (RAP). To determine the random access points in a media segment in the case of 3GPP adaptive HTTP streaming, the client may, for example, use the information in the 'tidx' or 'sidx' box, if present, to locate the random access points and the corresponding presentation times within the media presentation. In cases where a segment is a 3GPP movie fragment, the client may also, for example, use the information within the 'moof' and 'mdat' boxes to locate the RAPs and to obtain the required presentation time from the information in the movie fragment together with the segment start time derived from the MPD. If no RAP with a presentation time before the requested presentation time tp is available, the client may access the preceding segment, or may simply use the first random access point as the seek result. These procedures are simple when the media segments start with a RAP.

Note also that it is not necessary to download all the information of a media segment in order to access the presentation time tp. The client may, for example, initially request the 'tidx' or 'sidx' box from the beginning of the media segment using a byte range request. By using the 'tidx' or 'sidx' box, the segment timing can be mapped to the byte ranges of the segment. By continuing to use partial HTTP requests, only the relevant portions of the media segment need be accessed, for an improved user experience and low start-up delay.
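A sketch of this lookup in C, assuming the start times tS(r, i) are held in an array in increasing order (a binary search would serve equally well for long segment lists):

#include <stddef.h>

/* segment_index = max { i | tS(r, i) <= tp }, with indices 1..n and
 * start_times[i - 1] holding tS(r, i) in increasing order. Returns 0
 * if tp precedes the first segment. */
int segment_index(const double *start_times, size_t n, double tp)
{
    int idx = 0;
    for (size_t i = 0; i < n && start_times[i] <= tp; i++)
        idx = (int)i + 1;
    return idx;
}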
Segment List Generation

From the foregoing, it should be clear how to implement a straightforward HTTP streaming client that uses the information provided by the MPD to generate the list of segments of a representation with a signaled approximate segment duration dur. In some embodiments, the client assigns consecutive indices i = 1, 2, 3, ... to the media segments of the representation, that is, index i = 1 to the first media segment, index i = 2 to the second media segment, and so on. Then each media segment with segment index i is assigned a startTime[i], and its URL[i] is generated, for example, as follows. First, the index i is set to 1. The start time of the first media segment is obtained as 0, that is, startTime[1] = 0, and the URL of media segment i, URL[i], is obtained as FileURL(r, i). The process continues for all described media segments with index i, where startTime[i] of media segment i is obtained as (i - 1)*dur and URL[i] is obtained as FileURL(r, i).
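A sketch of this generation loop in C, reusing the hypothetical file_url helper sketched earlier; the base URL and name fields are placeholder values.

#include <stddef.h>

/* Prototype of the hypothetical URL builder sketched earlier. */
void file_url(char *out, size_t outlen,
              const char *base_url, const char *base_name,
              const char *rep_id, const char *sep,
              int index, const char *ext);

/* Generate startTime[i] and URL[i] for media segments i = 1..n from a
 * signaled approximate segment duration dur (in seconds):
 * startTime[i] = (i - 1) * dur and URL[i] = FileURL(r, i). */
void generate_segment_list(double startTime[], char urls[][256], int n,
                           double dur, const char *rep_id)
{
    for (int i = 1; i <= n; i++) {
        startTime[i - 1] = (i - 1) * dur;
        file_url(urls[i - 1], sizeof urls[i - 1],
                 "http://example.com/", "seg_",   /* hypothetical */
                 rep_id, "_", i, "3gp");
    }
}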
Concurrent HTTP/TCP Requests

One concern in a block-request streaming system is the desire always to request the highest-quality blocks that can be received completely in time for playout. However, the data arrival rate cannot be known in advance, and so a requested block may not arrive in time to be played out. This makes it necessary to pause the playout of the media, which results in a bad user experience. This problem can be mitigated by client algorithms that take a conservative approach to selecting the blocks to request, requesting blocks of lower quality (and hence smaller size) that are more likely to be received in time even if the data arrival rate drops during reception of the block. However, this conservative approach has the disadvantage of possibly delivering a lower-quality playout to the user or destination device, which is also a bad user experience. The problem can be amplified when multiple HTTP connections are used at the same time, as described below, because the available network resources are then shared between connections that are simultaneously in use for blocks with different playout times.

It can be advantageous for the client to request multiple blocks concurrently, where in this context "concurrently" means that the responses to the requests occur in overlapping time intervals; it is not necessarily the case that the requests are made at precisely or even approximately the same time. In the case of the HTTP protocol, this approach can improve the utilization of the available bandwidth, owing to the behavior of the TCP protocol (as is known). This can be especially important for improving the content zapping time, because when new content is first requested, the HTTP/TCP connections over which the data for the blocks is requested may be slow to start, so that using several HTTP/TCP connections at this point can dramatically increase the speed at which the data of the first blocks is delivered. However, requesting different blocks or fragments over different HTTP/TCP connections can also lead to degraded performance: the requests for the blocks to be played out first compete with the requests for subsequent blocks, competing HTTP/TCP downloads vary considerably in their delivery times, and it is generally not possible to control which HTTP/TCP downloads will complete quickly and which slowly. Thus, at least some of the time, the HTTP/TCP download of the first few blocks is likely to complete last, yielding large and variable channel zapping times.

Suppose that each block or fragment of a segment is downloaded over a separate HTTP/TCP connection, that the number of parallel connections is n, that the playout duration of each block is t seconds, and that the streaming rate of the content associated with the segment is S. When the client first starts streaming the content, it may issue requests for the first n blocks, representing n*t seconds of media data.

As is known to those of skill in the art, there are large variations in the data rates of TCP connections. However, to simplify this discussion, assume ideally that all connections proceed in parallel, so that the first block is completely received at approximately the same time as the other n - 1 requested blocks. To simplify the discussion further, assume that the aggregate bandwidth utilized by the n download connections is fixed at a value B for the entire duration of the download, and that the streaming rate S is constant across the whole representation. Assume further that the media data is structured such that a block can be played out only when the entire block is available at the client, that is, that playout of a block can begin only after the entire block is received, for example because of the underlying video coding structure, or because encryption is being employed to encrypt each fragment or block separately, so that the entire fragment or block must be received before it can be decrypted. Thus, for simplicity in the discussion below, we assume that an entire block must be received before any of it can be played out. In that case, the time required before the first block has arrived and can be played out is approximately n*t*S/B.

Since it is desirable to minimize the content zapping time, it is desirable to minimize n*t*S/B. The value of t may be determined by factors such as the underlying video coding structure and how the capture methods are used, and so t can be reasonably small; but very small values of t lead to an overly complicated segment map and may be incompatible with efficient video coding and with encryption, if used. The value of n may also affect the value of B, that is, B may be larger for a larger number n of connections, so reducing the number of connections n has the negative side effect of potentially reducing the amount of bandwidth that is utilized, and thus may not be effective in achieving the goal of reducing the content zapping time. The value of S depends on which representation is selected for download and playout; ideally, S should be as close to B as possible, in order to maximize the playout quality of the media for the given network conditions. Thus, to simplify this discussion, assume that S is approximately equal to B. Then the channel zapping time is proportional to n*t. Thus, although it is typical to use more connections to download different fragments, doing so can degrade the channel zapping time if the aggregate bandwidth utilized by those connections grows sublinearly with the number of connections.

As an example, suppose t = 1 second, and that with n = 1 the value of B is 500 Kbps, with n = 2 the value of B is 700 Kbps, and with n = 3 the value of B is 800 Kbps. Suppose the representation with S = 700 Kbps is selected. Then with n = 1 the download time for the first block is 1*700/500 = 1.4 seconds, with n = 2 the download time for the first block is 2*700/700 = 2 seconds, and with n = 3 the download time for the first block is 3*700/800 = 2.625 seconds. Furthermore, as the number of connections increases, the variability in the individual download speeds of the connections is likely to increase (although even with one connection there may be significant variability). Thus, in this example, both the channel zapping time and the variability in the channel zapping time increase as the number of connections increases.
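The arithmetic of this example can be checked with a few lines of C:

#include <stdio.h>

/* First-block download time n*t*S/B for the example above:
 * t = 1 s, S = 700 Kbps, and B = 500/700/800 Kbps for n = 1/2/3. */
int main(void)
{
    const double t = 1.0, S = 700.0;
    const double B[] = { 500.0, 700.0, 800.0 };
    for (int n = 1; n <= 3; n++)
        printf("n=%d: %.3f seconds\n", n, n * t * S / B[n - 1]);
    return 0;   /* prints 1.400, 2.000, 2.625 */
}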
Intuitively, the blocks being delivered have different priorities: the first block has the earliest delivery deadline, the second block the second-earliest deadline, and so on, whereas the download connections over which the blocks are being delivered compete for network resources during the delivery, so that the blocks with the earliest deadlines are delayed more and more as additional competing blocks are requested. On the other hand, even in this case, using more than one download connection ultimately makes it possible to support a higher streaming rate sustainably: for example, with three connections the example above can support a streaming rate of 800 Kbps, whereas a single connection can support only a 500 Kbps stream.

In practice, as noted above, the data rate of a connection can be highly variable, both over time within the same connection and between connections, and as a result the n requested blocks generally do not complete at the same time; indeed, it is common for one block to complete in half the time of another. This produces unpredictable behavior, in that the first block may in some cases complete much sooner than the other blocks, and in other cases much later, so that the start of playout occurs relatively quickly in some cases and slowly in others. Such unpredictable behavior can be frustrating to the user and may therefore be regarded as a bad user experience.

What is needed, therefore, are methods by which multiple TCP connections can be exploited to improve the channel zapping time and the variability of the channel zapping time, while at the same time supporting the highest streaming rate possible. Also needed are methods for adjusting the share of the available bandwidth allocated to each block as the playout time of the block approaches, so that, if necessary, a larger share of the available bandwidth can be allocated to the block with the nearest playout time.

Cooperative HTTP/TCP Requests

We now describe methods for using concurrent HTTP/TCP requests cooperatively. A receiver may employ multiple concurrent cooperative HTTP/TCP requests, for example using a plurality of HTTP byte-range requests, in which each such request is for a portion of a fragment in a source segment, for the entirety of a fragment of a source segment, for a portion of a repair fragment of a repair segment, or for the entirety of a repair fragment of a repair segment.

The advantages of cooperative HTTP/TCP requests together with the use of FEC repair data can be especially important for providing consistently fast channel zapping times. For example, at channel zapping time the TCP connections are likely to have just been started, or to have been dormant for some period of time, in which case the congestion window, cwnd, is at its minimum value for the connections; the delivery rate of these TCP connections therefore takes several round-trip times (RTTs) to ramp up, and during this ramp-up time the delivery rates over the different TCP connections are highly variable.

An outline is now given of the non-FEC method, a cooperative HTTP/TCP request method in which only the media data of source blocks is requested using multiple concurrent HTTP/TCP connections, that is, in which no FEC repair data is requested.
With the non-FEC method, portions of the same fragment are requested over different connections, for example using an HTTP byte-range request for each portion, so that each HTTP byte-range request targets a portion of the byte range indicated for the fragment in the segment map. It may be the case that an individual HTTP/TCP request takes several RTTs to ramp its delivery rate up to full utilization of the available bandwidth, so that there is a relatively long period during which the delivery rate is below the available bandwidth; thus, if a single HTTP/TCP connection is used to download, for example, the first fragment of the content to be played out, the channel zapping time can be large. Using the non-FEC method to download different portions of the same fragment over different HTTP/TCP connections can shorten the channel zapping time significantly.

An outline of the FEC method is now given. This is a cooperative HTTP/TCP request method in which the media data of a source segment, together with FEC repair data generated from that media data, is requested over multiple concurrent HTTP/TCP connections. With the FEC method, portions of the same fragment, and FEC repair data generated from that fragment, are requested over different connections using HTTP byte-range requests, so that each HTTP byte-range request targets a portion of the byte range indicated for the fragment in the segment map. As before, an individual HTTP/TCP request takes several RTTs to ramp its delivery rate up to full utilization of the available bandwidth, so that there is a relatively long period during which the delivery rate is below the available bandwidth, and thus the channel zapping time can be large if, for example, a single HTTP/TCP connection is used to download the first fragment of the content to be played out. Using the FEC method has the same advantages as the non-FEC method, with the additional advantage that not all of the requested data need arrive before the fragment can be recovered, thereby further reducing the channel zapping time and further improving the variability of the channel zapping time. By making requests over different TCP connections, and by over-requesting, that is, by also requesting FEC repair data on at least one of the connections, the amount of time it takes to deliver enough data to recover the first requested fragment, which allows media playout to begin, can be reduced significantly and made much more consistent than when cooperative TCP connections and FEC repair data are not used.

Figures 24(a)-(e) show an example of the delivery-rate fluctuations of five TCP connections running over the same link from the same HTTP web server to the same client on an emulated Evolution-Data Optimized (EVDO) network. In Figures 24(a)-(e), the X axis shows time in seconds, and the Y axis shows the rate at which bits are received at the client over each of the five TCP connections, measured over 1-second intervals for each connection. In this particular emulation there were 12 TCP connections in total running over the link, so the load on the network during the time shown was relatively high, as would be typical when more than one client is streaming within the same cell of a mobile network.
Note that, although the delivery rates are somewhat correlated over time, at many points in time the delivery rates of the five connections differ substantially. FIG. 25 shows a possible request structure for a fragment of size 250,000 bits (about 31.25 kilobytes), in which four HTTP byte-range requests are made concurrently for different parts of the fragment: the first HTTP connection requests the first 50,000 bits, the second HTTP connection requests the next 50,000 bits, the third HTTP connection requests the next 50,000 bits, and the fourth HTTP connection requests the next 50,000 bits. If FEC is not used, i.e., with the non-FEC method, these four requests are the only requests for the fragment in this example. If FEC is used, i.e., with the FEC method, there is in this example one additional HTTP connection that requests an additional 50,000 bits of FEC repair data from the repair segment generated from the fragment.

FIG. 26 is an enlargement of the first few seconds of the five TCP connections shown in FIGS. 24(a)-(e); in FIG. 26 the X axis shows time at 100-millisecond intervals, and the Y axis shows the rate at which bits are received at the client over each of the five TCP connections, measured over 100-millisecond intervals. One line shows the aggregate amount of bits received at the client for the fragment over the first four HTTP connections (excluding the HTTP connection over which FEC data is requested), i.e., what arrives using the non-FEC method. The other line shows the aggregate amount of bits received at the client for the fragment over all five HTTP connections (including the HTTP connection over which FEC data is requested), i.e., what arrives using the FEC method. For the FEC method, it is assumed that the fragment can be FEC-decoded from the receipt of any 200,000 of the 250,000 requested bits, which can be realized when, for example, a Reed-Solomon FEC code is used, and can essentially be realized when, for example, the RaptorQ code described in Luby IV is used. With the FEC method in this example, enough data to recover the fragment using FEC decoding has been received after 1 second, giving a channel zapping time of 1 second (assuming that the data for subsequent fragments can be requested and received before the first fragment is fully played out). With the non-FEC method in this example, all of the data for all four requests must be received before the fragment can be recovered, which occurs after 1.7 seconds, giving a channel zapping time of 1.7 seconds. Thus, in the example shown in FIG. 26, the channel zapping time of the non-FEC method is 70% larger than that of the FEC method. One reason for the advantage shown by the FEC method in this example is that, with the FEC method, receipt of 80% of the requested data allows the fragment to be recovered, whereas with the non-FEC method 100% of the requested data must be received. Hence the non-FEC method has to wait for the slowest TCP connection to finish its delivery, and, owing to the natural variation in TCP delivery rates, the delivery rate of the slowest TCP connection tends to deviate widely from that of an average TCP connection. With the FEC method in this example, one slow TCP connection does not determine when the fragment becomes recoverable; instead, the delivery of sufficient data is much more often a function of the average TCP delivery rate than of the worst-case TCP delivery rate.
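A minimal sketch of how a client might issue the FIG. 25 request structure with Python's standard library follows; the URLs, the ".repair" suffix convention (described later in this text), and the fixed 6,250-byte (50,000-bit) ranges are illustrative assumptions rather than a definitive implementation:

```python
# Illustrative sketch: issue the five concurrent HTTP byte-range requests of
# FIG. 25: four ranges of the source fragment plus one range of FEC repair
# data. The URLs and offsets are assumptions for illustration only.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

CHUNK = 50_000 // 8  # 50,000 bits = 6,250 bytes per request

def fetch_range(url, start, length):
    """Fetch bytes [start, start + length - 1] of url with an HTTP Range header."""
    req = Request(url, headers={"Range": f"bytes={start}-{start + length - 1}"})
    with urlopen(req) as resp:
        return resp.read()

source_url = "http://server.example/seg1.mp4"         # hypothetical
repair_url = "http://server.example/seg1.mp4.repair"  # hypothetical

jobs = [(source_url, i * CHUNK, CHUNK) for i in range(4)]  # non-FEC method
jobs.append((repair_url, 0, CHUNK))  # extra repair request: the FEC method

with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    parts = list(pool.map(lambda job: fetch_range(*job), jobs))
# With the FEC method, decoding can begin once any 200,000 of the 250,000
# requested bits (e.g., 4 of the 5 equal-size parts) have arrived.
```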
There are numerous variations of the non-FEC and FEC methods described above. For example, cooperative HTTP/TCP requests can be used only for the first few fragments after a channel zap has occurred, after which only a single HTTP/TCP request at a time is used to download further fragments, multiple fragments, or entire segments. As another example, the number of cooperative HTTP/TCP connections used can be a function both of the urgency of the fragments being requested, that is, how imminent their playout times are, and of the current network conditions. In some variations, multiple HTTP connections can be used to request repair data from repair segments. In other variations, different amounts of data can be requested over different HTTP connections, for example depending on the current size of the media buffer and on the data reception rate at the client. In another variation, the source representations are not independent of one another but instead constitute layered media coding, where, for example, an enhancement source representation can depend on a base source representation; in this case there can be a repair representation corresponding to the base source representation and another repair representation corresponding to the combination of the base and enhancement source representations. Additional overall elements add to the advantages that can be realized by the methods disclosed above. For example, the number of HTTP connections used can be varied depending on the current amount of media in the media buffer and/or on the rate of reception into the media buffer. Cooperative HTTP requests using FEC, i.e., the FEC method described above and variations of that method, can be used aggressively when the media buffer is relatively empty: for example, more cooperative HTTP requests are made concurrently for different parts of the first fragments, requesting a relatively large fraction of repair data from the repair fragments corresponding to the whole of each source fragment; then, as the media buffer grows, the client transitions to a reduced number of concurrent HTTP requests, requesting a larger portion of media data per request and a smaller fraction of repair data, moving, for example, to one, two, or three concurrent HTTP requests, to making a request for an entire fragment or for multiple consecutive fragments per request, and to requesting no repair data (see the policy sketch after this paragraph). As another example, the amount of FEC repair data can vary as a function of the media buffer size: when the media buffer is small, more FEC repair data is requested, and as the media buffer grows the amount of FEC repair data requested is reduced; at any point in time at which the media buffer is sufficiently large, no FEC repair data is requested at all, and only data from the source segments of the source representation is requested.
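The policy sketch below makes this buffer-driven adaptation concrete; the thresholds and returned values are invented for illustration, not taken from the text:

```python
# Illustrative policy: vary the number of concurrent HTTP requests and the
# fraction of FEC repair data requested as a function of media-buffer
# fullness, along the lines described above. Thresholds are assumptions.

def request_policy(buffered_seconds):
    """Return (num_connections, repair_fraction) for the current buffer level."""
    if buffered_seconds < 2.0:   # buffer nearly empty: many requests plus FEC
        return 5, 0.5
    if buffered_seconds < 10.0:  # buffer growing: fewer requests, less repair
        return 3, 0.2
    return 1, 0.0                # buffer comfortable: one request, no FEC

print(request_policy(0.5))   # (5, 0.5)
print(request_policy(30.0))  # (1, 0.0)
```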
The benefit of these extension techniques is faster and more consistent channel zapping times, and greater resilience against potential media stutters or stalls, while keeping to a minimum both the request-message traffic and the FEC repair data, and hence the additional bandwidth used beyond what would be consumed by delivering only the media in the source segments, and while simultaneously allowing support for the highest media rates possible under the given network conditions.

Additional enhancements when using concurrent HTTP connections. An HTTP/TCP request can be abandoned if a suitable condition is met, and another HTTP/TCP request can be made to download data that can replace the data requested in the abandoned request. The new HTTP/TCP request can be for exactly the same data as the original request, e.g., the same source data; for overlapping data, e.g., some of the same source data together with repair data that was not requested in the first request; or for completely disjoint data, e.g., repair data that was not requested in the first request. An example of a suitable condition is that the request has failed, in that the Block Server Infrastructure (BSI) did not respond within a specified time, or establishment of a transport connection to the BSI failed, or an explicit failure message was received from the server, or another failure condition occurred. Another example of a suitable condition is that reception of the data is progressing unusually slowly, based on comparing a measure of the connection speed (the rate of arrival of data in response to the request in question) against the expected connection speed, or against an estimate of the connection speed needed for the response to be received before the playout time of the media data it contains, or before another time that depends on that playout time (a sketch of such a check follows below). This approach has advantages when the BSI sometimes exhibits failures or poor performance: it increases the probability that the client can continue reliable playout of the media data despite failures or poor performance within the BSI. Note that in some cases there may be an advantage in designing the BSI in a way that does occasionally exhibit failures or poor performance, for example if such a design has a lower cost than an alternative design that exhibits failures or poor performance less often; in that case the method described here has the further advantage that it allows the use of the lower-cost BSI design without a consequent degradation in the user experience.
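The check sketched above for the "progressing too slowly" condition might look as follows; the rate-measurement details and the safety margin are assumptions for illustration:

```python
# Illustrative test for abandoning a request: estimate, from the observed
# arrival rate, whether the remaining data can arrive before the playout
# deadline; if not, the request is abandoned and replacement source or
# repair data is requested instead. The safety margin is an assumption.

def should_abandon(bytes_remaining, observed_rate_bps, seconds_to_deadline,
                   safety_margin=1.2):
    """True if the response is unlikely to complete before its deadline."""
    if observed_rate_bps <= 0:  # nothing arriving: treat as failed
        return True
    eta = 8 * bytes_remaining / observed_rate_bps  # seconds at the current rate
    return eta * safety_margin > seconds_to_deadline

# 100 KB still outstanding at 200 Kbps with 3 s to the deadline: ETA is 4 s,
# so the request should be abandoned and reissued.
print(should_abandon(100_000, 200_000, 3.0))  # True
```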
In other embodiments, the number of requests issued for data corresponding to a given block can depend on whether a suitable condition for that block is met. If the condition is not met, the client may be constrained from making further requests for the block, provided that the successful completion of all currently outstanding data requests for the block would allow the block to be recovered with high probability. If the condition is met, a larger number of requests for the block can be issued, i.e., the above constraint no longer applies. An example of a suitable condition is that the time until the scheduled playout time of the block, or another time depending on it, has fallen below a specified threshold. This method is advantageous because additional data requests for a block are issued when reception of the block becomes more urgent, i.e., when the playout time of the media data comprising the block is close. In the case of common transport protocols such as HTTP/TCP, these additional requests have the effect of increasing the share of the available bandwidth dedicated to data contributing to the reception of the block in question. This reduces the time needed to receive enough data to recover the block, and thereby lowers the probability that the block cannot be recovered before the scheduled playout time of the media data comprising it. As noted above, if the block cannot be recovered before that scheduled playout time, playout pauses, which makes for a poor user experience; the method described here therefore advantageously lowers the probability of that poor user experience. It should be understood that, throughout this specification, references to the scheduled playout time of a block mean the earliest time at which the encoded media data comprising the block can be available at the client while still achieving playout of the presentation without pauses. As will be clear to those skilled in the art of media presentation systems, this time is in practice slightly earlier than the time at which the media comprising the block actually appears at the physical transducers used for playout (screen, speakers, etc.), because some transformation functions may need to be applied to the media data comprising the block to effect its actual playout, and these functions may require a certain amount of time to complete; for example, media data is generally transported in compressed form, and a decompression transformation may be applied.

Methods for generating a file structure supporting cooperative HTTP/FEC methods. An embodiment for generating a file structure that can be used to advantage by a client employing cooperative HTTP/FEC methods is now described. In this embodiment, for each source segment there is a corresponding repair segment, generated as follows. The parameter R indicates how much FEC repair data is generated on average relative to the source data in a source segment; for example, R = 0.33 indicates that if a source segment contains 1,000 kilobytes of data, the corresponding repair segment contains approximately 330 kilobytes of repair data. The parameter S indicates the symbol size, in bytes, used for FEC encoding and decoding; for example, S = 64 indicates that the source data and the repair data comprise symbols of 64 bytes each for the purposes of FEC encoding and decoding. A repair segment can be generated for a source segment as follows. Each fragment of the source segment is treated as a source block for FEC encoding purposes, and thus each fragment is treated as a sequence of source symbols of a source block from which repair symbols are generated. The total number of repair symbols generated for the first i fragments is computed as TNRS(i) = ceil(R * B(i) / S), where ceil(x) is the function that outputs the smallest integer that is at least x; the number of repair symbols generated for fragment i is thus NRS(i) = TNRS(i) - TNRS(i-1). The repair segment comprises the concatenation of the repair symbols for the fragments, where the order of the repair symbols within the repair segment follows the order of the fragments from which they are generated, and, within a fragment, the repair symbols are in order of their encoding symbol identifier (ESI). The repair segment structure corresponding to the source segment structure is shown in FIG. 27, and includes a repair segment generator 2700.
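A small sketch of the symbol-count rule just defined (hypothetical helper name; the fragment offsets are chosen arbitrarily for the demonstration):

```python
# Compute TNRS(i) = ceil(R * B(i) / S) cumulatively and take differences to
# get NRS(i), the number of repair symbols generated for each fragment i.
import math

def repair_symbol_counts(end_offsets, R, S):
    """Return [NRS(1), ..., NRS(n)] given fragment end offsets B(1..n)."""
    tnrs = [0] + [math.ceil(R * b / S) for b in end_offsets]
    return [tnrs[i] - tnrs[i - 1] for i in range(1, len(tnrs))]

# Fragments ending at offsets 6,410, 6,770, and 9,000 with R = 0.5, S = 64:
print(repair_symbol_counts([6410, 6770, 9000], R=0.5, S=64))  # [51, 2, 18]
```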
By defining the number of repair symbols for a fragment as described above, the total number of repair symbols for all previous fragments, and hence the byte index into the repair segment, depends only on R, S, B(i-1), and B(i), and does not depend on the previous or subsequent structure of the fragments within the source segment. This is advantageous because it allows a client to quickly compute the starting position of a repair block within the repair segment, and the number of repair symbols within that repair block, using only local information about the structure of the corresponding fragment of the source segment from which the repair block is generated. Thus, if a client decides to start downloading and playing out fragments from the middle of a source segment, it can also quickly generate and access the corresponding repair blocks from within the corresponding repair segment. The number of source symbols in the source block corresponding to fragment i is computed as NSS(i) = ceil((B(i) - B(i-1)) / S). If B(i) - B(i-1) is not a multiple of S, the last source symbol is padded with zero bytes for FEC encoding and decoding purposes, i.e., the last source symbol is padded out to S bytes for FEC encoding and decoding purposes, but these zero-padding bytes are not stored as part of the source segment. In this embodiment, the ESIs for the source symbols are 0, 1, ..., NSS(i) - 1, and the ESIs for the repair symbols are NSS(i), ..., NSS(i) + NRS(i) - 1. The URL for a repair segment in this embodiment can be generated from the URL for the corresponding source segment simply by appending, for example, the suffix ".repair" to the URL of the source segment. As described herein, the repair indexing information and FEC information for a repair segment are implicitly defined by the indexing information for the corresponding source segment and by the values of R and S. The time offsets and fragment structure of a repair segment are determined by the time offsets and structure of the corresponding source segment. The byte offset of the end of the repair symbols in the repair segment corresponding to fragment i can be computed as RB(i) = S * ceil(R * B(i) / S). The number of bytes in the repair segment corresponding to fragment i is then RB(i) - RB(i-1), and thus the number of repair symbols corresponding to fragment i is NRS(i) = (RB(i) - RB(i-1)) / S; the number of source symbols corresponding to fragment i is NSS(i) = ceil((B(i) - B(i-1)) / S), as before. Thus, in this embodiment, the repair indexing information and the corresponding FEC information for a repair block within a repair segment can be derived implicitly from R, S, and the indexing information for the corresponding fragment of the corresponding source segment. As an example, consider FIG. 28, which shows fragment 2 starting at byte offset B(1) = 6,410 and ending at byte offset B(2) = 6,770. In this example the symbol size is S = 64 bytes, and the dotted vertical lines indicate byte offsets within the source segment that are multiples of S. The overall repair segment size as a fraction of the source segment size is set to R = 0.5 in this example. The number of source symbols in the source block for fragment 2 is computed as NSS(2) = ceil((6,770 - 6,410) / 64) = ceil(5.625) = 6.
These six source symbols have ESIs 0, ..., 5, respectively: the first source symbol is the first 64 bytes of fragment 2, starting at byte index 6,410 within the source segment; the second source symbol is the next 64 bytes of fragment 2, starting at byte index 6,474; and so on. The end byte offset of the repair block corresponding to fragment 2 is RB(2) = 64 * ceil(0.5 * 6,770 / 64) = 64 * ceil(52.89...) = 64 * 53 = 3,392, and the start byte offset of that repair block is RB(1) = 64 * ceil(0.5 * 6,410 / 64) = 64 * ceil(50.07...) = 64 * 51 = 3,264. Thus, in this example, there are two repair symbols in the repair block corresponding to fragment 2, with ESIs 6 and 7 respectively, starting at byte offset 3,264 within the repair segment and ending at byte offset 3,392. Note that, in the example shown in FIG. 28, even though R = 0.5 and there are six source symbols corresponding to fragment 2, the number of repair symbols is not 3, as one might expect if the number of source symbols were simply used to determine the number of repair symbols, but instead comes out to 2 according to the method described here. In contrast to simply using the number of source symbols of a fragment to determine the number of repair symbols, the embodiment described above makes it possible to compute the positioning of a repair block within the repair segment solely from the index information associated with the corresponding source block of the corresponding source segment. Furthermore, as the number K of source symbols in a source block grows, the number KR of repair symbols in the corresponding repair block closely approximates K * R, since in general KR is at most ceil(K * R) and at least floor((K - 1) * R), where floor(x) is the largest integer that is at most x.
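The indexing rules above can be collected into one helper (hypothetical name), which reproduces the FIG. 28 numbers for fragment 2:

```python
# Derive, for fragment i, the source-symbol count NSS(i), the repair-symbol
# count NRS(i), and the repair-block byte range [RB(i-1), RB(i)) purely from
# R, S, and the source-segment offsets B(i-1) and B(i), as described above.
import math

def repair_indexing(b_prev, b_i, R, S):
    nss = math.ceil((b_i - b_prev) / S)      # NSS(i)
    rb_prev = S * math.ceil(R * b_prev / S)  # RB(i-1): repair-block start
    rb_i = S * math.ceil(R * b_i / S)        # RB(i): repair-block end
    nrs = (rb_i - rb_prev) // S              # NRS(i)
    return nss, nrs, rb_prev, rb_i

# FIG. 28 example: B(1) = 6,410, B(2) = 6,770, R = 0.5, S = 64.
print(repair_indexing(6410, 6770, 0.5, 64))
# -> (6, 2, 3264, 3392): six source symbols (ESI 0..5) and two repair
#    symbols (ESI 6 and 7) occupying bytes 3,264-3,392 of the repair segment.
```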
As those skilled in the art will appreciate, there are numerous variations of the above embodiment for generating file structures that can be used to advantage by clients employing cooperative HTTP/FEC methods. As an example of an alternative embodiment, an original segment for a representation can be partitioned into N > 1 parallel segments, where, for i = 1, ..., N, a specified fraction Fi of the original segment is contained in the i-th parallel segment, and where the sum of Fi over i = 1, ..., N is equal to 1. In this embodiment, there can be one master segment map that is used to derive the segment maps for all the parallel segments, much as the repair segment map is derived from the source segment map in the embodiment described above. For example, the master segment map can indicate the fragment structure as if all of the source media data were contained in one original segment rather than divided into parallel segments; then, if the amount of media data in a prefix of a fragment of the original segment is L bytes, the total number of bytes of this prefix contained in the first i parallel segments, taken together, is ceil(L * Gi), where Gi is the sum of Fj over j = 1, ..., i; in this way the segment map of each parallel segment can be derived from the master segment map. As another example of an alternative embodiment, a segment can consist of, for each fragment, the original source media data for that fragment immediately followed by the repair data for that fragment, yielding a segment that contains a combination of the source media data and the repair data generated from that source media data using an FEC code. As yet another example, a segment containing such a combination of source media data and repair data can be divided into multiple parallel segments, each containing a combination of source media data and repair data.

Further embodiments can be envisioned by those skilled in the art upon reading this disclosure. In other embodiments, combinations or sub-combinations of the inventions disclosed above can be made to advantage. It should be understood that the exemplary arrangements of components are shown for purposes of illustration, and that combinations, additions, rearrangements, and the like are contemplated in alternative embodiments of the invention. Thus, while the invention has been described with respect to exemplary embodiments, those skilled in the art will recognize that numerous modifications are possible. For example, the processes described herein may be implemented using hardware components, software components, and/or any combination thereof. In some cases, the software components can be provided on tangible, non-transitory media for execution on hardware that is provided with the media or is separate from the media. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims, and the invention is intended to cover all modifications and equivalents within the scope of the appended claims.
Methods of forming contact openings, making electrical interconnections, and related integrated circuitry are described. Integrated circuitry formed through one or more of the inventive methodologies is also described. In one implementation, a conductive runner or line having a contact pad with which electrical communication is desired is formed over a substrate outer surface. A conductive plug is formed laterally proximate the contact pad and together therewith defines an effectively widened contact pad. Conductive material is formed within a contact opening which is received within insulative material over the effectively widened contact pad. In a preferred implementation, a pair of conductive plugs is formed laterally proximate the contact pad, on either side thereof. The conductive plug(s) can extend away from the substrate outer surface a distance which is greater or less than the conductive line height of the conductive line adjacent which the plug is formed. In the former instance, and in accordance with one aspect, such plug(s) can include a portion which overlaps with the contact pad of the associated conductive line.
What is claimed is: 1. A method of forming a contact opening to a conductive runner comprising: forming a conductive runner having an outer contact opening target area over a semiconductive substrate; forming at least one conductive plug within insulating material laterally adjacent to the runner target area, the plug being formed within a contact opening in the insulating material which is essentially self aligned at and to the semiconductive substrate at two locations disposed on opposing sides of the conductive runner; and etching a contact opening to and overlapping the runner target area and plug through insulating material overlying the runner and plug; wherein the conductive runner comprises an outermost surface which defines a conductive runner height outwardly of a substrate outer surface; and the forming of the conductive plug comprises forming the plug to extend outwardly from the substrate outer surface a distance which is less than the conductive runner height. 2. The method of forming a contact opening of claim 1, wherein one of the self-aligned locations is defined by the conductive runner. 3. The method of forming a contact opening of claim 2, wherein the other of the self-aligned locations is defined by a next adjacent conductive runner. 4. The method of forming a contact opening of claim 1, wherein the conductive plug comprises a portion which overlaps with the target area. 5. The method of forming a contact opening of claim 1, wherein: the forming of the conductive plug comprises forming another conductive plug within the insulating material laterally adjacent the runner target area, the conductive plugs being disposed on either side of the runner target area. 6. The method of forming a contact opening of claim 5, wherein one of the respective self-aligned locations for individual conductive plugs is defined by the conductive runner, and the other respective self-aligned location is defined by respective next adjacent conductive runners. 7. The method of forming a contact opening of claim 6, wherein: the conductive runner comprises an outermost surface which defines a conductive runner height outwardly of a substrate outer surface; and the forming of the conductive plugs comprises forming both plugs to extend outwardly from the substrate outer surface a distance which is greater than the conductive runner height and to overlap with the target area. 8. The method of forming a contact opening of claim 6, wherein: the conductive runner comprises an outermost surface which defines a conductive runner height outwardly of a substrate outer surface; and the forming of the conductive plugs comprises forming both plugs to extend outwardly from the substrate outer surface a distance which is less than the conductive runner height. 9. 
A method of forming a contact opening to a conductive line comprising: forming a conductive line over a semiconductive substrate, the conductive line having a conductive line width and a target area with which electrical communication is desired; forming a pair of conductive plugs laterally proximate the target area disposed on opposing sides of the conductive line and defining together therewith an effectively widened target area, the conductive plugs being self-aligned to the substrate adjacent the conductive line; forming a material at least over the effectively widened target area; outwardly exposing at least a portion of the widened target area through the material; and the forming of the pair of conductive plugs comprises forming the plugs to extend outwardly from the substrate outer surface a distance which is less than a conductive line height. 10. The method of forming a contact opening of claim 9, wherein the forming of the pair of conductive plugs comprises forming at least one of the conductive plugs between the conductive line and a next adjacent conductive line. 11. The method of forming a contact opening of claim 9, wherein the forming of the pair of conductive plugs comprises forming the conductive plugs between the conductive line and respective next adjacent conductive lines. 12. The method of forming a contact opening of claim 9, wherein the forming of the pair of conductive plugs comprises forming at least one of the plugs to overlap with the target area. 13. The method of forming a contact opening of claim 9, wherein the forming of the pair of conductive plugs comprises forming the plugs to overlap with the target area. 14. The method of forming a contact opening of claim 1, wherein forming at least one conductive plug comprises forming at least one conductive plug within a contact opening in the insulating material which is essentially self aligned at and to the semiconductive substrate at two locations along a line extending laterally from the conductive runner. 15. The method of forming a contact opening of claim 9, wherein forming a material comprises forming a dielectric layer. 16. The method of forming a contact opening of claim 9, wherein outwardly exposing comprises etching an opening through the material to expose at least a portion of the widened target area. 17. The method of forming a contact opening of claim 9, wherein forming a material comprises forming a dielectric layer, and wherein outwardly exposing comprises etching an opening through the dielectric layer to expose at least a portion of the widened target area.
TECHNICAL FIELD This invention relates to semiconductor processing methods of forming contact openings, methods of forming electrical connections and interconnections, and integrated circuitry comprising such contact openings and electrical connections and interconnections. BACKGROUND OF THE INVENTION Referring to FIGS. 1 and 2, a semiconductor wafer fragment is indicated generally at 10 and comprises a semiconductive substrate 12. In the context of this document, the term "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Substrate 12 comprises a field oxide region 13 having an outer surface 14 (FIG. 2) over which a plurality of conductive runners or conductive lines 16, 18, and 20 are formed. The illustrated conductive lines or runners include conductive portions and insulative portions. Exemplary conductive portions are constituted, in this example, by a respective polysilicon layer 22 and an overlying silicide layer 24. The insulative portions of the runners or lines are constituted by respective overlying caps 26 and associated sidewall spacers 28. Exemplary materials for the insulative portions include oxides and nitrides. An insulative layer 30 such as borophosphosilicate glass is formed over runners 16, 18, and 20, and a contact opening 32 is formed through a masked etch of layer 30 to outwardly expose a portion of silicide layer 24. Thereafter, conductive material such as conductively doped polysilicon is formed within contact opening 32 to provide a conductive contact 34 to conductive line 18. A metal layer 36 is provided thereover to form an electrical connection with conductive line 18. A typical practice within the semiconductor industry is to provide a conductive line or runner with a widened landing pad in order to accommodate mask misalignments when contact openings are formed. An exemplary widened landing pad is shown at 38 in FIG. 1 and in FIG. 2. By having a widened landing pad, contact opening 32 can shift left or right some distance relative to the position shown in FIGS. 1 and 2 without making undesirable contact with the substrate. For purposes of the ongoing discussion, landing pad 38 includes the conductive and insulative portions of conductive line 18; and the conductive portions of conductive line 18 define a contact pad with which electrical communication is desired. Accordingly, in the illustrated example a contact pad is defined by polysilicon layer 22 and silicide layer 24 of conductive line 18. The contact pad defines a target area A inside of which it is desirable to form a contact opening. An electrical connection through contact opening 32 can be formed anywhere within target area A and still effectively make a desirable connection with the conductive contact pad. Hence, the target area tolerates a contact opening mask misalignment on either side of the illustrated and desired contact opening 32. A tradeoff for improved mask misalignment tolerance is a reduction in wafer real estate available for supporting conductive lines and other integrated circuitry components. 
This is due in large part to the increased area which is occupied by the widened landing pad 38. This also adversely impacts the conductive line spacing, such that the desired minimum spacing between adjacent conductive lines is not achieved. Hence, integrated circuitry cannot be packed as densely upon a wafer as is desirable when the widened landing pads are used. This invention grew out of concerns associated with enhancing the efficiency with which wafer real estate is used to support integrated circuitry. This invention also grew out of concerns associated with improving the methods and structures through which contact is made relative to conductive lines. SUMMARY OF THE INVENTION Methods of forming contact openings, making electrical interconnections, and related integrated circuitry are described. Integrated circuitry formed through one or more of the inventive methodologies is also described. In one implementation, a conductive runner or line having a contact pad with which electrical communication is desired is formed over a substrate outer surface. A conductive plug is formed laterally proximate the contact pad and together therewith defines an effectively widened contact pad. Conductive material is formed within a contact opening which is received within insulative material over the effectively widened contact pad. In a preferred implementation, a pair of conductive plugs is formed laterally proximate the contact pad, on either side thereof. The conductive plug(s) can extend away from the substrate outer surface a distance which is greater or less than the conductive line height of the conductive line adjacent which the plug is formed. In the former instance, and in accordance with one aspect, such plug(s) can include a portion which overlaps with the contact pad of the associated conductive line. BRIEF DESCRIPTION OF THE DRAWINGS Preferred embodiments of the invention are described below with reference to the following accompanying drawings. FIG. 1 is a top plan view of a prior art semiconductor wafer fragment and a plurality of conductive lines supported thereon. FIG. 2 is a view which is taken along line 2--2 in FIG. 1 at a subsequent processing step. FIG. 3 is a diagrammatic sectional view of a semiconductor wafer fragment at one processing step in accordance with one implementation of the invention. FIG. 4 is a view of the FIG. 3 wafer fragment at another processing step. FIG. 5 is a view of the FIG. 3 wafer fragment at another processing step. FIG. 6 is a view of the FIG. 3 wafer fragment at another processing step. FIG. 7 is a view which is similar to the FIG. 6 view, but which shows an alternate embodiment in accordance with another implementation of the invention. FIG. 8 is a view of the FIG. 3 wafer fragment at another processing step. FIGS. 9 and 10 are top plan views of semiconductor wafer fragments which have been processed in accordance with the inventive methodologies. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8). Referring to FIG. 3, like numerals from the above-described embodiment are utilized where appropriate, with differences being indicated by the suffix "a" or with different numerals. Accordingly, a plurality of conductive runners or lines 16a, 18a, and 20a are formed over outer surface 14, and can be formed over oxide isolation regions 40. 
Exemplary isolation regions include shallow trench isolation regions or field oxide regions formed through LOCOS techniques. The conductive lines comprise respective outermost surfaces 44, portions of which define respective conductive line heights h outwardly of outer surface 14. Diffusion regions 42 can be provided between the conductive lines, and preferably comprise n-type regions having doping concentrations of 1×10^18 cm^-3. The diffusion regions can be provided in a separate doping step, or through outdiffusion of dopant from conductive material, which will become more apparent below. An outer contact opening target area B is defined by conductive line 18a. Referring to FIG. 4, an insulating material layer 46 is formed over substrate 12. An exemplary material is borophosphosilicate glass. Referring to FIG. 5, at least one, and preferably a pair of contact openings 48, 50 are formed through layer 46 and preferably outwardly expose respective portions of outer surface 14. The contact openings can be formed through a suitable masked etch of layer 46. Preferably, the individual contact openings are essentially self-aligned at and to the substrate at two locations 48a, 48b, and 50a, 50b respectively, along a line extending laterally from conductive runner or line 18a. In a preferred implementation, one of the two locations for the individual contact openings is defined by conductive runner 18a. Even more preferably, the other of the two respective locations are defined by respective next adjacent conductive lines 16a, 20a. Referring to FIG. 6, and in accordance with a first implementation, first conductive material 52, 54 is formed within contact openings 48, 50, between the illustrated conductive lines and laterally proximate or adjacent the contact pad defined by conductive line 18a. An exemplary and preferred first conductive material is conductively doped polysilicon, which can serve as a source of outdiffused dopant for regions 42. The polysilicon can be chemical vapor deposited over the substrate and subsequently removed through conventional processing to provide conductive plugs 56, 58. Such conventional processing can include planarization processing to isolate conductive material within the respective contact openings, followed by a suitable timed etch to recess the conductive material within the contact openings. In the illustrated example, conductive plugs are formed on both sides of conductive line 18a. It is possible, however, for only one conductive plug to be formed on either side of conductive line 18a. The individual conductive plugs are essentially self-aligned at and to the substrate at the same locations as are the contact openings in which each is formed. Referring still to FIG. 6, the illustrated conductive plugs are formed to preferably extend outwardly from outer surface 14 a distance which is greater than conductive runner height h. Because the plugs in this example are formed atop the same surface (outer surface 14) atop which the conductive lines are formed, each extends elevationally beyond the respective conductive line heights. Such plugs could, however, be formed to extend from outer surface 14 a distance which is less than or no further than the conductive runner height. This could, for example, be done by conducting a timed etch for a longer period of time than is suitable for forming the illustrated FIG. 6 plugs. An exemplary construction is shown in FIG. 7. 
In one implementation, individual conductive plugs include portions which overlap with portions of conductive line 18a and the respective next adjacent conductive lines 16a, 20a. In a preferred implementation, the respective plugs overlap with the outermost surfaces of the conductive lines adjacent which each is formed. Accordingly, portions of at least one, and preferably both conductive plugs can overlap target area B. Collectively, the conductive material of conductive plugs 56, 58, and the conductive material of conductive line 18a define an effective contact pad having an outermost surface 60, which defines an effectively widened target area A'. The widened target area reduces the wafer area which was formerly required by the prior art widened landing pad (FIGS. 1 and 2) described above. Alternately considered, effective contact pad outermost surface 60 defines a generally non-planar surface. In a preferred implementation, at least one of the conductive plugs, and preferably both, define a region of outermost surface 60 having a higher topographical elevation than the region defined by the contact pad of line 18a. Referring to FIG. 8, a layer 62 of insulative material is formed over the substrate and the effective contact pad. A contact opening 64 is etched or otherwise formed through layer 62 to outwardly expose portions of the effective contact pad. Preferably, the contact pad of line 18a is exposed, with any mask misalignment resulting in exposure of conductive material of either or both of conductive plugs 56, 58. Subsequently, a second conductive material 66 is formed within contact opening 64 and in electrical communication with at least portions of the contact pad and, if exposed, an associated portion of a conductive plug. A bit line 68 can then be formed over the substrate and in electrical communication with material 66. Referring to FIG. 9, conductive lines 16a, 18a and 20a have first respective line widths w1 at respective first locations and second line widths w2 at respective second locations, an exemplary second line width and location being shown for line 18a. The second line width corresponds to a line location where at least a portion of contact opening 64 is formed. In one implementation, the first and second line widths are essentially the same or equivalent. This is made possible because the above-described conductive plugs 56, 58 (shown in dashed lines in FIGS. 9 and 10) reduce, if not eliminate, the requirement of the FIG. 1 widened landing pad. The illustrated conductive plugs provide an effective contact pad width which is greater than second line width w2, and include respective portions proximate the first line width w1 which overlap with or extend elevationally over the conductive portions, e.g. the contact pad, of line 18a. The plugs can also include portions which overlap with corresponding portions of conductive lines 16a, 20a. This compensates for a contact opening mask misalignment by enabling desired contact to be made through a respective one of the conductive plugs as discussed above. Referring to FIG. 10 and in accordance with another implementation, localized first and second line widths w1, w2 respectively, are different with second line width w2 being greater than first line width w1. In this example, the second line width defines a portion of a landing pad which is smaller in dimension than the FIG. 1 landing pad. 
Portions of conductive lines 16b and 20b laterally proximate respective conductive plugs 56, 58 can be tapered or otherwise configured to accommodate the somewhat wider landing pad. In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents.
The present invention provides a breakdown resistant transistor structure for amplifying communication signals. This structure includes a first NMOS transistor having a source connected to ground and a first gate for receiving the input radio frequency signal. The first gate is disposed above a first insulator and the first NMOS transistor having a first transconductance and a first breakdown voltage associated therewith. Also included is a second NMOS transistor having a source connected to the drain of the first NMOS transistor, a gate connected to the reference DC voltage, and a drain that provides the output for the amplified radio signal, the load being disposed between the reference DC voltage and the drain of the second NMOS transistor. The second gate is disposed above a second insulator, the second NMOS transistor has a second transconductance and a second breakdown voltage associated therewith, and the second insulator may be thicker than the first insulator. This results in the first transconductance being greater than the second transconductance, and the second breakdown voltage being greater than the first breakdown voltage.
An apparatus for amplifying a differential radio frequency signal comprising: an integrated circuit chip, the integrated circuit chip including: a first differential amplification stage (210) including first cascoded MOS transistors (212, 214) that receive the differential radio frequency signal and produce a first stage amplified differential radio frequency signal; a first level shift stage (230) including first blocking capacitors (232, 234) and first shunt inductors (236, 238) that allow for transfer of the first stage amplified differential radio frequency signal therethrough; and a second differential driving stage (250) including second cascoded MOS transistors (252, 254) that receive the first stage amplified differential radio frequency signal from the first level shift stage (230) and produce a second stage amplified differential radio frequency signal. An apparatus according to claim 1 wherein each driving stage of the second differential driving stage comprises: a first NMOS transistor having a source connected to ground and a first gate for receiving the input radio frequency signal, wherein the first gate is disposed above a first insulator and the first NMOS transistor having a first transconductance and a first breakdown voltage associated therewith; and a second NMOS transistor having a source connected to the drain of the first NMOS transistor, a gate connected to the reference DC voltage, and a drain that provides the output for the amplified radio signal, the load being disposed between the reference DC voltage and the drain of the second NMOS transistor, wherein the second gate is disposed above a second insulator, the second NMOS transistor having a second transconductance and a second breakdown voltage associated therewith. An apparatus according to claim 2 wherein the second insulator is thicker than the first insulator so that the first transconductance is greater than the second transconductance. An apparatus according to claim 3 wherein the second breakdown voltage is greater than the first breakdown voltage. An apparatus according to claim 2 wherein the second breakdown voltage is greater than the first breakdown voltage. An apparatus according to claim 2 wherein the second insulator is substantially the same thickness as the first insulator. An apparatus according to claim 1 further including: a second level shift stage including second blocking capacitors and second shunt inductors that allow for transfer of the second stage amplified differential radio frequency signal therethrough; and a third differential stage including third cascoded MOS transistors that receive the second stage amplified differential radio frequency signal from the second level shift stage and produce a third stage amplified differential radio frequency signal. An apparatus according to claim 7 wherein each driving stage of the third differential driving stage comprises: a first NMOS transistor having a source connected to ground and a first gate for receiving the input radio frequency signal, wherein the first gate is disposed above a first insulator and the first NMOS transistor having a first transconductance and a first breakdown voltage associated therewith; and a second NMOS transistor having a source connected to the drain of the first NMOS transistor, a gate connected to the reference DC voltage, and a drain that provides the output for the amplified radio signal, the load being disposed between the reference DC voltage and the drain of the second NMOS transistor, wherein the second gate is 
disposed above a second insulator, the second NMOS transistor having a second transconductance and a second breakdown voltage associated therewith. An apparatus according to claim 8 wherein the second insulator is thicker than the first insulator so that the first transconductance is greater than the second transconductance. An apparatus according to claim 9 wherein the second breakdown voltage is greater than the first breakdown voltage. An apparatus according to claim 8 wherein the second breakdown voltage is greater than the first breakdown voltage. An apparatus according to claim 8 wherein the second insulator is substantially the same thickness as the first insulator. An apparatus according to claim 2 wherein the integrated circuit chip is packaged in a semiconductor package, the semiconductor package containing terminals around only the periphery of one side of the package, and containing a metal ground plane on the one side of the package within the periphery, the differential input amplification stage, the differential driver amplification stage, and the differential output stage being disposed above the metal ground plane, the metal ground plane thereby providing a heat sink for thermal energy generated by the differential input amplification stage, the differential driver amplification stage, and the differential output stage. An apparatus for amplifying a differential radio frequency signal comprising: an integrated circuit chip (100), the integrated circuit chip (100) including: a differential first amplification stage (210) including first cascoded MOS transistors (212, 214) that receive the differential radio frequency signal and produce a first stage amplified differential radio frequency signal, the differential first amplification stage (210) being supplied with a predetermined first supply voltage; a differential second amplification stage (250) including second cascoded MOS transistors (252, 254) that receive the first stage amplified differential radio frequency signal from the first amplification stage (210) and produce a second stage amplified differential radio frequency signal, the differential second amplification stage (250) being supplied with a predetermined second supply voltage that is greater than the first supply voltage.
Field of the Invention The present invention relates to an integrated circuit power amplifier, and more specifically to a power amplifier that is integrated with other complementary metal oxide semiconductor (CMOS) circuit components and allows for substantially linear operation within a gigahertz frequency band of interest. Background of the Related Art A transceiver is a well-known circuit containing a transmitter and a receiver, which are thus capable of transmitting and receiving communication signals, respectively. Conventionally, the transmitter contains a power amplifier (also known as a "PA") that provides the last stage of amplification of the signal to be transmitted. In most conventional designs, the power amplifier is implemented as a component that is physically separate from other parts of the transmitter and/or transceiver. Power amplifiers made from gallium arsenide (GaAs) or silicon bipolar junction transistors (SiBJT) are typically used because they have an inherently higher breakdown voltage than transistors made in a CMOS circuit, whether the transistors are n-channel or p-channel transistors. While such designs allow for a power amplifier that has the desired amplification characteristics, they do so at the expense of cost. Not only is a GaAs, SiBJT, or other non-CMOS power amplifier costlier than a transistor in a CMOS integrated circuit, but the non-CMOS power amplifier cannot be formed on the same integrated circuit chip as the components of the transmitter and/or transceiver. Both of these factors add to the overall cost of the resulting transceiver. It has been recognized that it would be beneficial to have a transceiver in which most of the transmitter and receiver circuits are on a single chip, including the power amplifier. For example, in the article entitled "A Single Chip CMOS Direct-Conversion Transceiver for 900 MHz Spread Spectrum Digital Cordless Phones" by T. Cho et al., presented at the 1999 IEEE International Solid State Circuits Conference, there is described a CMOS transceiver chip that includes an integrated power amplifier. This power amplifier is implemented as a three-stage class AB amplifier. While this power amplifier is integrated on the same integrated circuit chip as many of the other transceiver components, it has a number of disadvantages. One of these is that the circuit is not designed to tolerate supply voltages that significantly exceed the transistor breakdown voltages. In particular, high-transconductance transistors used in deep-submicron CMOS circuits cannot reliably tolerate junction voltages that are significantly higher than the supply voltage. An integrated RF power amplifier, however, is most efficient when the voltage at the RFout node swings from 0 to at least 2*Vdd, an amplitude made possible by the inductive load at the output of the circuit. The inductive load is typically an inductor connected between the supply and the drain of the output transistors of the power amplifier. Furthermore, since the RFout node is typically connected directly to the antenna, the possibility of transmitted power reflecting backwards to the power amplifier causes the maximum voltage at the RFout node to approach 4*Vdd. This voltage is well beyond the breakdown voltage of modern CMOS devices, and can cause unpredictable performance or device damage. Another disadvantage is that the integrated power amplifier presented above provides non-linear operation. 
Further, it is intended for operation in the range of 900 MHz, and not substantially higher frequencies in the gigahertz range.

Still furthermore, when an integrated power amplifier is made on a CMOS chip with a substantial number of the transmitter and receiver components, there is a corresponding increase in the number of pins required. Just adding pins, however, will not necessarily result in a usable circuit. This is because, as the present inventors have found, there is needed a semiconductor package that provides for dissipation of the thermal energy generated by the power amplifier during operation.

Accordingly, a power amplifier integrated with a CMOS chip that overcomes various ones, and preferably all, of the above disadvantages would be desirable.

SUMMARY OF THE INVENTION

The present invention provides, in a preferred embodiment, a breakdown resistant transistor structure for amplifying communication signals, such as electromagnetic signals, and typically radio frequency signals. This structure includes a first NMOS transistor having a source connected to ground and a first gate for receiving the input radio frequency signal. The first gate is disposed above a first insulator, and the first NMOS transistor has a first transconductance and a first breakdown voltage associated therewith. Also included is a second NMOS transistor having a source connected to the drain of the first NMOS transistor, a gate connected to the reference DC voltage, and a drain that provides the output for the amplified radio signal, the load being disposed between the reference DC voltage and the drain of the second NMOS transistor. The second gate is disposed above a second insulator, the second NMOS transistor has a second transconductance and a second breakdown voltage associated therewith, and the second insulator may be thicker than the first insulator. This results in the first transconductance being greater than the second transconductance, and the second breakdown voltage being greater than the first breakdown voltage.

The present invention also provides, in a preferred embodiment, an integrated circuit chip apparatus for amplifying a differential communication signal that includes a differential input amplification stage, a first level shift stage, a differential driving stage, a second level shift stage, and a differential output stage.

Furthermore, the present invention includes, in a preferred embodiment, an integrated circuit chip that is packaged in a semiconductor package containing terminals around only the periphery of one side of the package, and contains a metal ground plane on the one side of the package. Within the periphery area, and above it on the semiconductor chip, are disposed the differential input amplification stage and the differential driver amplification stage.
The differential output stage is disposed above the metal ground plane, which acts as a heat sink for thermal energy generated by the differential input amplification stage, the differential driver amplification stage, and the differential output stage.

Accordingly, the present invention can advantageously provide a power amplifier integrated with other CMOS transceiver chip components that provides substantially linear operation.

The present invention can advantageously provide a power amplifier integrated with other CMOS transceiver chip components that provides for operation at frequencies in the gigahertz range.

The present invention can also advantageously provide a power amplifier integrated with other CMOS transceiver components that provides for level shifting in order to increase the efficiency of the power amplifier transistors.

The present invention can further advantageously provide an inductive bias with level shifting in a power amplifier integrated with other CMOS transceiver components in order to reduce the effects of gate capacitance and noise.

The present invention can further advantageously provide a breakdown-resistant cascode structure for the power amplifier integrated with other CMOS transceiver components.

The present invention can still further advantageously provide a semiconductor package for a power amplifier integrated with other CMOS transceiver components that provides for dissipation of the thermal energy generated by the power amplifier during operation.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects, features, and advantages of the present invention are further described in the detailed description which follows, with reference to the drawings by way of non-limiting exemplary embodiments of the present invention, wherein like reference numerals represent similar parts of the present invention throughout the several views and wherein:

Fig. 1 illustrates a breakdown resistant transistor structure according to the present invention;
Fig. 2 illustrates a block diagram of an integrated transceiver chip according to the present invention;
Fig. 3 illustrates a block diagram of a power amplifier portion of the transmitter of the integrated transceiver chip according to the present invention;
Fig. 4 illustrates a circuit diagram of the power amplifier portion of the transmitter of the integrated transceiver chip according to the present invention;
Figs. 5A-5C illustrate diagrams of the integrated transceiver chip and packaging and circuit component locations according to the present invention;
Figs. 6A-6B illustrate a top view and cross section of bond pads according to the present invention; and
Fig. 7 illustrates another embodiment of the power amplifier portion of the transmitter of the integrated transceiver chip according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Fig. 1 illustrates a breakdown resistant transistor structure 10, which is one aspect of the present invention and is used in the final output stage of a power amplifier that is intended to be integrated with other CMOS circuit components, as described further hereinafter. The basic topology of this output stage is two NMOS transistors 12 and 14. As illustrated, a communication signal, such as an electromagnetic signal and typically a radio frequency signal, described herein as radio frequency input signal Rfin, is input at the gate of transistor 12, and the gate of transistor 14 is connected to the power supply voltage Vdd.
Transistor 12 provides the transconductance necessary for power amplification, and transistor 14 protects transistor 12 from the high voltage swings that result on the RFout node. Since transistor 14 is connected such that it has a unity current gain, it does not significantly degrade the transconductance of transistor 12, and it allows voltage amplitudes at RFout to reach 2*Vdd without exceeding its breakdown voltage. Further, the voltage appearing at the source-drain connection between transistors 12 and 14 is a divided version of the RFout voltage, and as such excessive voltage swings do not appear at the junction of transistor 12.

Also, since the performance of the amplifier is set primarily by the transconductance of transistor 12, transistor 14 can be chosen to optimize breakdown voltage by using a thicker gate oxide. In particular, certain integrated circuit technologies allow different transistors to have different gate-oxide thicknesses. In certain processes, two different thicknesses are available. If such a process is used and transistor 14 is made to have a thicker gate oxide than transistor 12, then transistor 12 can be optimized for higher transconductance, even though it would have a lower breakdown voltage if used alone. Transistor 14, which has the thicker gate oxide, is optimized to produce a higher breakdown voltage, and this protects transistor 12 from the high voltage swings that result on the RFout node. Further, it potentially allows the power amplifier to be used with a higher supply voltage, which eases the power amplifier design and potentially improves efficiency. The reduced transconductance of transistor 14 does not degrade the performance of the overall circuit, and this arrangement is advantageous over a structure that uses a single transistor that has a higher breakdown voltage and lower transconductance characteristic.

The above referenced transistor structure 10 is used in a CMOS integrated circuit transceiver, as described hereinafter.

Fig. 2 illustrates a block diagram of a transceiver integrated circuit 100 according to the present invention, and the components that serve as inputs and outputs to it. As shown in Fig. 2, there is a receive signal path 52 and a transmit signal path 54. Although the transmitter block 200 within the transceiver IC 100 will be described in most detail, since it is the transmitter block 200 that contains the integrated power amplifier, these other components will be described at a high level in order to put the invention in the appropriate context.

Along the receive signal path 52, radio frequency signals, preferably those that have a 20 or 40 MHz band centered at 5 GHz, are input at the antenna 60. In the receive mode, a switch 63 is configured so that the receive signal path 52 is used. Bandpass filter 62, balun 64, and capacitors 66 shape and match the received RF input signals so that the receiver 150 within the transceiver IC 100 can downconvert them to produce baseband quadrature (I and Q) input signals. These I and Q input signals are low pass filtered by a low pass filter 68, are each digitized by analog-to-digital converters 70, and are then input into a digital signal processor (DSP) 72 for further processing.

Along the transmit signal path 54, output digital signals from the DSP 72 are converted to baseband quadrature I and Q output signals by digital-to-analog converters 80, and each is then low pass filtered by a low pass filter 82 and then received by the transmitter 200 within the transceiver IC 100.
The transmitter 200 upconverts and amplifies the received baseband I and Q output signals to obtain the RF output signals. The RF output signals are then shaped and matched to the characteristics of the antenna 60 using capacitors 84, balun 86 and band pass filter 62 when the switch 63 is configured so that the transmit signal path 54 is being used.

Also shown in Fig. 2 are other substantial components within and external to the integrated transceiver 100, including the frequency synthesizer 160, the external crystal 162, operating at 32 MHz in the preferred embodiment, an external loop filter 164 for the synthesizer, and a low pass filter 166 for channel selection.

Fig. 3 provides a more detailed diagram of the power amplifier portion 205 of the transmitter 200. An input stage 210 receives as its input upconverted fully differential RF signals Rfin+ and Rfin- having true and complement components, as is known. The upconversion can be made using quadrature mixers with 4 GHz and 1 GHz local oscillator signals.

As shown, the received Rfin+ and Rfin- upconverted signals are amplified by the input stage 210, level shifted using a first level shift stage 230, and then amplified by driver stage 250. The output of the driver stage 250 is then again level shifted using a second level shift stage 270 before being input to the output stage 290. The output stage 290 is composed of the transistor structure previously described with reference to Fig. 1, as will be further noted hereinafter.

The input stage 210, the driver stage 250 and the output stage 290 are each formed of a common-source and a common-gate amplifier combined in a cascode configuration, as will become apparent hereinafter. As also shown in Fig. 3, the driver and output stages 250 and 290 are biased by bias blocks 310 and 320, in order to provide gate bias voltages thereto.

Further, a charge pump 330 is used to provide a reference voltage (such as 3.3 volts) that is above the Vdd reference voltage (such as 2.5 volts) to the input stage 210 and the driver stage 250, as will become apparent hereinafter. It should be noted that in any amplifier stage, such as the input stage 210 described further hereinafter, care should be taken to prevent the lower voltage supply from having an actual voltage that is higher than nominal at the same time that the higher voltage supply has an actual voltage that is lower than nominal. The charge pump thus preferably will have the higher voltage level vary in tandem with the lower voltage level.

Fig. 4 illustrates the circuit of Fig. 3 in more detail. As shown, the input stage 210 comprises NMOS transistors 212 and 214 having a common source, which is connected to current source 215. Each also inputs at its respective gate one of the previously upconverted Rfin+ and Rfin- signals. NMOS transistors 216 and 218 each have their source connected to the drain of transistors 212 and 214, respectively, and their gates are tied to an input gate voltage that is the charge pump voltage higher than Vdd, such as 3.3 volts. The drains of transistors 216 and 218 form the output to the first level shift stage 230. Disposed between the supply voltage that is higher than Vdd, such as 3.3 volts, and the drain of each of transistors 216 and 218 are inductors 220 and 222, respectively, which will typically be in the range of 0.5 to 5 nanohenries.

The first level shift stage 230 includes blocking capacitors 232 and 234, and shunt inductors 236 and 238.
Since the size of capacitors 232 and 234 is limited by the real estate available on the integrated circuit 100, the capacitors 232 and 234 are each typically between 0.1 and 10 picofarads, and the inductors 236 and 238 will typically be in the range of 0.2 to 5 nanohenries. As a result, the presence of the blocking capacitors allows setting the gate bias of the driver stage 250 to a voltage lower than Vdd, which improves the ability of the driver transistors to tolerate a large voltage swing at the drain while remaining in saturation. However, a blocking capacitor of the size of capacitors 232 and 234 alone would create a voltage divider between the blocking capacitor (such as 232) and the capacitance created at the gate of the driver stage (such as transistor 252 discussed hereinafter), resulting in an undesired signal attenuation. Thus, shunt inductors (such as 236) are used in parallel with the gates of the driver stage transistors (such as 252) to substantially resonate out the gate capacitance, and thus improve the signal transfer across the blocking capacitors. The gate bias voltage to the driver stage is applied from a bias block 240 through the shunt inductors 236 and 238.

In the driver stage 250, NMOS transistors 252 and 254 have a common source, which is connected to ground. Each also inputs at its respective gate the previously upconverted fully differential output signals that have been amplified in the first input stage 210 and level shifted by the first level shift stage 230. NMOS transistors 256 and 258 each have their source connected to the drain of transistors 252 and 254, respectively, and are tied to an input gate voltage of Vdd. The drains of transistors 256 and 258 form the input to the second level shift stage 270. Disposed between a voltage source that is higher than Vdd and the drain of each of transistors 256 and 258 are inductors 260 and 262, respectively, which will typically be in the range of 0.5 to 5 nanohenries.

The second level shift stage 270 includes blocking capacitors 272 and 274, and shunt inductors 276 and 278. The capacitors 272 and 274 are each typically between 1 and 3 picofarads, and the inductors 276 and 278 will typically be in the range of 0.5 to 2 nanohenries. The second level shift stage provides the same functionality as the first level shift stage above, so that the gate bias of the output stage 290 can be set to a voltage lower than Vdd while minimizing undesired signal attenuation, thereby improving the signal transfer across the blocking capacitors, as discussed above. The gate bias voltage to the output stage is applied from a bias block 320 through the shunt inductors 276 and 278.

The output stage 290 uses the breakdown resistant transistor structure 10 described above on each of the differential signal paths. Thus, each of NMOS transistors 292 and 294 is optimized as the high-transconductance transistor, whereas transistors 296 and 298 are optimized to produce a higher breakdown voltage, as previously discussed. As shown, the gates of transistors 296 and 298 are each connected to a power amplifier on (paon) control signal controlled by the DSP 72.

The three stage fully-differential, linear class-A power amplifier 205 described above is capable of producing output power of 24 dBm (250 mW) under typical conditions (50 °C). The maximum linear power of the amplifier (defined by P1dB) is approximately 22.5 dBm (178 mW).
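As a worked check of these figures (illustrative arithmetic only, not part of the original specification), power in dBm converts to milliwatts as \( P_{\mathrm{mW}} = 10^{P_{\mathrm{dBm}}/10} \), so that

\[ 10^{24/10} \approx 251\ \mathrm{mW}, \qquad 10^{22.5/10} \approx 178\ \mathrm{mW}, \qquad 10^{(22.5-5)/10} \approx 56\ \mathrm{mW}. \]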
Thus the power amplifier 205 can transmit an average power of at least 17.5 dBm (56.5 mW) with 5 dB of backoff from the 1 dB gain compression power, for the specific design and intended use provided above.

With respect to operation of the power amplifier 205 at the frequencies of interest, which are typically RF frequencies, the geometries of the transistors must be properly chosen. Since the speed of the transistors that make up the amplifier stages 210, 250 and 290 is inversely proportional to the length of the channel, all of the transistors in the signal path are preferably designed with the minimum channel length that the design rules will allow, such as 240 nm. Additionally, since large device widths can result in undesired gate resistance, each transistor is sized such that its width does not exceed some measure. 5 um has been determined to be a useful maximum for design rules in which the minimum channel length is 240 nm. Accordingly, to achieve the large transistor sizes that are necessary for the desired output power, a cell with a width of approximately 5 um and a length of 240 nm is replicated to form a transistor with the necessary size. In a preferred embodiment, for example, transistors 212 and 214 in the input stage 210 together contain 48 devices used in parallel (24 on each differential side), the transistors 252 and 254 in the driver stage 250 together contain 72 devices used in parallel (36 on each differential side), and the transistors 292 and 294 in the output stage 290 together contain 220 devices used in parallel (110 on each differential side) to achieve the desired output power. A similar number of devices is preferably used for the other pairs of transistors (transistor pairs 216-218; 256-258; 296-298) in each respective stage.

In operation, since all the amplifier stages 210, 250 and 290 are differential, the AC current through the ground bonds is ideally zero. This effectively nullifies the inductance of the ground bondwires, enabling each amplifier stage to have reasonable power gain without the low-inductance custom packaging or backside ground-contacts that are typically found in higher-performance GaAs RF power amplifiers. This approach typically requires the external balun 86 to drive the antenna 60 with a single-ended signal, and can result in appreciable insertion loss through the balun 86 of about 0.5 to 1 dB, thus resulting in the need for a higher power target from the power amplifier 205. Nevertheless, it has been determined that the advantages of an integrated power amplifier greatly outweigh the potential disadvantages of complying with the above-mentioned requirements.

In the above discussion, the reference to class-A operation means that the quiescent current in each of the amplifier stages 210, 250 and 290 is set high enough such that the stage's transistors are always conducting current throughout the AC swing. The maximum theoretical drain efficiency of a power amplifier operating in this mode is 50% (2 mW of DC power is required for every 1 mW delivered to the load). Class-A amplifiers also dissipate a constant DC power regardless of output signal amplitude, resulting in much lower efficiency when the signal envelope is below maximum levels. Despite obvious disadvantages in power dissipation, the class-A methodology is used to maximize gain and linearity performance.
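The efficiency penalty of this choice, quantified in the next paragraph, follows directly from the ideal class-A assumptions just stated; as a worked check (illustrative arithmetic only): the DC power draw is fixed at twice the peak output power, so backing the average output off by the peak-to-average ratio (PAR, expressed in dB) scales the drain efficiency as

\[ \eta = \eta_{\max}\,10^{-\mathrm{PAR}/10} = 0.5 \times 10^{-5/10} \approx 0.16, \]

that is, roughly 16% for a 5 dB peak-to-average ratio.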
Higher efficiency modes of operation, while dissipating less DC current, result in the performance of the system being set by stages that are more difficult to tune and debug within an integrated amplifier chain.

Further, the output power delivered by the power amplifier 205 is preferably a linear function of the input voltage amplitude in the range of operation. Information is contained in the amplitude of the transmitted signal, and distortion of the amplitude levels through the power amplifier will cause degradation in link quality. The degree to which the signal envelope varies can be characterized by the "peak-to-average ratio", which is the ratio of the maximum signal amplitude to the average signal amplitude, and is usually expressed in dB. The implication of linear operation of the power amplifier is that the peak-to-average ratio must be subtracted from the maximum linear power capability of the power amplifier to determine the average power that is achievable. In a preferred embodiment, the maximum linear power of the power amplifier is 22.5 dBm, and the expected peak-to-average ratio is 5 dB, so the average power available for transmission is 17.5 dBm. Furthermore, due to class-A operation, the power amplifier always draws enough DC current to be able to transmit at peak power, so as the peak-to-average ratio is increased, the operating efficiency of the power amplifier decreases. With a peak-to-average ratio of 5 dB, the maximum drain efficiency of the ideal class-A amplifier is reduced from 50% to 16%.

Another aspect of the present invention is the inclusion of a 3-bit register in each of the bias blocks 310 and 320 in order to vary the quiescent current of the driver stage 250 and the output stage 290. The bias blocks 310 and 320 each contain bias configurations that allow one of eight different biases to be used in order to vary the gate bias voltage, and thus the quiescent current, of the driver stage 250 and the output stage 290, respectively. Each of these bias blocks 310 and 320 is implemented using a current mirror in which separate branches can be switched and summed together, depending upon the state of the 3-bit registers.

While the above integrated power amplifier design provides advantages not found in current integrated power amplifier designs, other considerations can be taken into account to improve performance even further. One consideration is the thermal characteristics of the integrated circuit transceiver. When the power amplifier is integrated with other transceiver components, the thermal effects of the power amplifier affect the other circuit components much more severely than if the power amplifier were not integrated. Further, since according to the present invention a power amplifier with linear characteristics is desired, dissipating the thermal energy built up by the power amplifier can assist in allowing such linear characteristics over a variety of conditions.

In an integrated transceiver 100, which contains pins or terminals for establishing connections to external circuit components for the various signals received and sent to the receiver 150 and the transmitter 200, including the power amplifier portion 205, the required number of such pins or terminals is large, in excess of 50 in the preferred embodiment. Accordingly, if conventional design philosophy were used, one would use an integrated circuit packaging technique that allows for the pins to be disposed along the entire underside of the integrated circuit package.
While such a design could provide the pin count required, it has been found difficult to satisfy the thermal concerns that result from the build-up of thermal energy in the power amplifier.

Accordingly, as shown in Fig. 5A, the present invention uses a leadless plastic chip carrier semiconductor package 400 that contains terminals 410 along the periphery of the package 400. Further, as shown in Fig. 5B, the package 400 has a metal ground plane 420 along the underside of the package 400. As shown in Fig. 5A, component connections 101 to electrical ground within the integrated circuit chip 100 are electrically grounded to this ground plane.

Fig. 5A illustrates the location of the various components of the integrated transceiver chip, including components that make up the receiver 150 and the transmitter 200. With respect to the power amplifier 205 that is part of the transmitter 200, it is positioned at the edge of the integrated circuit so that the output of the output stage 290 of the power amplifier 205 is within about 500 um from the edge, thus allowing for short bond wires 101 to connect the power amplifier ground to the ground plane, as well as allowing the wires 102 that connect to the terminals 410 to be as short as possible. With the layout of the power amplifier 205 taking into account the location of the ground plane, thermal energy from the power amplifier can be dissipated in the ground plane.

Fig. 5C shows a top view of the location of the various components of the power amplifier 205. The input stage 210, the level shift stage 230, the driver stage 250, the level shift stage 270, and the output stage 290 are configured so that the output stage 290, and specifically the outputs of the output stage 290, are nearest to the RF bond pads 112 and the standard bond pads 110.

Also, the outputs of the transistors in the output stage 290 of the power amplifier 205 are integrated into the bond pads of the integrated circuit so as to reduce the parasitic resistance in series with the output transistors. As shown in Figs. 6A and 6B, in order to reduce the parasitic capacitance associated with a standard bond pad 110, the metal1 and metal2 layers are not used in the RF bond pads 112, while the metal1 and metal2 layers are used in standard bond pads 110. That is, only metal3, metal4, and metal5 are present underneath the passivation opening of bond pads 112. To further reduce the resistive loss associated with the parasitic capacitance, a silicided p+ diffusion shield 120 is used beneath the metal3 of the bond pads 112.

In another embodiment of an integrated power amplifier 205A, shown in Fig. 7, the cascode structure of the various amplifier stages is retained, particularly the usage of a transistor 12 having a high transconductance along with a transistor 14 having a high breakdown voltage, as described with reference to Fig. 1. In this embodiment, however, the level shift stages described above with respect to Figs. 3 and 4 are omitted, and the output of one gain stage is directly coupled to the input of the next gain stage. As shown in Fig. 7, a first gain stage 510 is directly connected to a second gain stage 530.

The first gain stage 510, unlike the gain stage 210 of Fig. 3, is connected to the Vdd voltage (such as 2.5 volts) rather than any voltage that exceeds Vdd (such as 3.3 volts). Thus, this first gain stage 510 comprises NMOS transistors 512 and 514 having a common source, which is connected to a current source 580. Each also inputs at its respective gate one of the previously upconverted fully differential signals.
NMOS transistors 516 and 518 each have their source connected to the drain of transistors 512 and 514, respectively, and are tied to an input gate voltage that is the charge pump voltage higher than Vdd, such as 3.3 volts. The drains of transistors 516 and 518 form the output of the first gain stage 510. Disposed between the Vdd voltage source and the drain of each of transistors 516 and 518 are inductors 520 and 522, respectively, which will typically be in the range of 0.5 to 5 nanohenries.

The second gain stage 530 is directly connected to the first gain stage 510. The second gain stage 530 includes NMOS transistors 532 and 534 having a common source, which is connected to a current source 590. Each also inputs at its respective gate one of the previously upconverted fully differential signals that have been amplified in the first gain stage 510. NMOS transistors 536 and 538 each have their source connected to the drain of transistors 532 and 534, respectively, and are tied to an input gate voltage that is the charge pump voltage higher than Vdd, such as 3.3 volts. The drains of transistors 536 and 538 form the output from the second gain stage 530. Disposed between the charge pump voltage source that is higher than Vdd, such as 3.3 volts, and the drain of each of transistors 536 and 538 are inductors 540 and 542, respectively, which will typically be in the range of 0.5 to 5 nanohenries.

Thus, in this embodiment of Fig. 7, savings in terms of area can be achieved since the on-chip level shift capacitors that require a large amount of area are not needed. This is obtained, however, at the expense of an output voltage that can swing only a lesser amount than in the topology of amplifier 205. In this design, the output swing cannot go below 2.5 V without compromising the signal linearity. This is because the supply voltage at the input gain stage 510 is lower and the gate voltage of the transistors such as transistors 512 and 514 is limited to be near the supply voltage of stage 510, since in this embodiment the DC component of the input signal is between about 2.0 and 2.5 volts. In the embodiment of amplifier 205, the use of a level shifter allows the input signal to be at a lower voltage of between about 0.8 and 1.5 V.

While the present invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosure, and it will be appreciated that in some instances some features of the invention will be employed without a corresponding use of other features without departing from the spirit and scope of the invention as set forth in the appended claims.

ALTERNATIVE ASPECTS

Alternative aspects are set out in the following clauses:

1.
An integrated circuit transistor structure for amplifying a radio frequency signal in a circuit having a reference DC voltage to obtain an amplified radio frequency signal at an output that has a load associated therewith, comprising:
a first NMOS transistor having a source connected to ground and a first gate for receiving the input radio frequency signal, wherein the first gate is disposed above a first insulator and the first NMOS transistor having a first transconductance and a first breakdown voltage associated therewith; and
a second NMOS transistor having a source connected to the drain of the first NMOS transistor, a gate connected to the reference DC voltage, and a drain that provides the output for the amplified radio signal, the load being disposed between the reference DC voltage and the drain of the second NMOS transistor, wherein the second gate is disposed above a second insulator, the second NMOS transistor having a second transconductance and a second breakdown voltage associated therewith.

2. A transistor structure according to clause 1 wherein the second insulator is thicker than the first insulator so that the first transconductance is greater than the second transconductance.

3. A transistor structure according to clause 2 wherein the second breakdown voltage is greater than the first breakdown voltage.

4. A transistor structure according to clause 1 wherein the second breakdown voltage is greater than the first breakdown voltage.

5. A transistor structure according to clause 1 wherein the second insulator is substantially the same thickness as the first insulator.

6. A transistor structure according to clause 1 wherein the integrated circuit transistor structure is disposed within a semiconductor chip package that contains a metal ground plane, and wherein each of the first and second NMOS transistors has a portion that is electrically connected to the ground plane.

7. A transistor structure according to clause 6 wherein the electrical connection to the ground plane includes an electrical connection through a bond pad.

8. A transistor structure according to clause 6 wherein the output for the amplified radio signal is within 500 um of an edge of the integrated circuit.

9. A transistor structure according to clause 8 wherein the output for the amplified radio signal is connected to a terminal on a semiconductor chip package through a radio signal bond pad, the radio signal bond pad including less than all of a plurality of metal layers capable of being associated therewith.

10. A transistor structure according to clause 9 wherein the radio signal bond pad includes a diffusion layer in the substrate disposed therebelow.

11. A transistor structure according to clause 9 wherein the output for the amplified radio signal is not connected through the bottom two electrical layers on the radio signal bond pad, the radio signal bond pad being capable of having five layers.

12.
An apparatus for amplifying a differential radio frequency signal comprising:
an integrated circuit chip, the integrated circuit chip including:
a first differential amplification stage including first cascoded MOS transistors that receive the differential radio frequency signal and produce a first stage amplified differential radio frequency signal;
a first level shift stage including first blocking capacitors and first shunt inductors that allow for transfer of the first stage amplified differential radio frequency signal therethrough; and
a second differential driving stage including second cascoded MOS transistors that receive the first stage amplified differential radio frequency signal from the first level shift stage and produce a second stage amplified differential radio frequency signal.

13. An apparatus according to clause 12 wherein each driving stage of the second differential driving stage comprises:
a first NMOS transistor having a source connected to ground and a first gate for receiving the input radio frequency signal, wherein the first gate is disposed above a first insulator and the first NMOS transistor having a first transconductance and a first breakdown voltage associated therewith; and
a second NMOS transistor having a source connected to the drain of the first NMOS transistor, a gate connected to the reference DC voltage, and a drain that provides the output for the amplified radio signal, the load being disposed between the reference DC voltage and the drain of the second NMOS transistor, wherein the second gate is disposed above a second insulator, the second NMOS transistor having a second transconductance and a second breakdown voltage associated therewith.

14. An apparatus according to clause 13 wherein the second insulator is thicker than the first insulator so that the first transconductance is greater than the second transconductance.

15. An apparatus according to clause 14 wherein the second breakdown voltage is greater than the first breakdown voltage.

16. An apparatus according to clause 13 wherein the second breakdown voltage is greater than the first breakdown voltage.

17. An apparatus according to clause 13 wherein the second insulator is substantially the same thickness as the first insulator.

18. An apparatus according to clause 12 further including:
a second level shift stage including second blocking capacitors and second shunt inductors that allow for transfer of the second stage amplified differential radio frequency signal therethrough; and
a third differential stage including third cascoded MOS transistors that receive the second stage amplified differential radio frequency signal from the second level shift stage and produce a third stage amplified differential radio frequency signal.

19.
An apparatus according to clause 18 wherein each driving stage of the third differential stage comprises:
a first NMOS transistor having a source connected to ground and a first gate for receiving the input radio frequency signal, wherein the first gate is disposed above a first insulator and the first NMOS transistor having a first transconductance and a first breakdown voltage associated therewith; and
a second NMOS transistor having a source connected to the drain of the first NMOS transistor, a gate connected to the reference DC voltage, and a drain that provides the output for the amplified radio signal, the load being disposed between the reference DC voltage and the drain of the second NMOS transistor, wherein the second gate is disposed above a second insulator, the second NMOS transistor having a second transconductance and a second breakdown voltage associated therewith.

20. An apparatus according to clause 19 wherein the second insulator is thicker than the first insulator so that the first transconductance is greater than the second transconductance.

21. An apparatus according to clause 20 wherein the second breakdown voltage is greater than the first breakdown voltage.

22. An apparatus according to clause 19 wherein the second breakdown voltage is greater than the first breakdown voltage.

23. An apparatus according to clause 19 wherein the second insulator is substantially the same thickness as the first insulator.

24. An integrated circuit according to clause 13 wherein the integrated circuit chip is packaged in a semiconductor package, the semiconductor package containing terminals around only the periphery of one side of the package, and containing a metal ground plane on the one side of the package within the periphery, the differential input amplification stage, the differential driver amplification stage, and the differential output stage being disposed above the metal ground plane, the metal ground plane thereby providing a heat sink for thermal energy generated by the differential input amplification stage, the differential driver amplification stage, and the differential output stage.

25. An apparatus for amplifying a differential radio frequency signal comprising:
an integrated circuit chip, the integrated circuit chip including:
a differential first amplification stage including first cascoded MOS transistors that receive the differential radio frequency signal and produce a first stage amplified differential radio frequency signal, the differential first amplification stage being supplied with a predetermined first supply voltage; and
a differential second amplification stage including second cascoded MOS transistors that receive the first stage amplified differential radio frequency signal from the first amplification stage and produce a second stage amplified differential radio frequency signal, the differential second amplification stage being supplied with a predetermined second supply voltage that is greater than the first supply voltage.
A mobile computing device with multiple modes, for example, wireless communication and personal computing, has an application processor and a communication processor. In the computing mode, the application processor is the master processor. In the communication mode, the application processor is deenergized to conserve battery power, with the communication processor functioning as the master processor by accessing the device's peripheral bus using the memory interface of the communication processor.
CLAIMS

WHAT IS CLAIMED IS:

1. A multi mode mobile device, comprising:
a housing holding a battery;
at least one communication processor configured to facilitate wireless communication using the device, the communication processor being supported on the housing and being powered at least in part by the battery; and
at least one application processor configured to execute at least one application, the application processor being supported on the housing and being powered at least in part by the battery,
wherein the device has at least a communication mode and a computing mode, and when the device is in the communication mode, a core of the application processor is not energized.

2. The device of Claim 1, wherein the communication processor is associated with a memory bus communicating with one or more memory devices and the application processor is associated with a processor local bus (PLB), and the memory bus communicates with the PLB.

3. The device of Claim 2, wherein the memory bus communicates with a PLB bridge processor to facilitate the communication processor functioning as a master of the PLB.

4. The device of Claim 3, wherein the communication processor accesses peripheral hardware associated with the PLB.

5. The device of Claim 1, further comprising a PLB bridge processor interposed between the memory bus and PLB.

6. The device of Claim 1, wherein the application processor is energized when the device is in the computing mode.

7. The device of Claim 2, further comprising at least one peripheral hardware component connected to the PLB.

8. The device of Claim 7, wherein the peripheral hardware component is at least one of: a touch panel controller, and a storage interface.

9. A multi mode mobile device, comprising:
a housing holding a battery;
at least one communication processor configured to facilitate wireless communication using the device, the communication processor being supported on the housing and being powered at least in part by the battery; and
at least one application processor configured to execute at least one application, the application processor being supported on the housing and being powered at least in part by the battery,
wherein the device has at least a communication mode and a computing mode, and when the device is in the communication mode, the communication processor functions as a master processor.

10. The device of Claim 9, wherein when the communication processor functions as a master processor, it at least controls at least one peripheral hardware component on the device.

11. The device of Claim 9, wherein when the device is in the communication mode, the application processor is not energized.

12. The device of Claim 9, wherein the communication processor is associated with a memory bus communicating with one or more memory devices and the application processor is associated with a processor local bus (PLB), and the memory bus communicates with the PLB.

13. The device of Claim 12, wherein the memory bus communicates with the PLB using a PLB bridge processor to facilitate the communication processor functioning as a master of the PLB.

14. The device of Claim 13, wherein the communication processor accesses peripheral hardware associated with the PLB.

15. The device of Claim 14, further comprising a PLB bridge processor interposed between the memory bus and PLB.

16. The device of Claim 11, wherein the application processor is energized when the device is in the computing mode.

17.
The device of Claim 10, wherein the peripheral hardware component communicates with the PLB.

18. A method for effecting mobile computing, comprising:
supporting an application processor and a communication processor in a housing; and
selectively establishing one of the processors as a master processor based on a mode of operation.

19. The method of Claim 18, wherein a processor is a master processor at least in part based on its control of at least one peripheral hardware component supported on the housing.

20. The method of Claim 19, comprising undertaking wireless communication using the communication processor in a communication mode with the application processor being deenergized and the communication processor being a master processor.

21. The method of Claim 19, comprising executing at least one application using the application processor in an application mode, wherein the application processor is a master processor and the communication processor is a peripheral processor.

22. The method of Claim 18, further comprising energizing at least one processor using at least one battery.

23. The method of Claim 18, comprising establishing communication between the communication processor and a bus of the application processor using a memory interface of the communication processor.

24. The method of Claim 18, further comprising disposing a PLB bridge processor on the housing to undertake the act of selectively establishing.

25. A system for effecting mobile computing, comprising:
a housing;
application processing means for executing logic, the application processing means being mounted on the housing;
communication processing means for executing logic, the communication processing means being mounted on the housing; and
means for selectively establishing one of the processing means as a master processor based on a mode of operation.

26. The system of Claim 25, wherein a processing means is a master processor at least in part based on its control of at least one peripheral hardware component supported on the housing.

27. The system of Claim 26, comprising means for undertaking wireless communication using the communication processing means in a communication mode with the application processing means being deenergized and the communication processing means being a master processor.

28. The system of Claim 26, comprising means for executing at least one application using the application processing means in an application mode, wherein the application processing means is a master processor and the communication processing means is a peripheral processor.

29. The system of Claim 25, further comprising means for energizing at least one processor using at least one battery.

30. The system of Claim 25, comprising means for establishing communication between the communication processing means and a bus of the application processing means using a memory interface of the communication processing means.
LOW POWER DUAL PROCESSOR ARCHITECTURE FOR MULTI MODE DEVICES

I. Field of the Invention

[0001] The present invention relates generally to multi mode devices such as wireless telephones that can also undertake ancillary computer functions.

II. Background of the Invention

[0002] Multi mode mobile computing devices have been proposed which have multiple capabilities. For example, mobile telephones might be expected to undertake personal computing tasks now undertaken by notebook computers, in addition to their communication functions. As recognized herein, multiple processors might be required to support multiple modes of operation. As also recognized herein, using the same internal operation independent of the operational mode means that a main processor typically functions as a master device that controls peripheral devices and that treats the other device processors (e.g., a telephone modem processor) as peripherals. Such a design requires that the main processor be active in all modes; for example, the main processor needs to be active in the telephone mode, in which the modem processor is active, simply to provide the modem processor access to device hardware (e.g., a data display, non volatile storage, audio input/output) that is controlled by the main processor. In other words, the main processor is here simply mediating on behalf of the modem processor, because the hardware architecture does not allow the modem processor direct access to some of the hardware resources in the device.

[0004] As understood herein, it would be advantageous to minimize, when possible, the use of hardware intermediaries (such as the main processor in the example above) to allow power efficient execution of tasks, to conserve the battery. Moreover, by use of the methods described in this invention, it may be possible to power off processors that do not need to serve such an intermediary role, further extending device battery life. Furthermore, requiring a single main processor to always function as a device master means that software and software changes that might apply only to a modem processor must be coordinated or otherwise integrated with the main processor as well, complicating software management. In particular, the large base of software presently available for cellular phone type devices, which functions on the modem processor, cannot be used unchanged in a device in which the modem processor is a peripheral to a main application processor. The present invention can allow the reuse of this large legacy base of application software by architecting the hardware so that it appears to the legacy software as it would in current single processor devices.

SUMMARY OF THE INVENTION

[0005] A multi mode mobile device includes a housing holding a battery and a communication processor that may be embodied in a module configured to facilitate wireless communication using the device. The communication processor module is supported on the housing and is powered by the battery. An application processor, which may be embodied in a module and is configured to execute applications, is also supported on the housing and powered by the battery. A module in this description means a collection of hardware, assembled of discrete components or within an integrated circuit package, that performs a function through coordinated use of its hardware components.
In particular, a communication processor module consists of a communications processor core in addition to other hardware resources that function as peripherals of the communications processor (for example, Qualcomm's MSM 3300, 5100, and 5500, which have an ARM processor core, are in the current view communications processor modules). Similarly, in the current view an application processor module consists of an application processor core together with assisting hardware (for example, Qualcomm's MSP1000 and IBM's 405GP, which have ARM and PowerPC processor cores respectively, are examples of application processor modules). In accordance with this aspect, the device has a communication mode and a computing mode, and when the device is in the communication mode, a core of the application processor is not energized. The application processor core is energized, however, when the device is in the computing mode.

[0006] Preferably, the communication processor module is associated with a memory bus that communicates with one or more memory devices and the application processor module is associated with a processor local bus (PLB). The preferred memory bus communicates with the PLB through hardware interfaces between the communication processor module and the application processor module. More specifically, the preferred memory bus communicates with a PLB bridge processor to facilitate the communication processor functioning as a master of the PLB. The communication processor can thereby access peripheral hardware associated with the PLB.

In another aspect, a multi mode mobile device includes a housing holding a battery and a communication processor configured to facilitate wireless communication using the device. The communication processor is supported on the housing and is powered by the battery. An application processor is configured to execute applications, and the application processor is supported on the housing and powered by the battery. The device has at least a communication mode and a computing mode, and when the device is in the communication mode, the communication processor functions as a master processor.

[0008] In still another aspect, a method for effecting mobile computing includes supporting an application processor and a communication processor in a housing. The method also includes selectively establishing one of the processors as a master processor based on a mode of operation.

The details of the present invention, both as to its structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:

While the description of the invention is presented in the context of distinct communication and application processor modules, it is recognized that this is only done for clarity of exposition. In particular, it is envisaged that the communication and application processor modules could be realized on the same integrated circuit module, whether this be through a multi-chip-module packaging technique or through the design of the entire circuit as a single chip with both (application and communications) processor cores on it.

BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES

[0010] Figure 1 is a schematic diagram of a preferred non-limiting multi mode mobile computing device;

[0011] Figure 2 is a block diagram of a preferred non-limiting implementation of the present multi mode mobile device architecture; and

[0012] Figure 3 is a flow chart illustrating the logic of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0013] Referring initially to Figure 1, a mobile multi mode computing device is shown, generally designated 10. In an exemplary non-limiting embodiment, the device 10 can be used to undertake wireless voice and/or data communication as well as personal computing application-based functions, such as but not limited to word processing. In any case, the device 10 includes a preferably lightweight portable housing 12 that holds the components discussed herein. A battery 14 can be engaged with the housing 12 to provide a source of power to the components disclosed below. The battery 14 preferably is rechargeable in accordance with portable computing principles known in the art, but when the device 10 is not connected to an electrical outlet, the battery 14 is the sole source of power to the components of the device 10.

A mode selector 16 can be provided on the housing 12. The mode selector 16 can be a user-manipulable input device to select the operational mode of the device 10, e.g., communication or computing. The mode selector 16 can be implemented in any number of ways, e.g., it can be a switch, or a portion of a touchscreen display that is used in conjunction with appropriate software to select the mode, or other equivalent input structure. Or, the mode selector 16 can be automatically implemented by software responsive to the user's activities, e.g., if the user starts to dial a number the mode selector can be software that automatically configures the device 10 in the communication mode.

Now referring to Figure 2, the device 10 includes a communication processor 18, preferably a type of processor referred to as a mobile system modem (MSM) that can access synchronous dynamic random access memory (SDRAM) 20 over, e.g., a 16/32 bit bus 22, and that can be implemented in a communication processor module. Also, the communication processor 18 can access, using, for instance, a 16 bit memory interface bus 24, MSM flash memory 26 and MSM static random access memory (SRAM) 28. Communication-related applications, such as the present assignee's "BREW" applications, can be stored in one or more of the memories 20, 26, 28 for execution thereof by the communication processor 18. As also shown in Figure 2, the communication processor 18 accesses wireless communication circuitry 30 to effect wireless communication in accordance with means known in the art. In other words, the communication processor 18, associated memories 20, 26, and 28, and circuitry 30 establish a wireless voice and/or data communication portion, generally designated 32. In one non-limiting embodiment, the communication portion 32, also referred to as a "mobile station" ("MS"), is a mobile telephone-type device made by Kyocera, Samsung, or another manufacturer that uses Code Division Multiple Access (CDMA) principles and CDMA over-the-air (OTA) communication air interface protocols such as defined in, but not limited to, IS-95A, IS-95B, WCDMA, IS-2000, and others to communicate with wireless infrastructure, although the present invention applies to any wireless communication device.
For instance, the wireless communication systems to which the present invention can apply, in addition to those noted above, include GSM, Personal Communications Service (PCS) and cellular systems, such as the Analog Advanced Mobile Phone System (AMPS) and the following digital systems: CDMA, Time Division Multiple Access (TDMA), and hybrid systems that use both TDMA and CDMA technologies. A CDMA cellular system is described in the Telecommunications Industry Association/Electronic Industries Association (TIA/EIA) Standard IS-95. Combined AMPS and CDMA systems are described in TIA/EIA Standard IS-98. Other communications systems are described in the International Mobile Telecommunications System 2000/Universal Mobile Telecommunications Systems (IMT-2000/UMTS) standards, covering what are referred to as wideband CDMA (WCDMA), cdma2000 (such as the cdma2000 1x or 3x standards, for example) or TD-SCDMA.

Still referring to Figure 2, a main processor 34 that can be embodied in a module holds an application processor core 36, which in one non-limiting illustrative embodiment can be an IBM 405 LP processor or equivalent. While Figure 2 shows that the processors 18, 36 can be on separate chips from each other, it is to be appreciated that they can also be disposed on the same chip. The application processor core 36 accesses one or more software applications that can be stored in various memories to execute the applications. For example, the application processor core 36 can access an SRAM/Flash memory 38 over, e.g., a 16-bit memory bus 40, and it can also access an SDRAM memory 42 (where software applications typically will be preferentially stored) over a preferably 32-bit bus 44.

Figure 2 also shows that the application processor core 36 accesses a processor local bus (PLB) 46. In one non-limiting embodiment, the PLB bus 46 can be a 64-bit bus. Various supporting devices and peripherals are accessed by the application processor core 36 using the PLB 46 in accordance with principles known in the art. For example, the PLB 46 (and, hence, application processor core 36) can be connected to an SDRAM controller 48 for controlling the SDRAM memory 42. Also, the PLB 46 can communicate with a personal computer memory card interface architecture (PCMCIA) interface or other storage interface 50. Moreover, the PLB 46 (and, hence, application processor core 36) can be connected to a liquid crystal display (LCD) controller 52, which drives an LCD display that can be provided on the housing of the device 10.

[0022] In addition to the components discussed above, the application processor 34 which bears the application processor core 36 can also hold an on-chip peripheral bus (OPB) 54, which in one non-limiting embodiment can be a 32 bit bus. The OPB 54 is connected to the PLB 46 through a PLB/OPB bridge device 56. The bridge device 56 can translate 32 bit data to 64 bit data and vice versa. Various peripheral devices can communicate with the OPB 54. By way of non-limiting examples, a touch panel interface 58 can be connected to the OPB 54. Also, other storage interfaces 60 can be connected to the OPB 54. Further non-limiting examples of peripheral devices that can be connected to the OPB 54 include a USB, a UART, an interrupt controller (UIC), and an AC97 device.

[0023] In accordance with the present invention, the communication processor 18 can also communicate with the PLB 46 over its memory interface 24.
Specifically, as shown in Figure 2, in one exemplary embodiment the memory interface 24 of the communication processor 18 is connected to the PLB 46 by a PLB bridge processor 62. In one implementation, the PLB bridge processor 62 is implemented in hardware by a logic device, such as, e.g., a processor. In this way, the communication processor 18 can access the devices connected to the PLB 46. If desired, the functions of the PLB bridge processor 62 can be implemented by, e.g., a dedicated portion of the communication processor 18. Figure 3 shows the logic that is executed by the PLB bridge processor 62 to negotiate which processor 18, 36 controls the peripherals shown in Figure 2. At decision diamond 64 it is determined whether the device 10 is in the communication mode as indicated by, e.g., the mode selector 16 or other user activity discussed above. If not, meaning that the device 10 is in the computing mode, the logic flows to block 66, wherein the PLB bridge processor 62 designates the application processor core 36 to be the master processor in control of the PLB 46 and OPB 54. In this mode, the communication processor 18 can be treated by the application processor core 36 as a peripheral device. On the other hand, if the device 10 is in the communication mode, the logic moves from decision diamond 64 to block 68, wherein at least the application processor core 36 of the application processor 34 is deenergized. That is, in the communication mode, according to present principles the application processor core 36 is deenergized. Consequently, the communication processor 18 is assigned (by, e.g., the PLB bridge processor 62) the role of master processor at block 70, controlling the peripheral devices connected to the PLB 46 and OPB 54. While the particular LOW POWER DUAL PROCESSOR ARCHITECTURE FOR MULTI MODE DEVICES as herein shown and described in detail is fully capable of attaining the above-described objects of the invention, it is to be understood that it is the presently preferred embodiment of the present invention and is thus representative of the subject matter which is broadly contemplated by the present invention, that the scope of the present invention fully encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of the present invention is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more". All structural and functional equivalents to the elements of the above-described preferred embodiment that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present invention, for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. No claim element herein is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited as a "step" instead of an "act".
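The mode arbitration of Figure 3 reduces to a small piece of control logic. The following C sketch is offered only as an illustration of that logic under stated assumptions; the type names, function name, and fields are hypothetical and do not appear in the patent.

#include <stdbool.h>
#include <stdio.h>

enum device_mode { MODE_COMPUTING, MODE_COMMUNICATION };
enum bus_master { MASTER_APP_CORE, MASTER_COMM_PROC };

struct bus_arbiter {
    bool app_core_powered;   /* state of application processor core 36 */
    enum bus_master master;  /* current master of the PLB 46 / OPB 54  */
};

/* Decision diamond 64: pick a master based on the selected mode. */
static void arbitrate(struct bus_arbiter *a, enum device_mode mode)
{
    if (mode == MODE_COMMUNICATION) {
        a->app_core_powered = false;   /* block 68: de-energize the app core */
        a->master = MASTER_COMM_PROC;  /* block 70: comm processor is master */
    } else {
        a->app_core_powered = true;    /* computing mode                     */
        a->master = MASTER_APP_CORE;   /* block 66: app core is the master   */
    }
}

int main(void)
{
    struct bus_arbiter a;
    arbitrate(&a, MODE_COMMUNICATION);
    printf("master=%s, app core powered=%d\n",
           a.master == MASTER_COMM_PROC ? "comm" : "app", a.app_core_powered);
    return 0;
}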
A method of forming a capacitor includes forming first capacitor electrode material over a semiconductor substrate. A silicon nitride comprising layer is formed over the first capacitor electrode material. The semiconductor substrate with silicon nitride comprising layer is provided within a chamber. An oxygen comprising plasma is generated remote from the chamber. The remote plasma generated oxygen is fed to the semiconductor substrate within the chamber at a substrate temperature of no greater than 750°C effective to form a silicon oxide comprising layer over the silicon nitride comprising layer. After the feeding, a second capacitor electrode material is formed over the silicon oxide comprising layer. Methods of forming capacitor dielectric layers are also disclosed.
CLAIMS 1. A method of forming a capacitor comprising: forming first capacitor electrode material over a semiconductor substrate; forming a silicon nitride comprising layer over the first capacitor electrode material; providing the semiconductor substrate with silicon nitride comprising layer within a chamber; generating an oxygen comprising plasma remote from the chamber; feeding the remote plasma generated oxygen to the semiconductor substrate within the chamber at a substrate temperature of no greater than 750°C effective to form a silicon oxide comprising layer over the silicon nitride comprising layer; and after the feeding, forming second capacitor electrode material over the silicon oxide comprising layer. 2. The method of claim 1 wherein the substrate temperature during the feeding is no greater than 550°C. 3. The method of claim 1 wherein the substrate temperature during the feeding is no greater than 500°C. 4. The method of claim 1 wherein the chamber comprises a rapid thermal processing chamber. 5. The method of claim 1 wherein the feeding is for no longer than 1 minute. 6. The method of claim 1 wherein the feeding is for no longer than 30 seconds. 7. The method of claim 1 wherein the feeding is for no longer than 15 seconds. 8. The method of claim 1 wherein the chamber is essentially void of hydrogen during the feeding. 9. The method of claim 1 wherein the oxygen comprising plasma is at least in part derived from a gas selected from the group consisting of O2, O3, NyOx, and mixtures thereof. 10. The method of claim 9 wherein the oxygen comprising plasma is at least in part generated from an inert gas. 11. The method of claim 1 wherein the oxygen comprising plasma is at least in part derived from O2 and N2. 12. The method of claim 1 wherein the oxygen comprising plasma is at least in part derived from N2O and at least one of Ar and He. 13. The method of claim 1 wherein the oxygen comprising plasma is at least in part derived from N2O and at least one of Ar and He, the feeding being void of feeding of N2 but for N2 produced from dissociation of N2O in the remote generated plasma. 14. A method of forming a capacitor comprising: forming first capacitor electrode material over a semiconductor substrate; forming a silicon nitride comprising layer over the first capacitor electrode material; providing the semiconductor substrate with silicon nitride comprising layer within a chamber; generating an oxygen comprising plasma remote from the chamber; feeding the remote plasma generated oxygen to the semiconductor substrate within the chamber at a substrate temperature of no greater than 550°C and for no longer than 30 seconds effective to form a silicon oxide comprising layer over the silicon nitride comprising layer; and after the feeding, forming second capacitor electrode material over the silicon oxide comprising layer. 15. The method of claim 14 wherein the substrate temperature during the feeding is no greater than 500°C. 16. The method of claim 14 further comprising forming a silicon oxide comprising layer over the first capacitor electrode material prior to forming the silicon nitride comprising layer, the capacitor being formed to have a dielectric region consisting essentially of an ONO composite consisting essentially of said silicon oxide comprising layer and said silicon nitride comprising layer. 17. The method of claim 14 wherein the feeding is for no longer than 15 seconds. 18.
The method of claim 14 wherein the chamber is essentially void of hydrogen during the feeding. 19. The method of claim 14 wherein the oxygen comprising plasma is at least in part derived from a gas selected from the group consisting of O2, O3, NyOx, and mixtures thereof. 20. The method of claim 19 wherein the oxygen comprising plasma is at least in part generated from an inert gas. 21. The method of claim 14 wherein the oxygen comprising plasma is at least in part derived from O2 and N2. 22. The method of claim 14 wherein the oxygen comprising plasma is at least in part derived from N2O and at least one of Ar and He. 23. The method of claim 14 wherein the oxygen comprising plasma is at least in part derived from N2O and at least one of Ar and He, the feeding being void of feeding of N2 but for N2 produced from dissociation of N2O in the remote generated plasma. 24. A method of forming a capacitor comprising: forming first capacitor electrode material comprising silicon over a semiconductor substrate; forming a silicon nitride comprising layer over the first capacitor electrode material, the silicon nitride comprising layer comprising pinholes formed therein; providing the semiconductor substrate with silicon nitride comprising layer within a chamber; generating an oxygen comprising plasma remote from the chamber; feeding the remote plasma generated oxygen to the semiconductor substrate within the chamber at a substrate temperature of no greater than 550°C and for no longer than 30 seconds effective to form a silicon oxide comprising layer over the silicon nitride comprising layer and effective to fill said pinholes with silicon oxide, the chamber being essentially void of hydrogen during the feeding; and after the feeding, forming second capacitor electrode material over the silicon oxide comprising layer. 25. The method of claim 24 wherein the substrate temperature during the feeding is no greater than 500°C. 26. The method of claim 24 further comprising forming a silicon oxide comprising layer over the first capacitor electrode material prior to forming the silicon nitride comprising layer, the capacitor being formed to have a dielectric region consisting essentially of an ONO composite consisting essentially of said silicon oxide comprising layer and said silicon nitride comprising layer. 27. The method of claim 24 wherein the feeding is for no longer than 15 seconds. 28. The method of claim 24 wherein the oxygen comprising plasma is at least in part derived from a gas selected from the group consisting of O2, O3, NyOx, and mixtures thereof. 29. The method of claim 28 wherein the oxygen comprising plasma is at least in part generated from an inert gas. 30. The method of claim 24 wherein the oxygen comprising plasma is at least in part derived from O2 and N2. 31. The method of claim 24 wherein the oxygen comprising plasma is at least in part derived from N2O and at least one of Ar and He. 32. The method of claim 24 wherein the oxygen comprising plasma is at least in part derived from N2O and at least one of Ar and He, the feeding being void of feeding of N2 but for N2 produced from dissociation of N2O in the remote generated plasma. 33.
A method of forming a capacitor dielectric layer, comprising: forming a silicon nitride comprising layer over a substrate; providing the substrate with silicon nitride comprising layer within a chamber; generating an oxygen comprising plasma remote from the chamber; and feeding the remote plasma generated oxygen to the substrate within the chamber at a substrate temperature of no greater than 750°C effective to form a silicon oxide comprising layer over the silicon nitride comprising layer. 34. The method of claim 33 wherein the chamber comprises a rapid thermal processing chamber. 35. The method of claim 33 wherein the substrate temperature during the feeding is no greater than 550°C. 36. The method of claim 33 wherein the substrate temperature during the feeding is no greater than 500°C. 37. The method of claim 33 wherein the feeding is for no longer than 1 minute. 38. The method of claim 33 wherein the feeding is for no longer than 30 seconds. 39. The method of claim 33 wherein the feeding is for no longer than 15 seconds. 40. The method of claim 33 wherein the chamber is essentially void of hydrogen during the feeding. 41. The method of claim 33 wherein the oxygen comprising plasma is at least in part derived from a gas selected from the group consisting of O2, O3, NyOx, and mixtures thereof. 42. The method of claim 41 wherein the oxygen comprising plasma is at least in part generated from an inert gas. 43. The method of claim 33 wherein the oxygen comprising plasma is at least in part derived from O2 and N2. 44. The method of claim 33 wherein the oxygen comprising plasma is at least in part derived from N2O and at least one of Ar and He. 45. The method of claim 33 wherein the oxygen comprising plasma is at least in part derived from N2O and at least one of Ar and He, the feeding being void of feeding of N2 but for N2 produced from dissociation of N2O in the remote generated plasma. 46. The method of claim 33 further comprising forming a silicon oxide comprising layer over the first capacitor electrode material prior to forming the silicon nitride comprising layer, the capacitor being formed to have a dielectric region consisting essentially of an ONO composite consisting essentially of said silicon oxide comprising layer and said silicon nitride comprising layer.
DESCRIPTION METHODS OF FORMING CAPACITORS AND METHODS OF FORMING CAPACITOR DIELECTRIC LAYERS Technical Field This invention relates to methods of forming capacitors and to methods of forming capacitor dielectric layers. Background Art Capacitors are commonly used electrical components in semiconductor circuitry, for example in DRAM circuitry. As integrated circuitry density increases, there is a continuing challenge to maintain sufficiently high storage capacitance despite decreasing capacitor area. A typical capacitor is comprised of two conductive electrodes separated by a non-conducting dielectric region. The dielectric region is preferably comprised of one or more materials preferably having a high dielectric constant and low leakage current characteristics. Example materials include silicon compounds, such as SiO2 and Si3N4. Si3N4 is typically preferred due to its higher dielectric constant than that of SiO2. Numerous capacitor dielectric materials have been and are being developed in an effort to meet the increasingly stringent requirements associated with the production of smaller and smaller capacitor devices used in higher density integrated circuitry. Most of these materials do, however, add increased process complexity or cost over utilization of conventional SiO2 and Si3N4 capacitor dielectric materials. One dielectric region in use today includes a composite of silicon oxide and silicon nitride layers. Specifically, a first capacitor electrode is formed to have a silicon oxide comprising layer, typically silicon dioxide, of 6 to 10 Angstroms thereover. Such might be formed by deposition, or more typically by ambient or native oxide formation due to oxidation of the first electrode material (for example conductively doped polysilicon) when exposed to clean room ambient atmosphere. Thereafter, a silicon nitride layer is typically deposited by low pressure chemical vapor deposition. This can, however, undesirably produce very small pinholes in the silicon nitride layer, particularly with thin layers of less than 200 Angstroms, with the pinholes becoming particularly problematic in layers of less than or equal to about 75 Angstroms thick. These pinholes can undesirably reduce film density and result in undesired leakage current in operation. One technique for filling such pinholes is to wet oxidize the substrate, for example at 750°C-800°C and atmospheric pressure, feeding 5 slm H2 and 10 slm O2 for 15-60 minutes. Such forms silicon oxide material which fills the pinholes and forms a silicon oxide layer typically from about 5 Angstroms to about 25 Angstroms thick over the silicon nitride. It is generally desirable, however, to minimize the overall thermal exposure of the wafer/substrate upon which integrated circuitry is being fabricated. Exposure to 750°C-800°C for from 15 to 60 minutes is significant in this regard. Disclosure of the Invention The invention includes methods of forming capacitors and methods of forming capacitor dielectric layers. In one implementation, a method of forming a capacitor dielectric layer includes forming a silicon nitride comprising layer over a substrate. The substrate with silicon nitride comprising layer is provided within a chamber. An oxygen comprising plasma is generated remote from the chamber.
The remote plasma generated oxygen is fed to the substrate within the chamber at a substrate temperature of no greater than 750°C effective to form a silicon oxide comprising layer over the silicon nitride comprising layer. In one implementation, a method of forming a capacitor includes forming first capacitor electrode material comprising silicon over a semiconductor substrate. A silicon nitride comprising layer is formed over the first capacitor electrode material. The silicon nitride comprising layer has pinholes formed therein. The semiconductor substrate with silicon nitride comprising layer is provided within a chamber. An oxygen comprising plasma is generated remote from the chamber. The remote plasma generated oxygen is fed to the semiconductor substrate within the chamber at a substrate temperature of no greater than 550°C and for no longer than 30 seconds effective to form a silicon oxide comprising layer over the silicon nitride comprising layer and effective to fill said pinholes with silicon oxide. The chamber is essentially void of hydrogen during the feeding. After the feeding, a second capacitor electrode material is formed over the silicon oxide comprising layer. Brief Description of the Drawings Preferred embodiments of the invention are described below with reference to the following accompanying drawings. Fig. 1 is a diagrammatic sectional view of a semiconductor wafer fragment in process in accordance with an aspect of the invention. Fig. 2 is a diagrammatic view of processing equipment. Fig. 3 is a view of the Fig. 1 wafer fragment at a processing step subsequent to that shown by Fig. 1. Fig. 4 is a view of the Fig. 3 wafer fragment at a processing step subsequent to that shown by Fig. 3. Best Modes for Carrying Out the Invention and Disclosure of Invention This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8). Referring initially to Fig. 1, a wafer fragment in process in accordance with a method of forming a capacitor in accordance with an aspect of the invention is indicated generally with reference numeral 10. Such comprises a bulk monocrystalline silicon substrate 12. In the context of this document, the term "semiconductor substrate" or "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Also in the context of this document, the term "layer" includes both the plural and the singular unless otherwise indicated. An insulative layer 14, for example doped or undoped silicon dioxide, or silicon nitride, is formed over bulk substrate 12. A first capacitor electrode material 16 is formed over insulative layer 14. At this point, or preferably later in the process, electrode material 16 is ultimately patterned/provided into some desired first capacitor electrode shape. Exemplary materials for electrode 16 include silicon (for example polysilicon), metals, conductive metal oxides, and any other conductive layer.
An exemplary thickness in one preferred embodiment, and particularly where layer 16 comprises polysilicon, is 600 Angstroms. A first or inner silicon oxide comprising layer 18 is formed over, and "on" as shown, first capacitor electrode 16. An exemplary method for forming layer 18 is by oxidizing an outer portion of electrode material 16, for example by exposure to clean room ambient. This oxide layer is not preferred, but rather an effect of an exposed silicon or other oxidizable substrate. Typical thickness for layer 18 is less than or equal to 15 Angstroms. Layer 18 preferably consists essentially of silicon dioxide. A silicon nitride comprising layer 20 is formed over first capacitor electrode material 16 and in the illustrated preferred embodiment is formed on first or inner silicon oxide comprising layer 18. An exemplary thickness is from 30 Angstroms to 80 Angstroms. In but one embodiment, silicon nitride comprising layer 20 is formed to have a plurality of pinholes 22 formed therein. Such are shown in exaggerated width/size in the figures for clarity. In the illustrated embodiment, at least some pinholes extend completely through layer 20 to silicon oxide comprising layer 18. Silicon nitride comprising layer 20 might be deposited by any existing or yet-to-be developed technique, with chemical vapor deposition or plasma enhanced chemical vapor deposition being but examples. One exemplary process whereby a silicon nitride layer 20 is deposited by chemical vapor deposition includes NH3 at 300 sccm, dichlorosilane at 100 sccm, 750 mTorr, 600°C, and 60 minutes of processing. Referring to Fig. 2, semiconductor substrate 10 with silicon nitride comprising layer 20 is provided within a processing chamber 60. The processing chamber might be the same as or different from any chamber utilized to produce any of the Fig. 1 construction. An example preferred processing chamber is a rapid thermal processor, with the invention being reduced to practice using an Applied Materials RTP-XE chamber having a volume of 2700 cc. A suitable remote plasma generator 62 is diagrammatically shown and provided upstream of processing chamber 60. Any suitable remote plasma generation is contemplated, whether existing or yet-to-be-developed, with, by way of example only, microwave and RF plasma generation being examples. The invention was reduced to practice using an ASTEX F120160-02 power source with a microwave unit number Ax3151-1, available from ASTEX of Wilmington, MA. Fig. 2 depicts a suitable oxygen gas feed and an inert gas feed to the diagrammatic remote plasma generator 62. An oxygen comprising plasma is generated remote from chamber 60, for example in generator 62. The remote plasma generated oxygen is then fed to the semiconductor substrate within chamber 60, with the substrate temperature being no greater than 750°C, effective to form a silicon oxide comprising layer 24 (Fig. 3) over, preferably "on" as shown, silicon nitride comprising layer 20, and effective to fill pinholes 22 with silicon oxide. More preferably, the substrate temperature during the feeding is maintained at no greater than 550°C, and even more preferably no greater than 500°C. Further preferably, the feeding is for no longer than 1 minute, with a feeding of less than or equal to 30 seconds being more preferred, and a feeding of less than or equal to 15 seconds being most preferred.
In the most preferred embodiment, layers 18, 20 and 24 constitute a dielectric region 27 of the capacitor being formed, with such dielectric region consisting essentially of an ONO composite which consists essentially of such silicon oxide comprising-silicon nitride comprising-silicon oxide comprising layers. The oxygen comprising plasma is preferably derived, at least in part, from a gas selected from the group consisting of O2, O3, NyOx (with "x" and "y" being greater than zero) and mixtures thereof. Further as shown in the Fig. 2 embodiment, the oxygen comprising plasma is preferably generated, at least in part, from a suitable inert gas in addition to an oxygen feed gas. Examples include N2, Ar and He. One specific example includes an oxygen comprising plasma derived, at least in part, from feeding O2 and N2. Another exemplary embodiment in accordance with the above parameters includes forming an oxygen comprising plasma derived, at least in part, from N2O and at least one of Ar and He. Preferably in such latter example, the ultimate feeding of the remote generated plasma material to chamber 60 is void of feeding of N2 but for N2 which is inherently produced from the dissociation of N2O in the generation of the remote plasma. Further preferably, and contrary to the prior art described above, chamber 60 is essentially void of hydrogen during the feeding, thereby preventing any steam formation. In the context of this document, "essentially void" means below detectable levels. A specific example with respect to the Fig. 2 processing with the ASTEX and Applied Materials equipment includes a feed of 2000 sccm of O2 and 1000 sccm of N2. Pressure within the remote plasma unit was maintained at approximately 2.9 Torr with microwave power provided at 2000 Watts. Temperature of the wafer was 650°C, with pressure maintained at 2.9 Torr. By way of example only, and with respect to the above-identified reduction-to-practice equipment, pressure is preferably maintained within the system at from 1 to 8 Torr, power supplied to the remote plasma generator at from 500 Watts to 3000 Watts, and temperature maintained within the system at from 500°C to 750°C. Preferred flow ranges for each of O2 and N2 are from 0.5 slm to 5 slm. Temperatures as low as 350°C might be used with other equipment. The above-described preferred embodiments, in the fabrication of a capacitor dielectric region such as region 27, reduce the thermal exposure as compared to the prior art, in the most preferred embodiment, from in excess of 750°C to less than 550°C, and further with the preferred embodiment reducing the exposure time even at the reduced temperature to well less than 1 minute. Properties of the capacitor dielectric region formed as described above appear comparable to ONO layers produced by prior art methods. For example, exposure of the dielectric region nitride to a remote oxygen plasma at 2000 Watts for 10 seconds resulted in a capacitor dielectric region having capacitance and leakage approximately equivalent to a prior art control wet oxidation. Further, an improvement in breakdown voltage for the 2000 Watt, 10 second treatment indicates that an increased capacitance via reduced thickness might also be feasible. Referring to Fig. 4, and after the feeding, a second capacitor electrode material 40 is formed over silicon oxide comprising layer 24.
In the preferred and illustrated embodiment, second capacitor electrode material 40 is formed on (in contact with) oxide layer 24. An exemplary thickness for layer 40 is from 300 Angstroms to 600 Angstroms. Second electrode material 40 might comprise the same or different materials from first electrode material 16. In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents.
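For readers who find a concrete encoding helpful, the preferred remote plasma oxidation window recited above (1 to 8 Torr, 500 to 3000 Watts, 500°C to 750°C, 0.5 to 5 slm each of O2 and N2, feeding preferably no longer than 1 minute) can be captured in a few lines of C. This is a minimal sketch, not tool-control code; the struct and function names are hypothetical, and the values in main() simply restate the reduced-to-practice example (2.9 Torr, 2000 Watts, 650°C wafer temperature, 2 slm O2, 1 slm N2, 10 seconds).

#include <stdbool.h>
#include <stdio.h>

struct rpo_recipe {
    double pressure_torr;   /* preferred: 1 to 8 Torr                        */
    double power_watts;     /* preferred: 500 to 3000 W                      */
    double temp_c;          /* preferred: 500 to 750 C (350 C possible
                               with other equipment)                         */
    double o2_slm;          /* preferred: 0.5 to 5 slm                       */
    double n2_slm;          /* preferred: 0.5 to 5 slm                       */
    double time_s;          /* preferably <= 60 s; <= 15 s most preferred    */
};

/* Check a recipe against the preferred window recited in the description. */
static bool within_preferred_window(const struct rpo_recipe *r)
{
    return r->pressure_torr >= 1.0   && r->pressure_torr <= 8.0    &&
           r->power_watts   >= 500.0 && r->power_watts   <= 3000.0 &&
           r->temp_c        >= 500.0 && r->temp_c        <= 750.0  &&
           r->o2_slm        >= 0.5   && r->o2_slm        <= 5.0    &&
           r->n2_slm        >= 0.5   && r->n2_slm        <= 5.0    &&
           r->time_s        <= 60.0;
}

int main(void)
{
    /* Reduced-to-practice example from the description. */
    struct rpo_recipe r = { 2.9, 2000.0, 650.0, 2.0, 1.0, 10.0 };
    printf("within preferred window: %s\n",
           within_preferred_window(&r) ? "yes" : "no");
    return 0;
}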
Multi-channel audio alignment schemes are disclosed. One aspect of the present disclosure provides for accumulation of audio samples across multiple related audio channels at an audio source. Related audio channels indicate their interrelatedness, and when all the related audio channels have data to transmit, the source releases the data onto the time slots of the Serial Low-power Inter-chip Media Bus (SLIMbus), such that the related audio channels are within a given segment window of the time slot. This accumulation is repeated at the boundary of every segment window. Similarly, accumulation may be performed at the audio sink. Components within the audio sink may only read received data if status signals from all related sinks indicate that predefined thresholds have been reached. By providing such accumulation options, audio fidelity is maintained across multiple audio data channels.
What is claimed is: 1. A method of controlling an audio stream, comprising: providing first data associated with a first audio channel from an audio stream to a first port in a master audio source; providing second data associated with a second audio channel from the audio stream to a second port in the master audio source; at the first port, accumulating the first data in a first first in, first out (FIFO) register; at the second port, accumulating the second data in a second FIFO register; programming the first and second ports to operate at identical channel rates; and at a segment window boundary, draining the first and second FIFO registers, such that equivalent audio samples in the first audio channel and the second audio channel are able to be grouped and placed into a segment window corresponding to the segment window boundary in a time division format. 2. The method of claim 1, further comprising pushing the first data from the master audio source to a slave audio sink. 3. The method of claim 1, further comprising having the first data pulled from the master audio source by a slave audio sink. 4. The method of claim 1, further comprising detecting an error condition. 5. The method of claim 4, further comprising outputting null data from the master audio source after detecting the error condition. 6. The method of claim 1, further comprising detecting if the first data and the second data exceed a predefined watermark level. 7. The method of claim 6, further comprising skipping data output if either the first data or the second data do not exceed the predefined watermark level. 8. A method of controlling an audio stream, comprising: providing first data associated with a first audio channel from an audio stream to a first port in a slave audio source; providing second data associated with a second audio channel from the audio stream to a second port in the slave audio source; at the first port, accumulating the first data in a first first in, first out (FIFO) register; at the second port, accumulating the second data in a second FIFO register; programming the first and second ports to operate at identical channel rates; and at a segment window boundary, draining the first and second FIFO registers, such that equivalent audio samples in the first audio channel and the second audio channel are able to be grouped and placed into a segment window corresponding to the segment window boundary in a time division format. 9. The method of claim 8, further comprising pushing the first data from the slave audio source to a master audio sink. 10. The method of claim 8, further comprising having the first data pulled from the slave audio source by a master audio sink. 11. The method of claim 8, further comprising detecting an error condition. 12. The method of claim 11, further comprising outputting null data from the slave audio source after detecting the error condition. 13. The method of claim 8, further comprising detecting if the first data and the second data exceed a predefined watermark level. 14. The method of claim 13, further comprising skipping data output if either the first data or the second data do not exceed the predefined watermark level. 15.
A method of controlling an audio stream, comprising: receiving first data associated with a first audio channel from an audio bus at a first port in a master audio sink; receiving second data associated with a second audio channel from the audio bus at a second port in the master audio sink; at the first port, accumulating the first data in a first first in, first out (FIFO) register; at the second port, accumulating the second data in a second FIFO register; programming the first and second ports to operate at identical channel rates; comparing a first count at the first FIFO register to a first predefined threshold; setting a first ready signal if the first count exceeds the first predefined threshold; comparing a second count at the second FIFO register to a second predefined threshold; setting a second ready signal if the second count exceeds the second predefined threshold; and allowing contents of the first and second FIFO registers to be read if the first ready signal and the second ready signal are set. 16. A method of controlling an audio stream, comprising: receiving first data associated with a first audio channel from an audio bus at a first port in a slave audio sink; receiving second data associated with a second audio channel from the audio bus at a second port in the slave audio sink; at the first port, accumulating the first data in a first first in, first out (FIFO) register; at the second port, accumulating the second data in a second FIFO register; programming the first and second ports to operate at identical channel rates; comparing a first count at the first FIFO register to a first predefined threshold; setting a first ready signal if the first count exceeds the first predefined threshold; comparing a second count at the second FIFO register to a second predefined threshold; setting a second ready signal if the second count exceeds the second predefined threshold; and allowing contents of the first and second FIFO registers to be read if the first ready signal and the second ready signal are set.
MULTI-CHANNEL AUDIO ALIGNMENT SCHEMES PRIORITY CLAIM [0001] The present application claims priority to U.S. Patent Application Serial No. 14/541,577, filed on November 14, 2014, and entitled "MULTI-CHANNEL AUDIO ALIGNMENT SCHEMES," which is incorporated herein by reference in its entirety. BACKGROUND I. Field of the Disclosure [0002] The technology of the disclosure relates generally to the Serial Low-power Inter-chip Media Bus (SLIMbus) specification announced by MIPI® and particularly to managing multiple related audio channels using a SLIMbus. II. Background [0003] Electronic devices, such as mobile phones and computer tablets, have become common in contemporary society for supporting various everyday uses. These electronic devices each commonly include a microphone and speakers. Typical microphones and speakers used in electronic devices have analog interfaces, requiring dedicated two (2) port wiring to connect each device. However, electronic devices may include multiple audio devices, such as multiple microphones and/or speakers. Thus, it may be desired to allow for a microprocessor or other control device in such electronic devices to be able to communicate audio data to multiple audio devices over a common communications bus. Further, it may also be desired to provide a defined communications protocol for transporting digital data relating to audio channels to different audio devices in an electronic device over a common communications bus. [0004] The MIPI® Alliance has set forth the Serial Low-power Inter-chip Media Bus (SLIMbus™) standard, version 1.01 of which was released to adopters on December 3, 2008. Copies of this standard are available to members of the MIPI® Alliance at www.mipi.org/specifications/serial-low-power-inter-chip-media-bus-slimbussm-specification. SLIMbus is designed as an interface for audio data in the mobile terminal industry, allowing communication between modems, application processors, and standalone codec chips. SLIMbus is a time division multiplexed (TDM) bus with contiguous time slots carrying samples of a given audio channel. More than one channel can be defined on the bus at the same time as bandwidth permits. SLIMbus has been generally adopted by many within the mobile terminal industry. [0005] When more than one channel is provided in a computing device that uses a SLIMbus, the SLIMbus standard does not address how these data channels can be aligned at the destination side so as to provide optimal audio fidelity. Accordingly, the SLIMbus standard may be improved by providing related channel alignment with corresponding increases in audio fidelity. SUMMARY OF THE DISCLOSURE [0006] Aspects disclosed in the detailed description include multi-channel audio alignment schemes. In particular, aspects of the present disclosure provide for accumulation of audio samples across multiple related audio channels at an audio source. Related audio channels indicate their interrelatedness, and when all the related audio channels have data to transmit, the source releases the data onto the time slots of the Serial Low-power Inter-chip Media Bus (SLIMbus), such that the related audio channels are within a given segment window of the time slot. This accumulation is repeated at the boundary of every segment window. Similarly, accumulation may be performed at the audio sink. Components within the audio sink may only read received data if status signals from all related sinks indicate that predefined thresholds have been reached.
By providing such accumulation options, audio fidelity is maintained across multiple audio data channels. [0007] In this regard, in one aspect, a method of controlling an audio stream is defined. The method comprises providing first data associated with a first audio channel from an audio stream to a first port in an audio source. The method also comprises providing second data associated with a second audio channel from the audio stream to a second port in the audio source. The method further comprises, at the first port, accumulating the first data in a first first in, first out (FIFO) register. The method also comprises, at the second port, accumulating the second data in a second FIFO register and programming the first and second ports to operate at identical channel rates. The method further comprises, at a segment window boundary, draining the first and second FIFO registers, such that equivalent audio samples in the first audio channel and the second audio channel are able to be grouped and placed into a segment window corresponding to the segment window boundary in a time division format. [0008] In another aspect, a method of controlling an audio stream is defined. The method comprises providing first data associated with a first audio channel from an audio stream to a first port in a slave audio source. The method also comprises providing second data associated with a second audio channel from the audio stream to a second port in the slave audio source. The method further comprises, at the first port, accumulating the first data in a first FIFO register and, at the second port, accumulating the second data in a second FIFO register. The method further comprises programming the first and second ports to operate at identical channel rates. The method also comprises, at a segment window boundary, draining the first and second FIFO registers, such that equivalent audio samples in the first audio channel and the second audio channel are able to be grouped and placed into a segment window corresponding to the segment window boundary in a time division format. [0009] In another aspect, a method of controlling an audio stream is defined. The method comprises receiving first data associated with a first audio channel from an audio bus at a first port in a master audio sink. The method also comprises receiving second data associated with a second audio channel from the audio bus at a second port in the master audio sink. The method further comprises, at the first port, accumulating the first data in a first FIFO register. The method also comprises, at the second port, accumulating the second data in a second FIFO register. The method also comprises programming the first and second ports to operate at identical channel rates. The method further comprises comparing a first count at the first FIFO register to a first predefined threshold. The method comprises setting a first ready signal if the first count exceeds the first predefined threshold. The method also comprises comparing a second count at the second FIFO register to a second predefined threshold. The method further comprises setting a second ready signal if the second count exceeds the second predefined threshold. The method also comprises allowing contents of the first and second FIFO registers to be read if the first ready signal and the second ready signal are set. [0010] In another aspect, a method of controlling an audio stream is disclosed.
The method comprises receiving first data associated with a first audio channel from an audio bus at a first port in a slave audio sink. The method also comprises receiving second data associated with a second audio channel from the audio bus at a second port in the slave audio sink. The method further comprises, at the first port, accumulating the first data in a first FIFO register. The method also comprises, at the second port, accumulating the second data in a second FIFO register. The method also comprises programming the first and second ports to operate at identical channel rates. The method further comprises comparing a first count at the first FIFO register to a first predefined threshold. The method comprises setting a first ready signal if the first count exceeds the first predefined threshold. The method also comprises comparing a second count at the second FIFO register to a second predefined threshold. The method further comprises setting a second ready signal if the second count exceeds the second predefined threshold. The method also comprises allowing contents of the first and second FIFO registers to be read if the first ready signal and the second ready signal are set. BRIEF DESCRIPTION OF THE FIGURES [0011] Figure 1 is a block diagram of an exemplary mobile terminal with audio elements; [0012] Figure 2 is a block diagram of an exemplary mobile terminal driving an external audio system; [0013] Figure 3 is a simplified diagram of a SLIMbus with associated components; [0014] Figure 4 is a simplified block diagram of ports within SLIMbus components and a SLIMbus extending between two components; [0015] Figure 5 is a simplified timing diagram of how related audio channels are provided within a single segment window on the SLIMbus; [0016] Figure 6 is a simplified block diagram of the elements within an audio source component according to an exemplary aspect of the present disclosure; [0017] Figure 7 is a simplified block diagram of the elements within an audio sink component according to an exemplary aspect of the present disclosure; [0018] Figure 8 is a flow chart of the process for the source accumulating and transmitting related channels; [0019] Figure 9 is a flow chart of the process for the sink receiving and accumulating related channels; [0020] Figure 10 is a flow chart of an exemplary process associated with a master sink pulling data from a slave source; [0021] Figure 11 is a flow chart of an exemplary process associated with a slave sink pulling data from a master source; [0022] Figure 12 is a flow chart of an exemplary process associated with a slave source pushing data to a master sink; and [0023] Figure 13 is a flow chart of an exemplary process associated with a master source pushing data to a slave sink. DETAILED DESCRIPTION [0024] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. [0025] Aspects disclosed in the detailed description include multi-channel audio alignment schemes. In particular, aspects of the present disclosure provide for accumulation of audio samples across multiple related audio channels at an audio source.
Related audio channels indicate their interrelatedness, and when all the related audio channels have data to transmit, the source releases the data onto the time slots of the SLIMbus, such that the related audio channels are within a given segment window of the time slot. This accumulation is repeated at the boundary of every segment window. Similarly, accumulation may be performed at the audio sink. Components within the audio sink may only read received data if status signals from all related sinks indicate that predefined thresholds have been reached. By providing such accumulation options, audio fidelity is maintained across multiple audio data channels. [0026] Before addressing exemplary methods and processes associated with the present disclosure, an overview of the hardware elements in which such methods and processes may be implemented is provided with reference to Figures 1-7. Exemplary processes are provided with reference to Figures 8 and 9. [0027] In this regard, Figure 1 illustrates an example of a mobile terminal 10. While a mobile terminal 10 is specifically illustrated, other processor-based systems that employ a time division multiplexed bus for multi-channel audio may also benefit from aspects of the present disclosure. In this example, the mobile terminal 10 includes one or more central processing units (CPUs) 12, each including one or more processors 14. The processors 14 may include one or more applications processors that handle audio processing. The CPU(s) 12 may have cache memory 16 coupled to the processor(s) 14 for rapid access to temporarily stored data. The CPU(s) 12 is coupled to a system bus 18, which can intercouple devices included in the mobile terminal 10. As is well known, the CPU(s) 12 communicates with these other devices by exchanging address, control, and data information over the system bus 18. For example, the CPU(s) 12 can communicate bus transaction requests to a memory controller 20 to access memory units 22(0)-22(N). Although not illustrated in Figure 1, multiple system buses 18 could be provided, wherein each system bus 18 constitutes a different fabric. Likewise, in an exemplary aspect, one of the system buses 18 may be a Serial Low-power Inter-chip Media Bus (SLIMbus) for audio. In another exemplary aspect, a SLIMbus may be present for one or more input devices (e.g., a microphone) and for one or more output devices (e.g., a speaker). [0028] Other devices can be connected to the system bus 18. As illustrated in Figure 1, these devices can include a memory system that includes memory controller 20 and memory units 22(0)-22(N), one or more input devices 24, one or more output devices 26, one or more network interface devices 28, and one or more display controllers 30, as examples. The input device(s) 24 can include any type of input device, including but not limited to input keys, switches, microphones, voice processors, etc. In the event that an input device 24 is a microphone, it may be connected to a SLIMbus. The output device(s) 26 can include any type of output device, including but not limited to audio, such as speakers, video, other visual indicators, etc. In the event that an output device 26 is a speaker, it may be connected to a SLIMbus. The network interface device(s) 28 can be any devices configured to allow exchange of data to and from a network 32.
The network 32 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), and the Internet. The network interface device(s) 28 can be configured to support any type of communications protocol desired. [0029] The CPU(s) 12 may also be configured to access the display controller(s) 30 over the system bus 18 to control information sent to one or more displays 34. The display controller(s) 30 sends information to the display(s) 34 to be displayed via one or more video processors 36, which process the information to be displayed into a format suitable for the display(s) 34. The display(s) 34 can include any type of display, including but not limited to a cathode ray tube (CRT), a light emitting diode (LED) display, a liquid crystal display (LCD), a plasma display, etc. [0030] While the mobile terminal 10 may include plural speakers and/or plural microphones coupled by a SLIMbus, the mobile terminal 10 may be coupled to an external sound system such as through a docking station (or wirelessly). In this regard, Figure 2 illustrates a 5.1 channel surround sound system 40 with mobile terminal 10 associated with a docking station 42. The docking station 42 may include a center speaker 44 and couple to front speakers 46(L) and 46(R) as well as rear speakers 48(L) and 48(R) and a sub-woofer 50. As is well understood, each speaker 44, 46(L), 46(R), 48(L), and 48(R) and sub-woofer 50 may have a separate audio channel. When the output of the speakers 44, 46(L), 46(R), 48(L), and 48(R) and sub-woofer 50 is properly aligned, a listener 52 may experience high audio fidelity. [0031] Regardless of whether the audio components are internal to the mobile terminal 10 (or other processor-based device) or an external system, the mobile terminal 10 (or other processor-based device) may include a SLIMbus to move audio data between audio components such as modems, codecs, and/or applications processors. In this regard, a simplified audio system 60 is illustrated in Figure 3. Simplified audio system 60 may include a master 62 (sometimes referred to as a master device, but because "device" sometimes has additional connotations, referred to simply as "master" hereinafter) and slave devices 64(1)-64(4) communicatively coupled to a SLIMbus communications bus 66 as components. In an exemplary aspect, the slave devices 64(1)-64(4) may be microphones, speakers, or other audio devices. The master 62 may be an application processor, a codec, or a modem, and communicates with the slave devices 64(1)-64(4) using two signals: a clock signal 68 communicated over a common clock wire 70, and a data signal 72 communicated on a common data wire 74. While only four slave devices 64(1)-64(4) are illustrated in Figure 3, it should be appreciated that more or fewer components may be coupled to the SLIMbus communications bus 66. It should be appreciated that the master 62 may have a control system (CS) 76 associated therewith, which may be a hardware-implemented processor with associated software stored in memory associated with the processor. In an exemplary aspect, the control system 76 is part of the system on a chip (SoC) of the master 62. In an alternate exemplary aspect, the control system 76 may be associated with the CPU 12 of the mobile terminal 10.
In further exemplary aspects, the slave devices 64(1)-64(4) each have a respective slave control system 78(1)-78(4). [0032] It should be appreciated that each component within the simplified audio system 60 may include multiple ports, each of which may be assigned to different audio channels. Exemplary aspects of this arrangement are illustrated in Figure 4. In particular, an audio system 80 may include a first component 82(1) and a second component 82(2). First component 82(1) may include plural ports 84, of which 84(m) and 84(n) are illustrated. Similarly, second component 82(2) may include plural ports 84, of which 84(x) and 84(y) are illustrated. Ports 84 receive audio channels 86. In particular, port 84(m) receives first audio channel 86(1) and port 84(n) receives second audio channel 86(2). A serializer (not illustrated) assembles the audio data and places the audio data on the data wire 74. The second component 82(2) uses a deserializer (not illustrated) to extract the data and pass the data to an appropriate port 84. In this example, the data for first audio channel 86(1) is passed to port 84(x) and the data for second audio channel 86(2) is passed to port 84(y). The ports 84 pass the separated audio channels 86(1) and 86(2) to appropriate signal processing blocks 88(1) and 88(2). [0033] Exemplary aspects of the present disclosure provide for accumulating audio data for related audio channels 86 and placing the corresponding samples for the respective related audio channels 86 into a segment window within the TDM signal on the common data wire 74. In this regard, Figure 5 provides an illustration of a signal flow 90 where channel samples s11 and s12 are sampled out of the first audio channel 86(1) and channel samples s21 and s22 are sampled out of the second audio channel 86(2). The samples from the same general sampling point are accumulated and placed onto the common data wire 74 in the same segment window 92. The accumulation is done at every segment window boundary. The second component 82(2) deserializes the data on the common data wire 74 and reassembles the samples. The reassembled samples 94 are aligned at the receiver. [0034] To get the samples aligned at the source, first in, first out (FIFO) registers may be used at each port. Figure 6 provides a block diagram of the FIFO registers within a source. In this example, the source is first component 82(1) (and may also be the master 62). The first component 82(1) includes a control system, which may be CS 76. While illustrated as a processor in Figure 6, it should be appreciated that the processor may be replaced with some other signal processing entity and still be the CS 76. The CS 76 communicates with a direct memory access (DMA) module 100. While illustrated as a DMA, it should be appreciated that some other data fetch entity may be used. The DMA module 100 generates the first audio channel 86(1) and second audio channel 86(2). The first audio channel 86(1) is provided to a FIFO 102 at port 84(m). A serializer (Parallel to Serial (P2S)) 104 takes the output of the FIFO 102 and passes the serialized signal to a multiplexer (MUX) 106. Similarly, the second audio channel 86(2) is provided to a FIFO 108 at port 84(n). A serializer 110 takes the output of the FIFO 108 and passes the serialized signal to the MUX 106. Clock signals from the clock wire 70 are provided as needed, or desired, to the ports 84. A TDM control signal controls the MUX 106 to put the respective sample onto the data wire 74.
Signals are passed from the ports 84 to the MUX 106 through switches 112, 114 controlled by segment window logic 116. In use, the FIFOs 102, 108 collect (or accumulate) data for the respective audio channels 86 and set a flag or status indicator when a predetermined amount of data has been accumulated. When all the related channels have indicated sufficient data accumulation, the segment window logic 116 releases the data to the MUX 106. In this fashion, data for related samples of the audio channels 86 end up in the same segment window on the data wire 74. Thus, the accumulation provides sample alignment at each segment window after initialization. This alignment helps improve audio fidelity. [0035] On the receive side, both sample and phase alignment may be desirable to help improve audio fidelity. The structure of such receive side components is provided with reference to Figure 7. Audio data is received from the data wire 74 at a demultiplexer (DEMUX) 120, which splits the received signal and provides the split signals 122(x) and 122(y) to respective ports 84(x), 84(y). The ports 84 also receive a clock signal 68 from the clock wire 70. The port 84(x) receives the split signal 122(x) at a deserializer (serial to parallel (S2P)) 124(x) associated with a FIFO 126(x). The FIFO 126(x) provides a status message to error generation logic 128(x) and a count to a comparator 130(x). The comparator 130(x) compares the count to a watermark (or other predefined threshold) 132(x) and outputs a ready signal 134(x) based on the comparison (i.e., if the count exceeds the watermark 132(x), then the ready signal 134(x) is enabled). The error generation logic 128(x) selectively provides an error signal to an error bus 136. The ready signal 134(x) is provided to a ready bus 138. [0036] With continued reference to Figure 7, exemplary aspects of the present disclosure perform error handling by evaluating the information on the error bus 136 to see if any of the channels of the multi-channel group has an error condition, such as an underflow or overflow condition. If there is an error condition, an exemplary aspect of the present disclosure halts the channel and substitutes null data until the stream is recovered or other corrective action is taken. When corrective action is taken, the stream is restored or recovered as a group. [0037] With continued reference to Figure 7, the port 84(x) also includes a grouping register 140(x) that sets a status for first comparator 142(x) and second comparator 144(x). The first comparator 142(x) receives signals from the ready bus 138. The second comparator 144(x) receives signals from the error bus 136. Based on the comparison of the comparators 142(x), 144(x), switches 146(x), 148(x) are opened or closed to provide a clock signal from a clock 150 to the FIFO 126(x). Based on whether the clock signal is provided to the FIFO 126(x), data is pulled from the FIFO 126(x) to a signal processing block 152(x) for further processing (e.g., passing to a speaker). Clock signals from the clock 150 are also passed to signal processing blocks 152(x) and 152(y). By clocking the signal processing blocks 152(x) and 152(y) with the same clock signal used with the FIFO 126(x) and FIFO 126(y), sample alignment is preserved and audio fidelity is improved. [0038] With continued reference to Figure 7, port 84(y) has similar elements performing similar functions, albeit designated with a (y).
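The Figure 7 gating just described (watermark comparison producing per-port ready signals, with the read clock passed to the grouped FIFOs only when every related port is ready and error-free) can be summarized in the following C sketch. It is illustrative only; the struct and function names are hypothetical and assume a simple polling model rather than the hardware comparators and switches of Figure 7.

#include <stdbool.h>
#include <stddef.h>

struct sink_port {
    unsigned fifo_count;   /* samples accumulated in the FIFO (126)       */
    unsigned watermark;    /* predefined threshold (watermark 132)        */
    bool     error;        /* this port's contribution to error bus 136   */
};

/* Ready signal 134: set when the count exceeds the watermark. */
static bool port_ready(const struct sink_port *p)
{
    return p->fifo_count > p->watermark;
}

/* Switches 146/148: pass the read clock to the grouped FIFOs only when
 * every related port is ready (ready bus 138) and none has flagged an
 * error (error bus 136). */
static bool read_clock_enabled(const struct sink_port *group, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!port_ready(&group[i]) || group[i].error)
            return false;   /* hold the group until all ports qualify */
    }
    return true;
}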
It should be appreciated that the values of the watermark 132 and the information in the grouping register 140 may be programmed by message control or a programming entity as needed or desired.

[0039] Against this backdrop of structure, an exemplary process 160 is provided in Figure 8 illustrating how related ports at the first component 82(1) are linked. As illustrated, the first component 82(1) is a source component. The process 160 begins with the control system 76 gathering audio data to be sent out through the two (or more) audio channels (block 162). The control system 76 and the DMA 100 prefill the FIFO 102 of port 84(m) with first channel audio data (block 164). The control system 76 and the DMA 100 then prefill the FIFO 108 of port 84(n) with second channel audio data (block 166). A manager device (not shown) programs the ports 84 to be of the same channel rate (e.g., 48 kHz) (block 168).

[0040] With continued reference to Figure 8, the manager device activates the channel on both ports 84 at the same time (block 170). A given numbered sample of the two audio channels from the two ports 84 gets populated in the same segment window (block 172). The manager determines if this is the end of the data (block 174), with the process repeating as noted or ending and resetting the ports (block 176) if block 174 is answered affirmatively.

[0041] Figure 9 illustrates a process 180 that provides an exemplary technique to link the channels on the receive side. That is, the second component 82(2) is a sink component. In this regard, the process 180 begins with the processor programming the watermark 132(x) and the grouping register 140(x) for the port 84(x) (block 182). The processor programs the watermark 132(y) and the grouping register 140(y) for the port 84(y) (block 184). Note that the processor may be in the second component 82(2) or may be in the first component 82(1), and the programming may be effectuated by messages sent across the data wire 74.

[0042] With continued reference to Figure 9, a manager device (not shown) may program the ports 84(x) and 84(y) with the same channel rate (block 186). The manager device activates the channel on both ports at the same time (block 188). A variety of things may happen. In a first instance, the FIFO 126(x) starts to fill and the ready signal 134(x) is constantly updated as well (block 190). The ready signal 134(x) is passed through the ready bus 138 to the port 84(y). In a second instance, the FIFO 126(y) starts to fill and the ready signal 134(y) is constantly updated as well (block 192). The ready signal 134(y) is passed through the ready bus 138 to the port 84(x). At the same time, the clock 150 is turned on and provided to the ports 84(x) and 84(y) and other signal processing blocks 152(x) and 152(y) (block 194). Once all involved ports signal ready (block 196), the read clock is passed through to the FIFOs 126(x) and 126(y) (block 198).

[0043] With continued reference to Figure 9, the ports 84(x) and 84(y) continue to get filled with data from the data wire 74 (block 200) and the same numbered sample of both audio channels is pulled from the FIFO 126(x) and 126(y) to the respective signal processing blocks 152(x) and 152(y) at the same time (block 202). The controller checks to see if there is an error signal from any port (block 204). If there is an error, the controller disables the read of both FIFO 126(x) and 126(y) and waits for processor intervention (block 206).
If there is no error at block 204, then the controller checks to see if there is an end of the audio data (block 208). If there is an end, the process 180 ends (block 210). Otherwise, the process 180 repeats as indicated.

[0044] While the above discussion contemplates the general concepts behind accumulating data to promote channel alignment of multi-channel audio streams, there are several possible ways that this may be implemented depending on the master/slave nature of the source and sinks. That is, the source may be a master or slave, and the sinks may likewise be masters or slaves. Further, the source may push data or the sink may pull data. Exemplary aspects of these different variations are provided in Figures 10-13.

[0045] In this regard, Figure 10 illustrates an exemplary process 220 where the source is a slave and the master sink pulls data from the slave source. In process 220, the flow rate of the data is determined by the master sink and passed to the transmitting source FIFO register. Thus, after a reset (block 222) where the bus ports are placed in an idle state (block 224), the components monitor whether a bus channel has been enabled (block 226). While this answer is negative, the process 220 repeats as noted. Once the bus channel has been enabled, the process 220 bifurcates.

[0046] With continued reference to Figure 10, initially the bus port is placed in an active state (block 228). The control system 78 determines if the channel is at a segment window boundary (block 230). If the answer to block 230 is no, the process 220 realizes that the bus port is active before port data is ready (block 232). If, however, the answer to block 230 is yes, the control system 78 checks to see if all related channels are at the watermark level (block 234). If the answer to block 234 is no, the process realizes that the bus port is active before port data is ready (block 232). If, however, the answer to block 234 is yes, then the bus port starts up and indicates a data ready state (block 236).

[0047] With continued reference to Figure 10, after realizing that the bus port is active before port data is available (block 232), the control system 78 determines if the transmitter has reached a transmit timeslot (block 238). If the answer to block 238 is no, the process 220 returns to block 230. If, however, the answer to block 238 is yes, the transmitter outputs null data with no presence indication (block 240) and this null data is provided to the external bus (block 242). Null data continues to be output while the control system 78 determines if the channel has been disabled (block 244). If the answer to block 244 is no, the process returns to block 230 with any appropriate error handling if the bus starts before the internal data sink (block 246). If, however, the answer to block 244 is yes, the port enters a shutdown state (block 248) and the process returns to block 224.

[0048] With continued reference to Figure 10, and returning to block 236, the control system 78 determines if a transmit timeslot has been reached (block 250). If the answer to block 250 is no, the determination repeats. If there is an error, the error signal is provided to the error bus and an error state is indicated (block 252). From the error state of block 252, the port enters a shutdown state (block 248) and the process returns to block 224. If there is no error at block 250 and the timeslot has been reached, the source outputs the first valid data with presence status set (block 254) and data is sent to the external bus (block 242).
The process 220 continues with the determination of whether a transmit timeslot has been reached (block 256). If the answer to block 256 is no, the process 220 repeats, as noted. If the answer to block 256 is that an error has occurred, the process 220 enters an error state (block 252), as previously described. If the answer to block 256 is yes, a transmit timeslot has been reached, the control system 78 determines if there is a master sink data-pull indication on the bus - i.e., a sample request "SRQ" tag set by the sink to complement the data-present "P" bus tag set by the source to indicate valid data for this transmit timeslot (block 258). While SRQ gets set in a pull protocol by a sink that wants to pull data, in a push protocol, a data strobe ("STR") tag may be set. If there is an error, the error state is asserted (block 252), the port enters a shutdown state (block 248), and the process returns to block 224, as previously described. If the answer to block 258 is no, the source defers or skips data output (block 260). If the answer to block 258 is yes, then valid data is output (block 262). The control system 78 determines if the channel has been disabled (block 264). If the answer to block 264 is no, the process 220 returns to block 256, as noted. If the answer to block 264 is yes, the port enters a shutdown state (block 248), and the process returns to block 224, as previously described.

[0049] With continued reference to Figure 10, and returning to block 226, the source also determines if the internal data source has been enabled (block 266). If the answer to block 266 is no, the process 220 loops, as illustrated. Once the answer to block 266 is yes, the internal source enters a data start-up state (block 268). The source determines if there is an internal data source request to send (block 270). If there is an error at block 270, an error state is asserted (block 252), the port enters a shutdown state (block 248), and the process returns to block 224. If the answer to block 270 is no, the process 220 loops, as illustrated. Once the answer to block 270 is yes, the source determines if all the related channels in the multi-channel group are at, or above, the watermark level (block 272). If an error is detected, an error state is asserted (block 252), the port enters a shutdown state (block 248), and the process returns to block 224. If the answer to block 272 is no, then the data is ignored (block 274). If, however, the answer to block 272 is yes, then the valid data is input and an acknowledgment (ACK) response is generated (block 276). The data is then pulled from the external source (block 278). The control system 78 determines if the channel has been disabled (block 280). If the answer to block 280 is no, then the process loops back to block 270, as noted. If the answer to block 280 is yes, then the port enters a shutdown state (block 248) and the process returns to block 224.

[0050] Figure 11 shows a flow chart of process 290 associated with an exemplary aspect where the slave sink pulls data from the master source. In this regard, the process 290 starts with a reset (block 292) and the bus port entering an idle state (block 294). The process 290 determines if the bus channel has been enabled (block 296). As long as block 296 is negative, the process 290 loops, as indicated. Once the bus channel has been enabled, the bus port enters an active state (block 298). The process determines if the internal data sink has been enabled (block 300).
As long as block 300 is negative, the process 290 loops, as indicated. Once the internal data sink has been enabled, the process 290 bifurcates.

[0051] With continued reference to Figure 11, the process 290 continues with the internal sink entering a data start-up state (block 302). The control system 78 determines if all the related channels are at the designated watermark level (block 304). If the answer to block 304 is negative, the internal data sink enters a null data state (block 306). The control system 78 determines if there is an internal data sink request (block 308). If the answer to block 308 is no, the process 290 loops back, as indicated. If the answer to block 308 is yes, there is an internal data sink request, then null data is output (block 310). This data is provided to the internal sink (block 312). The control system 78 determines if the channel has been disabled (block 314). If the answer to block 314 is negative, the process loops back, as indicated. If the answer to block 314 is positive, the process 290 continues to a port shutdown state (block 316) and the process 290 returns to the bus port in the idle state (block 294), as indicated. Error handling may occur if the bus starts before the internal data sink starts.

[0052] With continued reference to Figure 11, if the answer to block 304 is yes, the channels are at the watermark level or above, then the data sink enters a start-up state (block 318). The control system 78 determines if there is an internal data sink request (block 320). As long as there is not an internal data sink request, the process 290 loops back, as indicated. If there is an internal data sink request at block 320, the source outputs valid data (block 322). This data is provided to the internal sink (block 324). The control system 78 determines if the channel has been disabled (block 326). If the answer to block 326 is negative, the process 290 loops back to block 320, as indicated. If the answer to block 326 is yes, the channel has been disabled, the port enters a shutdown state (block 316) and loops back to the bus port being in an idle state (block 294), as previously described. If there is an error associated with the internal data sink request at block 320, then the sink enters an error state (block 328) and the port enters a shutdown state (block 316), as previously described.

[0053] With continued reference to Figure 11 and block 300, concurrently with the internal sink entering a data start-up state, the bus port starts up and indicates the bus port is in a data ready state (block 330). The control system 78 determines if the data is at a segment window boundary (block 332). As long as the answer to block 332 is negative (and there is no error), the process 290 loops back, as indicated. If there is an error, the process 290 enters an error state (block 328), as previously discussed. If the answer to block 332 is yes, the data is at a segment window boundary, then the control system 78 determines if all the related channels are at or above the watermark level (block 334). Again, if there is an error, the process 290 enters an error state (block 328), as previously discussed. If the answer to block 334 is negative, the control system 78 determines if a transmit timeslot has been reached (block 336). If there is an error, the process 290 enters an error state (block 328), as previously discussed. If there is no error, then as long as the transmit timeslot has not been reached, the process 290 loops, as indicated.
Once the transmit timeslot has been reached, the data is ignored and the sample request bus tag bit (SRQ) is not asserted by the sink (block 338). If, however, block 334 is answered affirmatively (i.e., the related channels are at or above the watermark level), then the control system 78 determines if the transmit timeslot has been reached (block 340). If there is an error, the process 290 enters an error state (block 328), as previously discussed. As long as there is no error and the transmit timeslot has not been reached, the process 290 loops, as indicated. Once the transmit timeslot has been reached, the valid data is inputted and the SRQ tag bit is asserted by the sink to acknowledge the data presence from the source (block 342). The data is pulled from the bus (block 344). The control system 78 determines if the channel has been disabled (block 346). If the answer to block 346 is negative, the process 290 returns to block 332 as indicated; otherwise, the port enters a shutdown state (block 316), as previously discussed.

[0054] Figure 12 shows a flow chart of process 350 associated with an exemplary aspect where the slave source pushes data to the master sink. The process 350 begins with a reset (block 352) and the bus port entering an idle state (block 354). The control system 78 determines if the bus channel has been enabled (block 356). As long as the bus channel has not been enabled, the process 350 loops, as indicated. Once the bus channel has been enabled, the process 350 bifurcates. Following one path, the bus port enters an active state (block 358). The control system 78 determines if the data is at a segment window boundary (block 360). If the answer to block 360 is no, then the bus port is active before the port data is available (block 362). The control system 78 determines if a transmit timeslot has been reached (block 364). If the answer to block 364 is no, then the process 350 loops back to block 360, as indicated. If the answer to block 364 is yes, then null data is output with no presence indication (block 366) and the data is provided to the external bus (block 368). The control system 78 determines if the channel has been disabled (block 370). If the channel has not been disabled, the process 350 returns to block 360, as indicated. If the channel has been disabled, the port enters a shutdown state (block 372) and then returns to block 354, as indicated.

[0055] With continued reference to Figure 12, and returning to block 360, if block 360 is answered affirmatively, the control system 78 determines if all the related channels are at the watermark level (block 374). If the answer to block 374 is negative, then the process 350 goes to block 362, as indicated. If the answer to block 374 is affirmative, the bus port enters a start-up state with data ready (block 376). The control system 78 determines if a transmit timeslot has been reached (block 378). If there is an error, the process 350 enters an error state (block 380) and then the port enters a shutdown state (block 372), as previously discussed. As long as the transmit timeslot has not been reached, the process 350 loops, as indicated. Once the transmit timeslot has been reached, the valid data is output with the presence status set, i.e., the bus presence ("P") tag and STR tag set (block 382). The data is sent to the external bus (block 368). The control system 78 determines if the data is at a segment window boundary (block 384).
If there is an error, the process 350 enters an error state (block 380) and then the port enters a shutdown state (block 372), as previously discussed. If there is no error, and as long as the segment window boundary has not been reached, the process 350 loops, as indicated. Once the segment window boundary is reached, the control system 78 determines if all the related channels are at, or above, the watermark level (block 386). Again, if there is an error, the process 350 enters an error state (block 380) and then the port enters a shutdown state (block 372), as previously discussed. If there is no error, and the channels are not all above the watermark level, the control system 78 determines if the transmit timeslot has been reached (block 388). If there is an error, the process 350 enters an error state (block 380) and then the port enters a shutdown state (block 372), as previously discussed. If there is no error and the transmit timeslot has not been reached, the process 350 loops, as indicated. Once the transmit timeslot has been reached, the source outputs null data with no presence set, i.e., no bus P tag or STR tag set (block 390). The null data is output to the external bus (block 392). If there is no error and all the related channels are at or above the watermark level at block 386, the control system 78 determines if a transmit timeslot has been reached (block 394). If there is an error, the process 350 enters an error state (block 380) and then the port enters a shutdown state (block 372) as previously discussed. If there is no error and the transmit timeslot has been reached, the source outputs valid data with the presence status set, i.e., bus P tag and STR tag set (block 396). The data is output to the external bus (block 392). The control system 78 determines if the channel has been disabled (block 398). If the answer to block 398 is negative, the process 350 returns to block 384, as indicated. If the channel has been disabled, the port enters a shutdown state (block 372), as previously indicated.

[0056] With continued reference to Figure 12, and returning to block 356, concurrently, the control system 78 determines if the internal data source is enabled (block 400). If the answer to block 400 is negative, the process 350 loops as indicated. Once the internal data source is enabled, the internal source enters a data start-up state (block 402). The control system 78 determines if the internal data source has a request to send (block 404). If there is an error, the process 350 enters an error state (block 380) and then the port enters a shutdown state (block 372), as previously discussed. If there is no error and the answer to block 404 is negative, the process 350 loops, as indicated. Once there is a request to send, the internal data source inputs valid data (block 406). The data comes from the internal data source 408. The control system 78 determines if the channel has been disabled (block 410). If the answer to block 410 is negative, the process 350 returns to block 404. If the answer to block 410 is positive, the port enters a shutdown state (block 372), as previously discussed.

[0057] Figure 13 shows a flow chart of process 420 associated with an exemplary aspect where the master source pushes data to the slave sink. The process 420 begins with a reset (block 422) and the bus port entering an idle state (block 424). The control system 78 determines if the bus channel has been enabled (block 426). As long as the bus channel has not been enabled, the process 420 loops, as indicated.
Once the bus channel has been enabled, the bus port enters an active state (block 428). The control system 78 determines if the internal data sink has been enabled (block 430). As long as the answer to block 430 is negative, the process 420 loops, as indicated. Once the answer to block 430 is affirmative, the process 420 bifurcates.

[0058] With continued reference to Figure 13, the process 420 continues with the internal sink entering a data start-up state (block 432). The control system 78 determines if all the related channels are at the watermark level (block 434). If the answer to block 434 is negative, the data sink enters a null data state (block 436). The control system 78 determines if there is an internal data sink request (block 438). If the answer to block 438 is negative, the process 420 loops back to block 434, as indicated. If the answer to block 438 is affirmative, null data is output (block 440) to the internal sink (block 442). The control system 78 determines if the channel is disabled (block 444). If the answer to block 444 is no, the process 420 loops back to block 434, as indicated. If the answer to block 444 is yes, then the port enters a shutdown state (block 446) and returns to block 424, as indicated.

[0059] With continued reference to Figure 13, if the answer to block 434 is yes, the data sink enters a start-up state (block 448). The control system 78 determines if there is an internal data sink request (block 450). If there is an error, the process 420 enters an error state (block 452) and then the port enters a shutdown state (block 446), as previously discussed. If there is no error and the answer to block 450 is negative, the process 420 loops back to block 450, as indicated. If there is no error, and the answer to block 450 is affirmative, the control system 78 determines if all the related channels are at, or exceed, the watermark level (block 454). If there is an error, the process 420 enters an error state (block 452) and then the port enters a shutdown state (block 446), as previously discussed. If there is no error and the answer to block 454 is negative, the process 420 skips the output (block 456). If the answer to block 454 is affirmative, then valid data is sent and an ACK response is provided (block 458). The data is sent to the internal sink (block 460). The control system 78 determines if the channel has been disabled (block 462). If the answer to block 462 is negative, the process 420 loops back to block 450, as indicated. If the answer to block 462 is affirmative, the port enters a shutdown state (block 446), as previously indicated.

[0060] With continued reference to Figure 13, after block 430, the process 420 also causes the bus port to start up and enter a data ready state (block 464). The control system 78 determines if a transmit timeslot has been reached (block 466). If there is an error, the process 420 enters an error state (block 452) and then the port enters a shutdown state (block 446), as previously discussed. If there is no error and the answer to block 466 is negative, the process 420 loops, as indicated. If the answer to block 466 is affirmative, a transmit timeslot has been reached, and the control system 78 determines whether the data source has asserted a data strobe STR tag and an associated data-present P tag indicating there is valid data for this transmit timeslot (block 468). If there is an error, the process 420 enters an error state (block 452) and then the port enters a shutdown state (block 446), as previously discussed.
If there is no error and the answer to block 468 is negative, the data input slot is skipped (block 470). If the answer to block 468 is affirmative, then valid data is inputted (block 472). The data is received from the bus source data (block 474). The control system 78 determines if the channel has been disabled (block 476). If the channel has not been disabled, the process 420 loops back to block 466, as indicated. If the channel has been disabled, the port enters a shutdown state (block 446), as previously discussed.

[0061] Note that while Figures 10-13 are presented from the perspective of what the slave control system 78 does, it should be appreciated that exemplary aspects of the present disclosure extend these concepts to the master control system 76. Further, the concept of using the watermark to define when to start up is present on both the slave and the master. The concept of using a watermark to assert presence or SRQ/STR on a sample-by-sample basis assumes proximity to an audio time-reference, which is common on the slave side, but can also be found on the master side.

[0062] As alluded to above, the multi-channel audio alignment schemes according to aspects disclosed herein may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player. While any such device may benefit from aspects of the present disclosure, the present disclosure is particularly well suited for use with devices that operate according to the SLIMbus protocol.

[0063] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0064] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0065] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

[0066] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0067] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure.
Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Technologies disclosed herein provide cryptographic computing. An example processor includes a core to execute an instruction, where the core includes a register to store a pointer to a memory location and a tag associated with the pointer. The tag indicates whether the pointer is at least partially immutable. The core also includes circuitry to access the pointer and the tag associated with the pointer, and to determine whether the tag indicates that the pointer is at least partially immutable. The circuitry is further, based on a determination that the tag indicates the pointer is at least partially immutable, to obtain a memory address of the memory location based on the pointer, use the memory address to access encrypted data at the memory location, and decrypt the encrypted data based on a key and a tweak, the tweak including one or more bits based, at least in part, on the pointer.
1. A method comprising:
accessing, from a register, a pointer to a memory location and a tag associated with the pointer, wherein the tag indicates whether the pointer is at least partially immutable;
determining whether the tag indicates that the pointer is at least partially immutable; and
based on a determination that the tag indicates the pointer is at least partially immutable:
obtaining a memory address of the memory location based on the pointer;
using the memory address to access encrypted data at the memory location; and
decrypting the encrypted data based on a key and a tweak, the tweak including one or more bits derived, at least in part, from the pointer.
2. The method of claim 1, further comprising restricting, based on a determination that the tag indicates the pointer is not immutable, the memory address of the memory location from being obtained based on the pointer.
3. The method of claim 1, wherein the pointer is an encoded pointer, and obtaining the memory address further comprises decoding the encoded pointer.
4. The method of claim 1, wherein the pointer is cryptographically encoded, and obtaining the memory address comprises cryptographically decoding the pointer to obtain the memory address.
5. The method of claim 1, wherein the pointer is in a plaintext format.
6. The method of any one of claims 1-5, wherein the pointer is to a base address for a memory location storing one or more instructions for execution.
7. The method of any one of claims 1-5, further comprising executing an instruction to overwrite the pointer, and clearing the tag associated with the pointer based on executing the instruction to overwrite the pointer.
8. The method of any one of claims 1-5, further comprising:
accessing an instruction to store the pointer to memory;
determining whether the instruction is of a type authorized to store pointers to memory; and
executing the instruction based on a determination that the instruction is of the type authorized to store pointers to memory.
9. The method of any one of claims 1-5, further comprising:
accessing an instruction to modify a word in memory;
determining that the word has an associated tag that is set to indicate the word is storing a pointer;
determining whether the instruction is of a type authorized to modify pointers; and
executing the instruction based on a determination that the instruction is of the type authorized to modify pointers.
10. The method of any one of claims 1-5, further comprising:
accessing an instruction to copy a set of words stored in memory;
determining that at least one word to be copied has an associated tag that is set to indicate the word is storing a pointer;
determining whether the instruction is of a type authorized to copy pointers; and
executing the instruction based on a determination that the instruction is of the type authorized to copy pointers.
11. The method of any preceding claim, wherein the tweak is further based on a type of the pointer or a stack frame for the memory address.
12. The method of any preceding claim, wherein a set of words and a set of tags associated with the set of words are stored in a same cacheline of the cache, the set of tags being inaccessible by software.
13. The method of any preceding claim, wherein the cache comprises a first set of ways to store first words and first tags associated with the first words, and a second set of ways to store second words without tags.
14. An apparatus comprising means to implement a method as claimed in any preceding claim.
15. One or more computer-readable media with code stored thereon, where the code is executable to cause a machine to implement a method or realize an apparatus as claimed in any preceding claim.
CROSS REFERENCE TO RELATED APPLICATIONS
This Application claims the benefit under 35 U.S.C. §119 of U.S. Provisional Application No. 62/868,884 filed June 29, 2019 and entitled "Cryptographic Computing". The disclosure of the prior application is considered part of and is hereby incorporated by reference in its entirety in the disclosure of this application.
TECHNICAL FIELD
This disclosure relates in general to the field of computer systems, and more particularly, to data encryption based on immutable pointers.
BACKGROUND
Protecting memory in computer systems from software bugs and security vulnerabilities is a significant concern. A buffer overflow, which can affect memory safety, occurs when a program writes data to a buffer and overruns a boundary of the buffer such that adjacent memory locations are overwritten. Similarly, reading past the end of a buffer into another page may trigger an access violation or fault. Another memory safety violation is referred to as a dangling pointer. A dangling pointer is a reference that is not resolved to a valid destination. This may occur when memory is deallocated without modifying the value of an existing pointer to the deallocated (or freed) memory. If the system reallocates the freed memory and the dangling pointer is used to access the reallocated memory, unpredictable behavior, including system failure, may occur. Current computing techniques have used architecture and metadata to provide data protection. For example, in previous solutions, a processor would use lookup tables to encode policy or data about the data for ownership, memory size, location, type, version, etc. However, this metadata requires additional storage (memory overhead) and negatively impacts performance, particularly for implementations with fine-grain metadata.
Thus, different approaches are needed to provide memory safety to computing systems.
BRIEF DESCRIPTION OF THE DRAWINGS
To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, where like reference numerals represent like parts, in which:
FIGURE 1 is a diagram showing an example of tag storage and propagation for words in a memory hierarchy;
FIGURE 2 is a diagram of example hidden inline metadata in a cacheline;
FIGURE 3 is a diagram of an example process for handling data with hidden inline metadata in accordance with embodiments of the present disclosure;
FIGURE 4 is a diagram of an example pointer overwrite scenario where tagged pointers are utilized;
FIGURE 5 is a diagram of an example pointer-based encryption process;
FIGURE 6 is a diagram of an example pointer-based decryption process;
FIGURE 7 is a diagram of an example buffer overread scenario where encryption is bound to pointer values;
FIGURE 8 is a diagram of an example scenario for passing base address information to a data encryption unit of a processor when memory is accessed;
FIGURE 9 is a diagram of example data layout with associated base addresses used for data binding;
FIGURE 10 is a diagram of an example encoded pointer that may be used in embodiments of the present disclosure;
FIGURE 11 is a diagram of an example process of encrypting and decrypting pointers;
FIGURE 12 is a diagram of an example cache arrangement with particular ways for storing tagged data;
FIGURE 13 is a flow diagram of an example process of storing a cacheline in a cache according to certain embodiments;
FIGURE 14 is a flow diagram of another example process of storing a cacheline in a cache according to certain embodiments;
FIGURE 15 is a flow diagram of an example process of accessing encrypted data based on a tagged pointer;
FIGURE 16 is a simplified block diagram of an example computing device configured with secure memory access logic according to at least one embodiment of the present disclosure;
FIGURE 17 is a simplified environment diagram illustrating an application of the secure memory access logic of FIGURE 16 according to at least one embodiment of the present disclosure;
FIGURE 18A is a simplified sequence diagram illustrating an application of memory retrieval instruction logic according to at least one embodiment;
FIGURE 18B is a simplified sequence diagram illustrating an application of memory store instruction logic according to at least one embodiment;
FIGURE 19 is a simplified flow diagram of at least one embodiment of a process for providing security for an indirect address as disclosed herein, which may be executed by the computing device of FIGURE 16;
FIGURE 20 is a simplified flow diagram of at least one embodiment of a process for verifying a previously secured indirect address as disclosed herein, which may be executed by the computing device of FIGURE 16;
FIGURE 21 is a block diagram illustrating an example cryptographic computing environment according to at least one embodiment;
FIGURE 22 is a block diagram illustrating an example processor core and memory according to at least one embodiment;
FIGURE 23A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline in accordance with certain embodiments;
FIGURE 23B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary
register renaming, out-of-order issue/execution architecture core to be included in a processor in accordance with certain embodiments;
FIGURE 24 is a block diagram of an example computer architecture according to at least one embodiment; and
FIGURE 25 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the present disclosure.
DETAILED DESCRIPTION
The following disclosure provides various possible embodiments, or examples, for implementation of cryptographic computing. Cryptographic computing is an important trend in the computing industry, with the very foundation of computing itself becoming fundamentally cryptographic. Cryptographic computing represents a sea change, a fundamental rethinking of systems security with wide implications for the industry.

Memory safety vulnerabilities (e.g., buffer overflow and use-after-free) are the most frequently reported vulnerabilities in software. In addition, speculative side channels can be used to leak information based on object plaintext. Preventing unauthorized pointer mutations and binding data encryption to immutable pointers can mitigate such vulnerabilities and enable per data object granular protections. For example, data may be encrypted using an encryption key and a tweak value, where the tweak value is based on an address at which the encrypted data is to be stored. The encrypted data may be subsequently decrypted using a decryption key and a tweak value, where the tweak value is based on a pointer to the address at which the encrypted data is stored. In particular embodiments, a tag may be associated with the pointer (making it a "tagged pointer"), with the tag indicating whether the pointer is immutable or at least partially immutable. A variety of possible encodings may be used for tagged pointers to enforce memory safety, along with associated instructions for updating tagged pointers. Binding data encryption to tagged pointers may allow for defense-in-depth and mitigation of temporal safety vulnerabilities, e.g., use-after-free. Certain embodiments may accordingly efficiently enforce memory safety for both 32- and 64-bit pointers with deterministic detection of pointer corruption.

There are two primary categories of memory safety vulnerabilities: spatial and temporal. Spatial safety vulnerabilities include buffer overflows and underruns in the heap, stack, and global data areas, while temporal safety vulnerabilities include use-after-free and uninitialized use. Type safety vulnerabilities are a related category labeled as type confusion. Binding data encryption to tagged pointers can mitigate each of these categories of vulnerabilities.

For instance, in certain embodiments, pointers are extended to specify a tag value in an unused slice of pointer bits, and that tag value is incorporated into the encryption of the data referenced by the pointer. Adjacent objects are assigned different tag values so that adjacent buffer overflows may be detected with high probability or deterministically. Non-adjacent overflows may be detected with a lower probability, since only a small number of bits are used to encode the tag, and an adversary need only guess the tag value for the targeted object to succeed in accessing it. Use-after-free vulnerabilities may be mitigated, since allocations sharing the same location at different times are likely to be assigned different tag values.
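To make the encoding concrete, the following is a minimal C sketch of placing a tag in an unused slice of pointer bits and folding it into an encryption tweak. The bit positions, tag width, and the mixing function are illustrative assumptions only; this disclosure does not prescribe a particular layout.

```c
#include <stdint.h>

/* Assumed layout: a 4-bit tag held in otherwise-unused upper pointer bits.
 * Real encodings vary; these positions are illustrative only. */
#define TAG_SHIFT 57
#define TAG_MASK  ((uint64_t)0xF << TAG_SHIFT)

static uint64_t set_pointer_tag(uint64_t ptr, unsigned tag)
{
    return (ptr & ~TAG_MASK) | (((uint64_t)tag & 0xF) << TAG_SHIFT);
}

static unsigned get_pointer_tag(uint64_t ptr)
{
    return (unsigned)((ptr & TAG_MASK) >> TAG_SHIFT);
}

/* Fold the address bits and the tag into a single tweak value, so that
 * data encrypted for one allocation decrypts to garbage when accessed
 * through a pointer whose tag (or address) differs. The multiply is a
 * toy mixer; a real design would feed both values into the cipher's
 * tweak input directly. */
static uint64_t derive_tweak(uint64_t ptr)
{
    uint64_t addr = ptr & ~TAG_MASK;
    uint64_t tag  = get_pointer_tag(ptr);
    return addr ^ (tag * 0x9E3779B97F4A7C15ULL);
}
```

Under this sketch, an allocator that assigns adjacent objects the tags 0x3 and 0x4 produces distinct tweaks for the two allocations even where an overflow crosses directly from one object into the next.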
In further embodiments, additional information may be encoded beyond a tag in the pointer, such as the size of the object. By binding this encoding to the data encryption, the amount of information that the adversary must learn or guess to succeed in accessing an object increases. In some implementations, the pointer itself can be encrypted in a way that is bound to parameters such as the tag and object size to further reduce the probability of an adversary successfully forging a pointer to access data.

Pointer tagging can further protect the integrity and confidentiality of pointers used for binding data encryption. For example, one or more tag bits may be associated with each word of memory while it is in swapped memory, persistent memory, DRAM, cache, and registers to indicate whether that word contains a pointer. Only words with set tag bits (e.g., indicating that the pointer is immutable) can be used as pointers. Operations that corrupt pointers result in the corresponding tag bit being unset, so that subsequent attempts to use the pointer generate a fault. In certain embodiments, 32-bit words can be tagged to support 32-bit pointer storage. This may result in wasted tag bits for applications that store 64-bit pointers; however, in some instances, two tag bits can be usefully combined to provide more deterministic memory safety enforcement in such a configuration while still only requiring a single tag bit in the register file.

Pointer tagging may provide a number of benefits, including, but not limited to, the following. First, it may potentially eliminate the requirement to encrypt all or a portion of pointers to prevent forgery, since the tag bit serves that role. This may have the benefit of reducing overheads for processing pointers. Second, it may mitigate certain spatial safety vulnerabilities more effectively than data encryption alone. Third, it may eliminate the requirement to encode any additional data into pointer bits, especially if combined with tweaked encryption of the entire pointer, which may be important for compatibility or for supporting full 64-bit address spaces. Other benefits may be realized through the use of pointer tagging and binding data encryption/decryption to tagged pointers. In embodiments of the present disclosure, all general purpose registers that may contain a pointer are tagged as described herein.

FIGURE 1 is a diagram showing an example of tag storage and propagation for words in a memory hierarchy. In the example shown, each word in memory is associated with a tag that includes one or more tag bits, and the tags propagate through the memory hierarchy along with their associated words. For instance, tags 114 are associated with respective words in data 112. The tags 114 and data 112 are stored together in persistent memory 110. As words of the data 112 are moved from the persistent memory 110 into the cache 120, a cacheline 122 is populated with the words of the data 112 (e.g., W0, W1, ..., W15) along with the set of tags 124 associated with each word loaded into the cache 120. Further, as a word in the cacheline 122 is loaded into the register 130 (as word 132), its associated tag 134 is also loaded into the register. For instance, if word W0 in the cacheline 122 is moved into the register as word 132, then the tag T0 in the set of tags 124 will be moved into the register as tag 134.
The words and their associated tags move together in a similar manner as they propagate in the other direction in the memory hierarchy.

Although the example shown in FIGURE 1 illustrates an example embodiment where tags are stored outside the same cacheline as their associated words, some embodiments may incorporate tags within the same cacheline as the associated words. Co-locating tags in the same cacheline as their associated data, so that they are immediately available, may provide security features while enabling processors to continue using and benefiting from speculative operations in a cache coherent manner. In certain embodiments, the tags may be hidden from software, which may allow legacy compatibility to be maintained as software may access virtual/linear memory contiguously without needing to ignore or skip over metadata regions, while the hardware may still enforce the metadata policies on the data. The co-located tag metadata may be referred to as hidden inline metadata. The hidden inline metadata may be hidden at the linear address/virtual address level as memory is seen by software in a contiguous manner, but the metadata is available for the purposes of memory tagging (such as tag compare with a pointer tag value in a linear address), capabilities (such as data structure length, permissions), and/or fine grain memory access control as enforced by the hardware.

For instance, referring to FIGURE 2, a diagram of example hidden inline metadata in a cacheline is shown. As shown in FIGURE 2, a cacheline 200 includes a data portion 210 and a metadata portion 220, which includes a set of tags each associated with a respective word in the data portion 210. The metadata portion 220 is hidden for purposes of contiguous linear address/virtual address operations 240, but may be conditionally visible and available to the physical hardware and privileged software operations 250 (e.g., memory tagging, capabilities, and fine grain memory control).

The use of the hidden inline metadata may provide multiple advantages in the operation of an apparatus, system, or process in comparison with conventional approaches to providing metadata, including: improved performance, with a single cycle required to access data and hidden inline metadata; cache efficiency, with no additional metadata being required in the cache area; memory efficiency, with metadata only being included when required; precision, with both load and store checks being provided; and side channel protection, with the parallel metadata being present to avoid speculation in data attacks.

In some embodiments, memory tagging allows software to select the tag bits within a linear address by setting non-canonical bits to the tag value (e.g., utilizing a C or C++ pointer). The linear address tags are then compared with the metadata tags stored in the hidden memory to determine if the memory access is authorized. For example, to detect use-after-free exploits, a memory allocation routine (e.g., malloc) is to set the authorized memory tag(s) (StoreMetadata) for the allocated memory location(s), and then provide software with a pointer value containing the matching tag value (color) addressing the allocated memory buffer.
When the software executes and causes the allocated memory to be loaded (e.g., into a processor register or GPR) or stored to memory, the processor will first compare the tag value in the pointer (non-canonical bits of the linear address) with the metadata tag value stored in hidden memory for the specified memory location (linear address). Because the metadata tags are co-located with the data (hidden from software), no additional memory lookups or caching is required to fetch and compare the stored tag values. In this manner, an efficient solution for memory tagging and access control is provided. Meanwhile, the OS kernel/VMM (Virtual Machine Monitor) is provided access to memory without the metadata page table bit set in its memory mapping to page-in/page-out memory pages including the tag metadata (metadata physical memory is larger than in LA space). Finally, an overflow memory region is used to store both extra data and metadata that goes beyond a physical page size.

Referring now to FIGURE 3, an example process 300 for handling data with hidden inline metadata in accordance with embodiments of the present disclosure is shown. In particular, the example process illustrates hidden inline tag metadata 330 that maintains one or two tag bits for each 32-bit / 4-byte word slot (e.g., 332) on cacheline 340, with extended paging to hold page overflow content based on page offset. In the example shown, the pointer tags in the tag metadata 330 are hidden (inaccessible to software) by the hardware and stored with the associated data (e.g., the words in each slot) inside every cacheline. This allows the processor to simultaneously load a pointer from memory and (in parallel) check the tag metadata to determine if the register access is an immutable pointer (or partially immutable encoded pointer). The parallelism of the tag and data access may prevent speculative side channels, as the processor is aware of whether memory contents are expected to be used as a pointer (memory reference) or are normal program data (not to be used as a memory reference). Likewise, the loads are faster (when compared to loading from a separate table in memory), as there is no need to perform a separate memory read to access the metadata after the data, as both reside on the same cacheline. In some cases, same-cycle access to metadata may also be offered via other mechanisms, e.g., storing tag bits in dedicated DRAM storage similarly to how ECC bits are stored; however, such approaches may still have some other drawbacks, e.g., requiring extra silicon area in caches for storing tag bits regardless of whether the data in the caches is actually tagged (as was noted above) and similarly wasting DRAM capacity if the stored data is untagged.

As illustrated in FIGURE 3, a pointer / linear address 301 (e.g., a 64-bit address) is utilized for a lookup in the page table and TLB (Translation Lookaside Buffer) cache 306. The linear address 301 includes a linear page address 302 and a page offset 304. In an operation, a CPU (or other processor) is to execute at 308 a load or store instruction for the memory address (the linear address/location portion). At 310, it is determined whether the memory address is a metadata page.
In some embodiments, whether the memory address is a metadata page may be determined by checking an identifier in a memory or storage, including, for example, checking whether one or more bits in a page table entry (which may be referred to as a metadata bit) are set to indicate the presence of metadata in a cacheline for the cachelines corresponding to the associated page. If the memory address is determined to be a metadata page at 310, then the cacheline 340 and the lookup tag(s) from the tag metadata 330 for corresponding slots in the cacheline 340 are loaded at 312 based on an address index.

The actual data location may be calculated based on the page offset 304. For example, the address may be calculated (e.g., at 326) according to the following: Address = PageAddress + PageOffset + (PageOffset / DataBytesPerLine) * MetaDataSize, if it is determined (e.g., at 322) that (PageOffset + MetadataPage) is less than PageSize. Otherwise, there is an overflow condition and lines that overflow are accessed at 324 at PhysicalAddress plus Offset, and thus PageAddress = OverflowOffset + (PageAddress / PageSize).

At 314, it is determined whether the tag value indicates an immutable pointer. If so, then the immutable register content is protected at 318, and the immutable processor register accesses at 320 a slot of the cacheline 340 (e.g., slot 332 (Slot5) in the example shown). In particular, if the load is for an immutable pointer, the associated register also tracks the immutable state, preventing modification of the register content as it is immutable, and copying such state on register moves. Such a processor register may then be used as a memory operand (e.g., used by software as a pointer). If, on the other hand, the tag indicates the slot contains program data (not an immutable pointer), then the data is treated as read/write data at 316: the register is loaded and may be modified by software, but may not be used as a pointer to reference memory. It will be understood that different embodiments may use different tag metadata and slot sizes. Further, the location of the tag metadata 330 in the cacheline 340 and the format of the tag metadata may vary in different embodiments.

The pointer / linear address 301 shown in FIGURE 3 is in a non-encoded, non-encrypted format. In embodiments using plaintext pointers, the input operand to an instruction for initializing a pointer (e.g., an instruction called "InitPointer") may include an integer value that is to be converted directly to the pointer without being transformed. The destination operand is the register or memory location to which the pointer should be stored.

To set a tag bit in a register or word of memory, a particular instruction (e.g., InitPointer) for initializing pointers may be used. The typical threat model for memory safety enforcement assumes that the adversary is unable to insert new instructions into the code stream, so this implies that only authorized instructions for generating pointers are executed. However, the threat model may assume that an adversary can form a gadget to misuse an authorized instruction to generate pointers in an unauthorized manner as part of an exploit.
Accordingly, in certain embodiments, overwriting a pointer with data lacking a set tag bit clears the tag bit in the destination location. This may be sufficient to block some exploits that depend on overwriting a pointer with a maliciously-crafted data value and then causing the program to use that crafted value as a pointer later, since the tag bit for the crafted value would then be unset. An example of this is shown in FIGURE 4, which depicts an example pointer overwrite scenario where tagged pointers are utilized. In the example shown, a new pointer 406 is installed in memory 402 during a memory overwrite (e.g., a buffer overflow operation) to replace an original pointer 404, which is tagged by way of the tag 405 being set. The overwrite with untagged data causes the tag 405 to be unset, such that when a software program later attempts to use the pointer 406, the program is blocked from accessing the memory location to which the pointer points.

However, in some instances, an adversary may still use a pointer initialization gadget that can perform such an exploit. Thus, in some embodiments, randomizing data locations may be used to make it more difficult for an adversary to locate the pointer to be overwritten, as well as to craft a value with which to overwrite the pointer that will permit the exploit to proceed. Yet, memory disclosure vulnerabilities may still enable an adversary to gather enough information to construct a working exploit.

Accordingly, embodiments of the present disclosure may bind data encryption/decryption to the pointer value to further harden a system by disrupting memory disclosure. For example, in some embodiments, data encryption can be bound to the base address for the object by incorporating the base address as a tweak in a tweakable cipher. Example techniques for binding data encryption and decryption to base addresses are shown in FIGURES 5 and 6, respectively.

FIGURE 5 is a diagram of an example pointer-based encryption process 500. In the example shown, a keystream generator 510 accepts as inputs an encryption key 508 and a tweak 506, which is based at least in part on a base address 502 (which may be obtained from a pointer 501 as shown) for the data to be encrypted. In some embodiments, the tweak 506 is further based on other variable-length tweak values 504, which may include a block-aligned, pointer-derived offset within the allocation for the current access. The keystream generator 510 generates a keystream 511 based on the inputs. An XOR unit 514 performs an XOR operation on plaintext input data 512 and the keystream 511 to generate output ciphertext data 516. A portion of the keystream 511 may be discarded and the keystream generator 510 may be invoked multiple times to align the keystream 511 with the output data block 516.
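A minimal C sketch of this data flow follows. The splitmix64-style mixer stands in for the keystream generator 510 (a real implementation would use a block cipher), and all names and constants are hypothetical; because the combine step is an XOR, the same routine also models the decryption process of FIGURE 6.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Toy stand-in for keystream generator 510: mixes the key, the tweak (base
 * address plus block-aligned offset), and a block counter. */
static uint64_t keystream_block(uint64_t key, uint64_t tweak, uint64_t counter)
{
    uint64_t z = key ^ tweak ^ (counter * 0x9E3779B97F4A7C15ull);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBull;
    return z ^ (z >> 31);
}

/* XORs buf with a keystream bound to the allocation's base address. Running
 * it twice with the same key and base address restores the plaintext. */
static void xor_crypt(uint8_t *buf, size_t len, uint64_t key, uint64_t base_addr)
{
    for (size_t i = 0; i < len; i++) {
        uint64_t ks = keystream_block(key, base_addr + (i & ~7ull), i >> 3);
        buf[i] ^= (uint8_t)(ks >> (8 * (i & 7)));
    }
}

int main(void)
{
    uint8_t msg[14] = "hello, world!";
    xor_crypt(msg, sizeof msg, 0x1234u, 0x40000u);  /* encrypt (FIGURE 5) */
    xor_crypt(msg, sizeof msg, 0x1234u, 0x40000u);  /* decrypt (FIGURE 6) */
    printf("%s\n", (const char *)msg);
    return 0;
}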
FIGURE 6 is a diagram of an example pointer-based decryption process 600. The example process 600 may correspond to the encryption process 500 of FIGURE 5. In the example shown, a keystream generator 610 accepts as inputs a decryption key 608 and a tweak 606, which is based at least in part on a base address 604 for the encrypted data to be decrypted. The base address 604 is derived from the pointer 602 for the memory location at which the encrypted data is stored. The base address 604 may be derived directly from a plaintext address of the pointer 602, or by decoding the pointer 602 to obtain the address (e.g., by decoding an encoded pointer such as pointer 1000 of FIGURE 10). In some embodiments, the tweak 606 is further based on other variable-length tweak values 605, which may include a block-aligned, pointer-derived offset within the allocation for the current access. The keystream generator 610 generates a keystream 611 based on the inputs. An XOR unit 614 performs an XOR operation on the ciphertext data 612 (i.e., the data to be decrypted) and the keystream 611 to generate output plaintext data 616. A portion of the keystream 611 may be discarded and the keystream generator 610 may be invoked multiple times to align the keystream 611 with the input data block 612.

If an adversary attempted to disclose memory by performing a buffer overread, the base address for the buffer would be used as the tweak for all of the data read. Since data outside of the buffer would have been encrypted using a different tweak value (i.e., the base addresses for each portion of that data), the data read from those regions would be garbled. In fact, this does not even depend on pointer tagging; it is a property of binding the data encryption to the base pointer value.

FIGURE 7 is a diagram of an example buffer overread scenario where encryption is bound to pointer values. In the example shown, memory contains objects 702, 704, 706, which are encrypted using their respective base addresses (i.e., 0x40000, 0x40200, and 0x40500, respectively, in the example shown). A buffer overread 700 is performed during a memory disclosure exploit attempt using the base address for the object 702 (i.e., 0x40000) as the tweak, whereby the adversary is attempting to gain access to the data of objects 702, 704, 706. The adversary will be able to obtain the actual data 712 for object 702 (since the data is encrypted using the base address 0x40000). However, since the objects 704, 706 are encrypted using different tweak values, the decryption attempt using the base address for object 702 will result in garbled data 714, 716 as shown. In embodiments where integrity checking is utilized, an integrity violation may be signaled by the garbled data, halting the exploit at the point that an incorrect tweak value is first used.

Data corruption exploits may also be disrupted by binding data encryption to base addresses, since different base addresses would likely be used when maliciously overwriting data and pointers versus reading them out later. Further, binding data encryption to base addresses can also mitigate temporal safety vulnerabilities such as Use-After-Free (UAF) as well as type confusion. UAF and type confusion may involve writing to objects with the wrong size to corrupt an adjacent object, which would again result in the overwritten data/pointers being garbled when used later. Even when that is not the case, simply quarantining object base addresses (not object storage, just the base addresses; the same storage could be reused for an object with a different base) may suffice for mitigating UAF.

One challenge inherent in this approach is communicating the correct base address to the processor whenever memory is accessed. At the point of access, the base address can be passed through a register in a memory operand, and the memory operand may also accept one or more additional inputs that can be combined to compute the final effective address. Since the base address is provided in a separate register, it can be forwarded to the data encryption/decryption unit.
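The following C sketch models this separation over a simulated memory array: the effective address combines base, index, and scale, while the base alone is forwarded as the decryption tweak. The XOR "decryption" and all names are illustrative stand-ins, not the disclosed hardware.

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the data decryption unit: XOR with a base-derived pad. */
static uint64_t decrypt_word(uint64_t ciphertext, uint64_t base_tweak)
{
    return ciphertext ^ (base_tweak * 0x9E3779B97F4A7C15ull);
}

/* The effective element address is base + index*scale, but only the base is
 * forwarded as the tweak, so every element of an object decrypts under the
 * object's own base address. */
static uint64_t load_decrypted(const uint64_t mem[], uint64_t base,
                               uint64_t index, uint64_t scale)
{
    return decrypt_word(mem[base + index * scale], base);
}

int main(void)
{
    uint64_t mem[8] = {0};
    uint64_t base = 2;
    mem[base + 3] = 42 ^ (base * 0x9E3779B97F4A7C15ull);   /* "encrypted" */
    printf("%llu\n", (unsigned long long)load_decrypted(mem, base, 3, 1));
    return 0;
}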
For example, Intel Architecture already supports a "Scale-Index-Base" (SIB) format for memory operands, which permits the base address to be supplied in a distinct register. An example of this is shown in FIGURE 8, which illustrates an example scenario for passing base address information to a data decryption unit of a processor when memory is accessed. In the example shown, an encrypted data object 810 in memory 820 is accessed based on generating an effective address for the object 810 using the scale 802 (where scale = 1 in the example shown), index 804, and base 806 as operands. The base 806 is passed to the decryption unit 830 along with the encrypted data from the encrypted data object 810, and the decryption unit 830 decrypts the encrypted data to provide the plaintext data.

A related approach may be used in certain embodiments to convey the base address for a code region in embodiments where code encryption is utilized. To support code encryption, a new register (e.g., a "RIPBASE" register) could be defined in addition to an existing register (e.g., a RIP register, also sometimes referred to as an instruction pointer, which contains a pointer to the memory location of the next instruction to execute) to hold the base address for the current code region. The RIPBASE register may be initialized using a new branch instruction (e.g., JBASE for "Jump and change Base") that accepts two register operands: 1) the base address for the code region, and 2) the branch destination. The JBASE instruction may update both the RIPBASE register and the RIP register with those operands, respectively. In another embodiment, a single-operand variant of JBASE may set both the RIPBASE register and the RIP register to the same value, or existing indirect branch instructions could be redefined to operate in that manner. In other embodiments, the base address can be encoded directly in the address itself to provide maximum legacy software compatibility, as shown in FIGURE 10 and described further below. RIPBASE may be saved to the stack alongside RIP (the return address) during calls and restored from the stack as the return address is reloaded into RIP during returns.

Another challenge may include passing the base address through the program to all of the memory operands, since it is common for programs to pass pointers referencing individual array entries or fields within larger structures to subroutines. A compiler may be able to accomplish this by identifying cases in which a pointer to the interior of an object is passed to a subroutine and encrypting the interior portion with the base address of just that portion. This identification can be performed recursively, such that interior portions of previously-identified interior portions can be encrypted with the base address of that finest-grained portion.
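As a minimal illustration of these per-portion base addresses, the C sketch below (with hypothetical type names) takes the address of an array element and of a field nested within it; each such address could serve as the encryption tweak for exactly the extent that a subroutine receives.

#include <stdint.h>
#include <stdio.h>

struct inner { int value; };                  /* finest-grained portion */
struct outer { struct inner in; int other; }; /* array element          */

int main(void)
{
    struct outer arr[4];

    /* Candidate tweaks: whole array, one element, and one interior field.
     * A callee receiving only &arr[2].in can supply inner_base itself. */
    uintptr_t array_base = (uintptr_t)&arr[0];
    uintptr_t elem_base  = (uintptr_t)&arr[2];
    uintptr_t inner_base = (uintptr_t)&arr[2].in;

    printf("array %p element %p field %p\n",
           (void *)array_base, (void *)elem_base, (void *)inner_base);
    return 0;
}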
An example data layout with associated base addresses used for data binding is illustrated in FIGURE 9. In the example shown, an array 900 of structures 902 is depicted. Each structure 902 includes a structure 904, within which is stored an int 906. The base addresses 910 may be used for data binding so that pointers to any array element, and to any field within an array element (that may be passed as a distinct pointer to a subroutine), can be generated.

A compiler may be able to determine the necessary base addresses for data binding by analyzing the flow of pointers through the program and which structure elements are passed by reference to subroutines, either along particular flows or anywhere in the program if it is infeasible to statically determine all possible flows. However, it is not always possible to perform whole-program static analysis, e.g., due to the compiler being invoked separately for different source files with link-time optimization disabled. In that case, the compiler may need to conservatively assume that any structure that may be passed to a different compilation unit may have pointers generated to any of its fields and passed to subroutines. Where the compiler can statically verify that structures are not passed to other compilation units, it may be able to reduce the number of distinct data binding base addresses used. Some benefits of reducing the number of distinct data binding addresses may include helping to preserve available base addresses to be used for future allocations and making it more difficult for adversaries to guess a valid base address. It is important to avoid reusing base addresses for different allocations, since reusing base addresses may enable an adversary to exploit a use-after-free vulnerability. In some embodiments, the memory allocator maintains a list of quarantined base addresses that should not be reused, at least for a certain period of time.

While tagged plaintext pointers may provide adequate protection against certain vulnerabilities, in certain embodiments, the pointer may be encoded with certain context information, providing one or more additional security benefits. FIGURE 10 is a diagram of an example encoded pointer 1000 that may be used in embodiments of the present disclosure. In the example shown, the encoded pointer 1000 is a partially immutable pointer. Partially immutable encoded pointers may contain immutable regions and modifiable regions indicated by a size field within the pointer format, with the size field always being immutable. For instance, as shown, the encoded pointer 1000 includes a size metadata region 1002, an immutable base address region 1004, and a mutable region 1006. The size metadata indicates a number of mutable bits in the mutable region 1006, with the remainder of the bits of the pointer in the immutable region 1004 being immutable. However, in other embodiments, the size metadata indicates a number of immutable bits in the immutable region 1004, with the remainder of the bits of the pointer being in the mutable region 1006. In embodiments of the present disclosure, the encoded pointer 1000 has one or more tag bits associated therewith that indicate that the encoded pointer is at least partially immutable. In certain embodiments, the size metadata region 1002 is immutable (along with the immutable base address region 1004), as determined by a tag associated with the pointer 1000. In some embodiments, the information in the size metadata region 1002 may be incorporated into hidden inline metadata instead of being part of the pointer 1000 as shown.

The size metadata region 1002 may indicate the number of bits in the pointer 1000 that are mutable.
For example, a size value of 0 may indicate that the entire pointer address is immutable (i.e., all bits other than the size metadata 1002 bits are in the immutable region 1004, with no bits in the mutable region 1006). As other examples, a size value of 1 may indicate that only the least significant bit of the pointer is part of the mutable region 1006, and a size value of 2 may indicate that the two least significant bits of the pointer are part of the mutable region 1006. While FIGURE 10 only shows 5 bits in the size metadata region 1002, in some embodiments, a 64-bit address may include a 6-bit size field (e.g., to enable this for every bit position).

In some embodiments, other immutable fields may be included in the pointer 1000 as well. As one example, a version field (e.g., 4 bits) may be included in the pointer 1000 as additional pointer metadata, where the version may be matched with the tag metadata (e.g., 330 in FIGURE 3) corresponding to a data access (e.g., 332 in FIGURE 3). In such embodiments, the pointer version field value is expected to match the corresponding metadata value (e.g., in tag metadata 330) for the data access (e.g., 332) referenced by the pointer. This may be used to detect when a pointer to an allocation (from a memory allocator such as malloc) that was previously freed (e.g., via free in glibc) is used to access data newly allocated at the same location (where the tag metadata 330 would be updated with a new version that no longer matches the freed immutable pointer), which may be referred to as a use-after-free attack. In this way, tagging may extend to indicating mutable and versioned data as well as indicating immutable pointers.

When data is encrypted based on a pointer such as pointer 1000 of FIGURE 10, pointers stored within that data are themselves encrypted along with the rest of the data. For example, if there is a linked list structure with a next pointer in it, the current pointer 1000 will decrypt the linked list node structure including the next pointer (struct Node {int data; struct Node* next;}). The next pointer will be tagged as an immutable pointer (e.g., indicated via the hidden tag metadata 330) while the data portion will be marked as data/mutable. When the processor decrypts the node using the current pointer, it will also decrypt the next pointer based on the base address and size of the current pointer 1000. Then it will have the next pointer, which the processor will use to decrypt the next node, and so on.

Further, in some embodiments, the entire pointer may be encrypted. Encryption of entire pointers may be tweaked by the type of the pointers in certain implementations, which may obviate the need to have metadata encoded into the pointer. One benefit of cryptographically binding pointers to particular types is that no extra metadata storage is needed to distinguish different types of pointers. Storage of a single tag bit that distinguishes pointers from non-pointer data may accordingly suffice in certain instances.
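The sketch below illustrates type-tweaked encryption of an entire pointer. The XOR pad is a deliberately weak stand-in for a real block cipher (e.g., PRINCE), and the type identifiers are hypothetical; the point is only that a pointer sealed as one type decrypts to garbage when unsealed as another.

#include <stdint.h>
#include <stdio.h>

enum ptr_type { DATA_PTR = 1, FWD_CODE_PTR = 2, RET_PTR = 3 };

/* Key- and type-derived pad; XOR with it models tweakable encryption. */
static uint64_t pad(uint64_t key, uint64_t type)
{
    uint64_t z = key ^ (type * 0x9E3779B97F4A7C15ull);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
    return z ^ (z >> 31);
}

int main(void)
{
    uint64_t key = 0xFEEDFACEull, p = 0x7FFC0000ull;
    uint64_t sealed = p ^ pad(key, DATA_PTR);       /* encrypt as data pointer */

    printf("as data ptr: 0x%llx\n",                 /* matches p */
           (unsigned long long)(sealed ^ pad(key, DATA_PTR)));
    printf("as ret ptr:  0x%llx\n",                 /* garbled   */
           (unsigned long long)(sealed ^ pad(key, RET_PTR)));
    return 0;
}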
FIGURE 11 is a diagram of an example process 1100 of encrypting and decrypting pointers according to such embodiments. In the example shown, an instruction 1102 is accessed. The instruction 1102 may be an instruction to load or store a pointer, initialize a pointer in a register, or use a pointer value from a register in a memory operand. Based on the instruction 1102, certain metadata for the pointer is obtained. For instance, in the example shown, a pointer type identifier (ID) 1104 is derived from the instruction type (e.g., a RET instruction expects an "Unclonable reverse code pointer after store" pointer type), an instruction operand, or a prefix indicating a pointer type. In addition, in the example shown, a pointer storage location 1106 is determined if the pointer is locked to a storage location (e.g., the stack pointer value for return addresses). In some embodiments, other metadata 1108 is also obtained. The metadata 1104, 1106, 1108 are provided as a tweak 1110 to an encryption/decryption unit 1114 along with a pointer encryption key 1116 to either encrypt or decrypt a pointer. For instance, the encryption/decryption unit 1114 may use the key 1116 and tweak 1110 to encrypt a plaintext (decrypted) pointer 1118 to yield an encrypted pointer 1112. Conversely, the encryption/decryption unit 1114 may use the key 1116 and tweak 1110 to decrypt an encrypted pointer 1112 to yield the plaintext pointer 1118. Pointer encryption in this manner may be used to distinguish between different types of pointers and to bind them to storage locations. In some embodiments, part or all of the metadata 1104, 1106, 1108 may be used to select the key 1116 from multiple possible keys instead of being supplied as a tweak. For example, a separate key may be used for each pointer type, and the pointer type ID 1104 may be used to select the key 1116 from a set of different keys. Remaining metadata that is not used to select the key 1116 may still be supplied as part of the tweak 1110.

In some embodiments, program errors and exploits may be detected through the use of specific instructions for accessing pointers. For example, instructions can be defined to manipulate tagged pointers (e.g., to change the offset of the pointer within the object to which it points, to load it from memory into a register, to store it from a register to memory, etc.). If an ordinary data write overwrites a pointer value, the processor can either clear the tag bit in memory (as described above) or generate a fault, depending on how the processor has been configured. One benefit of generating a fault is that it may ease debugging of program errors that inadvertently overwrite pointers. Conversely, some programs may wish to conveniently overwrite pointers without needing to use a separate instruction to clear the tag bit first. A way to reduce the overhead and complexity of using a separate instruction to clear tag bits is to define an instruction that clears all tag bits in a range of memory.

Since tag bits propagate through registers and back into memory when a register is stored, an adversary may attempt to forge a pointer by modifying a register containing a pointer and then writing it out to memory to be used later. The adversary may only be able to find a code gadget to write it back that is intended to write a data field. Thus, such an exploit may be mitigated by only permitting pointers to be written to memory using designated instructions, e.g., STPTR for "Store Pointer". Compilers are aware of program locations in which pointers are intended to be written to memory, so they can use STPTR instructions at those locations. Attempting to store a register tagged as a pointer using an instruction other than STPTR would generate a fault. Alternatively, an instruction prefix could be defined to indicate whether an instruction that has already been defined is permitted to store a pointer.
Furthermore, attempting to store a register not tagged as a pointer using a STPTR instruction (or a previously-defined instruction with a prefix indicating that it is permitted to store a pointer) may generate a fault, to prevent an adversary from constructing a gadget that uses a STPTR instruction to tag a non-pointer data value in a register as a pointer as it is being written to memory. Alternatively, software may wish to use a previously-defined instruction to overwrite a pointer in memory, but to generate a fault if the source register is not tagged as containing a pointer. Similarly, software may wish for a fault to be generated if a previously-defined instruction is used to attempt to overwrite non-pointer data in memory with a source register that is tagged as containing a pointer, or if a STPTR instruction is used to attempt to overwrite non-pointer data in memory. A prefix, additional instruction operand, or instruction variant may be defined to indicate whether a fault should be generated in the aforementioned circumstances.

In some instances, instructions may modify data in place in memory (e.g., by adding some value) without first loading it into a register, and an adversary may find a gadget that can be redirected to modify a pointer instead of data. This exploit attempt can be blocked by requiring that a prefix be supplied to modify a pointer, or that a specific new type of instruction (e.g., PTRADD) be used. The compiler can determine which instructions need to modify pointers, so it can select the appropriate instruction or prefix. If an instruction that is not authorized in such a way to modify a pointer is applied to a pointer, a fault may be generated or the tag bit in the register may be cleared.

In some instances, code sequences may copy a block of data larger than a single word between two memory locations. An adversary may attempt to use such a code sequence to copy pointers as part of an exploit attempt. This portion of the exploit can be disabled by defining a new copy instruction, e.g., MOVSPTR for "Move String containing Pointers", or an instruction prefix to indicate whether pointers are allowed to be copied by the instruction. If an instruction that is not authorized to copy pointers encounters a source word with a set tag bit, it may generate a fault or clear the tag bits in the destination data region. The fault generation or tag clearing behavior may be selected using a prefix, additional instruction operand, or instruction variant.
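A software model of these copy semantics is sketched below in C. The shadow tag bit, the fault code, and both routines are hypothetical stand-ins: in hardware the tag would live in hidden inline metadata and the checks would be performed by the copy instruction itself.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A tagged word: data plus a shadow bit marking it as a pointer. */
typedef struct { uint64_t word; bool is_pointer; } tagged_t;

/* Ordinary copy (REP MOVS-like): not authorized to carry pointers, so a
 * tagged source word either faults or is detagged at the destination. */
static int copy_data_only(tagged_t *dst, const tagged_t *src, size_t n,
                          bool fault_on_tag)
{
    for (size_t i = 0; i < n; i++) {
        if (src[i].is_pointer && fault_on_tag)
            return -1;                  /* models the fault case */
        dst[i].word = src[i].word;
        dst[i].is_pointer = false;      /* models tag clearing   */
    }
    return 0;
}

/* MOVSPTR-like copy: authorized, so tags travel with the data. */
static void copy_with_pointers(tagged_t *dst, const tagged_t *src, size_t n)
{
    memcpy(dst, src, n * sizeof *dst);
}

int main(void)
{
    tagged_t src[2] = { {0x1234, false}, {0x7FFC0000, true} };
    tagged_t dst[2] = {{0}};
    printf("fault: %d\n", copy_data_only(dst, src, 2, true));   /* -1 */
    copy_with_pointers(dst, src, 2);
    printf("tag preserved: %d\n", (int)dst[1].is_pointer);      /* 1  */
    return 0;
}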
In some cases, a single tag bit may be insufficient to distinguish between multiple possible types of word values. For example, consider the following set of word types (which are distinct from programming language datatypes): (1) Unprotected data: Data that is not particularly sensitive; (2) Protected data: Sensitive data that should only be accessible from a small portion of the program; (3) Data pointer; (4) Clonable forward code pointer: A pointer that can be used in indirect branches and that can be copied freely within the program; (5) Unclonable forward code pointer after store: A pointer that can be used in indirect branches, but that cannot be copied to a different location in memory; and (6) Unclonable reverse code pointer after store: A pointer that can be used as a return destination, and that cannot be copied to a different location in memory.

Additional word types may be needed for unclonable forward and reverse code pointers before store, but those might not need to be represented in memory, rather only in registers or implicitly in call instructions as a return address is being generated. Encrypted words will be decrypted while in registers in certain embodiments, and a set of three tag bits in each register may indicate the type of the word in the register. If 32-bit words are used to store pointers, each word may be expanded to 64 bits while in a register. An instruction that initializes a register with a pointer value may accept the full range of representable pointer types as indicated by an additional operand, a prefix, or an instruction variant. An instruction that initializes a memory location with a pointer value may accept just the range of pointer types that can be represented in memory, as indicated by an additional operand, a prefix, or an instruction variant.

In certain embodiments, immediately upon decryption, an address canonicality check may be performed. Some architectures and modes perform a canonicality check on linear addresses during memory operations to verify that an appropriate number of upper address bits all have the same value. Incorrect decryption, e.g., decrypting a pointer as though it is a different pointer type than it actually is, may cause a canonicality check to fail in such implementations. However, arithmetic operations performed on the corrupted plaintext representation of the pointer may cause it to pass canonicality checks performed while using the pointer later to access memory, which may be undesirable. Performing an extra canonicality check immediately after decryption may eliminate that possibility. In alternative embodiments, e.g., those that permit software to store metadata in upper pointer bits that software removes prior to using pointers, canonicality checks may be delayed until the pointers are used.
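A canonicality check of this kind is a one-liner; the C sketch below assumes a 48-bit implemented address space (so bits 63:47 must all match), which is an illustrative choice rather than anything mandated by the disclosure.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Canonical iff bits 63 down to 47 are all zeros or all ones; an incorrectly
 * decrypted (garbled) pointer fails this with high probability. */
static bool is_canonical(uint64_t addr)
{
    int64_t upper = (int64_t)addr >> 47;   /* arithmetic shift */
    return upper == 0 || upper == -1;
}

int main(void)
{
    printf("%d %d\n",
           is_canonical(0x00007FFFFFFFFFFFull),    /* 1: canonical     */
           is_canonical(0xDEAD00007FFF0000ull));   /* 0: non-canonical */
    return 0;
}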
In certain embodiments, to reduce register tag bit storage, pointers may be kept in encrypted form in registers associated with fewer tag bits than are needed to represent all possible pointer types, and decrypted upon use. It will be understood that if pointer encryption is bound to the pointer's storage location and pointer decryption is delayed until the time that the pointer is used in a memory access, then the storage location and any other metadata incorporated into the tweak may need to be provided at the time the pointer is used. The pointer storage location is readily available for return instructions that implicitly load return addresses from the stack (e.g., the RET instruction) and indirect forward branches that load their destinations from memory operands (e.g., the JMP and CALL instructions) and then immediately use the addresses, but it may not be readily available for other types of instructions.

In some embodiments, word types to be represented in stored words can be bound to those words using tweakable encryption, supplying a different tweak value for each type as shown in Table 1. This may reduce the needed metadata storage. In the example shown in Table 1, unclonability is enforced by binding pointers to their storage locations via the tweak.

Table 1: Tag bits and tweak values for indicating different pointer types

Tag type                                    | Tag bit | Tweak value
Unprotected data                            | 0       | N/A - not encrypted
Protected data                              | 0       | None
Data pointer                                | 1       | 1
Clonable forward code pointer               | 1       | 2
Unclonable forward code pointer after store | 1       | {3, storage location}
Unclonable reverse code pointer after store | 1       | {4, storage location}

In certain embodiments, when a register is stored, the tag type in the register determines whether and how the register contents are encrypted prior to being written to memory, as well as how the in-memory tag bit is set. Specific load instructions can be utilized for each of the word types besides unprotected data, and the compiler may perform static analysis to determine which type to use at every point in the program. The variant of the load instruction would indicate the appropriate tweak value and settings for the loaded register's tag type bits. Previously defined instructions may implicitly access certain types of pointers, and that would indicate the appropriate tweak value. For example, call instructions (e.g., CALL) implicitly store an "unclonable reverse code pointer after store" pointer to the top of the stack, and return instructions (e.g., RET) implicitly load a pointer with that type from the top of the stack.

In some embodiments, a memory copy may simply preserve the tag bit and copy the mix of plaintext and ciphertext as-is. The MOVSPTR instruction and other related instructions described above may still be used for this purpose when copying memory containing pointers. Memory consisting entirely of unprotected and protected data would use standard memory move instructions (e.g., REP MOVS).

This scheme probabilistically protects against overwriting a pointer with the wrong type of pointer, and against cloning an unclonable pointer, since the mismatched tweak would garble the address. One potential weakness is that replay of unclonable pointers (e.g., return addresses) is possible at the same storage location. Additional context that identifies a particular stack frame may be incorporated into the tweak to mitigate this, in certain embodiments.
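A C sketch of the Table 1 mapping follows. The tweak encoding (folding a small type constant together with the storage location) and all identifiers are hypothetical; the table itself only specifies the pairs, not their bit-level layout.

#include <stdint.h>
#include <stdio.h>

enum word_type {
    UNPROTECTED_DATA, PROTECTED_DATA, DATA_POINTER,
    CLONABLE_FWD_CODE_PTR, UNCLONABLE_FWD_CODE_PTR, UNCLONABLE_RET_PTR
};

/* Returns the in-memory tag bit for a word type and fills in its tweak.
 * Unclonable types fold in the storage location, so a word copied to a
 * different address decrypts to garbage. */
static int tag_and_tweak(enum word_type t, uint64_t storage_loc, uint64_t *tweak)
{
    switch (t) {
    case UNPROTECTED_DATA:        *tweak = 0; return 0;   /* not encrypted */
    case PROTECTED_DATA:          *tweak = 0; return 0;   /* empty tweak   */
    case DATA_POINTER:            *tweak = 1; return 1;
    case CLONABLE_FWD_CODE_PTR:   *tweak = 2; return 1;
    case UNCLONABLE_FWD_CODE_PTR: *tweak = 3 | (storage_loc << 3); return 1;
    default:                      *tweak = 4 | (storage_loc << 3); return 1;
    }
}

int main(void)
{
    uint64_t tw;
    int bit = tag_and_tweak(UNCLONABLE_RET_PTR, 0x7FFCD000u, &tw);
    printf("tag=%d tweak=0x%llx\n", bit, (unsigned long long)tw);
    return 0;
}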
If physical tag storage is allocated to support 32-bit words with one tag bit per word, but 64-bit words are used, then there will effectively be two tag bits per word. That can be used to deterministically distinguish some of the word types without relying on cryptography. Table 2 illustrates an example tag assignment scheme in such scenarios.

Table 2: Tag bits and tweak values for indicating different pointer types

Tag type                                    | Tag bits | Tweak value
Unprotected data                            | 00       | N/A - not encrypted
Protected data                              | 00       | None
Data pointer                                | 01       | N/A - not encrypted
Clonable forward code pointer               | 10       | N/A - not encrypted
Unclonable forward code pointer after store | 11       | {1, storage location}
Unclonable reverse code pointer after store | 11       | {2, storage location}

Tag bits may traditionally be stored in dedicated memory alongside each cacheline of a cache. For example, to store a single tag bit for each 32-bit word of memory, the cache arrangement may be as shown in the cache 120 of FIGURE 1. However, for applications with sparse pointer storage (e.g., applications that store many long stretches of pure data with no interleaved pointers), tag bit storage in the cache can be wasteful of silicon area. Memory can also be wasted when tags are stored for memory regions that do not need to be tagged because they do not contain pointers (e.g., code regions that are free of pointers). Aspects of the present disclosure may minimize such wastage in one of the following ways: (1) use of hidden inline metadata, and (2) configuring a cache in a particular way.

Hidden inline metadata may be used in particular embodiments to store tag bits in a repurposed portion of each tagged cacheline (only for cachelines containing tag data) so that memory regions that do not contain tag data do not incur any cache area overhead. Examples of hidden inline metadata storage are shown in cacheline 200 of FIGURE 2 and cacheline 340 of FIGURE 3. The presence of tag data in a cacheline may be determined in such embodiments by consulting a specification of which memory pages or regions may contain tags. For example, extended page tables could be used to mark code pages as execute-only and implicitly consider those to be free of tag data. As another example, a new bit may be defined in extended page table entries to explicitly indicate which pages may contain pointers. This may reduce wastage compared to implicitly treating just execute-only code pages as untagged, since there may be many data pages that do not contain any set tag bits. Since a single physical page may be aliased in one or more linear address spaces, an attempt to access a page as both tagged and untagged should generate a fault.

In particular embodiments, only certain cache ways are configured to contain tag information. The tag information may be stored in the conventional way using dedicated memory alongside each cacheline, such that cachelines with no set tags can be stored in cache ways with no tag storage. An example cache organization like this is shown in FIGURE 12, which illustrates an example cache arrangement with particular ways for storing tagged data. In the example shown, the cache 1200 includes a number of sets of ways. Each way is configured to store data, but only ways 1210 (i.e., Way 0 and Way 1, also referred to as "tagged ways") are configured to store tags alongside their associated data, while ways 1220 (i.e., Way 2 and Way 3, also referred to as "untagged ways") may only store data without associated tags. For instance, Way 0 is configured to store tags 1203 alongside data 1202 and Way 1 is configured to store tags 1205 alongside data 1204, while Way 2 is configured to store only data 1206 and Way 3 is configured to store only data 1208.
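In C, such an arrangement might be modeled as below; the geometry (four ways per set, two of them tagged, 16 words per line) mirrors FIGURE 12 but is otherwise hypothetical, and the unused tag field in untagged ways stands in for silicon that simply would not exist there.

#include <stdbool.h>
#include <stdint.h>

#define WAYS_PER_SET    4
#define TAGGED_WAYS     2   /* Way 0 and Way 1 carry tag storage */
#define WORDS_PER_LINE 16   /* 64-byte line of 32-bit words      */

/* One way of one set. tag_bits (one bit per word) is meaningful only in the
 * first TAGGED_WAYS ways; in hardware the other ways omit it entirely. */
struct way {
    bool     valid;
    uint64_t line_addr;
    uint32_t data[WORDS_PER_LINE];
    uint16_t tag_bits;
};

struct cache_set { struct way ways[WAYS_PER_SET]; };

int main(void)
{
    struct cache_set set = {0};
    set.ways[0].tag_bits = 1u << 5;   /* word 5 of Way 0 holds a pointer */
    return 0;
}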
By configuring only certain ways to store tagged data in this manner, the overall silicon area of the cache may be utilized more efficiently.

Relatedly, it may be wasteful to allocate tag bit storage for untagged memory while it is not in the cache. The approaches described above for implicitly treating execute-only code pages as untagged, or including an explicit page attribute to indicate whether the page is tagged, can be used to determine whether to allocate tag storage for memory. When combined with hidden inline metadata, this approach can save memory by avoiding the need to allocate storage for inline metadata in untagged pages. If untagged pages can be coalesced to be stored in contiguous physical memory, then the corresponding hidden inline metadata regions can be reclaimed to store data. This can further be combined with the cache organization described above with respect to FIGURE 12, where a cache contains a mix of tagged and untagged ways, to indicate which ways can be used for memory from a particular page. The following illustrates how an attempt by software to store a pointer to a particular cacheline may be handled in certain embodiments, depending on how that cacheline is marked and where it is stored, for a cache organized as a mix of tagged and untagged ways. When hidden inline metadata is used to store tag bits, tag storage may be allocated for every cacheline in a tagged region to preserve expected data alignment.

FIGURE 13 is a flow diagram of an example process 1300 of storing a cacheline in a cache according to certain embodiments. The process 1300 may be implemented to store data (including tagged data) to a cache implemented similarly to the cache 1200. At 1302, a way of the cache is selected to hold an incoming cacheline from memory. At 1304, it is determined whether the cacheline is from a page or region of memory marked as tagged (i.e., whether the page/region contains tags and their associated data). If so, then the process proceeds to 1308, which is described below.

If the cacheline is not from a page or region of memory marked as tagged, then it is determined at 1306 whether there are any untagged ways available in the applicable set of the cache. If so, then the cacheline is stored in one of the available untagged ways of the cache (e.g., ways 1220 of FIGURE 12) at 1307.

If there are no untagged ways available in the applicable set of the cache, then it is determined at 1308 whether there are any tagged ways available in the applicable set. If so, then the cacheline is stored into one of the available tagged ways of the cache (e.g., ways 1210 of FIGURE 12) at 1309.

If no suitable way is available in the applicable set (no available tagged way for a cacheline from a tagged page, and neither type of available way for other cachelines), then a cacheline is evicted from the cache and the incoming cacheline is stored in the freed way at 1310. The cacheline to be evicted may be preferentially selected to either have tag storage or lack tag storage, depending on whether the incoming cacheline requires tag storage. Independent factors, such as the last time the cacheline was accessed, may also influence the selection of the cacheline to evict.
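A compact C rendering of this selection logic follows. It is a minimal sketch: the two-field way structure, the fixed way counts, and the way-0/way-2 victim choice are all simplifications of what 1310 would actually weigh.

#include <stdbool.h>
#include <stdio.h>

#define WAYS_PER_SET 4
#define TAGGED_WAYS  2          /* ways [0, TAGGED_WAYS) have tag storage */

struct way { bool valid; };
struct cache_set { struct way ways[WAYS_PER_SET]; };

static int choose_way(struct cache_set *s, bool line_is_tagged)
{
    int w = -1;

    /* 1304/1306/1307: untagged lines prefer a free untagged way. */
    if (!line_is_tagged)
        for (int i = TAGGED_WAYS; i < WAYS_PER_SET && w < 0; i++)
            if (!s->ways[i].valid) w = i;

    /* 1308/1309: tagged lines (or untagged lines with no free untagged
     * way) take a free tagged way. */
    for (int i = 0; i < TAGGED_WAYS && w < 0; i++)
        if (!s->ways[i].valid) w = i;

    /* 1310: nothing free; evict, preferring a victim whose tag storage
     * matches the incoming line's needs (recency would also matter). */
    if (w < 0)
        w = line_is_tagged ? 0 : TAGGED_WAYS;

    s->ways[w].valid = true;
    return w;
}

int main(void)
{
    struct cache_set s = {{{false}}};
    printf("untagged line -> way %d\n", choose_way(&s, false));  /* way 2 */
    printf("tagged line   -> way %d\n", choose_way(&s, true));   /* way 0 */
    return 0;
}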
FIGURE 14 is a flow diagram of another example process 1400 of storing a cacheline in a cache according to certain embodiments. At 1402, a software instruction attempts to store a pointer to a cache. At 1404, it is determined whether the memory store for the pointer is within a region marked as potentially containing tagged pointers. If it is not within such a region, then a fault is generated at 1405. However, if the memory store is within such a region, then the cacheline is stored in the cache at 1406. At 1408, it is determined whether the cacheline is in a tagged way of the cache. If so, then the tag bits of the tagged way are updated at 1409 to indicate that the stored word is a pointer. If the cacheline is not in a tagged way of the cache, then the cacheline is moved to a tagged way (evicting a cacheline if necessary) at 1410 before the tag bits of the tagged way are updated at 1409 to indicate that the stored word is a pointer.

FIGURE 15 is a flow diagram of an example process 1500 of accessing encrypted data based on a tagged pointer. At 1502, a pointer to a memory location is accessed along with a tag associated with the pointer. The tag indicates whether the pointer is at least partially immutable; the pointer could be fully or partially immutable. At 1504, it is determined whether the tag indicates that the pointer is at least partially immutable (e.g., that the tag is set). If not, then the access to the memory location is restricted.

However, if the tag indicates that the pointer is at least partially immutable, then the memory address of the memory location is obtained at 1506. Obtaining the memory address may include obtaining the address directly from the pointer, where the pointer is in a plaintext format. In other embodiments, obtaining the memory address may include decoding the pointer, where the pointer is encoded. In some cases, the pointer may be cryptographically encoded as described herein, and decoding the pointer may include cryptographically decoding the pointer (e.g., as shown in FIGURE 11 and described above).

At 1508, the memory address obtained at 1506 is used to access encrypted data stored at the memory location, and at 1510, the encrypted data is decrypted using a key and a tweak. The tweak is based, at least partially, on the pointer itself.

The example processes described above may include additional or different operations, and the operations may be performed in the order shown or in another order. In some cases, one or more of the operations shown in the flow diagrams are implemented as processes that include multiple operations, sub-processes, or other types of routines. In some cases, operations can be combined, performed in another order, performed in parallel, iterated, or otherwise repeated or performed in another manner.
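The end-to-end flow of process 1500 can be summarized in a few lines of C. Everything below is a toy model under stated assumptions: memory is a small array indexed by the pointer, the tag is an explicit parameter rather than hidden metadata, and the keystream XOR stands in for the real cipher.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy keystream bound to the key and the pointer-derived tweak. */
static uint64_t keystream(uint64_t key, uint64_t ptr_tweak)
{
    uint64_t z = key ^ (ptr_tweak * 0x9E3779B97F4A7C15ull);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
    return z ^ (z >> 31);
}

/* 1502-1510: check the tag, derive the address (plaintext pointers are used
 * directly; encoded ones would be decoded here), then decrypt with a
 * pointer-based tweak. Returns nonzero when access is restricted. */
static int load_via_tagged_pointer(uint64_t ptr, bool tag_set,
                                   const uint64_t mem[], uint64_t key,
                                   uint64_t *out)
{
    if (!tag_set)
        return -1;                              /* 1504: restricted     */
    uint64_t addr = ptr;                        /* 1506: plaintext case */
    *out = mem[addr] ^ keystream(key, ptr);     /* 1508/1510            */
    return 0;
}

int main(void)
{
    uint64_t mem[8] = {0}, key = 0xC0FFEEull, v;
    mem[5] = 1234 ^ keystream(key, 5);          /* store encrypted value */
    if (load_via_tagged_pointer(5, true, mem, key, &v) == 0)
        printf("%llu\n", (unsigned long long)v);  /* 1234 */
    return 0;
}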
Example Cryptographic Computing Embodiments

Current computing techniques (e.g., page tables for process/kernel separation, virtual machine managers, managed runtimes, etc.) have used architecture and metadata to provide data protection. For example, in previous solutions, a processor would use lookup tables to encode policy or data about the data for ownership, memory size, location, type, version, etc. Dynamically storing and loading metadata requires additional storage (memory overhead) and impacts performance, particularly for fine-grain metadata (such as function as a service (FaaS) workloads or object bounds information).

Cryptographic computing can resolve many of the aforementioned issues (and more). Cryptographic computing may make redundant the legacy modes of process separation, user space, and kernel with a fundamentally new fine-grain protection model. With cryptographic computing, protections are cryptographic, with processors and accelerators alike utilizing secret keys and ciphers to provide access control and separation at increasingly finer granularities. Further, instead of virtual machine and process separation in current systems, with cryptographic computing, individual functions may become the boundary, allowing address spaces to be shared via pointers that are encrypted, with the encrypted pointers and keys providing controlled access down to individual data objects.

Cryptographic computing embodiments disclosed herein may leverage the concept of a cryptographic addressing layer, where the processor decrypts software-allocated memory addresses (linear/virtual address space, sometimes referred to as "pointers") based on implicit and explicit metadata (e.g., context information, a cryptographic context identifier, etc.) and/or a slice of the memory address itself (e.g., as a tweak to a tweakable block cipher, such as an XOR-encrypt-XOR-based tweaked-codebook mode with ciphertext stealing (XTS) cipher). As used herein, a "tweak" may refer to, among other things, an extra input to a block cipher, in addition to the usual plaintext or ciphertext input and the key (e.g., secret key 1616(1)). A tweak comprises one or more bits that represent a value. In one or more embodiments, a tweak may compose all or part of an initialization vector (IV) for a block cipher. When decryption of an address is performed, if the information used to create the tweak (e.g., implicit and explicit metadata, plaintext address slice of the memory address, etc.) corresponds to the original allocation of the memory address by a memory allocator (e.g., software allocation method), then the processor can correctly decrypt the address. Otherwise, a random address will result, causing a fault that is caught by the processor. These cryptographic addresses (or address slices) may be further used by the processor as a tweak to the data encryption cipher used to encrypt/decrypt the data they refer to (data referenced by the cryptographically encoded pointer), creating a cryptographic binding between the cryptographic addressing layer and data/code encryption. It should be noted that a tweak that is used as input to a block cipher to encrypt/decrypt a memory address is also referred to herein as an "address tweak". Similarly, a tweak that is used as input to a block cipher to encrypt/decrypt data is also referred to herein as a "data tweak".

By cryptographically encoding metadata into addresses and their referenced data, cryptographic computing may reduce or remove the need for extra separate memory/storage to provide policy and context information/metadata. This can save up to billions of dollars in the computing industry (e.g., in dynamic random access memory (DRAM) expenses) due to the reduction of metadata alone. Customers can reap these savings in memory costs while still getting the security, safety, and error-free functionality they want with cryptographic computing. By allowing safe speculation, the fundamentally cryptographic separation policies of cryptographic computing may allow the processor to speculate freely and provide increased performance.
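To make the address-tweak mechanism concrete, the C sketch below encrypts only the upper slice of a 64-bit pointer, tweaked by a context value and the plaintext lower slice. The 32/32 split, the XOR pad, and every identifier are illustrative assumptions; decrypting with the wrong context yields a garbled upper slice that would then fault (e.g., failing a canonicality check).

#include <stdint.h>
#include <stdio.h>

/* Toy pad standing in for a small tweakable block cipher. */
static uint32_t pad32(uint64_t key, uint64_t tweak)
{
    uint64_t z = key ^ (tweak * 0x9E3779B97F4A7C15ull);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
    return (uint32_t)(z ^ (z >> 31));
}

/* Encrypt/decrypt the upper 32-bit slice; the lower slice stays plaintext
 * and is folded into the tweak together with the context metadata. */
static uint64_t crypt_address(uint64_t ptr, uint64_t key, uint64_t context)
{
    uint32_t lower = (uint32_t)ptr;
    uint32_t upper = (uint32_t)(ptr >> 32) ^ pad32(key, context ^ lower);
    return ((uint64_t)upper << 32) | lower;
}

int main(void)
{
    uint64_t key = 0xA5A5ull, ctx = 7, p = 0x00007F1234560000ull;
    uint64_t enc = crypt_address(p, key, ctx);
    printf("right ctx: 0x%llx\n",
           (unsigned long long)crypt_address(enc, key, ctx));  /* restores p */
    printf("wrong ctx: 0x%llx\n",
           (unsigned long long)crypt_address(enc, key, 8));    /* garbled    */
    return 0;
}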
In cryptographic computing, where data security is fundamentally linked to cryptographic memory addressing, processing and fine-grain cryptographic access controls to data are important. Cryptographic computing transforms all compute vectors from the CPU to GPU, accelerators to FPGAs, etc. With cryptographic computing, protections may be cryptographic, where processors and accelerators alike utilize secret keys and ciphers to provide access control and separation at increasingly fine granularities. Further, instead of virtual machine and process separation, individual functions may become the boundary, address spaces are shared while pointers are encrypted, with keys providing controlled access down to individual data objects. Capabilities may thus become entwined in the cryptographic operations to provide granular access control to data objects while preventing buffer overflows, type confusion, and temporal (e.g., use-after-free) vulnerabilities at every level of the system. Cryptographic code may execute natively, safely, and without the need for interpreters or managed runtimes to provide memory and type safety. Memory may move from isolated domains and containers to globally shared memory models where data is accessible based on cryptographic access control mechanisms, and gone are difficult-to-scale distributed permissions, paging, and associated control structures. Even files may be safely stored directly in memory (e.g., in non-volatile memory modules, such as non-volatile dual-inline memory modules (NVDIMMs)), being individually encrypted, cryptographically sized, and incorruptible from software errors. This may have implications for functional safety, reliability, and multi-tenancy, potentially allowing for more speculation for improving processing performance.

Cryptography continues to become faster and lighter. For instance, the Advanced Encryption Standard (AES) has been the mainstay for data encryption for decades, using a 128-bit block cipher. Meanwhile, memory addressing is typically 64 bits today. Although embodiments herein may be illustrated and explained with reference to 64-bit memory addressing for 64-bit computers, the disclosed embodiments are not intended to be so limited and can easily be adapted to accommodate 32-bit, 128-bit, or any other available bit sizes for pointers. Likewise, embodiments herein may further be adapted to accommodate various sizes of a block cipher (e.g., 64-bit, 48-bit, 32-bit, 16-bit, etc., using Simon, Speck, PRINCE, or any other block cipher).

Lightweight ciphers suitable for pointer encryption have emerged recently. The PRINCE cipher, for example, can be implemented in 3 clocks, requiring as little as 799 µm² of area in the 10 nm process, providing half the latency of AES in a tenth the silicon area. Cryptographic computing may utilize these new ciphers, as well as others, introducing novel computer architecture concepts including, but not limited to: (i) cryptographic addressing, i.e., the encryption of data pointers at the processor using, as tweaks, contextual information about the referenced data (e.g., metadata embedded in the pointer and/or external metadata), a slice of the address itself, or any suitable combination thereof; and (ii) encryption of the data itself at the core, using cryptographically encoded pointers or portions thereof, non-cryptographically encoded pointers or portion(s) thereof, contextual information about the referenced data, or any suitable combination thereof as tweaks for the data encryption. A variety of encryption modes that are tweakable can be used for this purpose of including metadata (e.g., counter mode (CTR) and XOR-encrypt-XOR (XEX)-based tweaked-codebook mode with ciphertext stealing (XTS)). In addition to encryption providing data confidentiality, its implicit integrity may allow the processor to determine if the data is being properly decrypted using the correct keystream and tweak.
In some block cipher encryption modes, the block cipher creates a keystream, which is then combined (e.g., using an XOR operation) with an input block to produce the encrypted or decrypted block. In some cipher modes, the keystream output is fed into the next block cipher invocation to perform encryption or decryption.

The "Metadata Wall" may refer to the problem of additionally fetching metadata about memory operations, such as access control, object type/size, and version. Today's computer architecture requires the processor to look up metadata, or data about data, to determine if memory accesses are allowed. The additional memory accesses for metadata can impact performance, additional storage for the metadata is required, and the metadata itself needs to be protected in order to provide security. Some current solutions that add metadata in the form of bounds tables that the hardware would use to detect buffer overflows have been shown to have up to 4X performance impact with 400% memory overheads for some workloads. Similarly, shadow stack metadata enables Control-flow Enforcement Technology, memory tagging uses metadata for versioning, and capabilities add metadata for verifying data types. Memory tagging is not suitable for mitigating type confusion or protecting against use of uninitialized variables. In addition, although the overhead of memory tagging may be reduced using error-correcting code bits, it can nevertheless require additional devices, which can increase costs. Capability machines may also use fat pointers to embed security metadata in-line with pointers, imposing substantial memory overheads (e.g., 25% in pointer-heavy applications) due to doubling the pointer size.

In contrast, cryptographic computing may provide metadata codified as tweaks to cryptographic addressing and data, cryptographic addressing and code, or a combination thereof, removing potential performance and memory overheads caused by the inclusion of such metadata. The resulting ciphertext may need no additional protections beyond the secret key, allowing reuse of the same memory as the data. As further discussed herein, cryptographic computing may solve a myriad of vulnerabilities with the same unified mechanism, using computation instead of memory.

FIGURE 16 is a simplified block diagram of an example computing device 1600 configured with secure memory access logic according to at least one embodiment of the present disclosure. In the example shown, the computing device 1600 includes a processor 1602 having a set of secure memory access logic 1650 and a number of registers 1612. The secure memory access logic 1650 utilizes metadata about an indirect address 1614, which is encoded into unused bits of the indirect address 1614 (e.g., non-canonical bits of a 64-bit address, or a range of addresses set aside, e.g., by the operating system, such that the corresponding high order bits of the address range may be used to store the metadata), in order to secure and/or provide access control to memory locations pointed to by the indirect address 1614. For example, the metadata encoding and decoding provided by the secure memory access logic 1650 can prevent the indirect address 1614 from being manipulated to cause a buffer overflow, and/or can prevent program code from accessing memory that it does not have permission to access.
Address encoding logic 1652 of the secure memory access logic 1650 is invoked when memory is allocated (e.g., by an operating system, in the heap) and provided to executing programs in any of a number of different ways, including by using a function such as malloc, alloc, or new; implicitly via the loader; or by the compiler statically allocating memory; etc. As a result, the indirect address 1614, which points to the allocated memory, is encoded with the address metadata.

The address metadata can include valid range metadata. The valid range metadata allows executing programs to manipulate the value of the indirect address 1614 within a valid range, but will corrupt the indirect address 1614 if the memory is accessed using the indirect address 1614 beyond the valid range. Alternatively or in addition, the valid range metadata can be used to identify a valid code range, e.g., a range of memory that program code is permitted to access (e.g., the encoded range information can be used to set explicit ranges on registers). Other information that can be encoded in the address metadata includes access (or permission) restrictions on the indirect address 1614 (e.g., whether the indirect address 1614 can be used to write, execute, or read the referenced memory).

In at least some other embodiments that will be further described herein, other metadata (or context information) can be encoded in the unused bits of the indirect address 1614, such as a size of plaintext address slices (e.g., number of bits in a plaintext slice of a memory address embedded in the indirect address), a memory allocation size (e.g., bytes of allocated memory referenced by the indirect address), a type of the data or code (e.g., class of data or code defined by programming language), permissions (e.g., read, write, and execute permissions of the indirect address), a location of the data or code (e.g., where the data or code is stored), the memory location where the pointer itself is to be stored, an ownership of the data or code, a version of the indirect address (e.g., a sequential number that is incremented each time an indirect address is created for newly allocated memory, which determines current ownership of the referenced allocated memory in time), a tag of randomized bits (e.g., generated for association with the indirect address), a privilege level (e.g., user or supervisor), a cryptographic context identifier (or crypto context ID) (e.g., a randomized or deterministically unique value for each indirect address), etc.

For example, in one embodiment, the address metadata can include size metadata that encodes the size of a plaintext address slice in the indirect address. The size metadata may specify a number of lowest order bits in the indirect address that can be modified by the executing program. The size metadata is dependent on the amount of memory requested by a program. Accordingly, if 16 bytes are requested, then the size metadata is encoded as 4 (or 00100 in five upper bits of the pointer) and the 4 lowest bits of the pointer are designated as modifiable bits to allow addressing to the requested 16 bytes of memory.
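A minimal C sketch of this size-metadata encoding follows; the choice of bits 58-62 for the five-bit field is hypothetical, and a real encoding would interact with canonicality and the other metadata fields.

#include <stdint.h>
#include <stdio.h>

#define SIZE_SHIFT 58   /* hypothetical position of the 5-bit size field */

/* Encodes ceil(log2(bytes)) into the upper size field: a 16-byte request
 * yields size 4 (00100b), leaving the 4 low bits of the pointer modifiable. */
static uint64_t encode_size(uint64_t ptr, uint64_t bytes)
{
    unsigned power = 0;
    while ((1ull << power) < bytes)
        power++;
    return (ptr & ~(0x1Full << SIZE_SHIFT)) | ((uint64_t)power << SIZE_SHIFT);
}

int main(void)
{
    uint64_t p = encode_size(0x00007F0000001000ull, 16);
    printf("size field = %llu\n",
           (unsigned long long)((p >> SIZE_SHIFT) & 0x1F));   /* 4 */
    return 0;
}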
In some embodiments, the address metadata may include a tag of randomized bits associated with the indirect address to make the tag unpredictable for an adversary. An adversary may try to guess the tag value so that the adversary is able to access the memory referenced by the pointer, and randomizing the tag value may make it less likely that the adversary will successfully guess the value compared to a deterministic approach for generating the tag value. In some embodiments, the pointer may include a version number (or other deterministically different value) determining current ownership of the referenced allocated data in time, instead of or in addition to a randomized tag value. Even if an adversary is able to guess the current tag value or version number for a region of memory, e.g., because the algorithm for generating the version numbers is predictable, the adversary may still be unable to correctly generate the corresponding encrypted portion of the pointer, due to the adversary not having access to the key that will later be used to decrypt that portion of the pointer.

Address decoding logic 1662 verifies the encoded metadata on memory read and write operations that utilize processor instructions such as MOV, where a general purpose register is used as a memory address to read a value from memory (e.g., load) or to write a value to memory (e.g., store), as well as on other operations that involve the "use" of memory (such as arithmetic instructions with memory operands, e.g., ADD, and control transfer instructions, e.g., CALL/JMP, etc.). The control transfer instructions have memory operands, which may specify a location in memory at which the destination address for the control transfer is stored. The example secure memory access logic 1650 is embodied as part of processor instructions (e.g., as part of the processor instruction set architecture) or microcode (e.g., instructions that are stored in read-only memory and executed directly by the processor 1602). In other embodiments, portions of the secure memory access logic 1650 may be embodied as hardware, firmware, software, or a combination thereof (e.g., as programming code executed by a privileged system component 1642 of the computing device 1600). For example, the secure memory access logic 1650 may be embodied in software as an instruction set emulator (e.g., a binary instrumentation tool such as a PIN Tool) that emulates the instruction logic utilizing the encoded addresses as disclosed herein.

The secure memory access logic 1650 is executable by the computing device 1600 to provide security for indirect addresses "inline," e.g., during execution of a program (such as a user space software application) by the computing device 1600. As used herein, the terms "indirect address" and "pointer" may each refer to, among other things, an address (e.g., virtual address or linear address) of a memory location at which other data or instructions are stored. In an example, a register that stores an encoded memory address of a memory location where data or code is stored may act as a pointer. As such, the indirect address 1614 may be embodied as, for example, a data pointer (which refers to a location of data), a code pointer (which refers to a location of executable code), an instruction pointer, or a stack pointer. Indirect addresses may be referred to by other terminology, such as "pointer," "address pointer," or "pointer address."
As used herein, "metadata" may refer to, among other things, information about or relating to an indirect address 1614, such as a valid data range, a valid code range, pointer access permissions, a size of a plaintext address slice (e.g., encoded as a power in bits), a memory allocation size, a type of the data or code, a location of the data or code, an ownership of the data or code, a version of the indirect address, a tag of randomized bits, a privilege level of software, a cryptographic context identifier, etc.

As used herein, "memory retrieval instruction" may refer to, among other things, a "MOV" or "LOAD" instruction or any other instruction that causes data to be read, copied, or otherwise accessed at one storage location, e.g., memory, and moved into another storage location, e.g., registers (where "memory" may refer to main memory or cache, e.g., a form of random access memory, and "register" may refer to a processor register, e.g., hardware), or any instruction that accesses or manipulates memory. Also as used herein, "memory store instruction" may refer to, among other things, a "MOV" or "STORE" instruction or any other instruction that causes data to be read, copied, or otherwise accessed at one storage location, e.g., a register, and moved into another storage location, e.g., memory, or any instruction that accesses or manipulates memory.

However, the indirect address encoding/decoding technology disclosed herein is not limited to MOV or load/store instructions. For example, control transfer instructions such as call and jump instructions can be adapted to handle encoded indirect addresses in a similar manner as described herein with respect to MOV instructions, wherein code is to execute within a valid address range. Likewise, the instruction pointer (e.g., register) may be range bound, given that the encoded address specified by the control transfer instruction (e.g., JMP/CALL) results in an encoded address being used for the instruction pointer, thus restricting valid program execution to within a valid address range (effectively, the program counter can increment correctly until it reaches the end of the encoded range). Furthermore, in some architectures, any number of processor instructions may have a memory operand in the form of an indirect address (e.g., arithmetic operations such as ADD, SUB, MUL, AND, OR, XOR, etc. may have a source/destination memory reference in the form of an indirect address and/or a source/destination register operand). In other architectures, however, the format of memory operands may vary. For example, registers may be combined in some way (e.g., by addition) to produce an effective address. Additionally, other parameters may optionally be included, such as a scaling factor that multiplies one of the register values (e.g., the index) and/or a constant displacement value embedded in the instruction that is directly added. Further, it should be noted that while the illustrative embodiments refer to "instructions," such instructions may be embodied as, e.g., processor instructions, operating system routines, or other forms of computer program code.

The example secure memory access logic 1650 includes address encoding logic 1652 (which includes metadata encoding logic 1656 and address encrypting logic 1658), memory store instruction logic 1670 (which includes data encrypting logic 1674 and address decoding logic 1662), and memory retrieval instruction logic 1680 (which includes data decrypting logic 1684 and address decoding logic 1662).
Illustratively, the address decoding logic 1662, which includes address decrypting logic 1664 and address formation logic 1666, is embodied in memory store instruction logic 1670 and memory retrieval instruction logic 1680, but may be embodied in other processor instructions, or as a separate instruction or series of instructions, or as higher-level code executed by a privileged system component such as an operating system kernel or virtual machine monitor, or as an instruction set emulator. As described in more detail below, the address encoding logic 1652 and the address decoding logic 1662 each operate on an indirect address 1614 using metadata (e.g., one or more of valid range, permission metadata, size (power), memory allocation size, type, location, ownership, version, tag value, privilege level (e.g., user or supervisor), crypto context ID, etc.) and a secret key (e.g., secret key 1616(1)), in order to secure the indirect address 1614 at the memory allocation/access level. Also as described in more detail below, the data encrypting logic 1674 and data decrypting logic 1684 each operate on data (referenced by indirect address 1614) using at least a portion of the indirect address and a secret key (e.g., secret key 1616(2)), in order to secure the data at the memory location referenced by the indirect address 1614 by binding the data encryption to the indirect address.

The example indirect address 1614 is embodied as a register 1612 (e.g., a general purpose register of the processor 1602). The example secret keys 1616(1)-1616(N) may be generated by a key creation module 1648 of a privileged system component 1642, and stored in one of the registers 1612 (e.g., a special purpose register or machine specific register (MSR)), or another memory location that is readable by the processor 1602. In some embodiments, the secret keys 1616(1)-1616(N) may be stored in a location that is readable only by the processor. In other embodiments, the secret keys 1616(1)-1616(N) used to secure indirect addresses, data, and code can be stored in another memory location, such as in firmware, in a secure portion of the data storage device 1626 or another data storage device, or another form of memory suitable for performing the functions described herein. In some embodiments, the secret keys 1616(1)-1616(N) may be transmitted across a secure communications channel and restored by an executive (such as an operating system or a virtual machine monitor, e.g., the privileged system component 1642 described below). In virtualized environments in which virtual machines are migrated from one machine to another, and/or in cases in which a virtual machine, process or program running on the computing device 1600 begins a sleeping/hibernating mode after an indirect address and the referenced data and/or code are secured using secret keys, and then later resumes, the secret keys will need to be recovered and restored. In these cases, the secret keys can be stored or possibly transmitted across a (secure) communications channel prior to a sleeping/hibernating mode, and then retrieved/restored by an executive (such as an operating system or a virtual machine monitor, e.g., the privileged system component 1642).

It should be noted that embodiments described herein allow for any number of secret keys to be used for a particular program. In one example, the same secret key may be used for all indirect addresses used in a program.
In another example, a different secret key may be used for each indirect address associated with a different memory allocation or for each predefined group of memory addresses associated with different memory allocations. In yet further embodiments, the same secret key used for an address encryption/decryption may also be used for encrypting the data bound to that address. In other embodiments, one secret key may be used for address encryption/decryption, while a different secret key may be used for data encryption/decryption bound to that address. For ease of explanation, embodiments described herein use the terms "secret address key" or "address key" to refer to a secret key used in encryption and decryption operations of memory addresses, and "secret data key" or "data key" to refer to a secret key used in operations to encrypt and decrypt data.

On (or during) a memory allocation operation (e.g., a "malloc"), memory allocation logic 1646 allocates a range of memory for a buffer and returns the indirect address 1614 and the metadata (e.g., one or more of range, permission metadata, size (power), memory allocation size, type, location, ownership, version, tag, privilege level, crypto context ID, etc.). For example, the memory allocation logic 1646 may encode plaintext range information in the indirect address 1614 (e.g., in the unused/non-canonical bits, prior to encryption), or supply the metadata as one or more separate parameters to the instruction, where the parameter(s) specify the range, code permission information, size (power), memory allocation size, type, location, ownership, version, tag, privilege level (e.g., user or supervisor), crypto context ID, or some suitable combination thereof. Illustratively, the memory allocation logic 1646 is embodied in a memory manager module 1644 of the privileged system component 1642. The memory allocation logic 1646 initiates the address encoding logic 1652. The address encoding logic 1652 includes metadata encoding logic 1656, which encodes the indirect address 1614 with the metadata (e.g., range, permission metadata, size (power), memory allocation size, type, location, ownership, version, tag value, privilege level, crypto context ID, some suitable combination thereof, etc.) and potentially an "adjustment," for example if range metadata is encoded, as described below. The address encoding logic 1652 stores the metadata in an unused portion of the indirect address 1614 (e.g., non-canonical bits of a 64-bit address). For some metadata or combinations of metadata, the indirect address 1614 may be encoded in a larger address space (e.g., 128-bit address, 256-bit address) to accommodate the size of the metadata or combination of metadata.

To determine valid range metadata, example range rule logic selects the valid range metadata to indicate an upper limit for the size of the buffer referenced by the indirect address 1614. Address adjustment logic adjusts the valid range metadata as needed so that the upper address bits (e.g., most significant bits) of the addresses in the address range do not change as long as the indirect address 1614 refers to a memory location that is within the valid range indicated by the range metadata.
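As a concrete illustration of storing metadata in the unused portion of a pointer, the following Python sketch packs fields into the non-canonical upper bits of a 64-bit address. The 48-bit canonical width and the 6-bit exponent/10-bit adjustment layout are assumptions chosen for illustration, not a layout mandated by any particular embodiment.

    CANONICAL_BITS = 48  # assumed canonical address width (x86-64 style)

    def encode_pointer(address: int, exponent: int, adjustment: int) -> int:
        # Pack a 6-bit exponent and a 10-bit adjustment into the 16
        # unused/non-canonical upper bits of a 64-bit pointer.
        assert 0 <= address < (1 << CANONICAL_BITS)
        assert 0 <= exponent < (1 << 6) and 0 <= adjustment < (1 << 10)
        metadata = (exponent << 10) | adjustment
        return (metadata << CANONICAL_BITS) | address

    def decode_pointer(encoded: int):
        # Recover the canonical address and the two metadata fields.
        address = encoded & ((1 << CANONICAL_BITS) - 1)
        metadata = encoded >> CANONICAL_BITS
        return address, (metadata >> 10) & 0x3F, metadata & 0x3FF

    ptr = encode_pointer(0x7F00DEAD0000, exponent=12, adjustment=3)
    assert decode_pointer(ptr) == (0x7F00DEAD0000, 12, 3)

Encryption of the upper address slice, described below, would be applied on top of this packing.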
This enables the indirect address 1614 to be manipulated (e.g., by software performing arithmetic operations, etc.), but only so long as the manipulations do not cause the indirect address 1614 to go outside the valid range (e.g., overflow the buffer).

In an embodiment, address encoding logic 1652 uses the valid range metadata to select a portion (or slice) of the indirect address 1614 to be encrypted. In other embodiments, the slice of the indirect address 1614 to be encrypted may be known a priori (e.g., upper 32 bits, lower 32 bits, etc.). The address encrypting logic 1658 encrypts the selected slice of the indirect address 1614 (and the adjustment, in some embodiments), using the secret address key 1616(1) and an address tweak, as described further below. On a memory access operation (e.g., a read, write, or execute operation), the address decoding logic 1662 decodes the previously-encoded indirect address 1614. To do this, the address decrypting logic 1664 decrypts the encrypted slice of the indirect address 1614 (and in some embodiments, the encrypted adjustment) using the secret key 1616(1) and the address tweak, as described further below.

The indirect address 1614 is returned to its original (e.g., canonical) form, based on appropriate operations (e.g., address formation logic 1666), in order to restore the original value of the indirect address 1614 (e.g., the true, original linear memory address). To do this, in at least one possible embodiment, the address formation logic 1666 may remove the address metadata encoded in the unused bits of the indirect address 1614 (e.g., return the unused bits to their original form). If the indirect address 1614 decodes successfully, the memory access operation completes successfully. However, if the encoded indirect address 1614 has been manipulated (e.g., by software, inadvertently or by an attacker) so that its value falls outside the valid range indicated by the range metadata (e.g., overflows the buffer), the indirect address 1614 will be corrupted as a result of the decrypting process performed by the address decrypting logic 1664. A corrupted indirect address will raise a fault (e.g., a general protection fault or a page fault if the address is not mapped as present from the paging structures/page tables). One condition that may lead to a fault being generated is a sparse address space. In this scenario, a corrupted address is likely to land on an unmapped page and generate a page fault. In this way, the secure memory access logic 1650 enables the computing device 1600 to provide indirect address security against buffer overflow attacks and similar exploits.
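A minimal sketch of the slice selection described above, reusing the assumed 48-bit canonical width from the previous sketch: with an exponent of e, the low e bits are mutable within the buffer and remain plaintext, while the immutable bits above them form the slice handed to the cipher.

    def slice_to_encrypt(address: int, exponent: int) -> int:
        # Bits [exponent, CANONICAL_BITS) cannot change for any in-range
        # pointer, so they form the portion selected for encryption.
        immutable = CANONICAL_BITS - exponent
        return (address >> exponent) & ((1 << immutable) - 1)

    def plaintext_bits(address: int, exponent: int) -> int:
        # The low bits stay plaintext so in-range pointer arithmetic
        # still works directly on the encoded pointer.
        return address & ((1 << exponent) - 1)

Because the mutable low bits stay in plaintext, software can index through the buffer without re-encrypting the pointer; any arithmetic that spills into the encrypted slice is what corrupts the pointer on decryption.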
Embodiments of the indirect address security technologies disclosed herein can also be used for software debugging purposes or as an access control mechanism to prevent software from accessing areas of memory for which the software does not have permission. Additionally, in comparison to other buffer overflow mitigation techniques, embodiments of the disclosed indirect address security technologies can operate without additional memory reads/writes, additional instructions, binary modifications, or the need to recompile legacy code. Moreover, embodiments of the disclosed technologies are effective against adversaries that can read memory and overwrite pointer values, as well as adversaries that can create/select arbitrary pointer values. Further, embodiments of the disclosed technologies can scale from very small memory ranges to very large memory ranges, or can cascade memory ranges within other memory ranges by using different encoded pointers. Still further, embodiments of the disclosed technologies are effective with dynamic memory allocation (e.g., due to the ability to programmatically create range encoded pointers inline). Additionally, embodiments of the disclosed technologies can be extended to provide code block (code location) access controls to data. Further, embodiments of the disclosed technologies are compatible with 64-bit versions of the x86 instruction set, as well as ARM, MIPS, PowerPC and other processor architectures, including wider (e.g., greater than 64-bit) address bit architectures and smaller (e.g., 32-bit) architectures, by reserving address ranges for the metadata-containing addresses.

Some embodiments of the disclosed technologies utilize aspects of address adjustment logic and address restoration logic to support legacy code compatibility, as described below. As used herein, "legacy code" may refer to a version of computer code that was designed to work on an earlier, or now-obsolete, or no-longer-supported computer architecture. For example, legacy code may include software that was originally developed for a 32-bit processor, but which is now running on a 64-bit processor. "Legacy code" also refers to a version of computer code designed without using or being adapted to use dedicated instructions for encoding and encrypting indirect addresses as described herein. At least some embodiments disclosed herein can be implemented without using new program instructions and, accordingly, without the need for recompiling legacy code.

Referring now in more detail to FIGURE 16, the computing device 1600 may be embodied as any type of electronic device for performing the functions described herein. For example, the computing device 1600 may be embodied as, without limitation, a smart phone, a tablet computer, a wearable computing device, a laptop computer, a notebook computer, a mobile computing device, a cellular telephone, a handset, a messaging device, a vehicle telematics device, a server computer, a workstation, a distributed computing system, a multiprocessor system, a consumer electronic device, and/or any other computing device configured to perform the functions described herein. As shown in FIGURE 16, the example computing device 1600 includes at least one processor 1602 embodied with the secure memory access logic 1650.

The computing device 1600 also includes memory 1622, an input/output subsystem 1624, a data storage device 1626, a display device 1628, a user interface (UI) subsystem 1630, a communication subsystem 1632, at least one user space application 1634, and the privileged system component 1642 (which, illustratively, includes the memory manager module 1644 and the key creation module 1648). The computing device 1600 may include other or additional components, such as those commonly found in mobile and/or stationary computers (e.g., various sensors and input/output devices), in other embodiments. Additionally, in some embodiments, one or more of the example components may be incorporated in, or otherwise form a portion of, another component. Each of the components of the computing device 1600 may be embodied as software, firmware, hardware, or a combination of software and hardware.

The processor 1602 may be embodied as any type of processor capable of performing the functions described herein.
For example, the processor 1602 may be embodied as a multi-core processor, other multiple-CPU processor or processing/controlling circuit, or multiple diverse processing units or circuits (e.g., CPU and GPU, etc.). The processor 1602 has a number of registers 1612, which include general purpose registers and special purpose registers. The indirect address 1614 and the secret keys 1616(1)-1616(N) are stored in registers 1612. The memory 1622 of the computing device 1600 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 1622 may store various data and software used during operation of the computing device 1600, as well as operating systems, applications, programs, libraries, and drivers.

The memory 1622 is communicatively coupled to the processor 1602, e.g., via the I/O subsystem 1624. The I/O subsystem 1624 may be embodied as circuitry and/or components to facilitate input/output operations with the processor 1602, the memory 1622, and other components of the computing device 1600. For example, the I/O subsystem 1624 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1624 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 1602, the memory 1622, and/or other components of the computing device 1600, on a single integrated circuit chip.

The data storage device 1626 may be embodied as any type of physical device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, flash memory or other read-only memory, memory devices that are combinations of read-only memory and random access memory, or other data storage devices.

The display device 1628 may be embodied as any type of display capable of displaying digital information, such as a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display, a cathode ray tube (CRT), or other type of display device. In some embodiments, the display device 1628 may be coupled to a touch screen or other human computer interface device to allow user interaction with the computing device 1600. The display device 1628 may be part of the user interface (UI) subsystem 1630. The user interface subsystem 1630 may include a number of additional devices to facilitate user interaction with the computing device 1600, including physical or virtual control buttons or keys, a microphone, a speaker, a unidirectional or bidirectional still and/or video camera, and/or others. The user interface subsystem 1630 may also include devices, such as motion sensors, proximity sensors, and eye tracking devices, which may be configured to detect, capture, and process various other forms of human interactions involving the computing device 1600.

The computing device 1600 further includes a communication subsystem 1632, which may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the computing device 1600 and other electronic devices.
The communication subsystem 1632 may be configured to use any one or more communication technologies (e.g., wireless or wired communications) and associated protocols (e.g., Ethernet, Bluetooth™, Wi-Fi™, WiMAX, 3G/LTE, etc.) to effect such communication. The communication subsystem 1632 may be embodied as a network adapter, including a wireless network adapter.

The example computing device 1600 also includes a number of computer program components, such as the user space application 1634 and the privileged system component 1642. The user space application 1634 may be embodied as any computer application (e.g., software, firmware, hardware, or a combination thereof) that interacts directly or indirectly with an end user via, for example, the display device 1628 or the UI subsystem 1630. Some examples of user space applications 1634 include word processing programs, document viewers/readers, web browsers, electronic mail programs, messaging services, computer games, camera and video applications, etc. Among other things, the privileged system component 1642 facilitates the communication between the user space applications 1634 and the hardware components of the computing device 1600. Portions of the privileged system component 1642 may be embodied as any operating system capable of performing the functions described herein, such as a version of WINDOWS by Microsoft Corporation, ANDROID by Google, Inc., and/or others. Alternatively or in addition, a portion of the privileged system component 1642 may be embodied as any type of virtual machine monitor capable of performing the functions described herein (e.g., a type I or type II hypervisor).

The example privileged system component 1642 includes a number of computer program components, such as the memory manager module 1644 and the key creation module 1648. Each of the components of the privileged system component 1642 may be embodied as software, firmware, hardware, or a combination of software and hardware. For example, the components of the privileged system component 1642 may be embodied as modules of an operating system kernel, a virtual machine monitor, or a hypervisor. The memory manager module 1644 allocates portions of memory 1622 to the various processes running on the computing device 1600 (e.g., as ranges of virtual memory addresses). The memory manager module 1644 is embodied as, for example, a loader, a memory manager service, or a heap management service. The key creation module 1648 creates the secret keys 1616(1)-1616(N) (e.g., secret address keys and secret data keys) and writes them to a register or registers to which the processor 1602 has read access (e.g., a special purpose register). To create a secret key, the key creation module 1648 may execute, for example, a random number generator or another algorithm capable of generating a secret key that can perform the functions described herein.

It should be noted that a myriad of approaches could be used to generate or obtain a key for embodiments disclosed herein. For example, although the key creation module 1648 is shown as being part of computing device 1600, one or more secret keys could be obtained from any suitable external source using any suitable authentication processes to securely communicate the key to computing device 1600, which may include generating the key as part of those processes.
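A minimal sketch of the key creation module's role, assuming Python's CSPRNG as the random number generator and 16-byte (AES-128) keys; a real embodiment would write the keys to a special purpose register or other processor-readable location rather than return them to software.

    import secrets

    def create_keys(count: int) -> list:
        # Generate secret address/data keys from a cryptographically
        # secure random number generator.
        return [secrets.token_bytes(16) for _ in range(count)]

    address_key, data_key = create_keys(2)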
Furthermore, privileged system component 1642 may be part of a trusted execution environment (TEE), virtual machine, processor 1602, a co-processor (not shown), or any other suitable hardware, firmware, or software in computing device 1600 or securely connected to computing device 1600. Moreover, the key may be "secret," which is intended to mean that its value is kept hidden, inaccessible, obfuscated, or otherwise secured from unauthorized actors (e.g., software, firmware, machines, extraneous hardware components, and humans).

FIGURE 17 is a simplified environment diagram illustrating an application of the secure memory access logic of FIGURE 16 according to at least one embodiment of the present disclosure. In some embodiments, the computing device 1600 may establish an environment 1700 during operation (e.g., native and/or virtual runtime or "execution" environments). The various modules depicted in the example environment 1700 may be embodied as hardware, firmware, software, or a combination thereof. In the environment 1700, the user space application 1634 (or the privileged system component 1642, e.g., in loading a user space application 1634) may, from time to time, during the operation of the computing device 1600, issue a memory allocation 1702. The memory allocation 1702 may be translated (e.g., compiled or interpreted), as needed, by the memory allocation logic 1646 of the privileged system component 1642 before being passed on to the processor 1602. In the processor 1602, the address encoding logic 1652 is executed in response to the memory allocation 1702 (e.g., in place of a conventional "malloc" instruction/function call). Whereas a conventional malloc instruction simply allocates memory and returns an (unsecured) pointer, the address encoding logic 1652 encodes an indirect address 1704, including metadata 1705 (e.g., the range, permission information, size (power), memory allocation size, type, location, ownership, version, tag, privilege level, crypto context ID or key, or any combination thereof, etc.), as described herein, and returns an encoded indirect address 1706. The metadata may be embedded in the indirect address or pointer (e.g., a standard 64-bit register or enlarged register such as 128 bits or 256 bits to fit more metadata) in a plaintext format, embedded within another operand that is provided to the pointer encryption/decryption instructions and data access instructions, stored in a control register, stored in a table in memory, or provided via any combination thereof. For example, the size (power) metadata and tag value may be embedded in the pointer and the crypto context ID may be stored in a control register.

Similarly, the user space application 1634 or the privileged system component 1642 may issue a memory store 1711 from time to time, which may be handled by the processor 1602 as a processor instruction that reads from a register 1612 (or other storage unit) and writes to memory 1622 or cache using indirect address 1614 (e.g., a STORE, MOV instruction). Using the STORE instruction as an example, the memory store instruction logic 1670 stores data only after successfully executing address decoding logic 1662 to decode the encoded indirect address 1706 and also successfully executing data encrypting logic 1674 based on a data tweak and secret data key 1616(2) to encrypt the data to be stored at a memory location pointed to by the indirect address 1704.
Successful execution of address decoding logic 1662 is based on successful execution of address decrypting logic 1664, which uses an address tweak and secret address key 1616(1) to decrypt the encrypted address slice of the encoded indirect address 1706.

Similarly, the user space application 1634 or the privileged system component 1642 may issue a memory load 1720 from time to time, which may be handled by the processor 1602 as a processor instruction that reads from memory 1622 and writes to a register 1612 using an indirect address 1614 (e.g., a LOAD, MOV instruction). Using the LOAD instruction as an example, the memory retrieval instruction logic 1680 performs the memory access only after successfully executing the address decoding logic 1662 to decode the encoded indirect address 1706. Successful execution of address decoding logic 1662 is based on successful execution of address decrypting logic 1664, which uses an address tweak and secret address key 1616(1) to decrypt the encrypted address slice of the encoded indirect address 1706. Once the indirect address 1704 is returned and memory 1622 is accessed to load data from the memory location pointed to by the indirect address 1704, the loaded data may be decrypted by executing the data decrypting logic 1684 based on a data tweak and secret data key 1616(2). Successful execution of data decrypting logic 1684 depends on whether the portions of the indirect address used to create a data tweak to decrypt the data, and the additional metadata (if any) used to create the data tweak, correspond to the original allocation of the memory location pointed to by the indirect address.

While the address decoding logic 1662 is shown as a separate module from memory store instruction logic 1670 and memory retrieval instruction logic 1680 in FIGURE 17, it should be understood that the address decoding logic 1662 can be incorporated into the instruction logic 1670 and/or 1680 or can be embodied as a separate set of instructions. Further, it should be understood that the address decoding logic 1662 can be incorporated into or referenced by other types of instructions, alternatively or in addition to the LOAD, STORE, and MOV instructions (e.g., arithmetic instructions with memory operands, CALL, JMP, etc.). For example, control transfer instructions such as CALL and JMP can load the encoded pointer address for the code to execute into the processor's program counter register (e.g., the instruction pointer register, RIP, in 64-bit code). The instruction pointer register can then be queried by a program and, as a result, the current program counter address will be in the encoded form (offset to the current program counter location).

If the address decoding logic 1662 successfully decodes the encoded indirect address 1706, which includes the address decrypting logic 1664 successfully decrypting the encrypted address slice in the encoded indirect address, the original indirect address 1704 is returned to the privileged system component 1642 and the memory access is completed, or program execution begins at the new program counter location (in the case of control flow changes). If the encoded indirect address 1706 does not successfully decode, a fault is raised. Based on the successful completion or failure of memory store 1711, an appropriate verification or fault signal 1713 is returned to the user space application 1634.
Similarly, based on the successful completion or failure of memory load 1720, an appropriate verification or fault signal 1722 is returned to the user space application 1634.

FIGURE 18A is a simplified sequence diagram illustrating a sequence of operations associated with the memory retrieval instruction logic 1680 shown in FIGURE 17. A memory load is initiated by memory retrieval instruction logic 1680 based on the encoded indirect address 1706. Address decoding logic 1662 obtains the secret address key 1616(1) at 1801A and an address tweak 1708 at 1801B. The secret address key 1616(1) and the address tweak 1708 are used by the address decoding logic 1662 to decode the encoded indirect address 1706 at 1801C. If the encoded indirect address 1706 includes an encrypted address slice of the memory address, then address decoding logic 1662 can decrypt the encrypted address slice in the encoded indirect address 1706. If the encoded indirect address 1706 is successfully decoded by address decoding logic 1662, then the decoded indirect address 1704 is output at 1802.

Data decrypting logic 1684 obtains the secret data key 1616(2) at 1801D and a data tweak 1709 at 1801E, which are used by the data decrypting logic 1684 to decrypt encrypted data 1710 at 1805. Data tweak 1709 is derived from the encoded indirect address 1706 in various possible embodiments, as will be further described herein. It should be noted that, in at least some embodiments, data decrypting logic 1684 may begin its decryption algorithm prior to receiving encrypted data 1710 at 1805, and in parallel with address decoding logic 1662. In this embodiment, a counter mode block cipher, for example, may perform an encryption operation based on the data tweak 1709 and the secret data key 1616(2) to generate a keystream, which can be used once the encrypted data 1710 is received.

At 1803, memory retrieval instruction logic 1680 accesses memory 1622 based on the indirect address 1704 that was output at 1802 by address decoding logic 1662. At 1804, the encrypted data 1710 is retrieved (e.g., load, read, move, etc.) from memory 1622. At 1805, the encrypted data 1710 is provided to data decrypting logic 1684, which can use the already-generated keystream to decrypt the encrypted data 1710 (e.g., by performing an exclusive OR (XOR) function). If the encrypted data 1710 is successfully decrypted by data decrypting logic 1684, then decrypted data 1712 is output at 1806.
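The counter-mode parallelism noted above for FIGURE 18A can be sketched as follows, using the pyca/cryptography package; treating the data tweak as a 16-byte initial counter block is an assumption made for illustration.

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def make_keystream(data_key: bytes, data_tweak: bytes, length: int) -> bytes:
        # In counter mode the keystream depends only on the key and tweak,
        # so it can be computed in parallel with address decoding, before
        # the encrypted data arrives from memory.
        encryptor = Cipher(algorithms.AES(data_key), modes.CTR(data_tweak)).encryptor()
        return encryptor.update(bytes(length))

    def apply_keystream(data: bytes, keystream: bytes) -> bytes:
        # Step 1805 then reduces to a single XOR against the keystream.
        return bytes(a ^ b for a, b in zip(data, keystream))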
FIGURE 18B is a simplified sequence diagram illustrating a sequence of operations associated with the memory store instruction logic 1670 shown in FIGURE 17. A memory store is initiated by memory store instruction logic 1670 based on the encoded indirect address 1706. Address decoding logic 1662 obtains the secret address key 1616(1) at 1821 and the address tweak 1708 at 1822. The secret address key 1616(1) and the address tweak 1708 are used by the address decoding logic 1662 to decode the encoded indirect address 1706 at 1823. If the encoded indirect address 1706 includes an encrypted address slice of the memory address, then address decoding logic 1662 can decrypt the encrypted address slice in the encoded indirect address 1706. If the encoded indirect address 1706 is successfully decoded by address decoding logic 1662, then the decoded indirect address 1704 is output at 1824. Data encrypting logic 1674 obtains the secret data key 1616(2) at 1825 and the data tweak 1709 at 1826, which are used by the data encrypting logic 1674 to encrypt unencrypted data 1716 at 1827. Data tweak 1709 is derived from the encoded indirect address 1706 in various possible embodiments, as will be further described herein. If the unencrypted data 1716 is successfully encrypted by data encrypting logic 1674, then the encrypted data 1710 is output at 1828. At 1829, memory store instruction logic 1670 accesses memory 1622 based on the indirect address 1704, and at 1830, the encrypted data 1710 is stored in the memory 1622. It should be noted that address decoding logic 1662 and data encrypting logic 1674 may be performed in parallel, partially in parallel, in sequence, or in any other order or timing. Some embodiments may use the unencrypted portion of an address (partial address) to look up a translation lookaside buffer (TLB) to see if a matching portion of the address is present in a TLB entry, proceeding with that TLB address mapping while the encrypted portion of the address decoding/decryption completes. However, encrypted data 1710 is not stored in memory 1622 until both address decoding logic 1662 and data encrypting logic 1674 have been successfully performed.

Referring now to FIGURE 19, an example process 1900 for securing an indirect address is shown. Portions of the process 1900 may be executed by hardware, firmware, and/or software of the computing device 1600 (e.g., by the processor 1602 executing the address encoding logic 1652). The process 1900 begins in response to a memory allocation (e.g., by a memory manager module). In block 1910, the computing device 1600 obtains the indirect address, address range, and other inputs needed to encode the indirect address (e.g., a code block identifier, instruction pointer, and/or metadata for tweaks, as described herein). In block 1912, the computing device 1600 determines whether the calling code (e.g., the code initiating the memory allocation) is authorized to access the indirect address received in block 1910 (e.g., indirect address 1704). To do this, the computing device 1600 may perform an access control check by verifying the instruction pointer or caller privilege level information for the calling code, which may be obtained from, for example, a heap manager of the memory manager module 1644. If the computing device 1600 determines that the calling code is not authorized to access the indirect address, a fault is raised (1914). If the computing device 1600 determines that the calling code is authorized to access the indirect address, the computing device 1600 proceeds to block 1916. In block 1916, the computing device 1600 determines the unused (e.g., non-canonical) address bits of the indirect address to perform the address range encoding or other metadata encoding (e.g., size (power) metadata, tag value, etc.). To do this, the computing device 1600 may simply use the higher (e.g., most significant) unused/non-canonical bits of the indirect address. It should be noted that the encoded addresses do not need to be architecturally non-canonical. Rather, the unused/non-canonical addresses can simply be a range of memory set aside by, for example, the privileged system component 1642, to enable the address encoding as disclosed herein.

In block 1918, the computing device 1600 creates the metadata (e.g., valid range and/or permission data) and stores the metadata in the unused/non-canonical bits of the indirect address selected in block 1916. Illustratively, the metadata indicates an upper limit on the size of the buffer pointed to by the indirect address.
To create the metadata, the computing device 1600 converts the indirect address values to a center location in which the most significant canonical address bits do not change for the valid memory range. In some embodiments, the range metadata includes an "exponent" to determine the 2's power of the memory range size (effectively determining the number of mutable and immutable address bits). In some cases, an "adjustment" is used to force values to the end of the 2's power range as described below. In other embodiments, the adjustment may be used to force the buffer to the beginning of the 2's power range when buffer "underflow" needs to be addressed (as opposed to buffer "overflow"). Using the exponent metadata, any 2's power memory range can be defined (e.g., 2, 4, 8, 16 ... 2^64).

The following is a simple example of range metadata encoding. The addresses 0000b-0011b fit the range 0-3 where the upper two bits do not change. However, if a pointer is modified to go to the index 4, one of the upper bits will change. Accordingly, the valid range metadata can be encoded as [2] (for the upper two bits to encode a range of 4) and the valid range metadata can be stored in the higher non-canonical bits, e.g., "[2] 00xxb." In this example, the exponent would be 2 bits in size (e.g., values [1-4]), to cover the 4 bit addresses used in the example. Table 3 below illustrates a number of additional, simplified examples.

Table 3: Address encoding examples

    Real address range   Size       Encoded address   Comment
    1001b-1100b          4 bytes    [2] {3} 11xx      Adjust +3 to fit all in 11xxb
    1001b-1101b          5 bytes    [3] {1} 1xxx      Adjust +1 to end of range
    1110b-1111b          2 bytes    [1] {0} 111x      Fits in lowest power of 2
    1101b-1110b          2 bytes    [1] {1} 111x      Adjust +1 to fit all in 111xb
    0000b-1111b          16 bytes   [4] {0} xxxx      Full range
    1010b-1010b          1 byte     [0] {0} 1010      Exact match
    1011b-1101b          3 bytes    [2] {2} 11xx      Adjust +2 to end of range

In Table 3, the encoded address is represented using a format that is similar to a floating point format. In the encoded addresses in the third column of Table 3, the number in brackets, e.g., [2], is the exponent or valid range metadata; the number in braces, e.g., {3}, is the adjustment value; and the address to the right of the adjustment value indicates the unused/non-canonical bits in which the valid range metadata and adjustment value are stored.

In block 1920, the computing device 1600 determines the adjustment (or "offset") to be applied to the valid range, and stores the adjustment value in the unused/non-canonical bits of the indirect address. In some embodiments, the adjustment is used to force the encoded range to the end of a 2's power boundary. This sets a very specific upper bound on the buffer size. In this way, an encoded version of the original (not encoded) valid address range can be created. The encoded version can be designed such that the least number of upper bits will change over the valid range (e.g., so that encryption of the upper bits will detect/amplify modifications to the encoded address on decryption). The encoding is reversible, such that the original intended valid address range is returned as long as it is modified within the range. In the example above, the range 0-3 decimal (0000b-0011b binary) can be encoded as [2] {0} 00xxb (where "xx" means those bits can take any value for the range: 00, 01, 10, 11). In another example, the range 1-4 decimal (0001b-0100b) can be encoded as [2] {-1} 00xxb (where the adjustment is subtracted in order to keep the upper bits constant).
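The "force values to the end of the 2's power range" rule can be sketched as follows. Note that any adjustment keeping the whole buffer inside one 2^e-aligned block is a valid encoding; pushing the buffer to the block's end is the choice that makes overflows disturb the upper bits immediately (the second row of Table 3, for instance, uses a smaller but still valid adjustment).

    def encode_range(start: int, end: int):
        # Smallest exponent e with 2**e >= buffer size, then an adjustment
        # pushing 'end' to the last slot of its 2**e-aligned block.
        size = end - start + 1
        exponent = max(size - 1, 0).bit_length()
        block = 1 << exponent
        adjustment = (block - 1 - end) % block
        return exponent, adjustment

    assert encode_range(0b1001, 0b1100) == (2, 3)  # Table 3: [2] {3} 11xx
    assert encode_range(0b1011, 0b1101) == (2, 2)  # Table 3: [2] {2} 11xx
    assert encode_range(0b0001, 0b0100) == (2, 3)  # the 1-4 example discussed next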
Alternatively, the same range 1-4 decimal (0001b-0100b) can be encoded as [2] {3} 01xxb (this time adding an adjustment of 3 in order to keep the upper bits constant). With either representation, the encoded version decodes back to the original address range 1-4. In still another example, if the buffer size is 4 KB, a 10-bit adjustment value with a resolution of 4 bytes can be used.

Other embodiments may use a signed adjustment value (e.g., 2's complement), where the buffer may be adjusted to either the beginning or the end of the 2's power boundary depending on the sign (+/-) of the adjustment. Such embodiments can provide protection from either buffer overflow or underflow situations depending on the adjustment sign. In cases where 16 bits are available in unused/non-canonical address bits (e.g., in current 64-bit processors), 10 of the available bits can be used for the adjustment and the remaining 6 bits can be used for the valid range metadata (e.g., exponent value/2's power). If the exponent value reaches a range beyond a 4 KB page, the adjustment can expand by a 2's multiplier to allow adjustments of large buffers within even larger power of 2 ranges (noting that in some embodiments, 4096 bytes are fully covered with a 10-bit adjustment value, allowing the adjustment to "adjust" a buffer to end with the very last 4 byte word in a 4 KB page before the upper (2's power) bits will change). Such an adjustment (e.g., incremented by 1) will adjust the buffer location 4 bytes at a time. Any other choice of initial adjustment size and word size is possible in other embodiments. In another example, if the exponent has a value of 13, then the adjustment value can be multiplied by 2 so that the adjustment can still encompass the full 2's power range (in this case, two 4 KB pages, if adjusting by 8 bytes at a time), and so on (e.g., an exponent value of 14 means the adjustment value is multiplied by 4, and an exponent value of 15 means the adjustment value is multiplied by 8, and so on, allowing the adjustment to encompass the full 2's power range).

In block 1922, the computing device 1600 encrypts a portion of the indirect address, where the portion of the indirect address to be encrypted is determined by the valid range metadata (e.g., exponent/2's power) and the adjustment value. The valid range metadata determines the number of the most significant address bits of the encoded address that are to be encrypted (e.g., down to a minimum number so some address bits will always be encrypted). In some embodiments, the adjustment value is encrypted as well (e.g., to create a reasonable block size for a block cipher). In some embodiments, the most significant bits of the used bits/canonical address identified in the valid range metadata are encrypted with a secret address key (e.g., the secret address key 1616(1)), using the valid range metadata (which may or may not include the adjustment value) as an address tweak. In the illustrated embodiments, the valid range metadata (e.g., exponent/2's power) would not be encrypted because the processor uses the valid range metadata plaintext to determine the number of bits to decrypt. However, the valid range metadata (e.g., exponent/two's power) can be used as a tweak in the case of a tweakable block cipher (and thereby affect the encrypted bits).
Other data values that may be used as tweaks include, but are not necessarily limited to: data stored in the unused bits of the indirect address, the upper limit on the buffer size, an exponent of a two's power boundary selected as the upper limit on the buffer size, an adjustment value applied to the two's power boundary, a code block identifier, instruction pointer data, permission information encoded in the metadata, a version number (useful when reassigning/revoking pointers that were previously assigned to a program; the version may be maintained by the processor in a register), and/or other metadata described herein (e.g., plaintext address slice size, memory allocation size, type, location, ownership, tag, privilege level, crypto context ID, or any suitable combination thereof).

As used herein, a "tweak" may refer to, among other things, a second input to a block cipher, in addition to the usual plaintext or ciphertext input and the key (e.g., the secret key 1616(1)-1616(N)). In at least some embodiments, a tweak may form all or part of an initialization vector (IV) for a block cipher. Encrypting the upper two canonical bits enables the computing device 1600 to detect when the indirect address has been illegally changed, because the encryption algorithm will cause the illegally-changed upper bits to produce a random sequence of bits that are non-deterministic to an adversary, which likely results in a fault when the illegally-changed indirect address is used.

The portion of the indirect address to be encrypted (e.g., the upper used/canonical bits) is encrypted using a cipher mode encryption algorithm, such as a tweakable block cipher, using the valid range metadata and adjustment (e.g., [2] {-1}, in the above example) as a tweak. Some examples of tweakable block ciphers include: XOR-encrypt-XOR (XEX); Liskov, Rivest, and Wagner (LRW); and XEX-based tweaked-codebook mode with ciphertext stealing (XTS). Other bit diffusion methods in which any single bit change in the ciphertext results in changes across the entire decrypted plaintext can be used. If desired, alternative embodiments can trade off security for performance by using non-cryptographic methods that still achieve reasonable bit diffusion analogous to a block cipher.

The cipher selected for the encryption can be implemented in hardware, using an algorithm that has a bit-selectable or otherwise variable block size (e.g., any block cipher or similar diffusion algorithm with appropriate block sizes that may be constructed to utilize a tweak), or an algorithm that allows a fixed block size with a tweak using the remaining unencrypted bits (e.g., the extra bits outside the fixed block size). In some embodiments, the cipher has sufficient bit diffusion so that any bit change made to the encrypted address bits will equally affect (cascade through) all bit positions when decrypted. This provides the basis for producing a corrupted address given any change or bounds violation. Using this method, if the adversary attempts to tamper with the metadata (e.g., the exponent or adjustment values, or the encrypted most significant bits), the resulting decoded address will be corrupted.
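The tamper amplification just described can be demonstrated with a toy cipher. The four-round Feistel network below stands in for a real tweakable block cipher such as XEX, LRW, or XTS; it only illustrates how the key and the range-metadata tweak enter each round and how a single flipped ciphertext bit garbles the whole decrypted slice. It is not cryptographically sound.

    import hashlib

    def _round(value: int, key: bytes, tweak: bytes, i: int, bits: int) -> int:
        # Round function keyed by the secret address key and the tweak.
        material = key + tweak + bytes([i]) + value.to_bytes(8, "big")
        digest = hashlib.sha256(material).digest()
        return int.from_bytes(digest, "big") & ((1 << bits) - 1)

    def feistel_encrypt(block: int, key: bytes, tweak: bytes, bits: int = 32) -> int:
        half = bits // 2
        left, right = block >> half, block & ((1 << half) - 1)
        for i in range(4):
            left, right = right, left ^ _round(right, key, tweak, i, half)
        return (left << half) | right

    def feistel_decrypt(block: int, key: bytes, tweak: bytes, bits: int = 32) -> int:
        half = bits // 2
        left, right = block >> half, block & ((1 << half) - 1)
        for i in reversed(range(4)):
            left, right = right ^ _round(left, key, tweak, i, half), left
        return (left << half) | right

    key, tweak = b"secret-address-key", b"[2]{-1}"  # tweak from range metadata
    ciphertext = feistel_encrypt(0x0000BEEF, key, tweak)
    assert feistel_decrypt(ciphertext, key, tweak) == 0x0000BEEF
    garbled = feistel_decrypt(ciphertext ^ 1, key, tweak)  # one-bit tamper

Because every round mixes the tweak into the output, decrypting with the wrong tweak, or after any modification of the ciphertext, yields an unpredictable value rather than a nearby address.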
In the 64-bit address space, address corruption will result in a fault with high probability, thus allowing the address corruption (and pointer access or bounds violation) to be caught by the privileged system component 1642 (e.g., an operating system/executive/VMM/alternative mode/debug trace/management processor/subsystem, etc.).

In the example above, if the indirect address/pointer value is incremented beyond 3, modifying the indirect address/pointer in this way will corrupt the upper canonical bits and cause a non-deterministic memory access that cannot be controlled by an adversary. For instance, going beyond a buffer size by one byte will result in a random memory access that will page fault with high probability. This is due to the bit diffusion properties of the cipher, which ensure that even one-bit changes will diffuse through all of the most significant bits. As a result of the adjustment, which forces values to the end of the 2's power range, buffer overflows cause corruption of the encrypted address bits.

The cipher tweak can be extended to include a code block identifier to provide access controls over which code blocks (e.g., blocks of the calling code) are permitted to use an indirect address/pointer to access memory. Additionally, instruction pointer (which may be referred to as the "program counter") information or ranges can be encoded as part of the pointer encryption tweak (also referred to herein as "address tweak"). The instruction pointer information can be used to limit the scope of what code can access what data. For example, all code can be arranged within fixed blocks of memory within the 64-bit address space. Code with similar access permissions can be grouped together in the same block or range. The address tweak can include the identifier for the block of memory from which an instruction is executing. In this way, code and data can be associated, and access controlled, such that an adversary coming from a different code block will not be able to access data of the protected block using the encrypted pointers, because the encrypted pointers will not decode properly if the wrong code block identifier is used as an address tweak. Further, when a block of code calls, e.g., malloc, to allocate memory to itself, malloc can return the encrypted address using the calling code's memory block to ensure private access to the allocated memory (so long as the allocated memory isn't freed and then reallocated to another code block). Alternatively, other methods of identifying the calling code can be used in the address tweak, such as protection keys. Still further, the metadata for read/write/execute access that is used by the processor 1602 to control access to memory can be used as part of the address tweak for the encrypted address bits. Additionally, the instruction pointer may itself be represented as an encoded pointer (e.g., range-based). In this case, the metadata and encrypted address bits can be used as part of the "tweak" identifying the code block accessing a data pointer or requesting a memory allocation/assignment. At 1924, the encoded indirect address may be output and control returned to memory manager 1644.
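One plausible composition of such an address tweak is sketched below, reusing the toy Feistel cipher from the previous sketch; the field widths and ordering are hypothetical. The point is only that the code block identifier becomes part of the tweak, so a pointer encrypted for one code block fails to decrypt for another.

    def make_address_tweak(exponent: int, adjustment: int, code_block_id: int) -> bytes:
        # Concatenate the range metadata with the identifier of the
        # memory block from which the accessing instruction executes.
        return (exponent.to_bytes(1, "big")
                + adjustment.to_bytes(2, "big")
                + code_block_id.to_bytes(4, "big"))

    owner = make_address_tweak(2, 3, code_block_id=7)
    other = make_address_tweak(2, 3, code_block_id=9)
    slice_ct = feistel_encrypt(0xBEEF, b"key", owner)
    assert feistel_decrypt(slice_ct, b"key", other) != 0xBEEF  # wrong code block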
Referring now to FIGURE 20, an example process 2000 for decoding an indirect address is shown. Portions of the process 2000 may be executed by hardware, firmware, and/or software of the computing device 1600 (e.g., by the processor 1602 executing the secure MOV logic and/or the address decoding logic 1662). The process 2000 begins in response to a memory access operation such as a read, write, or execute operation, e.g., a MOV instruction. Of course, different processor architectures may refer to the "MOV" functionality by different names for the instructions or different options/parameters. As such, the disclosed embodiments apply to all types of "MOV" functionality across different architectures, irrespective of the terminology used to refer to such functionality. Further, the MOV instruction is one example, and any instruction that can access memory to read/write data can apply the address encoding and decoding methods disclosed herein.

In block 2010, the computing device 1600 obtains the encoded indirect address (e.g., the encoded address 1706, which may be obtained from a register 1612). In block 2012, the computing device 1600 determines whether the encoded address obtained in block 2010 has unused or non-canonical bits. If the computing device 1600 determines that the encoded address does not have unused/non-canonical bits (e.g., the address doesn't fall within the non-canonical, or otherwise reserved, range of addresses, whether the address range is 32-bit, 64-bit, 128-bit or whatever range an alternate architecture may require), a fault is raised (2014). If the computing device 1600 determines that the encoded address has unused/non-canonical bits (e.g., the address falls within the non-canonical or otherwise reserved address range), the computing device 1600 proceeds to block 2016. In block 2016, the computing device 1600 decrypts the encrypted portion of the encoded address, using the decryption algorithm counterpart of the encryption algorithm used in block 1922 of FIGURE 19, and using the same secret key and tweak as used by the encryption algorithm in block 1922 of FIGURE 19.

In block 2018, the computing device 1600 "undoes" the adjustment to the range metadata in the decrypted address (e.g., by subtracting the decrypted adjustment value in the unused/non-canonical bits from the full decrypted value of the indirect address). In block 2020, the computing device 1600 returns the decrypted indirect address to its original (e.g., canonical) form by, for example, removing the unused/non-canonical bits.

In block 2022, the computing device 1600 uses the decoded address output by block 2020 as a "true" (e.g., virtual or linear) memory address (e.g., as a pointer). In block 2024, the computing device 1600 determines whether the decoded address used as a memory address/pointer at block 2022 is a corrupted address. If the decoded address is corrupted, a fault is raised (2014). If the decoded address is not corrupted, the computing device 1600 completes the memory access operation successfully, using the decoded address as a memory address/pointer, in block 2026.

In this way, the process 2000 allows the computing device 1600 to verify the range-encoded indirect address and enforce the embedded range check before converting the range-encoded address into a real memory address. Additionally, invalid adjustment values (e.g., adjustment values that go beyond the 2's power range) can be used to determine, with some probability, when corruption occurs, as can invalid address values or metadata reserved to detect corruption. Even if corruption is not detected, the resulting address would not be deterministic (and therefore usable) to an adversary.
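Tying the sketches together, process 2000 can be outlined against the toy encoding used above. A fixed 32-bit encrypted slice, a 16-bit plaintext slice, and a tweak built only from the exponent field are simplifications; a real implementation sizes the slice from the range metadata and composes a richer tweak.

    class PointerFault(Exception):
        pass

    def decode_and_check(encoded: int, key: bytes) -> int:
        # Blocks 2010-2020: reject pointers without metadata bits, decrypt
        # the upper slice, undo the adjustment, restore canonical form.
        if encoded >> CANONICAL_BITS == 0:
            raise PointerFault("no unused/non-canonical bits")   # block 2014
        address, exponent, adjustment = decode_pointer(encoded)
        tweak = exponent.to_bytes(1, "big")                      # simplified tweak
        upper = feistel_decrypt(address >> 16, key, tweak)       # fixed 32-bit slice
        address = (upper << 16) | (address & 0xFFFF)
        return address - adjustment                              # block 2018

A pointer that was tampered with decodes here to a corrupted address, which then faults when used, exactly as described above.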
In addition to the buffer overflow mitigation techniques described above, there are other applications of the pointer address encoding technologies disclosed herein. For example, processor instructions can be restricted by privilege level or caller location authorization (e.g., an instruction pointer block or range of a heap manager). Additional instructions can be added in cases in which the program code itself can control its own pointers and ranges. These instructions may use a larger encoded pointer range as input, and may produce a smaller/equal range pointer (more restrictive) falling within the larger buffer's range if the code executing this instruction belongs to the code block that owns the original (superset) buffer pointer (which can be determined by the instruction pointer). For example, the memory manager module 1644 can allocate the call stack and provide a large range pointer to the call stack (e.g., for the stack pointer). Code segments that are authorized to act on the call stack may then use this processor instruction to encode sub-range pointers to buffers implicitly created on the stack. Compilers can automatically augment code to do this as stack operations are performed (local variables created, etc.), thus protecting even individual data structures or individual variables on the stack. That is, the disclosed techniques enable encoding buffer sizes down to individual variable sizes (e.g., a 32-bit integer can be encoded as a pointer to a buffer of 4 bytes).

Similarly, code blocks that own a pointer can use similar instructions to transfer control/ownership to another/different code block by generating a newly encoded pointer for the target/receiving code block based on the original, e.g., by selecting a smaller buffer size for assignment to another code block. Such an instruction would take as input parameters the resulting buffer size, the original data pointer and an encoded pointer for the targeted code range (that the pointer is being assigned). Such an instruction can decode the input encoded pointer using the instruction pointer of the calling code block as a tweak, reduce the range if the input range is smaller than the input encoded pointer, and use the input encoded pointer to the targeted code block/range as part of the tweak when producing the output encoded pointer (now accessible to the newly assigned code block for the extent of the specified range). Other input parameters could be, for example, additional metadata, such as read/write/execute permissions (possibly as a subset of the original) for the targeted code.
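A sketch of the narrowing operation described above, reusing encode_range and PointerFault from the earlier sketches; ownership verification via the instruction pointer and re-encryption of the resulting pointer are elided.

    def narrow_range(base: int, base_size: int, sub_offset: int, sub_size: int):
        # Derive a more restrictive (smaller or equal) range within the
        # owning buffer, faulting on any attempt to escape it.
        if sub_offset < 0 or sub_size <= 0 or sub_offset + sub_size > base_size:
            raise PointerFault("sub-range exceeds owning buffer")
        start = base + sub_offset
        return encode_range(start, start + sub_size - 1)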
To provide access control, the instruction pointer, or an encoded instruction pointer comprising a range identified with a similar exponent, adjustment and encrypted indirect address bits, can be used as part of the tweak. The instruction pointer can similarly be encoded as an executable range/buffer of memory where the program is stored. When used as a tweak for the data pointer (e.g., an indirect address 1614), the instruction pointer can control access to data by different pieces of program code. Further, the encoded instruction pointer value can be queried by programs for RIP relative addressing (e.g., the instruction pointer register can be read by a program and then used to call/jump to relative offsets within the program's valid range, or to read/write data within the program's valid range by using the encoded instruction pointer value).

Additionally, data pointers may be created and converted by new processor instructions (or operating system routines), allowing ownership of a data pointer (e.g., an indirect address 1614) to be extended to other code/program ranges. That is, the owner program/code of a data pointer (whose instruction pointer range was used as part of the tweak for the data pointer) can call, e.g., an operating system routine (or processor instruction) that will produce a new data pointer that can be used by another program/code range. In this case, the new instructions/operating system routine will decode the original data pointer that was encoded as described herein and re-encode the range using the new program/code range metadata as the tweak, thereby producing a data pointer that will decode properly when accessed from an instruction pointer operating in the new address range. The new instruction/routine may also take as a parameter a smaller range encoding, thereby allowing the program owning the original data pointer to subset the data buffer size to a smaller region of memory accessible by the new program/code range.

Further, a 64-bit stack pointer can be encoded as described herein and, as such, should be updated accordingly by the processor 1602 on stack pushes and pops, calls and returns conforming to the allocated range of the stack. After decoding a MOV instruction to the stack pointer, the processor 1602 may choose to cache the decrypted version of the stack pointer for direct memory access efficiency; however, the processor 1602 may continue to track the range condition to assure stack overflows do not occur.

With instruction pointer relative addressing, the program counter register can be read and used to calculate offsets for position independent code (PIC) and data. The instruction pointer can also be encoded such that legacy instruction pointer relative position independent code will still function correctly. In this case, the encoded instruction pointer register may have a range conforming to the extent of the relocated program code and data (including text sections) in memory. In addition to memory accesses, PIC programs may utilize indirect jumps (JMP) and calls based on RIP relative addressing. As such, the JMP and CALL instructions can be modified to handle encoded pointer addresses, converting them into the actual linear memory address similarly to the MOV instruction. Instruction pointer relative jumps and calls outside of the pointer's bounds may result in a corrupted target address for the jump/call instruction, which is very likely caught with a fault. The loader can also fix relocatable symbol tables to properly encode the extent of the function pointers for their respective code sections and memory locations. This instruction pointer-range pointer can also be used as a flexible code block/identifier tweak to access control data pointers with their associated code. Additionally, encoded range pointers on the call stack can be encrypted to provide control flow integrity between calls and returns while retaining the range encoding when decrypted on returns.
Not all values of the 6-bit exponent metadata are actually used (e.g., with 64-bit addressing). For example, in 64-bit addressing, values beyond 48 will collide with the non-canonical bits and therefore will never be utilized. Thus, exponent values above 48 (or above 57 when five-level paging is used) can be redefined to indicate that other interpretations of the adjustment region apply. This interpretation of the high-order exponent values allows alternative uses of the unused/non-canonical address bits to coexist with the disclosed address encoding mechanisms. Other embodiments can use these undefined values to selectively determine whether the adjustment data is or is not present. For example, an exponent value beyond 48 can indicate that no adjustment is present/needed for the buffer and that only the power of two is valid, setting the power of two back to the beginning without adjustments. This approach can enable better utilization of the address space by selectively determining what metadata is required for the encoded addresses, and by selectively extending the available address bits into the space previously reserved for the adjustment value (see the illustrative sketch later in this subsection).

Example Architectures

FIGURE 21 is a block diagram illustrating an example cryptographic computing environment 2100 according to at least one embodiment. In the example shown, a cryptographic addressing layer 2110 extends across the example compute vectors: central processing unit (CPU) 2102, graphical processing unit (GPU) 2104, artificial intelligence (AI) 2106, and field programmable gate array (FPGA) 2108. For example, the CPU 2102 and GPU 2104 may share the same virtual address translation for data stored in memory 2112, and the cryptographic addresses may build on this shared virtual memory. They may share the same process key for a given execution flow, and compute the same tweaks to decrypt the cryptographically encoded addresses and decrypt the data referenced by such encoded addresses, following the same cryptographic algorithms.

Combined, the capabilities described herein may enable cryptographic computing. Memory 2112 may be encrypted at every level of the memory hierarchy, from the first level of cache through the last level of cache and into the system memory. Binding the cryptographic address encoding to the data encryption may allow extremely fine-grain object boundaries and access control, enabling fine-grain secure containers down to even individual functions and their objects for function-as-a-service. Cryptographically encoding return addresses on a call stack (depending on their location) may also enable control flow integrity without the need for shadow stack metadata. Thus, any of data access control policy and control flow can be performed cryptographically, simply dependent on cryptographic addressing and the respective cryptographic data bindings.

FIGURES 22-24 are block diagrams of exemplary computer architectures that may be used in accordance with embodiments disclosed herein. Generally, any computer architecture designs known in the art for processors and computing systems may be used.
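Returning to the exponent interpretation described at the start of this subsection, the following sketch illustrates how a decoder might treat exponent values above the canonical limit as "no adjustment field present". The constants and the remapping are illustrative assumptions made for this sketch, not a normative bit layout.

/* Illustrative-only sketch of the high-exponent redefinition. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define CANONICAL_LIMIT 48   /* would be 57 with five-level paging */

typedef struct {
    uint8_t exponent;        /* power-of-two range size            */
    bool    has_adjustment;  /* is an adjustment field encoded?    */
} range_meta;

static range_meta decode_exponent(uint8_t exp6)
{
    range_meta m;
    if (exp6 > CANONICAL_LIMIT) {
        /* Redefined value: no adjustment is encoded, so those bits
           can be returned to the address itself. */
        m.exponent = (uint8_t)(exp6 - CANONICAL_LIMIT); /* illustrative */
        m.has_adjustment = false;
    } else {
        m.exponent = exp6;
        m.has_adjustment = true;
    }
    return m;
}

int main(void)
{
    range_meta m = decode_exponent(52); /* 52 > 48: adjustment-free form */
    printf("exponent=%u adjustment=%s\n", m.exponent,
           m.has_adjustment ? "yes" : "no");
    return 0;
}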
In an example, system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, tablets, engineering workstations, servers, network devices, appliances, network hubs, routers, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, smart phones, mobile devices, wearable electronic devices, portable media players, hand-held devices, and various other electronic devices are also suitable for embodiments of computing systems described herein. Generally, suitable computer architectures for embodiments disclosed herein can include, but are not limited to, the configurations illustrated in FIGURES 22-24.

FIGURE 22 is an example illustration of a processor according to an embodiment. Processor 2200 is an example of a type of hardware device that can be used in connection with the implementations shown and described herein (e.g., processor 1602). Processor 2200 may be any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a multi-core processor, a single-core processor, or another device to execute code. Although only one processor 2200 is illustrated in FIGURE 22, a processing element may alternatively include more than one of the processor 2200 illustrated in FIGURE 22. Processor 2200 may be a single-threaded core or, for at least one embodiment, the processor 2200 may be multi-threaded in that it may include more than one hardware thread context (or "logical processor") per core.

FIGURE 22 also illustrates a memory 2202 coupled to processor 2200 in accordance with an embodiment. Memory 2202 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. Such memory elements can include, but are not limited to, random access memory (RAM), read-only memory (ROM), logic blocks of a field programmable gate array (FPGA), erasable programmable read-only memory (EPROM), and electrically erasable programmable ROM (EEPROM).

Processor 2200 can execute any type of instructions associated with algorithms, processes, or operations detailed herein. Generally, processor 2200 can transform an element or an article (e.g., data) from one state or thing to another state or thing.

Code 2204, which may be one or more instructions to be executed by processor 2200, may be stored in memory 2202, or may be stored in software, hardware, firmware, or any suitable combination thereof, or in any other internal or external component, device, element, or object where appropriate and based on particular needs. In one example, processor 2200 can follow a program sequence of instructions indicated by code 2204. Each instruction enters a front-end logic 2206 and is processed by one or more decoders 2208. The decoder may generate, as its output, a micro-operation such as a fixed-width micro-operation in a predefined format, or may generate other instructions, microinstructions, or control signals that reflect the original code instruction. Front-end logic 2206 also includes register renaming logic 2210 and scheduling logic 2212, which generally allocate resources and queue the operation corresponding to the instruction for execution.

Processor 2200 can also include execution logic 2214 having a set of execution units 2216a, 2216b, 2216n, etc.
Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. Execution logic 2214 performs the operations specified by code instructions.

After completion of execution of the operations specified by the code instructions, back-end logic 2218 can retire the instructions of code 2204. In one embodiment, processor 2200 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 2220 may take a variety of known forms (e.g., re-order buffers or the like). In this manner, processor 2200 is transformed during execution of code 2204, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by register renaming logic 2210, and any registers (not shown) modified by execution logic 2214.

Although not shown in FIGURE 22, a processing element may include other elements on a chip with processor 2200. For example, a processing element may include memory control logic along with processor 2200. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches. In some embodiments, non-volatile memory (such as flash memory or fuses) may also be included on the chip with processor 2200.

FIGURE 23A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to one or more embodiments of this disclosure. FIGURE 23B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to one or more embodiments of this disclosure. The solid lined boxes in FIGURES 23A-23B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIGURE 23A, a processor pipeline 2300 includes a fetch stage 2302, a length decode stage 2304, a decode stage 2306, an allocation stage 2308, a renaming stage 2310, a schedule (also known as a dispatch or issue) stage 2312, a register read/memory read stage 2314, an execute stage 2316, a write back/memory write stage 2318, an exception handling stage 2322, and a commit stage 2324.

FIGURE 23B shows processor core 2390 including a front end unit 2330 coupled to an execution engine unit 2350, and both are coupled to a memory unit 2370. Processor core 2390 and memory unit 2370 are examples of the types of hardware that can be used in connection with the implementations shown and described herein (e.g., processor 102, memory 122). The core 2390 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 2390 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.
In addition, processor core 2390 and its components represent an example architecture that could be used to implement logical processors and their respective components.

The front end unit 2330 includes a branch prediction unit 2332 coupled to an instruction cache unit 2334, which is coupled to an instruction translation lookaside buffer (TLB) unit 2336, which is coupled to an instruction fetch unit 2338, which is coupled to a decode unit 2340. The decode unit 2340 (or decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 2340 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, the core 2390 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 2340 or otherwise within the front end unit 2330). The decode unit 2340 is coupled to a rename/allocator unit 2352 in the execution engine unit 2350.

The execution engine unit 2350 includes the rename/allocator unit 2352 coupled to a retirement unit 2354 and a set of one or more scheduler unit(s) 2356. The scheduler unit(s) 2356 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 2356 is coupled to the physical register file(s) unit(s) 2358. Each of the physical register file(s) units 2358 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 2358 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers (GPRs). In at least some embodiments described herein, register units 2358 are examples of the types of hardware that can be used in connection with the implementations shown and described herein (e.g., registers 112). The physical register file(s) unit(s) 2358 is overlapped by the retirement unit 2354 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 2354 and the physical register file(s) unit(s) 2358 are coupled to the execution cluster(s) 2360. The execution cluster(s) 2360 includes a set of one or more execution units 2362 and a set of one or more memory access units 2364. The execution units 2362 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).
While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. Execution units 2362 may also include an address generation unit (e.g., 822) to calculate addresses used by the core to access main memory (e.g., memory unit 2370) and a page miss handler (PMH) (e.g., 826).

The scheduler unit(s) 2356, physical register file(s) unit(s) 2358, and execution cluster(s) 2360 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster; in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 2364). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 2364 is coupled to the memory unit 2370, which includes a data TLB unit 2372 coupled to a data cache unit 2374 coupled to a level 2 (L2) cache unit 2376. In one exemplary embodiment, the memory access units 2364 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 2372 in the memory unit 2370. The instruction cache unit 2334 is further coupled to the level 2 (L2) cache unit 2376 in the memory unit 2370. The L2 cache unit 2376 is coupled to one or more other levels of cache and eventually to a main memory. In addition, a page miss handler (e.g., page miss handler 826) may also be included in core 2390 to look up an address mapping in a page table if no match is found in the data TLB unit 2372.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 2300 as follows: 1) the instruction fetch unit 2338 performs the fetch and length decoding stages 2302 and 2304; 2) the decode unit 2340 performs the decode stage 2306; 3) the rename/allocator unit 2352 performs the allocation stage 2308 and renaming stage 2310; 4) the scheduler unit(s) 2356 performs the schedule stage 2312; 5) the physical register file(s) unit(s) 2358 and the memory unit 2370 perform the register read/memory read stage 2314, and the execution cluster 2360 performs the execute stage 2316; 6) the memory unit 2370 and the physical register file(s) unit(s) 2358 perform the write back/memory write stage 2318; 7) various units may be involved in the exception handling stage 2322; and 8) the retirement unit 2354 and the physical register file(s) unit(s) 2358 perform the commit stage 2324.

The core 2390 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 2390 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology). Accordingly, in at least some embodiments, multi-threaded enclaves may be supported.

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 2334/2374 and a shared L2 cache unit 2376, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

FIGURE 24 illustrates a computing system 2400 that is arranged in a point-to-point (PtP) configuration according to an embodiment. In particular, FIGURE 24 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. Generally, one or more of the computing systems or computing devices described herein (e.g., computing device 100) may be configured in the same or similar manner as computing system 2400.

Processors 2470 and 2480 may be implemented as single-core processors 2474a and 2484a or multi-core processors 2474a-2474b and 2484a-2484b. Processors 2470 and 2480 may each include a cache 2471 and 2481 used by their respective core or cores. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. It should be noted that one or more embodiments described herein could be implemented in a computing system, such as computing system 2400. Moreover, processors 2470 and 2480 are examples of the types of hardware that can be used in connection with the implementations shown and described herein (e.g., processor 102).

Processors 2470 and 2480 may also each include integrated memory controller logic (MC) 2472 and 2482 to communicate with memory elements 2432 and 2434, which may be portions of main memory locally attached to the respective processors. In alternative embodiments, memory controller logic 2472 and 2482 may be discrete logic separate from processors 2470 and 2480. Memory elements 2432 and/or 2434 may store various data to be used by processors 2470 and 2480 in achieving operations and functionality outlined herein.

Processors 2470 and 2480 may be any type of processor, such as those discussed in connection with other figures.
Processors 2470 and 2480 may exchange data via a point-to-point (PtP) interface 2450 using point-to-point interface circuits 2478 and 2488, respectively. Processors 2470 and 2480 may each exchange data with an input/output (I/O) subsystem 2490 via individual point-to-point interfaces 2452 and 2454 using point-to-point interface circuits 2476, 2486, 2494, and 2498. I/O subsystem 2490 may also exchange data with a high-performance graphics circuit 2438 via a high-performance graphics interface 2439, using an interface circuit 2492, which could be a PtP interface circuit. In one embodiment, the high-performance graphics circuit 2438 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. I/O subsystem 2490 may also communicate with a display 2433 for displaying data that is viewable by a human user. In alternative embodiments, any or all of the PtP links illustrated in FIGURE 24 could be implemented as a multi-drop bus rather than a PtP link.

I/O subsystem 2490 may be in communication with a bus 2420 via an interface circuit 2496. Bus 2420 may have one or more devices that communicate over it, such as a bus bridge 2418 and I/O devices 2416. Via a bus 2410, bus bridge 2418 may be in communication with other devices such as a user interface 2412 (such as a keyboard, mouse, touchscreen, or other input devices), communication devices 2426 (such as modems, network interface devices, or other types of communication devices that may communicate through a computer network 2460), audio I/O devices 2414, and/or a data storage device 2428. Data storage device 2428 may store code and data 2430, which may be executed by processors 2470 and/or 2480. In alternative embodiments, any portions of the bus architectures could be implemented with one or more PtP links.

Program code, such as code 2430, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system may be part of computing system 2400 and includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code (e.g., 2430) may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions, stored on a machine-readable medium, which represent various logic within the processor and which, when read by a machine, cause the machine to fabricate logic to perform one or more of the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the present disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

The computing system depicted in FIGURE 24 is a schematic illustration of an embodiment of a computing system that may be utilized to implement various embodiments discussed herein. It will be appreciated that various components of the system depicted in FIGURE 24 may be combined in a system-on-a-chip (SoC) architecture or in any other suitable configuration capable of achieving the functionality and features of the examples and implementations provided herein.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIGURE 25 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of this disclosure. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIGURE 25 shows that a program in a high-level language 2502 may be compiled using an x86 compiler 2504 to generate x86 binary code 2506 that may be natively executed by a processor with at least one x86 instruction set core 2516.
The processor with at least one x86 instruction set core 2516 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 2504 represents a compiler that is operable to generate x86 binary code 2506 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 2516. Similarly, FIGURE 25 shows that the program in the high-level language 2502 may be compiled using an alternative instruction set compiler 2508 to generate alternative instruction set binary code 2510 that may be natively executed by a processor without at least one x86 instruction set core 2514 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 2512 is used to convert the x86 binary code 2506 into code that may be natively executed by the processor without an x86 instruction set core 2514. This converted code is not likely to be the same as the alternative instruction set binary code 2510, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 2512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 2506.

Although this disclosure has been described in terms of certain implementations and generally associated methods, alterations and permutations of these implementations and methods will be apparent to those skilled in the art. For example, the actions described herein can be performed in a different order than as described and still achieve the desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing may be advantageous. Other variations are within the scope of the following claims.

The architectures presented herein are provided by way of example only, and are intended to be non-exclusive and non-limiting. Furthermore, the various parts disclosed are intended to be logical divisions only, and need not necessarily represent physically separate hardware and/or software components. Certain computing systems may provide memory elements in a single physical memory device, and in other cases, memory elements may be functionally distributed across many physical devices.
In the case of virtual machine managers or hypervisors, all or part of a function may be provided in the form of software or firmware running over a virtualization layer to provide the disclosed logical function.

Note that with the examples provided herein, interaction may be described in terms of a single computing system. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by referencing only a single computing system. Moreover, the systems described herein are readily scalable and can be implemented across a large number of components (e.g., multiple computing systems), as well as in more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the computing system as potentially applied to a myriad of other architectures.

As used herein, unless expressly stated to the contrary, use of the phrase 'at least one of' refers to any combination of the named items, elements, conditions, or activities. For example, 'at least one of X, Y, and Z' is intended to mean any of the following: 1) at least one X, but not Y and not Z; 2) at least one Y, but not X and not Z; 3) at least one Z, but not X and not Y; 4) at least one X and at least one Y, but not Z; 5) at least one X and at least one Z, but not Y; 6) at least one Y and at least one Z, but not X; or 7) at least one X, at least one Y, and at least one Z.

Additionally, unless expressly stated to the contrary, the terms 'first', 'second', 'third', etc., are intended to distinguish the particular nouns (e.g., element, condition, module, activity, operation, claim element, etc.) they modify, but are not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified noun. For example, 'first X' and 'second X' are intended to designate two separate X elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements.

References in the specification to "one embodiment," "an embodiment," "some embodiments," etc., indicate that the embodiment(s) described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any embodiments or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, the separation of various system components and modules in the embodiments described above should not be understood as requiring such separation in all embodiments. It should be understood that the described program components, modules, and systems can generally be integrated together in a single software product or packaged into multiple software products.

Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of this disclosure. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.

Other Notes and Examples

The following examples pertain to embodiments in accordance with this specification. System, apparatus, method, and machine-readable storage medium embodiments can include one or a combination of the following examples:

Example A1 provides a processor that includes a core to execute an instruction, where the core includes a register to store a pointer to a memory location and a tag associated with the pointer. The tag indicates whether the pointer is at least partially immutable. The core also includes circuitry to access the pointer and the tag associated with the pointer, and determine whether the tag indicates that the pointer is at least partially immutable.
The circuitry is further, based on a determination that the tag indicates the pointer is at least partially immutable, to obtain a memory address of the memory location based on the pointer, use the memory address to access encrypted data at the memory location, and decrypt the encrypted data based on a key and a tweak, the tweak including one or more bits derived, at least in part, from the pointer.

In Example A2, the subject matter of Example A1 can optionally include where the circuitry is further, based on a determination that the tag indicates the pointer is not immutable, to restrict the memory address of the memory location from being obtained based on the pointer.

In Example A3, the subject matter of Example A1 can optionally include where at least a portion of the memory address is stored in the pointer in a plaintext format.

In Example A4, the subject matter of Example A3 can optionally include where the pointer is an encoded pointer, and the circuitry to obtain the memory address is further to decode the encoded pointer to obtain the memory address.

In Example A5, the subject matter of Example A1 can optionally include where the pointer is cryptographically encoded, and the circuitry to obtain the memory address is further to cryptographically decode the pointer to obtain the memory address.

In Example A6, the subject matter of any one of Examples A1-A5 can optionally include where the pointer is to a base address for a memory location storing one or more instructions for execution.

In Example A7, the subject matter of any one of Examples A1-A5 can optionally include where the circuitry is further to execute an instruction to overwrite the pointer, and clear the tag associated with the pointer based on executing the instruction to overwrite the pointer.

In Example A8, the subject matter of any one of Examples A1-A5 can optionally include where the circuitry is further to execute an instruction to clear all the tag bits in a range of memory.

In Example A9, the subject matter of any one of Examples A1-A5 can optionally include where the circuitry is further to: access an instruction to store the pointer to memory; determine whether the instruction is of a type authorized to store pointers to memory; and execute the instruction based on a determination that the instruction is of the type authorized to store pointers to memory.

In Example A10, the subject matter of any one of Examples A1-A5 can optionally include where the circuitry is further to: access an instruction to modify a word in memory; determine that the word has an associated tag that is set to indicate the word is storing a pointer; determine whether the instruction is of a type authorized to modify pointers; and execute the instruction based on a determination that the instruction is of the type authorized to modify pointers.

In Example A11, the subject matter of any one of Examples A1-A5 can optionally include where the circuitry is further to: access an instruction to copy a set of words stored in memory; determine that at least one word to be copied has an associated tag that is set to indicate the word is storing a pointer; determine whether the instruction is of a type authorized to copy pointers; and execute the instruction based on a determination that the instruction is of the type authorized to copy pointers.

In Example A12, the subject matter of any one of Examples A1-A5 can optionally include where the tweak is further based on a type of the pointer.

In Example A13, the subject matter of any one of Examples A1-A5
can optionally include where the tweak is further based on a stack frame for the memory address.

In Example A14, the subject matter of any one of Examples A1-A5 can optionally include a cache to store a plurality of words and tags associated with the respective words, wherein the core further comprises circuitry to: load a word comprising the pointer from the cache into the register; and propagate the tag associated with the pointer from the cache into the register based on loading the word comprising the pointer.

In Example A15, the subject matter of Example A14 can optionally include where a set of words and a set of tags associated with the set of words are stored in a same cacheline of the cache, the set of tags being inaccessible by software.

In Example A16, the subject matter of Example A14 can optionally include where the cache comprises a first set of ways to store first words and first tags associated with the first words, and a second set of ways to store second words without tags.

In Example A17, the subject matter of Example A16 can optionally include where the core further comprises circuitry to: access a cacheline from memory; determine whether the cacheline includes tagged data; store the cacheline in a way of the first set of ways based on a determination that the cacheline includes tagged data; and store the cacheline in a way of the second set of ways based on a determination that the cacheline does not include tagged data.

In Example A18, the subject matter of any one of Examples A1-A17 can optionally include where the tag comprises 1 bit.

Example M1 provides a method comprising: accessing, from a register, a pointer to a memory location and a tag associated with the pointer, wherein the tag indicates whether the pointer is at least partially immutable; determining whether the tag indicates that the pointer is at least partially immutable; and based on a determination that the tag indicates the pointer is at least partially immutable: obtaining a memory address of the memory location based on the pointer; using the memory address to access encrypted data at the memory location; and decrypting the encrypted data based on a key and a tweak, the tweak including one or more bits derived, at least in part, from the pointer.

In Example M2, the subject matter of Example M1 can optionally include restricting, based on a determination that the tag indicates the pointer is not immutable, the memory address of the memory location from being obtained based on the pointer.

In Example M3, the subject matter of Example M1 can optionally include where the pointer is an encoded pointer, and obtaining the memory address further comprises decoding the encoded pointer.

In Example M4, the subject matter of Example M1 can optionally include where the pointer is cryptographically encoded, and obtaining the memory address comprises cryptographically decoding the pointer to obtain the memory address.

In Example M5, the subject matter of Example M1 can optionally include where the pointer is in a plaintext format.

In Example M6, the subject matter of any one of Examples M1-M5 can optionally include where the pointer is to a base address for a memory location storing one or more instructions for execution.

In Example M7, the subject matter of any one of Examples M1-M5 can optionally include executing an instruction to overwrite the pointer, and clearing the tag associated with the pointer based on executing the instruction to overwrite the pointer.

In Example M8, the subject matter of any one of Examples M1-M5 can
optionally include executing an instruction to clear all the tag bits in a range of memory.

In Example M9, the subject matter of any one of Examples M1-M5 can optionally include: accessing an instruction to store the pointer to memory; determining whether the instruction is of a type authorized to store pointers to memory; and executing the instruction based on a determination that the instruction is of the type authorized to store pointers to memory.

In Example M10, the subject matter of any one of Examples M1-M5 can optionally include: accessing an instruction to modify a word in memory; determining that the word has an associated tag that is set to indicate the word is storing a pointer; determining whether the instruction is of a type authorized to modify pointers; and executing the instruction based on a determination that the instruction is of the type authorized to modify pointers.

In Example M11, the subject matter of any one of Examples M1-M5 can optionally include: accessing an instruction to copy a set of words stored in memory; determining that at least one word to be copied has an associated tag that is set to indicate the word is storing a pointer; determining whether the instruction is of a type authorized to copy pointers; and executing the instruction based on a determination that the instruction is of the type authorized to copy pointers.

In Example M12, the subject matter of any one of Examples M1-M5 can optionally include where the tweak is further based on a type of the pointer.

In Example M13, the subject matter of any one of Examples M1-M5 can optionally include where the tweak is further based on a stack frame for the memory address.

In Example M14, the subject matter of any one of Examples M1-M5 can optionally include loading a word comprising the pointer from a cache into the register and propagating the tag associated with the pointer from the cache into the register based on loading the word comprising the pointer.

In Example M15, the subject matter of Example M14 can optionally include where a set of words and a set of tags associated with the set of words are stored in a same cacheline of the cache, the set of tags being inaccessible by software.

In Example M16, the subject matter of Example M14 can optionally include where the cache comprises a first set of ways to store first words and first tags associated with the first words, and a second set of ways to store second words without tags.

In Example M17, the subject matter of Example M16 can optionally include: accessing a cacheline from memory; determining whether the cacheline includes tagged data; storing the cacheline in a way of the first set of ways based on a determination that the cacheline includes tagged data; and storing the cacheline in a way of the second set of ways based on a determination that the cacheline does not include tagged data.

In Example M18, the subject matter of any one of Examples M1-M17 can optionally include where the tag comprises 1 bit.

Example C1 provides one or more computer-readable media with code stored thereon, where the code is executable to cause a machine to: access, from a register, a pointer to a memory location and a tag associated with the pointer, wherein the tag indicates whether the pointer is at least partially immutable; determine whether the tag indicates that the pointer is at least partially immutable; and based on a determination that the tag indicates the pointer is at least partially immutable: obtain a memory address of the memory location based on the pointer;
use the memory address to access encrypted data at the memory location; and decrypt the encrypted data based on a key and a tweak, the tweak including one or more bits derived, at least in part, from the pointer.

In Example C2, the subject matter of Example C1 can optionally include where the code is further executable to cause the machine to restrict, based on a determination that the tag indicates the pointer is not immutable, the memory address of the memory location from being obtained based on the pointer.

In Example C3, the subject matter of Example C1 can optionally include where the pointer is an encoded pointer, and the code to obtain the memory address is to decode the encoded pointer to obtain the memory address.

In Example C4, the subject matter of Example C1 can optionally include where the pointer is cryptographically encoded, and the code to decode the encoded pointer to obtain the memory address is to cryptographically decode the pointer to obtain the memory address.

In Example S1, a system comprises means for accessing, from a register, a pointer to a memory location and a tag associated with the pointer, wherein the tag indicates whether the pointer is at least partially immutable; and means for performing a set of operations based on a determination that the tag indicates the pointer is at least partially immutable, the set of operations comprising: obtaining a memory address of the memory location based on the pointer; using the memory address to access encrypted data at the memory location; and decrypting the encrypted data based on a key and a tweak, the tweak including one or more bits derived, at least in part, from the pointer.

Example X1 provides an apparatus comprising means for performing the method of any one of Examples M1-M18.

In Example X2, the subject matter of Example X1 can optionally include where the means for performing the method comprise at least one processor and at least one memory element.

In Example X3, the subject matter of Example X2 can optionally include where the at least one memory element comprises machine-readable instructions that, when executed, cause the apparatus to perform the method of any one of Examples M1-M18.

In Example X4, the subject matter of any one of Examples X1-X3 can optionally include where the apparatus is one of a computing system or a system-on-a-chip.

Example Y1 provides at least one machine-readable storage medium comprising instructions, where the instructions when executed realize an apparatus or implement a method as provided in any one of Examples A1-A18 or M1-M18.
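As a rough software model of the flow recited in Examples A1 and M1 (a sketch only: the tag width, the pointer decoding, and the XOR loop standing in for the real tweakable cipher are all assumptions, and mem is treated as a flat array indexed by the decoded address):

/* Sketch of the Example A1/M1 flow: a register word carries a pointer
   plus a 1-bit immutability tag; only tagged pointers may be decoded
   and used to decrypt data. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

typedef struct {
    uint64_t ptr;   /* possibly encoded pointer                        */
    bool     tag;   /* set => pointer is at least partially immutable  */
} tagged_reg;

static bool load_and_decrypt(tagged_reg r, const uint8_t *mem, size_t len,
                             uint64_t key, uint8_t *out)
{
    if (!r.tag)
        return false;              /* restricted: tag not set           */
    uint64_t addr  = r.ptr;        /* real hardware would decode this   */
    uint64_t tweak = r.ptr;        /* tweak bits derived from pointer   */
    for (size_t i = 0; i < len; i++)
        out[i] = mem[addr + i] ^ (uint8_t)(key ^ tweak ^ i);
    return true;
}

int main(void)
{
    uint8_t mem[16] = { 0 };
    uint8_t out[4];
    tagged_reg r = { 0, true };    /* tagged pointer to offset 0        */
    return load_and_decrypt(r, mem, sizeof out, 0xABCDULL, out) ? 0 : 1;
}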
A processor includes a plurality of cores and instructions executable by at least one of the plurality of cores as a virtual machine monitor (VMM). To configure resources for a virtual machine (VM), the VMM is to: group the cores into one or more clusters, wherein a subset of the cores is to execute the VM; and create, within a buffer in memory, a data structure to store, for the subset, one or more entries, each entry including a cluster identifier and a bitmap. The bitmap identifies the cores of the subset within the cluster corresponding to the cluster identifier. The VMM is further to write, to a virtual machine control structure (VMCS): a pointer to the data structure, wherein the pointer includes a physical address of the memory; and a number of the one or more entries in the data structure; and to set, within the VMCS, a local interrupt controller pass-through field.
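A minimal sketch of the per-VM data structure and VMCS fields summarized above, together with the bitmap check used to validate an inter-processor interrupt (IPI) destination. Types, field names, and widths are illustrative assumptions for this sketch, not the actual hardware layout.

/* Hedged sketch: one entry per cluster, pairing a cluster identifier
   with a bitmap of the VM's cores in that cluster. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t cluster_id;    /* cluster identifier                      */
    uint32_t core_bitmap;   /* bit i set => core i of this cluster
                               belongs to the VM                       */
} vm_cluster_entry;

typedef struct {
    uint64_t table_pa;      /* physical address of the entry table     */
    uint32_t num_entries;   /* number of entries in the table          */
    bool     lapic_passthrough; /* local interrupt controller
                                   pass-through field                  */
} vmcs_ipi_fields;

/* Model of the claimed check: deliver the IPI only if the destination
   interrupt-controller ID mask ANDs non-zero against the bitmap of
   the matching cluster entry; otherwise the write is discarded. */
static bool ipi_allowed(const vm_cluster_entry *tbl, uint32_t n,
                        uint32_t cluster_id, uint32_t dest_id_mask)
{
    for (uint32_t i = 0; i < n; i++)
        if (tbl[i].cluster_id == cluster_id)
            return (tbl[i].core_bitmap & dest_id_mask) != 0;
    return false; /* no matching entry: discard */
}

int main(void)
{
    vm_cluster_entry tbl[] = { { 0, 0x5u }, { 1, 0x2u } };
    return ipi_allowed(tbl, 2, 1, 0x2u) ? 0 : 1; /* allowed => exit 0 */
}

The claims that follow walk through this same check for physical and logical destination modes, and for the destination-shorthand cases.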
1. A processor comprising:
a plurality of cores, wherein at least one core of the plurality of cores is to execute a virtual machine monitor (VMM), and wherein, to configure resources for a virtual machine (VM), the VMM is to:
group the plurality of cores into one or more clusters, wherein a subset of the plurality of cores is to execute the VM;
create a data structure in a buffer in a memory, the data structure to store one or more entries for the subset of the plurality of cores, each entry of the one or more entries including a cluster identifier and a bitmap, wherein the bitmap identifies the cores of the subset within the cluster corresponding to the cluster identifier;
write, to a virtual machine control structure of the memory:
a pointer to the data structure, wherein the pointer includes a physical address of the memory; and
a number of the one or more entries in the data structure; and
set a local interrupt controller pass-through field in the virtual machine control structure.
2. The processor of claim 1, further comprising an interrupt command register (ICR), wherein a guest operating system (OS) of the VM is to write a value to the ICR to send an inter-processor interrupt (IPI), the value to populate:
a destination field, which identifies one or more destination cores among the plurality of cores; and
a destination mode, which is either physical or logical.
3. The processor of claim 2, further comprising a programmable interrupt controller to:
convert the value in the destination field into a value for a cluster identifier of a cluster of the one or more clusters and a value for an interrupt controller identifier (ID) within the cluster;
use the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
compute a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry; and
in response to the bitwise AND producing a zero value, discard the value written into the ICR.
4. The processor of claim 2, further comprising a programmable interrupt controller to:
determine that the local interrupt controller pass-through field is set for the VM;
determine that the pointer and the number of the one or more entries in the virtual machine control structure are non-zero; and
in response to using the ICR to verify that the IPI is destined for a core in the subset of the plurality of cores, send the IPI onto a system bus coupling the plurality of cores.
5. The processor of claim 4, wherein, to determine that the IPI is destined for a core in the subset of the plurality of cores, the programmable interrupt controller is further to:
determine that the destination mode is physical;
convert the value in the destination field into a value for a cluster identifier of a cluster of the one or more clusters and a value for an interrupt controller identifier (ID) within the cluster;
use the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
compute a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry; and
in response to the bitwise AND producing a non-zero value, determine that the interrupt controller ID is associated with one of the cores in the subset of the plurality of cores.
6. The processor of claim 4, wherein, to determine that the IPI is destined for a core in the subset of the plurality of cores, the programmable interrupt controller is further to:
determine that the destination mode is logical;
use the most significant bits of the destination field as the cluster identifier of a cluster of the one or more clusters, and use the least significant bits of the destination field as the interrupt controller identifier (ID) within the cluster;
use the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
compute a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry;
append the non-zero result of the bitwise AND to the cluster identifier to generate an updated value for the destination field;
store the updated value back into the destination field; and
use the updated value in the destination field and the value of the ICR to send the IPI onto the system bus.
7. The processor of any one of claims 2-6, wherein the guest OS of the VM is further to write a destination shorthand value to the ICR, the processor further comprising a programmable interrupt controller to:
determine one of the following: the destination shorthand has been set to all-including-self, or the destination field is programmed with an all-ones value;
determine, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries;
for each cluster with a non-zero bitmap, combine the non-zero bitmap with the cluster identifier of the cluster to generate an updated value;
store the updated value in the destination field;
set the destination mode to logical; and
use the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
8. The processor of any one of claims 2-6, wherein the guest OS of the VM is further to
write a destination shorthand value to the ICR, the processor further comprising a programmable interrupt controller to:
determine that the destination shorthand has been set to all-excluding-self;
determine, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries, excluding any bitmap bit corresponding to the core, within the subset of the plurality of cores, from which the IPI was received;
for each cluster with a non-zero bitmap, combine the non-zero bitmap with the cluster identifier of the cluster to generate an updated value;
store the updated value in the destination field;
set the destination mode to logical; and
use the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
9. A method comprising:
grouping, by a virtual machine monitor (VMM) executing on at least one of a plurality of cores, the plurality of cores into one or more clusters, wherein a subset of the plurality of cores is to execute a virtual machine (VM);
creating, by the VMM, a data structure in a buffer in a memory, the data structure to store one or more entries for the subset of the plurality of cores, each entry of the one or more entries including a cluster identifier and a bitmap, wherein the bitmap identifies the cores of the subset within the cluster corresponding to the cluster identifier;
writing, by the VMM, to a virtual machine control structure of the memory:
a pointer to the data structure, wherein the pointer includes a physical address of the memory; and
a number of the one or more entries in the data structure; and
setting, by the VMM, a local interrupt controller pass-through field in the virtual machine control structure.
10. The method of claim 9, further comprising writing, by a guest operating system (OS) of the VM, a value to an interrupt command register (ICR) to send an inter-processor interrupt (IPI), the value populating:
a destination field, which identifies one or more destination cores among the plurality of cores; and
a destination mode, which is either physical or logical.
11. The method of claim 10, further comprising:
reading the value in the destination field to determine a cluster identifier of a cluster of the one or more clusters and an interrupt controller identifier (ID) within the cluster;
using the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
computing a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry; and
in response to the bitwise AND producing a zero value, discarding the value written into the ICR.
12. The method of claim 10, further comprising:
determining that the local interrupt controller pass-through field is set for the VM;
determining that the pointer and the number of the one or more entries in the virtual machine control structure are non-zero; and
in response to using the ICR to verify that the IPI is destined for a core in the subset of the plurality of cores, sending the IPI onto a system bus coupling the plurality of cores.
13. The method of claim 12, wherein, to determine that the IPI is destined for a core in the subset of the plurality of cores, the method further comprises:
determining that the destination mode is physical;
converting the value in the destination field into a value for a cluster identifier of a cluster of the one or more clusters and a value for an interrupt controller identifier (ID) within the cluster;
using the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
computing a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry; and
in response to the bitwise AND producing a non-zero value, determining that the interrupt controller ID is associated with one of the cores in the subset of the plurality of cores.
14. The method of claim 12, wherein, to determine that the IPI is destined for a core in the subset of the plurality of cores, the method further comprises:
determining that the destination mode is logical;
using the most significant bits of the destination field as the cluster identifier of a cluster of the one or more clusters, and using the least significant bits of the destination field as the interrupt controller identifier (ID) within the cluster;
using the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
computing a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry;
appending the non-zero result of the bitwise AND to the cluster identifier to generate an updated value for the destination field;
storing the updated value back into the destination field; and
using the updated value in the destination field and the value of the ICR to send the IPI onto the system bus.
15. The method of any one of claims 10-14, wherein the guest OS of the VM is further to write a destination shorthand value to the ICR, the method further comprising:
determining one of the following: the destination shorthand has been set to
determining one of: that the destination shorthand has been set to all including self, or that the destination field is programmed with an all-ones value;
determining, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries;
for each cluster with a non-zero bitmap, combining the non-zero bitmap with the cluster identifier of the cluster to generate an updated value;
storing the updated value in the destination field;
setting the destination mode to logical; and
using the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
16. The method of any one of claims 10-15, wherein the guest OS of the VM is further to write a destination shorthand value to the ICR, and the method further comprises:
determining that the destination shorthand has been set to all excluding self;
determining, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries, excluding any bitmap for the core, in the subset of the plurality of cores, from which the IPI was received;
for each cluster with a non-zero bitmap, combining the non-zero bitmap with the cluster identifier of the cluster to generate an updated value;
storing the updated value in the destination field;
setting the destination mode to logical; and
using the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
17. A system comprising:
a memory device; and
a plurality of cores, wherein at least one core of the plurality of cores is to execute a virtual machine monitor (VMM), and wherein, to configure resources for a virtual machine (VM), the VMM is to:
group the plurality of cores into one or more clusters, wherein a subset of the plurality of cores is to execute the VM;
create a data structure in a buffer in the memory device, the data structure to store one or more entries for the subset of the plurality of cores, each entry of the one or more entries including a cluster identifier and a bitmap, wherein the bitmap identifies the cores, in the subset, of the cluster corresponding to the cluster identifier;
write, to a virtual machine control structure of the memory device:
a pointer to the data structure, wherein the pointer comprises a physical address of the memory device; and
a number of the one or more entries in the data structure; and
set a local interrupt controller pass-through field in the virtual machine control structure.
18. The system of claim 17, further comprising an interrupt command register (ICR), wherein a guest operating system (OS) of the VM is to write a value to the ICR to send an inter-processor interrupt (IPI), the value to populate:
a destination field that identifies one or more destination cores among the plurality of cores; and
a destination mode comprising one of physical or logical.
19. The system of claim 18, further comprising a programmable interrupt controller to:
convert the value in the destination field into a value for a cluster identifier of the one or more clusters and a value for an interrupt controller identifier (ID) within the cluster;
access, using the pointer, the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
compute a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry; and
in response to the bitwise AND generating a zero value, discard the value written to the ICR.
20. The system of claim 18, further comprising a programmable interrupt controller to:
determine that the local interrupt controller pass-through field is set for the VM;
determine that the pointer and the number of the one or more entries in the virtual machine control structure are non-zero; and
in response to verifying, using the ICR, that the IPI is destined for a core in the subset of the plurality of cores, send the IPI onto a system bus coupling the plurality of cores.
21. The system of claim 20, wherein, to determine that the IPI is destined for a core in the subset of the plurality of cores, the programmable interrupt controller is further to:
determine that the destination mode is physical;
convert the value in the destination field into a value for a cluster identifier of the one or more clusters and a value for an interrupt controller identifier (ID) within the cluster;
access, using the pointer, the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
compute a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry; and
in response to the bitwise AND generating a non-zero value, determine that the interrupt controller ID is associated with one of the cores in the subset of the plurality of cores.
22. The system of claim 20, wherein, to determine that the IPI is destined for a core in the subset of the plurality of cores, the programmable interrupt controller is further to:
determine that the destination mode is logical;
use the most significant bits of the destination field as the cluster identifier of a cluster of the one or more clusters, and use the least significant bits of the destination field as the interrupt controller identifier (ID) within the cluster;
access, using the pointer, the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier;
compute a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry;
append the non-zero result of the bitwise AND to the cluster identifier to generate an updated value for the destination field;
store the updated value back into the destination field; and
use the updated value in the destination field and the value of the ICR to send the IPI onto the system bus.
23. The system of any one of claims 18-22, wherein the guest OS of the VM is further to write a destination shorthand value to the ICR, and wherein the system further comprises a programmable interrupt controller to:
determine one of: that the destination shorthand has been set to all including self, or that the destination field is programmed with an all-ones value;
determine, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries;
for each cluster with a non-zero bitmap, combine the non-zero bitmap with the cluster identifier of the cluster to generate an updated value;
store the updated value in the destination field;
set the destination mode to logical; and
use the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
24. The system of any one of claims 18-22, wherein the guest
OS of the VM is further to write a destination shorthand value to the ICR, and wherein the system further comprises a programmable interrupt controller to:
determine that the destination shorthand has been set to all excluding self;
determine, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries, excluding any bitmap for the core, in the subset of the plurality of cores, from which the IPI was received;
for each cluster with a non-zero bitmap, combine the non-zero bitmap with the cluster identifier of the cluster to generate an updated value;
store the updated value in the destination field;
set the destination mode to logical; and
use the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
25. A device comprising means for performing the method of any one of claims 9 to 14.
Inter-processor interrupt virtualization using local interrupt controller pass-through

Technical field

The present disclosure relates to computer systems and, more specifically, to inter-processor interrupt (IPI) virtualization using pass-through of a local interrupt controller.

Background

Modern computing systems use virtualization of emulated hardware for security purposes. Virtual machines (VMs) that run embedded or real-time applications on emulated hardware may require exclusive access to resources (such as one or more processor cores, memory, or input/output (I/O) devices) for performance reasons, while maintaining the isolation advantages provided by virtualization technology. In this case, the virtual machine monitor (VMM), also called a hypervisor, can choose not to trap and not emulate the VM's accesses to the local advanced programmable interrupt controller (LAPIC), thereby providing the VM with local APIC pass-through. The local APIC is designed as an intermediary between interrupt sources (for example, processor interrupt pins, internal sources, and external I/O devices) and their destinations, and, in multi-core systems, as the mechanism for sending and receiving inter-processor interrupts (IPIs) between cores. However, providing the VM with direct local APIC access may come at the cost of security. For example, a VM can issue an interrupt storm (similar to a denial-of-service attack) by issuing IPIs to processors on the platform that are running other VMs or the host operating system.

Description of the drawings

FIG. 1A is a block diagram illustrating a computing platform with processors employing local advanced programmable interrupt controllers (APICs) according to one implementation.

FIG. 1B is a block diagram illustrating one of the processors shown in FIG. 1A according to one implementation.

FIG. 1C is a block diagram of a virtual machine monitor (VMM) running two VMs according to one implementation, where each VM has access to a virtual machine control structure.

FIG. 2 is a diagram of the format of the logical destination register (LDR) of the local APIC according to one implementation.

FIG. 3A is a diagram of a data structure for a VM established in a memory buffer according to one implementation, in which each entry identifies a cluster identifier and a bitmap, the bitmap identifying the cores, in a subset of cores, of the cluster corresponding to the cluster identifier.

FIG. 3B is a logic flow diagram illustrating an example of how to create the data structure of FIG. 3A according to one implementation.

FIG. 4 is a diagram of a format of an interrupt command register (ICR) according to one implementation, to which a guest of a VM writes in order to send an IPI to a destination core.

FIG. 5 is a flowchart of a method for setting up local APIC virtualization pass-through by a VMM and a VM according to one implementation.

FIG. 6 is a logic flow diagram illustrating local APIC pass-through logic for the physical destination mode according to one implementation.

FIG. 7 is a flowchart of a method for implementing local APIC pass-through logic for the physical destination mode according to one implementation.

FIG. 8 is a logic flow diagram illustrating local APIC pass-through logic for the logical destination mode according to one implementation.

FIG. 9 is a flowchart of a method for implementing local APIC pass-through logic for the logical destination mode according to one implementation.
FIG. 10 is a flowchart of a method for implementing local APIC pass-through logic for destination shorthands according to one implementation.

FIG. 11A is a block diagram illustrating a micro-architecture for a processor in which one implementation of the present disclosure can be used.

FIG. 11B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented according to at least one implementation of the present disclosure.

FIG. 12 illustrates a block diagram of a micro-architecture for a processing device that, according to one implementation, includes logic circuits providing IPI virtualization using local interrupt controller pass-through.

FIG. 13 is a block diagram of a computer system according to one implementation.

FIG. 14 is a block diagram of a computer system according to another implementation.

FIG. 15 is a block diagram of a system on chip according to one implementation.

FIG. 16 illustrates another implementation of a block diagram of a computing system.

FIG. 17 illustrates another implementation of a block diagram of a computing system.

Detailed description

Various aspects of the present disclosure are directed to a system and method for enhancing local advanced programmable interrupt controller (APIC) logic to enable a virtual machine monitor (VMM) to provide local APIC pass-through to one or more virtual machines (VMs). Local APIC pass-through means that the VMM does not have to trap and emulate the VM's local APIC operations (for example, via virtualization of the local APIC) in order for the VM to send inter-processor interrupts (IPIs) to the core(s) on which the VM is executing. To provide the VM with such access to the local APIC, the VM must be trusted to directly issue IPIs that are forwarded to the destination core(s). The VM is generally trusted to send IPIs to the same core(s) on which the VM is executing.

System software (for example, a guest operating system (OS) or other application of the VM) can write an interrupt command register (ICR) to send IPIs between multiple cores. In configurations where the VMM provides the VM with direct access to the local APIC, the write to the ICR can be virtualized in the processor to verify whether the IPI is directed to a core on which the VM is executing. Any core in the subset of cores on which the VM is executing is a valid (or authorized) destination to which the VM can send IPIs. To check that a core is a valid destination, the APIC identifier (ID) of each core in the subset of cores (for the VM) can be associated, in a data structure stored in a memory buffer, with the cluster identifier (ID) of that core's cluster within the subset of cores. In this way, the data structure can embody a local APIC processor mapping for each VM on the computing platform. In one embodiment, the data structure is, for example, a table. Before an IPI is forwarded onto the system bus to which the multiple cores of the multi-core processor are coupled, the data structure can be cross-referenced against the value stored in the ICR.

In various implementations, the VMM groups the multiple cores into one or more clusters, where a subset of the cores is to execute the VM. The VMM may further create a data structure in a buffer of the memory, the data structure to store one or more entries for the subset of cores, each entry including a cluster identifier and a bitmap of local interrupt controller IDs. The bitmap identifies the cores, in the subset of cores, of the cluster corresponding to the cluster identifier.
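For concreteness, the per-VM mapping and the control-structure values described above can be pictured as follows. This is a minimal C sketch, not the specification's actual layout; the struct and field names (apic_id_entry, vmcs_apic_fields, and so on) are hypothetical illustrations of one entry per cluster (a 16-bit cluster identifier plus a 16-bit bitmap of local interrupt controller IDs) and of the three values the VMM programs.

```c
#include <stdint.h>

/* One entry per cluster in the per-VM APIC ID table (names are
 * illustrative, not taken from the specification). */
struct apic_id_entry {
    uint16_t cluster_id; /* cluster identifier */
    uint16_t bitmap;     /* one bit per core of this cluster that runs the VM */
};

/* The three values the VMM writes into the VM's control structure
 * to enable local interrupt controller pass-through. */
struct vmcs_apic_fields {
    uint64_t table_pointer; /* physical address of the apic_id_entry array */
    uint32_t entry_count;   /* number of entries in the table */
    uint8_t  passthrough;   /* local interrupt controller pass-through field */
};
```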
The VMM may further write a pointer to the data structure, and the number of one or more entries in the data structure, into a virtual machine control structure (VMCS), where the pointer to the data structure includes, for example, a physical address of the memory. The VMM may further set a local interrupt controller pass-through field in the virtual machine control structure to indicate local APIC virtualization pass-through for the VM. This allows the VMM to virtualize the inspection of the data structure via the VMCS to ensure that future IPIs are directed only to authorized cores for a specific VM.

In one implementation, to send an IPI to one or more of the multiple cores, the guest OS of the VM may write values to the ICR. These values include, for example, values for: a destination field that identifies one or more destination cores; a destination mode (physical or logical); and a destination shorthand (for example, self, all including self, or all excluding self, where "self" refers to the core that initiated the IPI). Then, as part of handling the IPI stored in the ICR, the local APIC can verify that the local interrupt controller pass-through field has been set and that the pointer and the number of one or more entries in the VMCS are non-zero. This signals to the local APIC logic that local APIC virtualization pass-through has been enabled and set up. The local APIC logic can then use access to the data structure previously set up in memory by the VMM (to which the pointer is directed) to determine whether the IPI that has been written to the ICR is directed to an authorized core in the subset of cores executing the VM that stored the value in the ICR. In this way, no VM can send IPI storms to cores outside its domain (for example, to perform denial-of-service-type attacks).

These proposed solutions have several advantages over conventional systems and methods in allowing VMs to directly access local APICs, which improves performance compared to VMs that undergo VMM virtualization of the local APIC. In addition, security is ensured by managing the cores to which any given VM can send IPIs (for example, the subset of cores on which the VM is currently running). This prevents other VMs running on the same platform from conducting denial-of-service attacks. Implementations of the present disclosure can also encourage the use of virtualization technology for critical applications, such as autonomous driving, without safety concerns. In addition, buffering or caching of the data structures storing the local APIC mappings incurs minimal overhead of a few processor cycles.

FIG. 1A is a block diagram illustrating a computing platform 100 (for example, a system) according to one implementation, the computing platform 100 having processors 102A, 102B, 102C...102N employing local advanced programmable interrupt controllers (APICs) 106A, 106B, 106C...106N, respectively. For simplicity, a local APIC may variously be called a local interrupt controller or a programmable interrupt controller. Each processor may include one or more cores 104, and is therefore a multi-core processor.
The computing platform 100 may further include a system chipset 110 that includes an I/O APIC 112, and the I/O APIC 112 can generate an I/O interrupt based on an external interrupt or an internal signal from the computing platform 100.

In one implementation, the computing platform 100 may further include a system bus 115 for coupling the processors 102, the cores 104, and the system chipset 110. For example, interrupt messages may be sent back and forth between the system bus 115 and the I/O APIC 112 of the system chipset 110. Interrupt messages and IPIs can also be sent between the local APICs and the system bus 115, and thus between processors and between cores.

In various implementations, each local APIC 106A, 106B, 106C...106N handles interrupts from the I/O APIC 112, IPIs from processors on the system bus 115, and self-generated interrupts. Interrupts can also be delivered to individual processors through local interrupt pins; however, this mechanism is not usually used in multiprocessor systems. The IPI mechanism is usually used in a multi-processor system to send fixed interrupts (interrupts for a specific vector number) and dedicated interrupts to the processors on the system bus 115. For example, a local APIC can use an IPI to forward a fixed interrupt to another processor for service. Dedicated IPIs (including non-maskable interrupt (NMI), initialization (INIT), system management interrupt (SMI), and start-up inter-processor interrupt (SIPI) IPIs) allow one or more processors on the system bus to perform system-wide boot and control functions.

FIG. 1B is a block diagram illustrating one processor 102A of the processors illustrated in FIG. 1A according to one implementation. The processor 102A includes one or more cores 104, a cache 118, a local APIC 106A (for example, a local interrupt controller), and a memory device 140 (for example, a memory). The one or more cores 104 may execute a virtual machine monitor (VMM) 120 and one or more virtual machines (VMs) 125.

In various implementations, the local APIC 106A includes interrupt controller logic 108, which includes local APIC pass-through logic, and a plurality of registers 130, such as model-specific registers (MSRs). The interrupt controller logic 108 may be hardware, firmware, or a combination of hardware and firmware. The registers 130 may include, but are not limited to, an APIC ID register 132, a logical destination register (LDR) 134, and an interrupt command register (ICR) 136. The APIC ID register 132 may store one or more interrupt controller IDs, each interrupt controller ID being related to a core 104 or a cluster of cores 104. A cluster is also called a logical processor, and is identified as an entity on which one or more software programs execute, such as the VM 125. The LDR 134 may store a logical value composed of two separate identifiers, the cluster ID and the APIC ID, which can be read by software allowed on the VM 125. The LDR 134 is discussed in more detail with reference to FIG. 2. The ICR 136 can be written by the VM 125 to send an IPI onto the system bus 115. The ICR 136 is illustrated in FIG. 4 and discussed in more detail below.

In one implementation, the memory device 140 is used to store system software 142, a VMCS 144 for each VM 125, and a buffer 148. The buffer 148 may be partitioned into one or more data structures 150, such as tables or indexed/partitioned portions of memory.
In other implementations, the data structure 150 may also be stored in a register, whether an MSR or a memory-based register. In various implementations, the memory device 140 is any one of the following: dynamic random access memory (DRAM), synchronous DRAM (SDRAM), static memory (such as static random access memory (SRAM)), flash memory, a data storage device, or a combination of such memory devices. For brevity, the memory device 140 is also simply referred to as the memory 140. As discussed above, the VMM 120 may store values in the VMCS 144 to set up local APIC pass-through virtualization. These values can include: the physical address of the pointer to the data structure, the number of one or more entries in the data structure, and a setting or flag for the local interrupt controller pass-through field, which, when set, indicates local APIC virtualization pass-through for the VM.

In addition to the values stored in the VMCS 144 for a specific VM 125, the interrupt controller logic 108 can also access the values stored in the ICR 136 to determine whether the IPI written into the ICR 136 can be sent onto the system bus 115. For example, if the interrupt controller logic 108 verifies that the destination core of the IPI is included in the subset of cores executing the VM 125, the IPI written into the ICR 136 may be sent to the destination core on the system bus 115.

FIG. 1C is a block diagram of a VMM running two VMs (for example, a first VM 125A and a second VM 125B) according to an implementation of the present disclosure, where each VM has access to a virtual machine control structure (VMCS). The first VM 125A can execute a first guest OS (guest_0), and the second VM 125B can execute a second guest OS (guest_1). When configuring hardware and software resources for each VM, the VMM 120 may set up a first VMCS 144A for the first VM 125A and a second VMCS 144B for the second VM 125B.

As part of the configuration of the first VM 125A, implementations of the present disclosure include the VMM 120 setting (or not setting) the following: the value or flag of a first local interrupt controller pass-through field 152A, the value of a first APIC ID data structure pointer 154A, and the value of a first number 156A of APIC ID entries. As part of the configuration of the second VM 125B, implementations of the present disclosure include the VMM 120 setting (or not setting) the following: the value or flag of a second local interrupt controller pass-through field 152B, the value of a second APIC ID data structure pointer 154B, and the value of a second number 156B of APIC ID entries.

FIG. 2 is a diagram of the format of the logical destination register (LDR) 134 of the local APIC according to one implementation. The format may be a 32-bit logical x2APIC ID, which includes an interrupt controller ID subfield 203 and a cluster ID subfield 205, each of which is a 16-bit subfield. In other implementations, the size and allocation of these subfields can vary, so the illustrated format is only an example. In one implementation, the value of the interrupt controller ID subfield 203 is derived by shifting a one bit ("1") to the left by the value of the lowest four bits of the x2APIC ID; that is, the logical interrupt controller ID is 1 << x2APIC ID[3:0]. The cluster ID subfield 205 may then be formed by shifting the remaining bits of the x2APIC ID (for example, x2APIC ID[19:4]) into bits [31:16] of the LDR, which constitute the cluster ID subfield 205. In this way, software executing on the VM (for example, in virtualized non-root mode) can directly read the values of the interrupt controller ID and cluster ID from the LDR 134.
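The derivation just described is easy to express in code. The following C sketch (with hypothetical helper names) computes the two LDR subfields from a 32-bit x2APIC ID: the logical interrupt controller ID is a one bit shifted left by the low four bits, and the cluster ID is formed from the remaining bits.

```c
#include <stdint.h>

/* Split a 32-bit x2APIC ID into the two LDR subfields described above
 * (illustrative helpers, not a hardware-defined API). */
static inline uint16_t ldr_logical_id(uint32_t x2apic_id)
{
    return (uint16_t)(1u << (x2apic_id & 0xF)); /* 1 << x2APIC ID[3:0] */
}

static inline uint16_t ldr_cluster_id(uint32_t x2apic_id)
{
    return (uint16_t)(x2apic_id >> 4);          /* x2APIC ID[19:4]     */
}

/* The 32-bit LDR value packs the cluster ID into bits [31:16] and the
 * logical interrupt controller ID into bits [15:0]. */
static inline uint32_t ldr_value(uint32_t x2apic_id)
{
    return ((uint32_t)ldr_cluster_id(x2apic_id) << 16) |
           ldr_logical_id(x2apic_id);
}
```

For example, ldr_value(0x4B) yields 0x40800, the value used for the third core in the FIG. 3B example below.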
FIG. 3A is a diagram of a data structure 150 established in the memory buffer 148 for a VM 125 (FIG. 1B). Each entry stored in the data structure 150 by the VMM 120 requires approximately four bytes of memory and represents a cluster. Each entry therefore identifies the cluster ID of a cluster and a bitmap that identifies the cores, in the subset of cores, of the cluster corresponding to that cluster ID. As mentioned earlier, the subset of cores is the set of cores that execute a particular VM 125. In this way, the VMM 120 can exclusively assign cores for the lifetime of the VM 125. To ensure that the VMM 120 does not use these cores to run other VMs, the VMM 120 can group the cores (the subset of cores) into their corresponding clusters, and create the data structure 150 by writing entries that define the clusters and the corresponding cores (for example, via the bitmap) that execute the VM 125. FIG. 3B illustrates an example of creating a data structure 150 for a VM executed by three cores.

FIG. 3B is a logic flow diagram illustrating an example of how to create the data structure 150 of FIG. 3A according to one implementation. Assume that a first VM (identified as VM0) is to be executed on three processor cores: a first core (core_0), a second core (core_2), and a third core (core_3). The first core and the second core may be grouped into cluster zero ("0") with a cluster ID of zero, and the third core may be grouped into a cluster with a cluster ID of four.

In one implementation, the interrupt controller logic 108 may include logic for converting an x2APIC ID into the LDR format, as shown in FIG. 3B, where the values are in hexadecimal. The APIC ID 0x3 of the first core can be converted into interrupt controller ID 0x8, and the APIC ID 0x6 of the second core can be converted into interrupt controller ID 0x40. The interrupt controller logic 108 may perform a bitwise OR on these two values to generate a combined bitmap value of 0x48, or 0048. Thus, the entry for cluster zero includes two values: 0000 for the cluster ID and 0048 for the bitmap of interrupt controller IDs. The APIC ID 0x4B of the third core can similarly be converted to 0x40800, which identifies cluster ID four ("4") and an interrupt controller ID bitmap of 0x0800. Thus, the entry for cluster four includes two values: 0004 for the cluster ID and 0800 for the bitmap. The combination of these two entries forms the entire data structure 150 (for example, the APIC ID table) for VM0. For additional cores distributed across different or additional clusters, the additional logic will be apparent to those skilled in the art.
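Applying that derivation to the FIG. 3B example, the short C program below (again with hypothetical names) builds the two-entry table for VM0 from the x2APIC IDs 0x3, 0x6, and 0x4B, OR-ing together the per-core logical IDs that fall in the same cluster.

```c
#include <stdint.h>
#include <stdio.h>

struct apic_id_entry { uint16_t cluster_id; uint16_t bitmap; };

int main(void)
{
    const uint32_t cores[] = { 0x3, 0x6, 0x4B }; /* x2APIC IDs running VM0 */
    struct apic_id_entry table[4] = { 0 };
    int count = 0;

    for (int i = 0; i < 3; i++) {
        uint16_t cluster = (uint16_t)(cores[i] >> 4);     /* x2APIC ID[19:4] */
        uint16_t bit     = (uint16_t)(1u << (cores[i] & 0xF));
        int j;
        for (j = 0; j < count; j++)        /* merge into an existing cluster */
            if (table[j].cluster_id == cluster)
                break;
        if (j == count)
            table[count++].cluster_id = cluster;
        table[j].bitmap |= bit;            /* bitwise OR of logical IDs */
    }

    /* Prints "cluster 0000 bitmap 0048" and "cluster 0004 bitmap 0800",
     * matching the two entries described in the text. */
    for (int j = 0; j < count; j++)
        printf("cluster %04x bitmap %04x\n", table[j].cluster_id, table[j].bitmap);
    return 0;
}
```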
FIG. 4 is a diagram of a format of the interrupt command register (ICR) 136 according to one implementation, to which a guest of the VM 125 writes in order to send an IPI to a destination core. The ICR 136 can be formatted in different sizes, but in one implementation the ICR 136 includes a 32-bit destination field 402, a 12-bit reserved field 406, an 8-bit vector field 410, and multiple smaller fields. The smaller fields include a destination shorthand field 414, a destination mode field 418, and a delivery mode field 422.

In various implementations, the destination field 402 is programmed with a value representing the interrupt controller (or APIC) ID of the destination core. The destination mode field 418 may be set to physical (for example, "0") or logical (for example, "1"), although these values may be reversed in alternative embodiments. In addition, the destination shorthand field 414 may be set to a value representing one of the destination shorthand settings listed in Table 1, although these values may be changed or exchanged in other embodiments.

Destination shorthand value   Description
00                            No shorthand
01                            Self
10                            All including self
11                            All excluding self

Table 1

If the value for "no shorthand" is in the destination shorthand field 414, the interrupt controller logic 108 may determine the destination core(s) from the destination field 402. This process differs depending on whether the destination mode field 418 is set to physical (see FIGS. 6-7) or logical (see FIGS. 8-9), because the value in the destination field 402 that is used to determine the destination core(s) must first be converted. Otherwise, one of the other destination shorthand settings applies: "self", "all including self", or "all excluding self".

When "self" is set as the destination shorthand, the only destination core is the same core that issued the IPI, allowing software to interrupt the core on which it is executing. An APIC implementation is free to deliver the self-interrupt message internally, or to post the message to the system bus 115 and "listen to" (for example, monitor) the system bus 115 as for any other IPI message. When "all including self" is set as the destination shorthand, the IPI is sent to all cores in the system, including the core that sent the IPI.

When "all excluding self" is set as the destination shorthand, the IPI is sent to all cores in the system except the core that sent the IPI. Support for this destination shorthand in conjunction with the lowest-priority delivery mode is model specific. For some processors, when the "all excluding self" shorthand is used with the lowest-priority delivery mode, the IPI can be redirected back to the issuing core. The process of determining the authorized destination cores of an IPI for "all including self" and "all excluding self" is discussed in more detail with reference to FIG. 10.

In various implementations, the delivery mode field 422 can be set to one of the values shown in Table 2, although in other implementations these values can be changed or interchanged.

Delivery mode value   Description
000                   Fixed
001                   Lowest priority
010                   SMI
011                   Reserved
100                   NMI
101                   INIT
110                   Start-up
111                   Reserved

Table 2
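To make the register layout concrete, here is a hedged C sketch that assembles a 64-bit ICR value from the fields just described. The field offsets follow the conventional x2APIC ICR layout (vector in bits [7:0], delivery mode in [10:8], destination mode in bit 11, destination shorthand in [19:18], destination in [63:32]); the exact positions of the smaller fields are an assumption, since the description above only gives their widths.

```c
#include <stdint.h>

/* Destination shorthand encodings from Table 1. */
enum shorthand { NO_SHORTHAND = 0, SELF = 1, ALL_INCL_SELF = 2, ALL_EXCL_SELF = 3 };

/* Compose a 64-bit ICR value; bit positions assume the conventional
 * x2APIC layout, which the text does not spell out exactly. */
static inline uint64_t make_icr(uint32_t destination, unsigned vector,
                                unsigned delivery_mode, unsigned logical_mode,
                                enum shorthand sh)
{
    return ((uint64_t)destination << 32)          /* 32-bit destination field  */
         | ((uint64_t)sh << 18)                   /* destination shorthand     */
         | ((uint64_t)(logical_mode & 1) << 11)   /* 0 = physical, 1 = logical */
         | ((uint64_t)(delivery_mode & 7) << 8)   /* Table 2 encoding          */
         | (vector & 0xFF);                       /* 8-bit vector field        */
}
```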
FIG. 5 is a flowchart of a method 500 for setting up local APIC virtualization pass-through by a VMM and a VM according to one implementation. The method 500 may be performed by processing logic, which may include hardware (for example, circuits, dedicated logic, programmable logic, microcode, and so on), firmware, or a combination thereof. In one implementation, the method 500 is performed by the VMM 120 executing on at least one of the cores 104, and some aspects of the method 500 are performed by the guest OS of one of the VMs 125A or 125B.

Referring to FIG. 5, the method 500 may begin with the processing logic grouping multiple cores into one or more clusters, where a subset of the multiple cores is to execute a virtual machine (VM) (505). The method 500 may continue with the processing logic creating, in a buffer of memory, a data structure for storing one or more entries for the subset of cores (510). Each entry can include a cluster identifier and a bitmap, the bitmap identifying the cores, in the subset of the multiple cores, of the cluster corresponding to the cluster identifier. The method may continue with the processing logic writing, into a virtual machine control structure (VMCS), a pointer to the data structure, where the pointer includes the physical address of the memory, and the number of one or more entries in the data structure (515). The pointer may be the APIC ID data structure pointer 154A or 154B, and the number of one or more entries may be the number of APIC ID entries 156A (FIG. 1C). The processing logic may continue by setting the local interrupt controller pass-through field 152A in the VMCS (520).

Continuing to refer to FIG. 5, the guest OS of the VM can then write a value to the interrupt command register (ICR) to send an inter-processor interrupt (IPI) to one or more of the multiple cores executing the VM (525). These values can populate: the destination field that identifies one or more destination cores among the multiple cores (525A); the destination mode, either physical or logical (525B); and the destination shorthand (525C). With the data structure 150 (for example, a table) set up in the buffer 148 in memory and these values in the VMCS filled in, the processor 102A is ready to direct IPIs to the cores executing the VM, and not to the other cores among the multiple cores 104 in the computing platform 100.

FIG. 6 is a logic flow diagram 600 illustrating the local APIC pass-through logic (for example, the logic of the interrupt controller logic 108 of the local APIC 106A) for the physical destination mode according to one implementation. In one implementation of the local APIC pass-through logic, the physical interrupt controller ID of the destination processor is programmed into the destination field 402 of the ICR. As discussed, the local APIC pass-through logic can then convert the physical interrupt controller ID into the corresponding cluster ID and the interrupt controller ID within the specific cluster that has been established by the VMM. This conversion can be performed in a manner similar to the way the APIC ID is converted for storage in the LDR 134, which is discussed in detail with reference to FIG. 2. The APIC ID data structure pointer 154A in the VMCS can then be used to link to the correct data structure 150 in memory, and from there to the correct cluster information corresponding to the cluster ID obtained from the destination field 402.

In various implementations, the bitmap for the cores in the subset of the multiple cores (for example, the bitmap associated with the cluster ID) is read from the data structure 150 and compared with the interrupt controller ID via a bitwise AND operation 605 to generate a logical result 607. If the logical result 607 is true (for example, a non-zero value), an IPI message with the information from the ICR is sent onto the system bus 115. If the logical result 607 is false (for example, a zero value), the value written into the ICR is discarded and no IPI message is sent to the system bus 115.
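The physical-mode check of FIG. 6 can be sketched in C as follows. All names here are hypothetical; the point is the sequence: convert the physical APIC ID in the destination field to a (cluster ID, logical ID) pair, index the per-VM table reached through the VMCS pointer, AND the logical ID against the stored bitmap, and forward or discard the IPI based on that result.

```c
#include <stdint.h>
#include <stdbool.h>

struct apic_id_entry { uint16_t cluster_id; uint16_t bitmap; };

/* Hypothetical sketch of the FIG. 6 pass-through check for the
 * physical destination mode. Returns true if the IPI may be sent. */
static bool ipi_allowed_physical(uint32_t dest_apic_id,
                                 const struct apic_id_entry *table,
                                 uint32_t entry_count)
{
    /* Convert the physical APIC ID as for the LDR (FIG. 2). */
    uint16_t cluster_id = (uint16_t)(dest_apic_id >> 4);
    uint16_t logical_id = (uint16_t)(1u << (dest_apic_id & 0xF));

    for (uint32_t i = 0; i < entry_count; i++) {
        if (table[i].cluster_id != cluster_id)
            continue;
        /* Non-zero AND: the destination core runs this VM, so the IPI
         * may go to the system bus; zero: discard the ICR value. */
        return (table[i].bitmap & logical_id) != 0;
    }
    return false; /* no entry for this cluster: discard the IPI */
}
```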
FIG. 7 is a flowchart of a method 700 for implementing local APIC pass-through logic for the physical destination mode according to one implementation. The method 700 may be performed by processing logic, which may include hardware (for example, circuitry, dedicated logic, programmable logic, microcode, and so on), firmware, or a combination thereof. In one implementation, the method 700 is performed by the local APIC 106A, or more specifically, by the interrupt controller logic 108 with reference to values in one or more of the registers 130.

Referring to FIG. 7, the method 700 may begin with the processing logic determining that the local interrupt controller pass-through field is set for the VM (715). The method 700 may continue with the processing logic determining that the pointer (for example, the APIC ID data structure pointer 154A) and the number of one or more entries in the virtual machine control structure (VMCS) (for example, the number of APIC ID entries 156A) are non-zero (720). The method 700 may continue with the processing logic, in response to verifying, using the ICR, that the IPI is destined for a core in the subset of multiple cores, sending the IPI onto the system bus coupling the multiple cores (725).

Continuing to refer to FIG. 7, the verification just mentioned can be performed through the additional steps below the dotted line. For example, the method 700 may continue with the processing logic determining the destination mode, such as by reading the destination mode field 418 from the ICR (730). If the destination mode is logical, the method 700 continues in the method 900 of FIG. 9 (735B). If the destination mode is physical, the method 700 may proceed with the processing logic converting the value in the destination field 402 into a value for the cluster identifier (ID) of one of the one or more clusters and a value for the interrupt controller ID within the cluster (735A). The method 700 may continue with the processing logic using the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier (740). The method 700 may continue with the processing logic calculating a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry (745).

With continued reference to FIG. 7, the method 700 may proceed with the processing logic determining the result of the bitwise AND operation (750). If the result is a zero value, the method 700 may proceed with the processing logic discarding the value written to the ICR, so that no IPI is sent onto the system bus (755). If the result is non-zero, the method 700 may continue with the processing logic determining that the interrupt controller ID is associated with one of the cores in the subset of multiple cores, and thus sending the IPI onto the system bus (760). The IPI can be sent by sending the value in the ICR as an encapsulated packet onto the system bus 115, directed to the destination core.

FIG. 8 is a logic flow diagram 800 illustrating the local APIC pass-through logic (for example, the logic of the interrupt controller logic 108 of the local APIC 106A) for the logical destination mode according to one implementation. In another implementation of the APIC pass-through logic, the value in the destination field 402 of the ICR 136 includes the value of the interrupt controller ID in bits [47:32] and the value of the cluster ID in bits [63:48], without conversion.
Therefore, as shown in the figure, the cluster ID and interrupt controller ID associated with the destination core can be read directly from the destination field 402.

In various implementations, the APIC ID data structure pointer 154A in the VMCS can then be used to link to the correct data structure 150 in memory, and from there to the correct cluster information corresponding to the cluster ID obtained from the destination field 402. The bitmap for the cores in the subset of the multiple cores (for example, the bitmap associated with the cluster ID) can then be read from the data structure 150 and compared with the interrupt controller ID via a bitwise AND operation 805 to generate a logical result 807. If the logical result 807 is true (for example, a non-zero value), an IPI message with the information from the ICR is sent onto the system bus 115. If the logical result 807 is false (for example, a zero value), the value written into the ICR is discarded and no IPI message is sent to the system bus 115.

FIG. 9 is a flowchart of a method 900 for implementing local APIC pass-through logic for the logical destination mode according to one implementation. The method 900 may be performed by processing logic, which may include hardware (for example, circuitry, dedicated logic, programmable logic, microcode, and so on), firmware, or a combination thereof. In one implementation, the method 900 is performed by the local APIC 106A, or more specifically, by the interrupt controller logic 108 with reference to values in one or more of the registers 130. The method 900 may continue from block 735B in FIG. 7, where the processing logic determines that the destination mode is logical.

Referring to FIG. 9, the method 900 can begin with the processing logic using a set of most significant bits (MSBs) in the destination field as the cluster ID of a cluster of the one or more clusters, and using a set of least significant bits (LSBs) in the destination field as the interrupt controller identifier (ID) within the cluster (910). The method 900 can continue with the processing logic using the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier (920). The method 900 may continue with the processing logic calculating a bitwise AND of the value of the interrupt controller ID and the value of the bitmap of the entry (925).

In various implementations, the method 900 continues with the processing logic determining the result of the bitwise AND operation (930). If the result is a zero value, the method 900 may proceed with the processing logic discarding the value written to the ICR, so that no IPI is sent onto the system bus (935). If the result is non-zero, the method 900 may continue with the processing logic appending the non-zero result of the bitwise AND to the cluster identifier to generate an updated value for the destination field (940). The method 900 may continue with the processing logic storing the updated value back into the destination field (945). The method 900 can continue with the processing logic using the updated value in the destination field and the value of the ICR to send the IPI onto the system bus (950). The IPI can be sent by sending the value in the ICR as an encapsulated packet onto the system bus 115, directed to the destination core.
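The logical-mode path of FIGS. 8-9 differs from the physical-mode path in two ways: the destination field already carries the cluster ID (upper 16 bits) and a logical-ID mask (lower 16 bits), and the non-zero AND result is written back as the updated destination. A hedged C sketch, with hypothetical names:

```c
#include <stdint.h>
#include <stdbool.h>

struct apic_id_entry { uint16_t cluster_id; uint16_t bitmap; };

/* Hypothetical sketch of the FIG. 8/9 check for the logical
 * destination mode; *dest is the 32-bit ICR destination field. */
static bool ipi_allowed_logical(uint32_t *dest,
                                const struct apic_id_entry *table,
                                uint32_t entry_count)
{
    uint16_t cluster_id = (uint16_t)(*dest >> 16); /* most significant bits  */
    uint16_t logical_id = (uint16_t)(*dest);       /* least significant bits */

    for (uint32_t i = 0; i < entry_count; i++) {
        if (table[i].cluster_id != cluster_id)
            continue;
        uint16_t hit = table[i].bitmap & logical_id;
        if (hit == 0)
            return false;                    /* zero AND: discard the IPI */
        /* Append the non-zero result to the cluster ID and store the
         * updated value back into the destination field. */
        *dest = ((uint32_t)cluster_id << 16) | hit;
        return true;                         /* send the IPI using *dest */
    }
    return false;
}
```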
FIG. 10 is a flowchart of a method 1000 for implementing local APIC pass-through logic for destination shorthands according to one implementation. The method 1000 may be performed by processing logic, which may include hardware (for example, circuits, dedicated logic, programmable logic, microcode, and so on), firmware, or a combination thereof. In one implementation, the method 1000 is performed by the local APIC 106A, or more specifically, by the interrupt controller logic 108 with reference to values in one or more of the registers 130.

Referring to FIG. 10, the method 1000 may begin with the processing logic determining that the local interrupt controller pass-through field is set for the VM (1015). The method 1000 may continue with the processing logic determining that the pointer (for example, the APIC ID data structure pointer 154A) and the number of one or more entries in the virtual machine control structure (VMCS) (for example, the number of APIC ID entries 156A) are non-zero (1020). The method 1000 may continue with the processing logic, in response to verifying, using the ICR, that the IPI is destined for a core in the subset of multiple cores, sending the IPI onto the system bus coupling the multiple cores (1025).

Continuing to refer to FIG. 10, the verification just mentioned can be performed through the additional steps below the dotted line. The method 1000 may continue with the processing logic determining which destination shorthand has been set (1030). If the "all excluding self" destination shorthand is set, the method 1000 may continue with the processing logic determining each non-zero bitmap via a scan of the entries of the data structure (1032). Before proceeding to block 1040, the method 1000 may proceed with the processing logic excluding any bitmap for the core, in the subset of multiple cores, from which the IPI was received (1034).

If the "all including self" destination shorthand is set, or the destination field has an all-ones value (for example, 0xFFFFFFFF), the method 1000 may continue with the processing logic determining each non-zero bitmap of the one or more entries via a scan of the one or more entries of the data structure, for example without the exclusion of block 1034 (1035). The method 1000 may continue with the processing logic, for each cluster with a non-zero bitmap, merging the non-zero bitmap with the cluster identifier of the cluster to generate an updated value (1040). The method 1000 may continue with the processing logic storing the updated value in the destination field (1045). The method 1000 can continue with the processing logic setting the destination mode to logical (1050). The method 1000 can continue with the processing logic using the updated value in the destination field and the destination mode to send the IPI onto the system bus coupled to the multiple cores (1055).
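The shorthand handling of FIG. 10 amounts to broadcasting one logical-mode IPI per table entry, optionally masking out the sender. The following sketch uses hypothetical names, and send_logical_ipi is a stand-in for posting the ICR value onto the system bus; it covers both the "all including self" and "all excluding self" cases.

```c
#include <stdint.h>
#include <stdio.h>

struct apic_id_entry { uint16_t cluster_id; uint16_t bitmap; };

/* Stand-in for posting the IPI to the system bus. */
static void send_logical_ipi(uint32_t destination)
{
    printf("IPI -> destination 0x%08x (logical mode)\n", destination);
}

/* Hypothetical sketch of FIG. 10: scan every entry, drop the sender's
 * bit when the shorthand is "all excluding self", and send one
 * logical-mode IPI per cluster with a non-zero bitmap. */
static void broadcast_ipi(const struct apic_id_entry *table, uint32_t entry_count,
                          uint32_t self_apic_id, int exclude_self)
{
    uint16_t self_cluster = (uint16_t)(self_apic_id >> 4);
    uint16_t self_bit     = (uint16_t)(1u << (self_apic_id & 0xF));

    for (uint32_t i = 0; i < entry_count; i++) {
        uint16_t bitmap = table[i].bitmap;
        if (exclude_self && table[i].cluster_id == self_cluster)
            bitmap &= (uint16_t)~self_bit;   /* mask out the issuing core */
        if (bitmap == 0)
            continue;                        /* nothing left in this cluster */
        /* Merge the bitmap with the cluster ID to form the updated
         * destination, then send with the destination mode set to logical. */
        send_logical_ipi(((uint32_t)table[i].cluster_id << 16) | bitmap);
    }
}
```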
FIG. 11A is a block diagram illustrating a micro-architecture for a processor 1100 that implements IPI virtualization using a local interrupt controller. Specifically, the processor 1100 depicts an in-order architecture core and register renaming logic and out-of-order issue/execution logic to be included in a processor according to at least one implementation of the present disclosure.

The processor 1100 includes a front-end unit 1130 coupled to an execution engine unit 1150, and both the front-end unit 1130 and the execution engine unit 1150 are coupled to a memory unit 1170. The processor 1100 may include a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the processor 1100 may include a dedicated core, such as, for example, a network or communication core, a compression engine, a graphics core, and so on. In one implementation, the processor 1100 may be a multi-core processor or may be part of a multi-processor system.

The front-end unit 1130 includes a branch prediction unit 1132 coupled to an instruction cache unit 1134, the instruction cache unit 1134 is coupled to an instruction translation lookaside buffer (TLB) 1136, and the instruction TLB 1136 is coupled to an instruction fetch unit 1138. The instruction fetch unit 1138 is coupled to a decode unit 1140. The decode unit 1140 (also called a decoder) can decode instructions and generate as output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or otherwise reflect, or are derived from, the original instructions. The decoder 1140 can be implemented using a variety of different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), and so on. The instruction cache unit 1134 is further coupled to the memory unit 1170. The decode unit 1140 is coupled to a rename/allocator unit 1152 in the execution engine unit 1150.

The execution engine unit 1150 includes the rename/allocator unit 1152, which is coupled to a retirement unit 1154 and a set 1156 of one or more scheduler units. The scheduler unit(s) 1156 represents any number of different scheduler circuits, including reservation stations (RS), a central instruction window, and so on. The scheduler unit(s) 1156 is coupled to the physical register set unit(s) 1158. Each of the physical register set unit(s) 1158 represents one or more physical register sets, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, and vector floating point, as well as status (for example, an instruction pointer that is the address of the next instruction to be executed). The physical register set unit(s) 1158 is overlapped by the retirement unit 1154 to illustrate the various ways in which register renaming and out-of-order execution can be realized (for example, using reorder buffer(s) and retirement register set(s); using future file(s), history buffer(s), and retirement register set(s); using register maps and a pool of registers; and so on).

Generally, the architectural registers are visible from outside the processor or from the programmer's perspective. The registers are not limited to any known specific type of circuit. Various types of registers are suitable as long as they can store and provide data as described herein. Examples of suitable registers include, but are not limited to: dedicated physical registers, dynamically allocated physical registers using register renaming, and combinations of dedicated physical registers and dynamically allocated physical registers. The retirement unit 1154 and the physical register set unit(s) 1158 are coupled to the execution cluster(s) 1160. The execution cluster(s) 1160 includes a set 1162 of one or more execution units and a set 1164 of one or more memory access units.
The execution units 1162 can perform various operations (for example, shifts, addition, subtraction, multiplication) on various types of data (for example, scalar floating point, packed integer, packed floating point, vector integer, vector floating point).

Although some implementations may include multiple execution units dedicated to a particular function or set of functions, other implementations may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1156, the physical register set unit(s) 1158, and the execution cluster(s) 1160 are shown as possibly plural because some implementations create separate pipelines for certain data/operation types (for example, a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each with its own scheduler unit, physical register set unit, and/or execution cluster; and, in the case of a separate memory access pipeline, certain implementations in which only the execution cluster of that pipeline has the memory access unit(s) 1164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set 1164 of memory access units is coupled to the memory unit 1170, which may include a data prefetcher 1180, a data TLB unit 1172, a data cache unit (DCU) 1174, and a second level (L2) cache unit 1176, to name a few. In some implementations, the DCU 1174 is also referred to as the first level data cache (L1 cache). The DCU 1174 can handle multiple outstanding cache misses and continue to service incoming stores and loads. It also supports maintaining cache coherency. The data TLB unit 1172 is a cache used to improve virtual address translation speed by mapping virtual and physical address spaces. In one exemplary implementation, the memory access units 1164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1172 in the memory unit 1170. The L2 cache unit 1176 may be coupled to one or more other levels of cache, and ultimately to main memory.

In one implementation, the data prefetcher 1180 speculatively loads/prefetches data to the DCU 1174 by automatically predicting which data a program is about to consume. Prefetching can refer to transferring data, before the processor actually requests it, from a memory location (for example, a position) in one level of the memory hierarchy (for example, a lower-level cache or memory) to a higher-level memory location closer to the processor (for example, yielding lower access latency). More specifically, prefetching may refer to the early retrieval of data from a lower-level cache/memory to the data cache and/or prefetch buffer before the processor issues a demand for the specific data being returned.
The processor 1100 may support one or more instruction sets (such as the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of Imagination Technologies of Kings Langley, Hertfordshire, UK; and the ARM instruction set of ARM Holdings of Sunnyvale, California (with optional additional extensions such as NEON)).

It should be understood that the core can support multithreading (executing two or more parallel sets of operations or threads), and can do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (in which a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (for example, time-sliced fetching and decoding with simultaneous multithreading thereafter, such as in Intel Hyper-Threading technology).

Although register renaming is described in the context of out-of-order execution, it should be understood that register renaming can be used in an in-order architecture. Although the illustrated implementation of the processor also includes separate instruction and data cache units and a shared L2 cache unit, alternative implementations may have a single internal cache for both instructions and data, such as, for example, a first level (L1) internal cache, or multiple levels of internal cache. In some implementations, the system may include a combination of internal caches and external caches that are external to the core and/or processor. Alternatively, all of the caches may be external to the core and/or processor.

FIG. 11B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented by the processor 1100 of FIG. 11A according to some implementations of the present disclosure. The solid-line boxes in FIG. 11B illustrate the in-order pipeline 1101, and the dashed-line boxes illustrate the register renaming, out-of-order issue/execution pipeline 1103. In FIG. 11B, the pipelines 1101 and 1103 include a fetch stage 1102, a length decode stage 1104, a decode stage 1106, an allocation stage 1108, a renaming stage 1110, a scheduling (also called dispatch or issue) stage 1112, a register read/memory read stage 1114, an execute stage 1116, a write back/memory write stage 1118, an exception handling stage 1122, and a commit stage 1124. In some implementations, the ordering of the stages 1102-1124 may be different from that illustrated and is not limited to the specific ordering shown in FIG. 11B.

FIG. 12 illustrates a block diagram of a micro-architecture for a processor 1200 according to one implementation of the present disclosure, the processor 1200 including logic circuits of a processor or integrated circuit that provide hardware support for IPI virtualization using a local interrupt controller. In some implementations, an instruction according to one implementation can be implemented to operate on data elements having sizes of byte, word, double word, quad word, and so on, and having data types such as single-precision integer, double-precision integer, single-precision floating point, and double-precision floating point.
In one implementation, the in-order front end 1201 is the part of the processor 1200 that fetches instructions to be executed and prepares them for later use in the processor pipeline. Implementations of page additions and content copying can be implemented in the processor 1200.

The front end 1201 may include several units. In one implementation, the instruction prefetcher 1226 fetches instructions from memory and feeds them to an instruction decoder 1228, which in turn decodes or interprets them. For example, in one implementation, the decoder decodes a received instruction into one or more operations called "microinstructions" or "micro-operations" (also called micro-ops or uops) that the machine can execute. In other implementations, the decoder parses the instruction into an opcode and corresponding data and control fields, which the micro-architecture uses to perform operations according to one implementation. In one implementation, the trace cache 1230 takes decoded micro-operations and assembles them into program-ordered sequences or traces in the micro-operation queue 1234 for execution. When the trace cache 1230 encounters a complex instruction, the microcode ROM (or RAM) 1232 provides the micro-operations needed to complete the operation.

Some instructions are converted into a single micro-operation, while others require several micro-operations to complete the full operation. In one implementation, if more than four micro-operations are needed to complete an instruction, the instruction decoder 1228 accesses the microcode ROM 1232 to perform the instruction. For one implementation, an instruction can be decoded into a small number of micro-operations for processing at the instruction decoder 1228. In another implementation, if several micro-operations are needed to complete the operation, the instruction can be stored in the microcode ROM 1232. The trace cache 1230 refers to an entry point programmable logic array (PLA) to determine the correct microinstruction pointer for reading the microcode sequence from the microcode ROM 1232 to complete one or more instructions according to one implementation. After the microcode ROM 1232 finishes sequencing micro-operations for an instruction, the front end 1201 of the machine resumes fetching micro-operations from the trace cache 1230.

The out-of-order execution engine 1203 is where instructions are prepared for execution. The out-of-order execution logic has several buffers for smoothing and reordering the instruction stream to optimize performance as the stream travels down the pipeline and is scheduled for execution. The allocator logic allocates the machine buffers and resources that each micro-operation needs in order to execute. The register renaming logic renames logical registers onto entries in a register set. In front of the instruction schedulers (the memory scheduler, the fast scheduler 1202, the slow/general floating-point scheduler 1204, and the simple floating-point scheduler 1206), the allocator also allocates an entry for each micro-operation in one of two micro-operation queues, one for memory operations and one for non-memory operations.
The uop schedulers 1202, 1204, 1206 determine when a micro-op is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the micro-ops need to complete their operation. The fast scheduler 1202 of one implementation can schedule on each half of the main clock cycle, while the other schedulers can schedule only once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule micro-ops for execution.

Register sets 1208 and 1210 sit between the schedulers 1202, 1204, 1206 and the execution units 1212, 1214, 1216, 1218, 1220, 1222, 1224 in the execution block 1211. There are separate register sets 1208, 1210 for integer operations and floating-point operations, respectively. Each register set 1208, 1210 of one implementation also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register set to new dependent micro-ops. The integer register set 1208 and the floating-point register set 1210 are also capable of communicating data with each other. For one implementation, the integer register set 1208 is split into two separate register sets: one 32-bit register set for the low-order data and a second 32-bit register set for the high-order data. The floating-point register set 1210 of one implementation has 128-bit wide entries, because floating-point instructions typically have operands from 64 to 128 bits in width.

The execution block 1211 contains the execution units 1212, 1214, 1216, 1218, 1220, 1222, 1224, where the instructions are actually executed. This section includes the register sets 1208, 1210 that store the integer and floating-point data operand values the micro-instructions need to execute. The processor 1200 of one implementation includes the following execution units: address generation unit (AGU) 1212, AGU 1214, fast ALU 1216, fast ALU 1218, slow ALU 1220, floating-point ALU 1222, and floating-point move unit 1224. For one implementation, the floating-point execution blocks 1222, 1224 execute floating-point, MMX, SIMD, SSE, and other operations. The floating-point ALU 1222 of one implementation includes a 64-bit by 64-bit floating-point divider to execute divide, square root, and remainder micro-ops. For implementations of the present disclosure, instructions involving a floating-point value may be handled with the floating-point hardware.

In one implementation, the ALU operations go to the high-speed ALU execution units 1216, 1218. The fast ALUs 1216, 1218 of one implementation can execute fast operations with an effective latency of half a clock cycle. For one implementation, most complex integer operations go to the slow ALU 1220, as the slow ALU 1220 includes integer execution hardware for long-latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 1212, 1214. For one implementation, the integer ALUs 1216, 1218, 1220 are described in the context of performing integer operations on 64-bit data operands. In alternative implementations, the ALUs 1216, 1218, 1220 can be implemented to support a variety of data bits, including 16, 32, 128, 256, and so on. Similarly, the floating-point units 1222, 1224 can be implemented to support a range of operands having bits of various widths. For one implementation, the floating-point units 1222, 1224 can operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.
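A runnable C toy makes the packed-lane view of such operands concrete; the `xmm_like` union is illustrative only, not the actual register file:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy illustration of "packed" data: one 128-bit value viewed as sixteen
 * bytes, eight 16-bit words, four 32-bit dwords, or two 64-bit qwords.
 * A SIMD add operates on all lanes at once; here we emulate a packed
 * 32-bit add with a plain loop. */
typedef union {
    uint8_t  b[16];
    uint16_t w[8];
    uint32_t d[4];
    uint64_t q[2];
} xmm_like;

static xmm_like paddd(xmm_like a, xmm_like b)
{
    xmm_like r;
    for (int i = 0; i < 4; i++)
        r.d[i] = a.d[i] + b.d[i];   /* one add per 32-bit lane */
    return r;
}

int main(void)
{
    xmm_like a = { .d = {1, 2, 3, 4} }, b = { .d = {10, 20, 30, 40} };
    xmm_like r = paddd(a, b);
    printf("%u %u %u %u\n", (unsigned)r.d[0], (unsigned)r.d[1],
           (unsigned)r.d[2], (unsigned)r.d[3]);
    return 0;
}
```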
In one implementation, the uop schedulers 1202, 1204, 1206 dispatch dependent operations before the parent load has finished executing. Because micro-ops are speculatively scheduled and executed in the processor 1200, the processor 1200 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks down and re-executes the instructions that used the incorrect data. Only the dependent operations need to be replayed; the independent ones are allowed to complete. The schedulers and replay mechanism of one implementation of a processor are also designed to catch instruction sequences for text string comparison operations.

The term "registers" may refer to on-board processor storage locations that are used as part of instructions to identify operands. In other words, registers may be those processor storage locations that are usable from outside the processor (from a programmer's perspective). However, the registers of an implementation should not be limited in meaning to a particular type of circuit. Rather, a register of an implementation is capable of storing and providing data, and of performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, and so on. In one implementation, integer registers store 32-bit integer data. A register set of one implementation also contains eight multimedia SIMD registers for packed data.

For the discussions herein, the registers are understood to be data registers designed to hold packed data, such as the 64-bit wide MMX™ registers (also referred to as "mm" registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating-point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or other technologies (referred to collectively as "SSEx") can also be used to hold such packed data operands. In one implementation, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one implementation, integer and floating point are contained either in the same register set or in different register sets. Furthermore, in one implementation, floating-point and integer data may be stored in different registers or in the same registers.
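As a toy illustration of the replay mechanism described a few paragraphs above (not the disclosed logic), one might mark for re-execution only the micro-ops that directly consumed a missed load's result; real hardware tracks dependences transitively, and all names here are hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy sketch: when a load misses in the data cache, only micro-ops that
 * consumed the load's (temporarily wrong) result are marked for
 * re-execution; independent micro-ops are allowed to complete. Only
 * direct dependences are checked here. */
struct inflight_uop {
    uint8_t src1, src2;   /* physical registers read by this micro-op */
    bool    replay;       /* set when the micro-op must re-execute    */
};

void mark_replays(struct inflight_uop *q, size_t n, uint8_t miss_dest_reg)
{
    for (size_t i = 0; i < n; i++)
        if (q[i].src1 == miss_dest_reg || q[i].src2 == miss_dest_reg)
            q[i].replay = true;   /* dependent: re-execute with good data */
}
```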
Implementations may be implemented in many different system types. Referring now to FIG. 13, shown is a block diagram of a multiprocessor system 1300 that may implement hardware support for common IPI virtualization using a local interrupt controller, in accordance with an implementation. As shown in FIG. 13, the multiprocessor system 1300 is a point-to-point interconnect system and includes a first processor 1370 and a second processor 1380 coupled via a point-to-point interconnect 1350. As shown in FIG. 13, each of the processors 1370 and 1380 may be a multicore processor, including first and second processor cores (i.e., processor cores 1374a and 1374b, and processor cores 1384a and 1384b), although potentially many more cores may be present in the processors. Although shown with two processors 1370 and 1380, it is to be understood that the scope of the present disclosure is not so limited. In other implementations, one or more additional processors may be present in a given processor.

The processors 1370 and 1380 are shown including integrated memory controller units 1372 and 1382, respectively. The processor 1370 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1376 and 1378; similarly, the second processor 1380 includes P-P interfaces 1386 and 1388. The processors 1370, 1380 may exchange information via the point-to-point (P-P) interface 1350 using P-P interface circuits 1378, 1388. As shown in FIG. 13, the IMCs 1372 and 1382 couple the processors to respective memories, namely a memory 1332 and a memory 1334, which may be portions of main memory locally attached to the respective processors.

The processors 1370 and 1380 may each exchange information with a chipset 1390 via individual P-P interfaces 1352, 1354 using point-to-point interface circuits 1376, 1394, 1386, and 1398. The chipset 1390 may also exchange information with a high-performance graphics circuit 1338 via a high-performance graphics interface 1392.

The chipset 1390 may be coupled to a first bus 1316 via an interface 1396. In one implementation, the first bus 1316 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or an interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 13, various I/O devices 1314 may be coupled to the first bus 1316, along with a bus bridge 1318 that couples the first bus 1316 to a second bus 1320. In one implementation, the second bus 1320 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1320, including, for example, a keyboard and/or mouse 1322, communication devices 1327, and a storage unit 1328, such as a disk drive or other mass storage device, which may include instructions/code and data 1330. Further, an audio I/O 1324 may be coupled to the second bus 1320. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 13, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 14, shown is a block diagram of a third system 1400 that may implement hardware support for common IPI virtualization using a local interrupt controller, in accordance with an implementation of the present disclosure. Like elements in FIGS. 13 and 14 bear like reference numerals, and certain aspects of FIG. 13 have been omitted from FIG. 14 in order to avoid obscuring other aspects of FIG. 14.

FIG. 14 illustrates processors 1470, 1480. In one implementation, the processors 1470, 1480 may implement hybrid cores as described above.
The processors 1470, 1480 may include integrated memory and I/O control logic ("CL") 1472 and 1492, respectively, and may intercommunicate via a point-to-point interconnect 1450 between point-to-point (P-P) interfaces 1478 and 1488, respectively. As shown, the processors 1470, 1480 each communicate with a chipset 1490 via point-to-point interconnects 1452 and 1454 through the respective P-P interfaces 1476 to 1494 and 1486 to 1498. For at least one implementation, the CL 1472, 1492 may include integrated memory controller units such as described herein. In addition, the CL 1472, 1492 may also include I/O control logic. FIG. 14 illustrates that the memories 1432, 1434 are coupled to the CL 1472, 1492, and that the I/O devices 1414 are also coupled to the control logic 1472, 1492. Legacy I/O devices 1415 are coupled to the chipset 1490 via an interface 1496.

FIG. 15 shows an exemplary system-on-chip (SoC) 1500 that may implement hardware support for common IPI virtualization using a local interrupt controller and that may include one or more of the cores 1502A, ..., 1502N. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a wide variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Within the exemplary SoC 1500 of FIG. 15, the dashed-lined boxes are features of more advanced SoCs. The interconnect unit(s) 1503 may be coupled to: an application processor 1517, which includes a set of one or more cores 1502A-N, the cores each including one or more cache units 1504A, ..., 1504N, and shared cache unit(s) 1506; a system agent unit 1513; integrated memory controller unit(s) 1514; a set of one or more media processors 1520, which may include integrated graphics logic 1508, an image processor 1524 for providing still and/or video camera functionality, an audio processor 1526 for providing hardware audio acceleration, and a video processor 1528 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1530; a direct memory access (DMA) unit 1532; and a display unit 1540 for coupling to one or more external displays.

Turning next to FIG. 16, an implementation of a system-on-chip (SoC) design that may implement hardware support for common IPI virtualization using a local interrupt controller, in accordance with implementations of the present disclosure, is depicted. As an illustrative example, the SoC 1600 is included in user equipment (UE). In one implementation, UE refers to any device to be used by an end user to communicate, such as a handheld phone, a smartphone, a tablet, an ultra-thin notebook, a notebook with a broadband adapter, or any other similar communication device. A UE may connect to a base station or node, which in essence corresponds to a mobile station (MS) in a GSM network. Implementations of page addition and content copying can be implemented in the SoC 1600.

Here, the SoC 1600 includes two cores: 1606 and 1607.
As in the discussion above, the cores 1606 and 1607 may conform to an instruction set architecture, such as a design having the Intel® Architecture Core™, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. The cores 1606 and 1607 are coupled to a cache control 1608 that is associated with a bus interface unit 1609 and an L2 cache 1610 to communicate with other parts of the system 1600. The interconnect 1611 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnects discussed above, which can implement one or more aspects of the described disclosure.

In one implementation, the SDRAM controller 1640 may connect to the interconnect 1611 via the cache 1610. The interconnect 1611 provides communication channels to the other components, such as a SIM 1630 to interface with a subscriber identity module (SIM) card, a boot ROM 1635 to hold boot code for execution by the cores 1606 and 1607 to initialize and boot the SoC 1600, the SDRAM controller 1640 to interface with external memory (e.g., DRAM 1660), a flash controller 1645 to interface with non-volatile memory (e.g., flash 1665), a peripheral control 1650 (e.g., a serial peripheral interface) to interface with peripherals, a video codec 1620 and a video interface 1625 to display and receive input (e.g., touch-enabled input), a GPU 1615 to perform graphics-related computations, and so on. Any of these interfaces may incorporate aspects of the implementations described herein.

In addition, the system illustrates peripherals for communication, such as a power control module 1655, a Bluetooth module 1670, a 3G modem 1675, a GPS 1680, and a Wi-Fi module 1685. Note that, as stated above, a UE includes a radio for communication. As a result, these peripheral communication modules may not all be included. However, in a UE, some form of radio for external communication should be included.

FIG. 17 illustrates a diagrammatic representation of a machine in the example form of a computing system 1700 within which a set of instructions may be executed for causing the machine to implement hardware support for common IPI virtualization using a local interrupt controller, according to any one or more of the methodologies discussed herein. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
Implementations of page addition and content copying can be implemented in the computing system 1700.

The computing system 1700 includes a processing device 1702, a main memory 1704 (e.g., flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1706 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1716, which communicate with each other via a bus 1708.

The processing device 1702 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computer (RISC) microprocessor, very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 1702 may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. In one implementation, the processing device 1702 may include one or more processor cores. The processing device 1702 is configured to execute the processing logic 1726 for performing the operations discussed herein.

In one implementation, the processing device 1702 can be part of a processor or integrated circuit that includes the disclosed LLC caching architecture. Alternatively, the computing system 1700 can include other components as described herein. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding followed by simultaneous multithreading, as in Intel Hyper-Threading Technology).

The computing system 1700 may further include a network interface device 1718 communicatively coupled to a network 1719. The computing system 1700 may also include a video display device 1710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1712 (e.g., a keyboard), a cursor control device 1714 (e.g., a mouse), a signal generation device 1720 (e.g., a speaker), or other peripheral devices. Furthermore, the computing system 1700 may include a graphics processing unit 1722, a video processing unit 1728, and an audio processing unit 1732. In another implementation, the computing system 1700 may include a chipset (not illustrated), which refers to a group of integrated circuits, or chips, designed to work with the processing device 1702 and to control communications between the processing device 1702 and external devices. For example, the chipset may be a set of chips on a motherboard that links the processing device 1702 to very high-speed devices, such as the main memory 1704 and graphic controllers, as well as linking the processing device 1702 to lower-speed peripheral buses of peripherals, such as USB, PCI, or ISA buses.
The computer-readable storage medium 1724 may also be used to store instructions 1726 utilizing the processing device 1702, and/or a software library containing methods that call the above applications. While the computer-readable storage medium 1724 is shown in an example implementation to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "computer-readable storage medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the disclosed implementations.
The term "computer-readable storage medium" should accordingly be regarded as including but not limited to solid-state memory and optical and magnetic media.The following examples involve further implementations.Example 1 is a processor, including: 1) a plurality of cores, wherein at least one core of the plurality of cores is used to execute a virtual machine monitor (VMM), and wherein in order to configure a USB rose virtual machine (VM) The VMM is used to: a) group the multiple cores into one or more clusters, where a subset of the multiple cores is used to execute the VM; b) buffer in the memory The data structure is used to store one or more entries for a subset of the plurality of cores, each entry of the one or more entries includes a cluster identifier and a bitmap, wherein the The bitmap identifies the cores in the cluster corresponding to the cluster identifier in the subset; c) writes to the virtual machine control structure of the memory: i) a pointer to a data structure, where the pointer includes the memory Physical address; and ii) the number of one or more entries in the data structure; and d) setting the local interrupt controller pass field in the VMCS.In Example 2, the processor of Example 1 further includes an interrupt command register (ICR), wherein the guest operating system (OS) of the VM is used to write a value to the ICR to send an inter-processor interrupt (IPI) , The value is used to fill: i) a destination field, which identifies one or more destination cores among the multiple cores; and ii) a destination mode, including one of physical or logical.In example 3, the processor of example 2 further includes a programmable interrupt controller, which is used to: a) convert the value in the destination field into a cluster identifier for a cluster in the one or more clusters The value of the identifier and the value of the interrupt controller identifier (ID) in the cluster; b) using the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier; c) calculating a bitwise AND operation for the value of the interrupt controller ID and the value of the bit map for the entry; and d) in response to the bitwise AND operation generating a zero value, discarding the value written in the ICR Value.In Example 4, the processor of Example 2 further includes a programmable interrupt controller for: a) determining that the local interrupt controller pass field is set for the VM; b) determining the pointer and the The number of the one or more entries of the virtual machine control structure is non-zero; and c) in response to using ICR to verify that the IPI is destined for one core in the subset of the plurality of cores, the IPI Send to the system bus coupling the multiple cores.In Example 5, the processor of Example 4, wherein in order to determine that the IPI goes to a core in the subset of the plurality of cores, the programmable interrupt controller is further configured to: a) determine the purpose The local mode is physical; b) Convert the value in the destination field to the value of the cluster identifier for the cluster in the one or more clusters and the interrupt controller identifier (ID ); c) use the pointer to access the data structure to retrieve the value of the bitmap of the entry corresponding to the cluster identifier; d) calculate the value for the interrupt controller ID and for the entry And e) in response to the bitwise AND operation generating a non-zero value, determining the interrupt 
controller ID is associated with one of the cores of the subset of the plurality of cores.

In Example 6, for the processor of Example 4, to determine that the IPI is destined for a core of the subset of the plurality of cores, the programmable interrupt controller is further to: a) determine that the destination mode is logical; b) use the most significant bits of the destination field as the cluster identifier for a cluster of the one or more clusters, and the least significant bits of the destination field as the interrupt controller identifier (ID) within the cluster; c) access the data structure, using the pointer, to retrieve the value of the bitmap of the entry corresponding to the cluster identifier; d) compute a bitwise AND operation of the value of the interrupt controller ID and the value of the bitmap of the entry; e) append a non-zero result of the bitwise AND operation to the cluster identifier to generate an updated value for the destination field; f) store the updated value back to the destination field; and g) use the updated value in the destination field and the value of the ICR to send the IPI onto the system bus.

In Example 7, for the processor of Example 2, the guest OS of the VM is further to write a destination shorthand value to the ICR, and the processor further comprises a programmable interrupt controller to: a) determine one of: that the destination shorthand has been set to all-including-self, or that the destination field is programmed with an all-ones value; b) determine, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries; c) for a cluster with a non-zero bitmap, merge the non-zero bitmap with the cluster identifier of the cluster to generate an updated value; d) store the updated value in the destination field; e) set the destination mode to logical; and f) use the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.

In Example 8, for the processor of Example 2, the guest OS of the VM is further to write a destination shorthand value to the ICR, and the processor further comprises a programmable interrupt controller to: a) determine that the destination shorthand has been set to all-excluding-self; b) determine, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries, while excluding, from any bitmap, the core of the subset of the plurality of cores from which the IPI was received; c) for a cluster with a non-zero bitmap, merge the non-zero bitmap with the cluster identifier of the cluster to generate an updated value; d) store the updated value in the destination field; e) set the destination mode to logical; and f) use the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
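For illustration only, the data structure of Example 1 and the physical-mode check of Examples 3 through 5 can be sketched in C. The 16/16-bit split of the destination field, the 32-bit bitmap, and all identifiers are assumptions (modeled loosely on x2APIC-style cluster addressing), not the disclosed layout:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only. The cluster table mirrors the data structure
 * of Example 1; field widths are assumptions, not the disclosed layout. */
struct cluster_entry {
    uint32_t cluster_id;  /* identifies one of the one or more clusters */
    uint32_t bitmap;      /* which cores of the VM's subset it contains */
};

struct vmcs_ipi_cfg {
    struct cluster_entry *table;  /* pointer written to the VMCS          */
    size_t num_entries;           /* entry count written to the VMCS      */
    bool lapic_pass;              /* local interrupt controller pass field */
};

/* Physical destination mode (Examples 3 and 5): returns true when the IPI
 * may be sent onto the system bus, false when the ICR write is discarded. */
bool ipi_physical_dest_ok(const struct vmcs_ipi_cfg *cfg, uint32_t dest_field)
{
    /* Assumed split: high 16 bits are the cluster identifier, low 16 bits
     * the interrupt controller ID within the cluster; the ID is made
     * one-hot for the AND (IDs 0-31 fit the 32-bit bitmap of this sketch). */
    uint32_t cluster  = dest_field >> 16;
    uint32_t ctrl_id  = dest_field & 0xFFFFu;
    uint32_t ctrl_bit = 1u << (ctrl_id & 31u);

    /* Example 4: pass field set, pointer and entry count non-zero. */
    if (!cfg->lapic_pass || cfg->table == NULL || cfg->num_entries == 0)
        return false;

    for (size_t i = 0; i < cfg->num_entries; i++)
        if (cfg->table[i].cluster_id == cluster)
            return (cfg->table[i].bitmap & ctrl_bit) != 0;  /* bitwise AND */

    return false;  /* no entry for this cluster: not a core of the VM */
}
```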
Each implementation may have different combinations of the structural features described above. For example, all optional features of the processor and methods described above may also be implemented with respect to the system described herein, and the specifics in the examples may be used anywhere in one or more implementations.

Example 9 is a method comprising: a) grouping, by a virtual machine monitor (VMM) executing on at least one core of a plurality of cores, the plurality of cores into one or more clusters, wherein a subset of the plurality of cores is to execute a virtual machine (VM); b) creating, by the VMM, a data structure in a buffer in memory, the data structure to store one or more entries for the subset of the plurality of cores, each entry of the one or more entries comprising a cluster identifier and a bitmap, wherein the bitmap identifies the cores of the subset within the cluster corresponding to the cluster identifier; c) writing, by the VMM to a virtual machine control structure in the memory: i) a pointer to the data structure, wherein the pointer comprises a physical address of the memory; and ii) a number of the one or more entries in the data structure; and d) setting, by the VMM, a local interrupt controller pass field in the virtual machine control structure.

In Example 10, the method of Example 9 further comprises writing, by a guest operating system (OS) of the VM, a value to an interrupt command register (ICR) to send an inter-processor interrupt (IPI), the value to populate: i) a destination field that identifies one or more destination cores of the plurality of cores; and ii) a destination mode comprising one of physical or logical.

In Example 11, the method of Example 10 further comprises: a) reading the value in the destination field to determine a cluster identifier, for a cluster of the one or more clusters, and an interrupt controller identifier (ID) within the cluster; b) accessing the data structure, using the pointer, to retrieve a value of the bitmap of the entry corresponding to the cluster identifier; c) computing a bitwise AND operation of the value of the interrupt controller ID and the value of the bitmap of the entry; and d) in response to the bitwise AND operation generating a zero value, discarding the value written to the ICR.

In Example 12, the method of Example 10 further comprises: a) determining that the local interrupt controller pass field is set for the VM; b) determining that the pointer and the number of the one or more entries of the virtual machine control structure are non-zero; and c) in response to verifying, using the ICR, that the IPI is destined for a core of the subset of the plurality of cores, sending the IPI onto a system bus coupled to the plurality of cores.

In Example 13, for the method of Example 12, to determine that the IPI is destined for a core of the subset of the plurality of cores, the method further comprises: a) determining that the destination mode is physical; b) converting the value in the destination field into a value of the cluster identifier, for a cluster of the one or more clusters, and of the interrupt controller identifier (ID) within the cluster; c) accessing the data structure, using the pointer, to retrieve the value of the bitmap of the entry corresponding to the cluster identifier; d) computing a bitwise AND operation of the value of the interrupt controller ID and the value of the bitmap of the entry; and e) in response to a result of the bitwise AND operation being a non-zero value, determining that the interrupt controller ID is associated with one
of the cores of the subset of the plurality of cores.

In Example 14, for the method of Example 12, to determine that the IPI is destined for a core of the subset of the plurality of cores, the method further comprises: a) determining that the destination mode is logical; b) using the most significant bits of the destination field as the cluster identifier for a cluster of the one or more clusters, and the least significant bits of the destination field as the interrupt controller identifier (ID) within the cluster; c) accessing the data structure, using the pointer, to retrieve the value of the bitmap of the entry corresponding to the cluster identifier; d) computing a bitwise AND operation of the value of the interrupt controller ID and the value of the bitmap of the entry; e) appending a non-zero result of the bitwise AND operation to the cluster identifier to generate an updated value for the destination field; f) storing the updated value back to the destination field; and g) using the updated value in the destination field and the value of the ICR to send the IPI onto the system bus.

In Example 15, for the method of Example 10, wherein the guest OS of the VM is further to write a destination shorthand value to the ICR, the method further comprises: a) determining one of: that the destination shorthand has been set to all-including-self, or that the destination field is programmed with an all-ones value; b) determining, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries; c) for a cluster with a non-zero bitmap, merging the non-zero bitmap with the cluster identifier of the cluster to generate an updated value; d) storing the updated value in the destination field; e) setting the destination mode to logical; and f) using the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.

In Example 16, for the method of Example 10, wherein the guest OS of the VM is further to write a destination shorthand value to the ICR, the method further comprises: a) determining that the destination shorthand has been set to all-excluding-self; b) determining, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries, while excluding, from any bitmap, the core of the subset of the plurality of cores from which the IPI was received; c) for a cluster with a non-zero bitmap, merging the non-zero bitmap with the cluster identifier of the cluster to generate an updated value; d) storing the updated value in the destination field; e) setting the destination mode to logical; and f) using the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
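Continuing the previous sketch (same includes and types), the logical-mode path of Examples 6 and 14 might look as follows; the 16/16-bit field split remains an assumption:

```c
/* Logical destination mode (Examples 6 and 14), reusing the types above.
 * The low 16 bits are treated as a mask of interrupt controller IDs; on a
 * non-zero bitwise AND, the masked result is appended to the cluster
 * identifier and stored back to the destination field before sending. */
bool ipi_logical_dest_rewrite(const struct vmcs_ipi_cfg *cfg,
                              uint32_t *dest_field)
{
    uint32_t cluster = *dest_field >> 16;      /* most significant bits  */
    uint32_t id_mask = *dest_field & 0xFFFFu;  /* least significant bits */

    for (size_t i = 0; i < cfg->num_entries; i++) {
        if (cfg->table[i].cluster_id != cluster)
            continue;
        uint32_t hit = cfg->table[i].bitmap & id_mask;  /* bitwise AND */
        if (hit == 0)
            return false;                 /* no target core: discard IPI */
        *dest_field = (cluster << 16) | hit;  /* updated value stored back */
        return true;                      /* send the IPI onto the bus   */
    }
    return false;
}
```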
Each implementation may have different combinations of the structural features described above. For example, all optional features of the processor and methods described above may also be implemented with respect to the system described herein, and the specifics in the examples may be used anywhere in one or more implementations.

Example 17 is a system comprising: 1) a memory device; and 2) a plurality of cores, wherein at least one core of the plurality of cores is to execute a virtual machine monitor (VMM), and wherein, to configure resources of a virtual machine (VM), the VMM is to: a) group the plurality of cores into one or more clusters, wherein a subset of the plurality of cores is to execute the VM; b) create a data structure in a buffer in the memory device, the data structure to store one or more entries for the subset of the plurality of cores, each entry of the one or more entries comprising a cluster identifier and a bitmap, wherein the bitmap identifies the cores of the subset within the cluster corresponding to the cluster identifier; c) write, to a virtual machine control structure in the memory device: i) a pointer to the data structure, wherein the pointer comprises a physical address of the memory device; and ii) a number of the one or more entries in the data structure; and d) set a local interrupt controller pass field in the virtual machine control structure.

In Example 18, the system of Example 17 further comprises an interrupt command register (ICR), wherein a guest operating system (OS) of the VM is to write a value to the ICR to send an inter-processor interrupt (IPI), the value to populate: i) a destination field that identifies one or more destination cores of the plurality of cores; and ii) a destination mode comprising one of physical or logical.

In Example 19, the system of Example 18 further comprises a programmable interrupt controller to: a) convert the value in the destination field into a value of a cluster identifier, for a cluster of the one or more clusters, and a value of an interrupt controller identifier (ID) within the cluster; b) access the data structure, using the pointer, to retrieve a value of the bitmap of the entry corresponding to the cluster identifier; c) compute a bitwise AND operation of the value of the interrupt controller ID and the value of the bitmap of the entry; and d) in response to the bitwise AND operation generating a zero value, discard the value written to the ICR.

In Example 20, the system of Example 18 further comprises a programmable interrupt controller to: a) determine that the local interrupt controller pass field is set for the VM; b) determine that the pointer and the number of the one or more entries of the virtual machine control structure are non-zero; and c) in response to verifying, using the ICR, that the IPI is destined for a core of the subset of the plurality of cores, send the IPI onto a system bus coupled to the plurality of cores.

In Example 21, for the system of Example 20, to determine that the IPI is destined for a core of the subset of the plurality of cores, the programmable interrupt controller is further to: a) determine that the destination mode is physical; b) convert the value in the destination field into a value of the cluster identifier, for a cluster of the one or more clusters, and of the interrupt controller identifier (ID) within the cluster; c) access the data structure, using the pointer, to retrieve the value of the bitmap of the entry corresponding to the cluster identifier; d) compute a bitwise AND operation of the value of the interrupt controller ID and the value of
the bitmap of the entry; and e) in response to the bitwise AND operation generating a non-zero value, determine that the interrupt controller ID is associated with one of the cores of the subset of the plurality of cores.

In Example 22, for the system of Example 20, to determine that the IPI is destined for a core of the subset of the plurality of cores, the programmable interrupt controller is further to: a) determine that the destination mode is logical; b) use the most significant bits of the destination field as the cluster identifier for a cluster of the one or more clusters, and the least significant bits of the destination field as the interrupt controller identifier (ID) within the cluster; c) access the data structure, using the pointer, to retrieve the value of the bitmap of the entry corresponding to the cluster identifier; d) compute a bitwise AND operation of the value of the interrupt controller ID and the value of the bitmap of the entry; e) append a non-zero result of the bitwise AND operation to the cluster identifier to generate an updated value for the destination field; f) store the updated value back to the destination field; and g) use the updated value in the destination field and the value of the ICR to send the IPI onto the system bus.

In Example 23, for the system of Example 18, the guest OS of the VM is further to write a destination shorthand value to the ICR, and the system further comprises a programmable interrupt controller to: a) determine one of: that the destination shorthand has been set to all-including-self, or that the destination field has been programmed with an all-ones value; b) determine, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries; c) for a cluster with a non-zero bitmap, merge the non-zero bitmap with the cluster identifier of the cluster to generate an updated value; d) store the updated value in the destination field; e) set the destination mode to logical; and f) use the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.

In Example 24, for the system of Example 18, the guest OS of the VM is further to write a destination shorthand value to the ICR, and the system further comprises a programmable interrupt controller to: a) determine that the destination shorthand has been set to all-excluding-self; b) determine, via a scan of the one or more entries of the data structure, each non-zero bitmap of the one or more entries, while excluding, from any bitmap, the core of the subset of the plurality of cores from which the IPI was received; c) for a cluster with a non-zero bitmap, merge the non-zero bitmap with the cluster identifier of the cluster to generate an updated value; d) store the updated value in the destination field; e) set the destination mode to logical; and f) use the updated value in the destination field and the destination mode to send the IPI onto a system bus coupled to the plurality of cores.
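Again continuing the same sketch, the destination-shorthand broadcast of Examples 7, 8, 15, 16, 23, and 24 can be pictured as one logical-mode IPI per cluster with a non-zero bitmap; `send_ipi` is a hypothetical hook standing in for the write onto the system bus:

```c
/* Destination shorthand broadcast (Examples 7/8, 15/16, 23/24), reusing
 * the types above. send_ipi() is a hypothetical hook, not a real API. */
extern void send_ipi(uint32_t dest_field, bool logical_mode);

void ipi_shorthand_all(const struct vmcs_ipi_cfg *cfg,
                       uint32_t self_cluster, uint32_t self_bit,
                       bool include_self)
{
    for (size_t i = 0; i < cfg->num_entries; i++) {
        uint32_t bm = cfg->table[i].bitmap;
        /* all-excluding-self: drop the sending core's bit in its cluster */
        if (!include_self && cfg->table[i].cluster_id == self_cluster)
            bm &= ~self_bit;
        if (bm == 0)
            continue;  /* the scan skips entries with an all-zero bitmap */
        /* merge the non-zero bitmap with the cluster identifier */
        uint32_t dest = (cfg->table[i].cluster_id << 16) | bm;
        send_ipi(dest, /*logical_mode=*/true);
    }
}
```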
Each implementation may have different combinations of the structural features described above. For example, all optional features of the processor and methods described above may also be implemented with respect to the system described herein, and the specifics in the examples may be used anywhere in one or more implementations.

Example 25 is a non-transitory computer-readable medium storing instructions that, when executed by a processor having a plurality of cores coupled to a system memory, cause the processor to perform a plurality of logic operations comprising: a) grouping, by a virtual machine monitor (VMM) executing on at least one core of the plurality of cores, the plurality of cores into one or more clusters, wherein a subset of the plurality of cores is to execute a virtual machine (VM); b) creating, by the VMM, a data structure in a buffer in the memory, the data structure to store one or more entries for the subset of the plurality of cores, each entry of the one or more entries comprising a cluster identifier and a bitmap, wherein the bitmap identifies the cores of the subset within the cluster corresponding to the cluster identifier; c) writing, by the VMM to a virtual machine control structure in the memory: i) a pointer to the data structure, wherein the pointer comprises a physical address of the memory; and ii) a number of the one or more entries in the data structure; and d) setting, by the VMM, a local interrupt controller pass field in the virtual machine control structure.

Each implementation may have different combinations of the structural features described above. For example, all optional features of the processor and methods described above may also be implemented with respect to the system described herein, and the specifics in the examples may be used anywhere in one or more implementations.

Example 26 is a system comprising: a) means for grouping, by a virtual machine monitor (VMM), a plurality of cores into one or more clusters, wherein a subset of the plurality of cores is to execute a VM; b) means for creating a data structure to store one or more entries for the subset of the plurality of cores, each entry of the one or more entries comprising a cluster identifier and a bitmap, wherein the bitmap identifies the cores of the subset within the cluster corresponding to the cluster identifier; c) means for writing, to a virtual machine control structure in memory: i) a pointer to the data structure, wherein the pointer comprises a physical address of the memory; and ii) a number of the one or more entries in the data structure; and d) means for setting, by the VMM, a local interrupt controller pass field in the virtual machine control structure.

While the present disclosure has been described with respect to a limited number of implementations, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present disclosure.

In the following description, numerous specific details are set forth, such as examples of specific types of processing devices and system configurations, specific hardware structures, specific architectural and micro-architectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, and specific processing device pipeline stages and operations, in order to provide a thorough understanding of the present disclosure.
It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present disclosure. In other instances, well-known components or methods, such as specific and alternative processing device architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power-down and gating techniques/logic, and other specific operational details of a computer system, have not been described in detail in order to avoid unnecessarily obscuring the present disclosure.

The implementations are described with reference to the coexistence of a trust domain architecture with multi-key total memory encryption technology in virtualized systems using trust domains, as provided in specific integrated circuits, such as computing platforms or micro-processing devices. The implementations may also be applicable to other types of integrated circuits and programmable logic devices. For example, the disclosed implementations are not limited to desktop computer systems or portable computers, such as the Intel® Ultrabook™ computers, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, system-on-a-chip (SoC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processing device (DSP), a system on a chip, network computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. The system described can be any kind of computer or embedded system. The disclosed implementations may especially be used for low-end devices, such as wearable devices (e.g., watches), electronic implants, sensory and control infrastructure devices, controllers, supervisory control and data acquisition (SCADA) systems, and the like. Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the implementations of the methods, apparatuses, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a "green technology" future balanced with performance considerations.

Although the implementations herein are described with reference to a processing device, other implementations are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of implementations of the present disclosure can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of implementations of the present disclosure are applicable to any processing device or machine that performs data manipulations.
However, the present disclosure is not limited to processing devices or machines that perform 512-bit, 256-bit, 128-bit, 64-bit, 32-bit, or 16-bit data operations, and can be applied to any processing device or machine in which manipulation or management of data is performed. In addition, the description herein provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense, as they are merely intended to provide examples of implementations of the present disclosure rather than an exhaustive list of all possible implementations of the present disclosure.

Although the examples below describe instruction handling and distribution in the context of execution units and logic circuits, other implementations of the present disclosure can be accomplished by way of data or instructions stored on a machine-readable, tangible medium, which, when performed by a machine, cause the machine to perform functions consistent with at least one implementation of the present disclosure. In one implementation, functions associated with implementations of the present disclosure are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processing device that is programmed with the instructions to perform the steps of the present disclosure. Implementations of the present disclosure may also be provided as a computer program product or software, which may include a machine- or computer-readable medium having stored thereon instructions that may be used to program a computer (or other electronic devices) to perform one or more operations according to implementations of the present disclosure. Alternatively, operations of implementations of the present disclosure might be performed by specific hardware components that contain fixed-function logic for performing the operations, or by any combination of programmed computer components and fixed-function hardware components.

Instructions used to program logic to perform implementations of the present disclosure can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROMs), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

A design may go through various stages, from creation to simulation to fabrication.
Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers of the masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory or a magnetic or optical storage, such as a disc, may be the machine-readable medium that stores information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of implementations of the present disclosure.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a microcontroller, associated with a non-transitory medium to store code adapted to be executed by the microcontroller. Therefore, reference to a module, in one implementation, refers to the hardware that is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another implementation, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another implementation, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one implementation, use of the term logic includes hardware, such as transistors or registers, or other hardware, such as programmable logic devices.

Use of the phrase "configured to," in one implementation, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0.
Instead, the logic gate is one coupled in some manner such that, during operation, its 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases "to," "capable of/to," and/or "operable to," in one implementation, refers to some apparatus, logic, hardware, and/or element designed in such a way as to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note, as above, that use of "to," "capable of/to," or "operable to," in one implementation, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner as to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one implementation, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one implementation, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set. Note that any combination of values may be utilized to represent any number of states.

The implementations of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium that are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory media from which information may be received.
The implementations of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code, stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium, that are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; and other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory media from which such information may be received.

Instructions used to program logic to perform implementations of the present disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions may be distributed via a network or by way of other computer-readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (such as a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random-access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, a computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Reference throughout this specification to "one implementation" or "an implementation" means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the present disclosure. Thus, the appearances of the phrases "in one implementation" or "in an implementation" in various places throughout this specification are not necessarily all referring to the same implementation. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations.

In the foregoing specification, a detailed description has been given with reference to specific exemplary implementations. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of implementation, embodiment, and/or other exemplary language does not necessarily refer to the same implementation or the same example, but may refer to different and distinct implementations, as well as potentially the same implementation.

Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data-processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
The blocks described herein may be hardware, software, firmware, or a combination thereof.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities. Unless specifically stated otherwise, as apparent from the above discussion, it is appreciated that throughout the description, discussions using terms such as "defining," "receiving," "determining," "issuing," "linking," "associating," "obtaining," "authenticating," "prohibiting," "executing," "requesting," "communicating," or the like refer to the actions and processes of a computing system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system's memories or registers or other such information storage, transmission, or display devices.

The word "example" or "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from context, "X includes A or B" is intended to mean any of the natural inclusive permutations: if X includes A, if X includes B, or if X includes both A and B, then "X includes A or B" is satisfied under any of the foregoing instances (a short sketch following this passage illustrates the distinction). In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term "an implementation" or "one implementation" throughout is not intended to mean the same implementation unless described as such. Also, the terms "first," "second," "third," "fourth," etc. as used herein are meant as labels to distinguish among different elements and do not necessarily have an ordinal meaning according to their numerical designation.
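As a purely illustrative aside, the self-contained C sketch below contrasts the inclusive "or" intended above with the exclusive variant; it is not part of any claimed apparatus or method.

```c
#include <stdio.h>
#include <stdbool.h>

int main(void) {
    /* "X includes A or B" (inclusive): satisfied if A, if B, or if both. */
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            bool inclusive = a || b;    /* inclusive "or"       */
            bool exclusive = (a != b);  /* exclusive "or" (XOR) */
            printf("A=%d B=%d  inclusive=%d  exclusive=%d\n",
                   a, b, inclusive, exclusive);
        }
    }
    /* Only the inclusive form is satisfied when both A and B hold. */
    return 0;
}
```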